


Innovative Methodologies in
Evolution Strategies
INGENET Project Report D 2.2
June 1998

Thomas Bäck, Boris Naujoks

Center for Applied Systems Analysis (CASA)


Informatik Centrum Dortmund
Joseph-von-Fraunhofer-Str. 20
D-44227 Dortmund


Abstract
This INGENET report describes the state of the art in research and application of evolution strategies, with the goals of making this knowledge accessible to the INGENET members in compact form and of outlining the technological and economic perspectives of evolution strategies on the European level.

Evolution strategies are one of the main paradigms in the field of evolutionary computation, which focuses on algorithms for adaptation and optimization gleaned from the model of organic evolution.

The report puts its emphasis on algorithmic and application-oriented aspects of evolution strategies. The algorithmic aspects include an overview of all components of a modern (μ,λ)-strategy and a detailed explanation of the concept of strategy parameter self-adaptation, which is considered the main distinguishing feature between evolution strategies and genetic algorithms. The self-adaptation process implements an evolutionary optimization process also on the level of strategy parameters such as mutational step sizes, and therefore offers an elegant solution to the parameter tuning problem of evolutionary algorithms. The working principles of self-adaptation are explained in detail in section 3 of this report.

A number of recent variations of the basic evolution strategy, including alternatives for the self-adaptation method, the introduction of hierarchies of evolution strategies, and the principle of individual aging in the (μ,κ,λ,ρ)-strategy, are presented in section 4.

Further aspects of strong interest from an application-oriented point of view include noisy and dynamic objective functions as well as multiple criteria decision making problems and constraint handling. These are discussed in section 5, clarifying that evolution strategies offer effective techniques for handling all of these additional difficulties of practical applications.

Section 6 gives a brief overview of the parallelization possibilities of evolution strategies, which are suitable for fine-grained as well as coarse-grained parallelization.

An overview of practical applications of evolution strategies is given in section 7, where case studies are grouped by discipline and the corresponding literature references are given. Due to the strong increase in the number of publications in the field of evolutionary computation in the 1990s, the collection of case studies stops with the most recent examples from 1994; it nevertheless contains more than 150 examples up to that time.

The report concludes with an outline of the perspectives of evolution strategies, discussing their technological future with a focus on the economic potential of industrial applications of these algorithms. This outline might serve as a technological roadmap for the exploitation of these techniques within a ten-year timeframe.
Thomas Bäck and Boris Naujoks
Dortmund, June 1998
Contact information:
Center for Applied Systems Analysis
Informatik Centrum Dortmund
Joseph-von-Fraunhofer-Str. 20
D-44227 Dortmund, Germany
Phone: +49 231 9700 366
Fax: +49 231 9700 959
Email: baeck@icd.de



Contents

1 A Brief History
2 The Algorithm
   2.1 Working Principle
   2.2 The Structure of Individuals
   2.3 Mutation
   2.4 Recombination
   2.5 Selection
   2.6 Termination Criterion
3 Self-Adaptation
4 Variations
   4.1 Mutative Step-Size Control
   4.2 Derandomized Step-Size Adaptation
   4.3 Hierarchical Evolution Strategies
   4.4 The (μ,κ,λ,ρ)-Strategy: Aging of Individuals
5 Application-Oriented Extensions
   5.1 Noisy Objective Functions
   5.2 Robust Design
   5.3 Dynamic Environments
   5.4 Multiple Criteria Decision Making
   5.5 Constraint Handling
6 Parallel Evolution Strategies
   6.1 The Master-Slave Approach
   6.2 Coarse Grained Parallelism: The Migration Model
   6.3 Fine Grained Parallelism: The Diffusion Model
   6.4 A Hybrid Approach
7 Applications
   7.1 Artificial Intelligence
   7.2 Biotechnology
   7.3 Technical Design Applications
   7.4 Chemical Engineering
   7.5 Telecommunications
   7.6 Dynamic Processes, Modeling, Simulation
   7.7 Medicine
   7.8 Microelectronics
   7.9 Military
   7.10 Physics
   7.11 Pattern Recognition
   7.12 Production Planning
   7.13 Robotics
   7.14 Supply- and Disposal Systems
   7.15 Miscellaneous
8 Perspectives
References
1 A Brief History

Evolution Strategies are a joint development of Bienert, Rechenberg, and Schwefel, who did preliminary work in this area in the 1960s at the Technical University of Berlin (TUB) in Germany. The first applications were experimental and dealt with hydrodynamical problems like the shape optimization of a bent pipe [119], drag minimization of a joint plate [164], and structure optimization of a two-phase flashing nozzle [210]¹. Because such optimization problems could not be described and solved analytically or by traditional methods, a simple algorithmic method based on random changes of the experimental setup was developed. In these experiments, adjustments were possible in discrete steps only: in the first two cases (pipe and plate) by changing certain joint positions, and in the latter case (nozzle) by exchanging, adding, or deleting nozzle segments. Following the observation from nature that smaller mutations occur more often than larger ones, the discrete changes were sampled from a binomial distribution with prefixed variance. The basic working mechanism of the experiments was to create a mutation, adjust the joints or nozzle segments accordingly, perform the experiment, and measure the quality criterion of the adjusted construction. If the new construction happened to be better than its predecessor, it served as the basis for the next trial; otherwise, it was discarded and the predecessor was retained. No information about the amount of improvement or deterioration was necessary. This experimental strategy led to unexpectedly good results both for the bent pipe and the nozzle.
Schwefel was the first to simulate different versions of the strategy on the first computer available at TUB, a Zuse Z23 [200]; later he was followed by several others, who applied the simple Evolution Strategy to solve numerical optimization problems. Due to the theoretical results of Schwefel's diploma thesis, the discrete mutation mechanism was substituted by normally distributed mutations with expectation zero and given variance [200]. The resulting two-membered ES works by creating one n-dimensional real-valued vector of object variables from its parent by applying mutation with identical standard deviations to each object variable. The resulting individual is evaluated and compared to its parent, and the better of both individuals survives to become the parent of the next generation, while the other one is discarded. This simple selection mechanism is fully characterized by the term (1+1)-selection.
For this algorithm, Rechenberg developed a convergence rate theory for n ≫ 1 for two characteristic model functions, and he proposed a theoretically confirmed rule for changing the standard deviation of mutations (the 1/5-success rule) [166].
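In modern notation, the (1+1)-ES with the 1/5-success rule can be sketched as follows (a minimal Python illustration; the success-counting window of 50 generations and the adaptation factor 0.85 are illustrative choices, not prescribed by the report):

```python
import math
import random

def one_plus_one_es(f, x, sigma=1.0, generations=5000, window=50, alpha=0.85):
    """Minimize f with a (1+1)-ES controlled by the 1/5-success rule.

    Every `window` mutations, sigma is decreased if fewer than 1/5 of the
    trials were successful and increased otherwise.
    """
    fx = f(x)
    successes = 0
    for g in range(1, generations + 1):
        # identical standard deviation for every object variable
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:                      # better: the offspring becomes the parent
            x, fx = y, fy
            successes += 1
        if g % window == 0:              # 1/5-success rule
            if successes < window / 5.0:
                sigma *= alpha
            else:
                sigma /= alpha
            successes = 0
    return x, fx
```

Started on a sphere function from, say, [5.0]*10, this reliably approaches the optimum without ever using the magnitude of the improvements, only their frequency.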
Obviously, the (1+1)-ES did not incorporate the principle of a population. A first multimembered Evolution Strategy, the (μ+1)-ES with μ > 1, was also designed by Rechenberg to introduce a population concept. In a (μ+1)-ES, μ parent individuals recombine to form one offspring, which, after being mutated, eventually replaces the worst parent individual if it is better (extinction of the worst). Mutation and adjustment of the standard deviation were realized as in a (1+1)-ES, and a recombination mechanism as explained in section 2.4 was used. This strategy, discussed in more detail in [12], was never widely used but provided the basis to facilitate the transition to the (μ+λ)-ES and (μ,λ)-ES as introduced by Schwefel² [201, 202, 203].
¹ This experiment is one of the first known examples of using operators like gene deletion and gene duplication, i.e., the number of segments the nozzle consisted of was allowed to vary during optimization.
² The material presented here is based on [203] and a number of research articles, but in the meantime an updated and extended edition of Schwefel's book was published (i.e., [207]).
Again the notation characterizes the selection mechanism, in the first case indicating that the μ best individuals out of the union of parents and offspring survive, while in the latter case only the μ best offspring individuals form the next parent generation (consequently, λ > μ is necessary). Currently, the (μ,λ)-strategy characterizes the state of the art in Evolution Strategy research and is therefore the strategy of main interest in the following. As an introductory remark it should be noted that the major quality of this strategy is seen in its ability to incorporate the most important parameters of the strategy (standard deviations and correlation coefficients of normally distributed mutations) into the search process, such that optimization not only takes place on object variables, but also on strategy parameters, according to the actual local topology of the objective function. This capability is termed self-adaptation by Schwefel [204] and will be a major point of interest in discussing the Evolution Strategy.

2 The Algorithm

2.1 Working Principle

In general, evolutionary algorithms mimic the process of natural evolution, the driving process for the emergence of complex and well-adapted organic structures, by applying variation and selection operators to a set of candidate solutions for a given optimization problem. The following structure of a general evolutionary algorithm reflects all essential components of an evolution strategy as well (see e.g. [10]):
Algorithm 1:

    t := 0;
    initialize P(t);
    evaluate P(t);
    while not terminate do
        P'(t) := variation(P(t));
        evaluate(P'(t));
        P(t+1) := select(P'(t) ∪ Q);
        t := t + 1;
    od
In case of a (μ,λ)-evolution strategy, the following statements regarding the components of algorithm 1 can be made:

- P(t) denotes a population (multiset) of μ individuals (candidate solutions to the given problem) at generation (iteration) t of the algorithm.

- The initialization at t = 0 can be done randomly, or with known starting points obtained by any method.

- The evaluation of a population involves calculating the quality of its members according to the given objective function (quality criterion).

- The variation operators include the exchange of partial information between solutions (recombination) and its subsequent modification by adding normally distributed variations (mutation) of adaptable step sizes. These step sizes are themselves optimized during the search according to a process called self-adaptation.

- By means of recombination and mutation, an offspring population P'(t) of λ candidate solutions is generated.

- The selection operator chooses the μ best solutions from P'(t) (i.e., Q = ∅) as starting points for the next iteration of the loop. Alternatively, a (μ+λ)-evolution strategy would select the μ best solutions from the union of P'(t) and P(t) (i.e., Q = P(t)).

- The algorithm terminates if no more improvements are achieved over a number of subsequent iterations, or if a given amount of time is exceeded.

- The algorithm returns the best candidate solution ever found during its execution.
In the following, these basic components of an evolution strategy are explained in some
more detail. For extensive information about evolution strategies, refer to [5, 169, 207].
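The loop of algorithm 1, specialized to a (μ,λ)-strategy with one self-adapted step size per individual (the case n_σ = 1 of section 2.3), might be sketched as follows. This is a minimal illustration; all concrete parameter values and names are our own choices, not taken from the report:

```python
import math
import random

def evolution_strategy(f, n, mu=15, lam=100, generations=200):
    """Minimal (mu, lambda)-ES with one self-adapted step size per individual."""
    tau0 = 1.0 / math.sqrt(n)                       # learning rate, tau0 ~ 1/sqrt(n)
    # initialize P(0): individuals are pairs (object variables, step size)
    pop = [([random.uniform(-5.0, 5.0) for _ in range(n)], 1.0)
           for _ in range(mu)]
    best = min(pop, key=lambda ind: f(ind[0]))
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # discrete recombination of x, intermediary recombination of sigma
            p1, p2 = random.sample(pop, 2)
            x = [random.choice(pair) for pair in zip(p1[0], p2[0])]
            sigma = 0.5 * (p1[1] + p2[1])
            # mutate sigma first, then use it to mutate the object variables
            sigma *= math.exp(tau0 * random.gauss(0.0, 1.0))
            x = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
            offspring.append((x, sigma))
        # (mu, lambda)-selection: the mu best offspring replace the parents
        offspring.sort(key=lambda ind: f(ind[0]))
        pop = offspring[:mu]
        if f(pop[0][0]) < f(best[0]):
            best = pop[0]
    return best
```

Note that the best solution ever found is tracked separately, since comma selection is allowed to forget it.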
Using a more formal notation following the outline given in [209, 208], one iteration of the strategy, that is, a step from a population P(T) towards the next reproduction cycle with P(T+1), can be modeled as follows:

    P(T+1) := opt_ES(P(T))                                        (1)

where opt_ES : I^μ → I^μ is defined by

    opt_ES := sel ∘ (mut ∘ rec)^λ,                                (2)

operating on an input population P(T) according to

    opt_ES(P(T)) = sel(P(T) ⊔ ⊔_{i=1}^{λ} {mut(rec(P(T)))})       (3)

(here, ⊔ denotes the union operation on multisets). Equation (3) clarifies that the population at generation T+1 is obtained from P(T) by first applying a λ-fold repetition of recombination and mutation, which results in an intermediate population P' of size λ, and then applying the selection operator to the union of P(T) and P'. Recall that the recombination operator generates only one individual per application, which can then be mutated directly.

In the following, both the formal and the informal way of describing the algorithmic components will be used as seems appropriate.

2.2 The Structure of Individuals

For a given optimization problem

    f : M ⊆ R^n → R,  f(x) → min,

an individual of the evolution strategy contains the candidate solution x ∈ R^n as one part of its representation. Furthermore, there exists a variable amount (depending on the type of strategy used) of additional information, so-called strategy parameters, in the representation of individuals. These strategy parameters essentially encode the n-dimensional normal distribution which is to be used for the variation of the solution.

More formally, an individual a = (x, σ, α) consists of up to three components: x ∈ R^n (the solution), σ ∈ R_+^{n_σ} (a set of standard deviations of the normal distribution), and α ∈ (−π, π]^{n_α} (a set of rotation angles representing the covariances of the n-dimensional normal distribution), where n_σ ∈ {1, …, n} and n_α ∈ {0, (2n − n_σ)(n_σ − 1)/2}. The exact meaning of these components is described in more detail in section 2.3.

2.3 Mutation

The mutation in evolution strategies works by adding a normally distributed random vector z ~ N(0, C⁻¹) with expectation vector 0 and covariance matrix C⁻¹, where the covariance matrix is described by the mutated strategy parameters of the individual. Depending on the amount of strategy parameters incorporated into the representation of an individual, the following main variants of mutation and self-adaptation can be distinguished:

- n_σ = 1, n_α = 0: The standard deviation for all object variables is identical (σ), and all object variables are mutated by adding normally distributed random numbers with

      σ' = σ · exp(τ_0 · N(0,1))                                  (4)
      x'_i = x_i + σ' · N_i(0,1),                                 (5)

  where τ_0 ∝ (√n)⁻¹. Here, N(0,1) denotes a value sampled from a normally distributed random variable with expectation zero and variance one. The notation N_i(0,1) indicates that the random variable is sampled anew for each setting of the index i.

- n_σ = n, n_α = 0: All object variables have their own, individual standard deviation σ_i, which determines the corresponding modification according to

      σ'_i = σ_i · exp(τ' · N(0,1) + τ · N_i(0,1))                (6)
      x'_i = x_i + σ'_i · N_i(0,1),                               (7)

  where τ' ∝ (√(2n))⁻¹ and τ ∝ (√(2√n))⁻¹.

- n_σ = n, n_α = n·(n−1)/2: The vectors σ and α represent the complete covariance matrix of the n-dimensional normal distribution, where the covariances are given by rotation angles α_j describing the coordinate rotations necessary to transform an uncorrelated mutation vector into a correlated one. The details of this mechanism can be found in [5] (pp. 68-71) or [180]. The mutation is performed according to

      σ'_i = σ_i · exp(τ' · N(0,1) + τ · N_i(0,1))                (8)
      α'_j = α_j + β · N_j(0,1)                                   (9)
      x' = x + N(0, C(σ', α')),                                   (10)

  where N(0, C(σ', α')) denotes the correlated mutation vector and β ≈ 0.0873.

The amount of information included in the individuals by means of the self-adaptation principle increases from the simple case of one standard deviation up to the order of n² additional parameters in the case of correlated mutations, which reflects an enormous degree of freedom for the internal models of the individuals. This growing degree of freedom often enhances the global search capabilities of the algorithm at the expense of computation time, and it also reflects a shift from the precise adaptation of a few strategy parameters (as in the case of n_σ = 1) to the exploitation of a large diversity of strategy parameters.

One of the main design parameters to be fixed for the practical application of the evolution strategy concerns the choice of n_σ and n_α, i.e., the amount of self-adaptable strategy parameters required for the problem.
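For the case n_σ = n, equations (6) and (7) translate into code along the following lines (a sketch; the proportionality constants for τ' and τ are set to one here, and the helper name is our own):

```python
import math
import random

def mutate(x, sigmas):
    """Self-adaptive mutation with individual step sizes, equations (6)-(7).

    The step sizes are mutated log-normally first; the mutated values are
    then used to perturb the object variables.
    """
    n = len(x)
    tau_prime = 1.0 / math.sqrt(2.0 * n)          # tau' ~ (sqrt(2n))^-1
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))     # tau  ~ (sqrt(2 sqrt(n)))^-1
    shared = random.gauss(0.0, 1.0)               # the N(0,1) sample shared by all i
    new_sigmas = [s * math.exp(tau_prime * shared + tau * random.gauss(0.0, 1.0))
                  for s in sigmas]
    new_x = [xi + si * random.gauss(0.0, 1.0)
             for xi, si in zip(x, new_sigmas)]
    return new_x, new_sigmas
```

The shared N(0,1) sample realizes the global factor exp(τ'·N(0,1)), which scales all step sizes of one individual together, while the per-coordinate samples allow individual scaling.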

2.4 Recombination

In evolution strategies, recombination is incorporated into the main loop of the algorithm as the first variation operator and generates a new intermediate population of λ individuals by λ-fold application to the parent population, creating one individual per application from ρ (1 ≤ ρ ≤ μ) individuals. Normally, ρ = 2 or ρ = μ (so-called global recombination) is chosen (but see also section 4.4 for a generalization). The recombination types for object variables and strategy parameters in evolution strategies often differ from each other; typical examples are discrete recombination (random choices of single variables from the parents, comparable to uniform crossover in genetic algorithms) and intermediary recombination (arithmetic averaging). A typical setting of the recombination consists in using discrete recombination for object variables and global intermediary recombination for strategy parameters. For further details on these operators, see [5].

The recombination operator also needs to be specified for a (μ,λ)-evolution strategy when μ > 1 is chosen.
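The two standard recombination types can be sketched as follows (hypothetical helper names; each parent is given as a plain list of values):

```python
import random

def discrete_recombination(parents):
    """Discrete recombination: each variable is copied from a randomly
    chosen parent (comparable to uniform crossover in genetic algorithms)."""
    return [random.choice(column) for column in zip(*parents)]

def global_intermediary_recombination(parents):
    """Global intermediary recombination: each parameter is the arithmetic
    average over all participating parents."""
    return [sum(column) / len(column) for column in zip(*parents)]
```

In the typical setting described above, the first operator would be applied to the object variables and the second, over all μ parents, to the strategy parameters.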


2.5 Selection

Essentially, the evolution strategy offers two different variants for selecting candidate solutions for the next iteration of the main loop of the algorithm: (μ,λ)-selection and (μ+λ)-selection. The notation (μ,λ) indicates that μ parents create λ > μ offspring by means of recombination and mutation, and the μ best offspring individuals are deterministically selected to replace the parents (in this case, Q = ∅ in algorithm 1). Notice that this mechanism allows the best member of the population at generation t+1 to perform worse than the best individual at generation t, i.e., the method is not elitist, thus allowing the strategy to accept temporary deteriorations that might help to leave the region of attraction of a local optimum and reach a better optimum. Moreover, in combination with the self-adaptation of strategy parameters, (μ,λ)-selection has demonstrated clear advantages over its competitor, the (μ+λ) method.

In contrast, the (μ+λ)-strategy selects the μ survivors from the union of parents and offspring, such that a monotonic course of evolution is guaranteed (Q = P(t) in algorithm 1). For reasons related to the self-adaptation of strategy parameters, the (μ,λ)-evolution strategy is typically preferred.
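Both selection schemes reduce to deterministically keeping the μ best of a pool whose composition depends on Q; a compact sketch (names are our own):

```python
def select(offspring, parents, f, mu, plus=False):
    """Deterministically keep the mu best individuals.

    plus=False realizes (mu, lambda)-selection (Q = {}): survivors come
    from the offspring alone.  plus=True realizes (mu + lambda)-selection
    (Q = P(t)): parents compete with their offspring, guaranteeing a
    monotonic course of evolution.
    """
    pool = offspring + parents if plus else offspring
    return sorted(pool, key=f)[:mu]
```

With plus=False, an inappropriate but lucky parent is forgotten after one generation, which is exactly the property exploited by self-adaptation.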

2.6 Termination Criterion

There are several options for the choice of the termination criterion, including the measurement of some absolute or relative measure of the population diversity (see e.g. [5], pp. 80-81), a predefined number of iterations of the main loop of the algorithm, or a predefined amount of CPU time or real time for the execution of the algorithm.

3 Self-Adaptation

The settings for the learning rates τ, τ', and τ_0 recommended by Schwefel are reasonable heuristic settings (see [202], pp. 167-168), but one should keep in mind that, depending on the particular topological characteristics of the objective function, the optimal setting of these parameters might differ from the values proposed. For n_σ = 1, however, [26] has recently shown theoretically that, for the sphere model

    f(x) = Σ_{i=1}^{n} (x_i − x*_i)²,                             (11)

the setting τ_0 ∝ 1/√n is the optimal choice, maximizing the convergence velocity of the evolution strategy. Moreover, for a (1,λ)-evolution strategy Beyer derived the result that τ_0 ≈ c_{1,λ}/√n (for λ ≥ 10), where c_{1,λ} denotes the progress coefficient of the (1,λ)-strategy.
For an empirical investigation of the self-adaptation mechanism defined by the mutation operator variants (4)-(8), [204, 205, 206] used the following three objective functions, which are specifically tailored to the number of learnable strategy parameters in these cases:

1. Function

       f_1(x) = Σ_{i=1}^{n} x_i²                                  (12)

   requires learning of one common standard deviation σ, i.e., n_σ = 1.

2. Function

       f_2(x) = Σ_{i=1}^{n} i · x_i²                              (13)

   requires learning of a suitable scaling of the variables, i.e., n_σ = n.

3. Function

       f_3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²                    (14)

   requires learning of a positive definite metric, i.e., individual σ_i and n·(n−1)/2 different covariances.
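These three test functions are straightforward to state in code (using the 1-based coefficients of equation (13) for f_2):

```python
def f1(x):
    """Sphere model: one common standard deviation suffices (n_sigma = 1)."""
    return sum(xi * xi for xi in x)

def f2(x):
    """Differently scaled axes: needs individual step sizes (n_sigma = n)."""
    return sum((i + 1) * xi * xi for i, xi in enumerate(x))

def f3(x):
    """Correlated variables: profits from learning the full covariances."""
    return sum(sum(x[: i + 1]) ** 2 for i in range(len(x)))
```

Each function isolates one level of the self-adaptation hierarchy: a single scale, per-axis scales, and a rotated (correlated) metric.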

As a first experiment, Schwefel compared the convergence velocity of a (1,10)- and a (1+10)-evolution strategy with n_σ = 1 on the sphere model f_1 with n = 30. The results of a comparable experiment performed for this study (averaged over ten independent runs, with the standard deviations initialized to a value of 0.3) are shown in figure 1, where the convergence velocity or progress is measured by log(√(f_min(0)/f_min(g))), with f_min(g) denoting the best objective function value in generation g.

Figure 1: Comparison of the convergence velocity of a (1,10)-strategy and a (1+10)-strategy in the case of the sphere model f_1 with n = 30 and n_σ = 1.

It is somewhat counterintuitive to observe that the non-elitist (1,10)-strategy, where all offspring individuals might be worse than the single parent, performs better than the elitist (1+10)-strategy. This can be explained, however, by taking into account that the self-adaptation of standard deviations might generate an individual with a good objective function value but an inappropriate value of σ for the next generation. In the case of a plus-strategy, this inappropriate standard deviation might survive for a number of generations, thus hindering the combined process of search and adaptation. The resulting periods of stagnation can be prevented by allowing the strategy to forget the good search point, together with its inappropriate step size. From this experiment, Schwefel concluded that the non-elitist (μ,λ)-selection mechanism is an important condition for successful self-adaptation of strategy parameters. Recent experimental findings by Gehlhaar and Fogel [56] on more complicated objective functions than the sphere model give some evidence, however, that the elitist strategy performs as well as or even better than the (μ,λ)-strategy in many practical cases.
For a further illustration of the self-adaptation principle in the case of the sphere model f_1, we use a time-varying version where the optimum location x* = (x*_1, …, x*_n) is changed every 150 generations. Ten independent experiments for n = 30 and 1000 generations per experiment are performed with a (15,100)-evolution strategy (without recombination). The average best objective function value (solid curve) and the minimum, average, and maximum standard deviations σ_min, σ_avg, and σ_max are reported in figure 2. The curve of the objective function value clearly illustrates the linear convergence of the algorithm during the first search interval of 150 generations. After shifting the optimum location at generation 150, the search stagnates for a while at the bad new position before the linear convergence is observed again.

Figure 2: Best objective function value and minimum, average, and maximum standard deviation in the population, plotted over the generation number for the time-varying sphere model. The results were obtained by using a (15,100)-evolution strategy with n_σ = 1, n = 30, without recombination.

Figure 3: Convergence velocity on f_2 for a (μ,100)-strategy with μ ∈ {1, …, 30}, for the self-adaptive evolution strategy (dashed curve) and the strategy using optimally prefixed values of the standard deviations σ_i.

Figure 4: Comparison of the convergence velocity of a (15,100)-strategy with correlated mutations (solid curve) and with self-adaptation of standard deviations only (dashed curve) in the case of the function f_3 with n_σ = n = 10, n_α = 45.

The behavior of the standard deviations, which are also plotted in figure 2, clarifies the reason for the periods of stagnation of the objective function values: self-adaptation of standard deviations works both by decreasing them during the periods of linear convergence and by increasing them during the periods of stagnation, back to a magnitude such that they have an impact on the objective function value. This process of standard deviation increase, which occurs at the beginning of each interval, needs some time which does not yield any progress with respect to the objective function value. According to [25], the number of generations needed for this adaptation is inversely proportional to τ_0² (that is, proportional to n) in the case of a (1,λ)-evolution strategy.

In the case of the objective function f_2, each variable x_i is scaled differently by a factor √i, such that self-adaptation requires learning the scaling of n different σ_i. The optimal settings of the standard deviations, σ_i ∝ 1/√i, are also known in advance for this function, such that self-adaptation can be compared to an evolution strategy using optimally adjusted σ_i for mutation. The result of this comparison is shown in figure 3, where the convergence velocity is plotted for (μ,100)-evolution strategies as a function of μ, the number of parents, both for the self-adaptive strategy and for the strategy using the optimal setting of the σ_i.

It is not surprising to see that, for the strategy using optimal standard deviations σ_i, the convergence rate is maximized for μ = 1, because this setting exploits the perfect knowledge in an optimal sense. In the case of the self-adaptive strategy, however, a clear maximum of the progress rate is reached for a value of μ = 12, and both larger and smaller values of μ cause a strong loss of convergence speed. The collective performance of about 12 imperfect parents, achieved by means of self-adaptation, almost equals the performance of the perfect (1,100)-strategy and outperforms the collection of 12 perfect individuals by far. This experiment indicates that self-adaptation is a mechanism that requires the existence of a knowledge diversity (or diversity of internal models), i.e., a number of parents larger than one, and benefits from the phenomenon of collective (rather than individual) intelligence.


Concerning the objective function f_3, figure 4 shows a comparison of the progress for a (15,100)-evolution strategy with n_σ = n = 10, n_α = 0 (that is, no correlated mutations) and with n_α = n·(n−1)/2 = 45 (that is, full correlations). In both cases, intermediary recombination of object variables, global intermediary recombination of standard deviations, and no recombination of the rotation angles is chosen. The results demonstrate that, by introducing the covariances, it is possible to increase the effectiveness of the collective learning process in the case of arbitrarily rotated coordinate systems. Recently, [180] has shown that an approximation of the Hessian matrix could be computed by correlated mutations with an upper bound of μ + λ = (n² + 3n + 4)/2 on the population size, but the typical settings (μ = 15, λ = 100) are often not sufficient to achieve this (an experimental investigation of the scaling behavior of correlated mutations with increasing population sizes and problem dimension has not yet been performed).

The choice of a logarithmic normal distribution for the modification of the standard deviations σ_i in connection with the multiplicative scheme in equations (4), (6), and (8) is motivated by the following heuristic arguments (see [202], p. 168):

1. A multiplicative process preserves positive values.

2. The median should equal one to guarantee that, on average, a multiplication by a certain value occurs with the same probability as a multiplication by the reciprocal value (i.e., the process would be neutral under absence of selection).

3. Small modifications should occur more often than large ones.

The effectiveness of this multiplicative logarithmic normal modification is presently also acknowledged in evolutionary programming, since extensive empirical investigations indicate some advantage of this scheme over the original additive self-adaptation mechanism used in evolutionary programming [185, 184, 186], where

    σ'_i = σ_i · (1 + α · N(0,1))                                 (15)

(with a setting of α ≈ 0.2 [186]). Recent investigations indicate, however, that this finding becomes reversed when noisy objective functions are considered, where the additive mechanism seems to outperform multiplicative modifications [4].

The study by Gehlhaar and Fogel [56] also indicates that the order of the modifications of x_i and σ_i has a strong impact on the effectiveness of self-adaptation: it is important to mutate the standard deviations first and to use the mutated standard deviations for the modification of the object variables. As the authors point out in that study, the reversed mechanism might suffer from generating offspring that have useful object variable vectors but bad strategy parameter vectors, because the latter have not been used to determine the position of the offspring itself.

Concerning the sphere model f_1 and a (1,λ)-strategy, Beyer has recently shown that equation (15) is obtained from equation (6) by a Taylor expansion broken off after the linear term, such that both mutation mechanisms should behave identically for small settings of the learning rates τ_0 and α, when τ_0 = α [25]. This was recently confirmed with some experiments for the time-varying sphere model [15].

4
4.1

Variations
Mutative Step-Size Control

For a (1,)-strategy and n  = 1, the self-adaptation of strategy parameters can also be facilitated
by using the so-called mutational step size control by Rechenberg, which modifies the standard
deviations according to the following rule ([169], p. 47):


=


if u
if u

U (0 1)  1=2
U (0 1) > 1=2

(16)

A value of
= 1:3 of the learning rate is proposed by Rechenberg.
As shown in [25], this self-adaptation rule also provides a reasonable choice with a convergence velocity comparable to that achieved by equation 4 for the convex case. This result
confirms that the self-adaptation principle works for a variety of different probability density
functions for the modification of step sizes, i.e., it is a very robust technique.
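The rule can be sketched in a few lines of Python. The following toy (1,10)-ES on the sphere model is our own illustration (problem size, seed, and generation count are arbitrary choices, not settings from the report):

```python
import random

def two_point(sigma, alpha=1.3):
    # equation (16): sigma' = sigma*alpha or sigma/alpha with probability 1/2 each
    return sigma * alpha if random.random() <= 0.5 else sigma / alpha

def sphere(x):
    return sum(xi * xi for xi in x)

# (1,10)-ES with a single step size (n_sigma = 1) on the sphere model
random.seed(1)
n, lam = 10, 10
x = [random.uniform(-30, 30) for _ in range(n)]
sigma = 25.0
for g in range(300):
    offspring = []
    for _ in range(lam):
        s = two_point(sigma)                      # mutate the step size first
        y = [xi + s * random.gauss(0, 1) for xi in x]
        offspring.append((sphere(y), y, s))
    fbest, x, sigma = min(offspring, key=lambda o: o[0])  # comma selection
print(fbest)
```

The selected offspring carries its (mutated) step size into the next generation, which is exactly the mutative self-adaptation mechanism.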

4.2 Derandomized Step-Size Adaptation

In contrast to the techniques discussed so far, the derandomized mutative step-size control
proposed in [146] accumulates information about the selected individual's mutation vector ~z
over the course of evolution by adding up the successful mutations. The authors claim that the
method enables a reliable adaptation of individual step sizes (i.e., n different standard
deviations σ_i) even in small populations, namely, in (1,λ)-strategies with λ = 10 in the
experiments reported. The proposed method utilizes a vector ~z^g of accumulated mutations as
well as individual step sizes σ_i and a global step size σ according to [146]:

    ~z^g = (1 − c) · ~z^(g−1) + c · ~z ,    ~z^0 = ~0                  (17)

    σ' = σ · ( exp( |~z^g| / (√n · √(c/(2−c))) − 1 + 1/(5n) ) )^β      (18)

    σ'_i = σ_i · ( |z_i^g| / √(c/(2−c)) + 0.35 )^β_ind                 (19)

    x'_i = x_i + σ' · σ'_i · N_i(0, 1)                                 (20)

Essentially, equation (17) captures the history of successful mutations by a weighted sum
of the mutations selected in preceding generations (i.e., ~z^(g−1)) and the mutation vector ~z of
the selected parent individual (notice that the method applies to (1,λ)-strategies, i.e., ~z is the
mutation vector of the single best offspring individual produced in generation g − 1). The
vector ~z^g is then used to update both the global step size σ and the individual step sizes σ_i
according to equations (18) and (19), where |~z^g| in equation (18) denotes the length (norm)
of ~z^g, while |z_i^g| in equation (19) denotes the absolute value of its i-th component.
Equation (20) then describes the generation of offspring individuals from the single parent
(with components x_i) in a way similar to equation (6), but now using σ' and σ'_i. Concerning the
choice of the new learning rates c, β, and β_ind, both theoretical and empirical arguments are
given in [146] for the settings c = 1/√n, β = 1/√n, β_ind = 1/n.
The experimental results presented in [146] demonstrate a clear convergence velocity improvement of the derandomized mutative step-size control when compared to an (8,50)-evolution
strategy using the update rule given in equation (6), but the investigations focus on
unimodal objective functions.
The general idea of utilizing information from past generations as well is very convincing
and should motivate further research on the derandomized self-adaptation scheme. It should be
noted, however, that the method has to be classified at the border between adaptive and
self-adaptive control methods, because equations (18) and (19) do not define a mutative
variation of step sizes involving a random variation in the sense of those defined previously.
Randomness is introduced only by means of the vector ~z^g, which takes the mutation vector of
the parent individual into account, not an actually generated random variation.
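A rough Python sketch of the scheme, assuming our reading of equations (17)–(20); variable names, the test function, and all run parameters are our own choices, and details may deviate from [146]:

```python
import math
import random

def derandomized_es(f, x, lam=10, generations=200):
    """Sketch of a derandomized (1,lam)-ES following equations (17)-(20)."""
    n = len(x)
    c = 1.0 / math.sqrt(n)          # accumulation constant
    beta = 1.0 / math.sqrt(n)       # global learning rate
    beta_ind = 1.0 / n              # individual learning rate
    sigma = 1.0
    sigmas = [1.0] * n
    zacc = [0.0] * n                # accumulated mutation vector z^g
    for _ in range(generations):
        best = None
        for _ in range(lam):
            z = [random.gauss(0, 1) for _ in range(n)]
            y = [x[i] + sigma * sigmas[i] * z[i] for i in range(n)]
            fy = f(y)
            if best is None or fy < best[0]:
                best = (fy, y, z)
        _, x, zsel = best
        # (17): exponentially weighted accumulation of the selected mutation
        zacc = [(1 - c) * zacc[i] + c * zsel[i] for i in range(n)]
        norm = math.sqrt(sum(v * v for v in zacc))
        chi = math.sqrt(c / (2.0 - c))
        # (18): global step size from the length of the accumulated vector
        sigma *= math.exp(norm / (math.sqrt(n) * chi) - 1.0 + 1.0 / (5 * n)) ** beta
        # (19): individual step sizes from the components' magnitudes
        sigmas = [sigmas[i] * (abs(zacc[i]) / chi + 0.35) ** beta_ind
                  for i in range(n)]
    return best[0], x

random.seed(2)
fbest, _ = derandomized_es(lambda v: sum(t * t for t in v),
                           [random.uniform(-5, 5) for _ in range(10)])
print(fbest)
```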

4.3 Hierarchical Evolution Strategies

This kind of evolution strategy abstracts from the individual and applies genetic operators
also on the level of populations. It was introduced by Rechenberg [169] and denoted as a

    [μ'/ρ' +, λ' (μ/ρ +, λ)^γ]-ES.

Here the inner brackets denote a normal (μ/ρ +, λ)-ES (the notation μ/ρ indicates a ρ-ary
recombination operator), which runs λ' times for γ generations each. After that one has λ'
populations, of which μ' populations are selected for the next generation on the population
level. These μ' populations run through a recombination and mutation cycle (μ'/ρ') on the
level of populations to generate λ' new populations, which then run the inner (μ/ρ +, λ)-ES
again for γ generations. This reproduction cycle on the population level is repeated γ' times.
A problem that arises is how to define recombination and mutation on the level of populations.
Recombination of populations can be done by simply taking single individuals from all λ'
populations into the succeeding population. Mutation can then be invoked by mutating each of
the single individuals or by moving the centres of gravity of the populations [169]. The latter,
of course, needs more computational effort.
One can recognize that there are two levels of hierarchy in the approach shown here:
1. the level of individuals, and
2. the level of populations.
The concept, however, can be applied to further levels, and the nesting can increase to
higher levels like species and families in natural evolution [77].
The benefit of these hierarchical or nested evolution strategies is the isolation of populations.
These populations can run in parallel and explore different parts of the search space. Because
this is done several times, it leads to a better exploration of the search space. Rechenberg
indicates that this kind of strategy is well suited for multimodal optimization [169].
This ES can also be used for multicriteria optimization (see also section 5.4), because the
objectives selected for can be different on every level of the hierarchy. This only works with
independent objectives, however, because the objective selected for on the level of populations,
e.g., does not act on the level of individuals; in the case of contradicting objectives, selection
for one objective will destroy any good information regarding the other.
A detailed description of the implementation is given in [169], but one should keep in mind
that this approach again increases the number of parameters of an evolution strategy. This not
only requires more programming effort but also knowledge and experience in tuning the
parameters to achieve good results.
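The two-level loop can be sketched as follows. This is a deliberately crude illustration (fixed step size, and population-level recombination/mutation replaced by simple restart copies of survivors; all parameter values are our own choices, not Rechenberg's):

```python
import random

def sphere(x):
    return sum(t * t for t in x)

def inner_es(pop, gens=10, lam=20, sigma=0.1):
    """A simple (mu, lam)-ES with fixed step size, run for a fixed number of generations."""
    mu = len(pop)
    for _ in range(gens):
        off = []
        for _ in range(lam):
            p = random.choice(pop)
            off.append([v + sigma * random.gauss(0, 1) for v in p])
        off.sort(key=sphere)
        pop = off[:mu]          # comma selection
    return pop

# Skeleton of a [mu' , lam' (mu, lam)^gamma]-ES: lam' isolated populations
# each evolve for gamma generations, then the best mu' populations survive
# and are copied (a crude stand-in for population-level variation).
random.seed(3)
mu_p, lam_p, outer_cycles = 2, 4, 10
pops = [[[random.uniform(-5, 5) for _ in range(5)] for _ in range(3)]
        for _ in range(lam_p)]
for _ in range(outer_cycles):
    pops = [inner_es(p) for p in pops]
    pops.sort(key=lambda p: min(sphere(ind) for ind in p))
    survivors = pops[:mu_p]
    pops = [list(map(list, random.choice(survivors))) for _ in range(lam_p)]
best = min(min(sphere(ind) for ind in p) for p in pops)
print(best)
```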

4.4 The (μ, κ, λ, ρ)-Strategy: Aging of Individuals

In the (μ + λ)-ES the λ offspring and their μ parents are united before, according to a given
criterion, the μ fittest individuals are selected from this set of size μ + λ. Both μ and λ can
in principle be as small as 1 in this case. Indeed, the first experiments were all performed on
the basis of a (1 + 1)-ES. In the (μ, λ)-ES, with λ > μ ≥ 1, the new parents are selected from
the λ offspring only, no matter whether they surpass their parents or not. The latter version is
in danger of divergence (especially in connection with self-adapting variances, see below) if the
best position found so far is not stored externally or even preserved within the generation cycle
(so-called elitist strategy). So far, only empirical results have shown that the comma version is
to be preferred when internal strategy parameters have to be learned on-line collectively. For
that to work, μ > 1 and intermediary recombination of the mutation variances seem to be
essential preconditions. It is not true that ESs consider recombination a subsidiary operator.
The (μ, λ)-ES implies that each parent can have children only once (duration of life: one
generation = one reproduction cycle), whereas in the plus version individuals may live eternally
if no child achieves a better or at least the same quality. The new (μ, κ, λ, ρ)-ES as defined
in [209, 208] introduces a maximal life span of κ ≥ 1 reproduction cycles (iterations). Now,
both original strategies are special cases of the more general strategy, with κ = 1 resembling
the comma-strategy and κ = ∞ resembling the plus-strategy, respectively. Thus, the advantages
and disadvantages of both extremal cases can be scaled arbitrarily. Other new options include:

- A free number of parents involved in reproduction (not only 1, 2, or all).
- Tournament selection as an alternative to the standard (μ, λ)-selection.
- Free probabilities of applying recombination and mutation.
- Further recombination types, including crossover.
In a (μ, κ, λ, ρ)-ES, the representation of individuals is extended by a positive integer value
κ_r ∈ IN_0, the remaining life span of the individual in iterations (reproduction cycles).
Whenever a new individual is created by mutation and recombination, its remaining life span is
initialized to κ_r = κ. The remaining life span is decremented by the selection operator for all
individuals which survive selection.
The remaining life span is then used to modify the traditional deterministic ES selection
operator, which can be defined formally as:

    sel : I^(μ+λ) → I^μ .                                              (21)

Let P^(T) denote some parent population in reproduction cycle T, P̃^(T) their offspring produced
by recombination and mutation, and Q^(T) = P^(T) ∪ P̃^(T) ∈ I^(μ+λ), where the operator ∪
denotes the union operation on multisets. Then

    P^(T+1) := sel(Q^(T)) .                                            (22)

The next reproduction cycle contains the μ best individuals still having a positive remaining
duration of life, i.e., the following relation is valid:

    ∀ ~a ∈ P^(T+1) : κ_a > 0  ∧  ¬∃ ~b ∈ Q^(T) \ P^(T+1) : ~b >_κ ~a   (23)

where the relation >_κ (read: better than) introduces a maximum duration of life κ and defines
an individual to be better than another one if its remaining duration of life κ_k is still positive
and its fitness (measured by the objective function) is better.
The definition of the >_κ relation is given by:

    ~a_k >_κ ~a_ℓ  :⇔  κ_k > 0 ∧ f(~x_k) ≤ f(~x_ℓ) .                   (24)

At the end of the selection process, the remaining life spans have to be decremented by one
for each survivor:

    κ_k^(T+1) := κ_k^(T) − 1    ∀ k ∈ {1, …, μ}                        (25)

It should be noted again that, according to the definition (24) of the better-than relation, a
setting of κ = 1 results in discarding the parents regardless of their quality (i.e., the
(μ,λ)-selection as in traditional evolution strategies), while κ = ∞ guarantees parents to be
discarded only if they are outperformed by offspring individuals (i.e., the (μ+λ)-selection as in
traditional evolution strategies).
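The life-span bookkeeping of equations (23)–(25) can be sketched as follows (a minimal illustration with hand-made individuals; the dictionary representation is our own choice):

```python
def kappa_select(population, mu):
    """(mu, kappa, lambda)-selection sketch: population is a list of dicts
    with 'f' (fitness, minimized) and 'life' (remaining life span).
    Only individuals with positive remaining life are eligible (eqs. 23/24);
    the survivors' life spans are decremented afterwards (eq. 25)."""
    eligible = [ind for ind in population if ind["life"] > 0]
    eligible.sort(key=lambda ind: ind["f"])
    survivors = eligible[:mu]
    for ind in survivors:
        ind["life"] -= 1
    return survivors

kappa = 1  # comma-like behaviour: parents survive a single cycle only
parents = [{"f": 0.1, "life": 0}, {"f": 0.2, "life": 0}]   # already aged out
offspring = [{"f": 5.0, "life": kappa}, {"f": 6.0, "life": kappa}]
new_parents = kappa_select(parents + offspring, mu=2)
print([ind["f"] for ind in new_parents])  # parents are discarded despite better f
```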
As an alternative to this variant of selection, tournament selection is well suited for
parallelization of the selection process. This method selects μ times the best individual from
a random subset B_k of size |B_k| = ξ, 2 ≤ ξ ≤ μ + λ, k ∈ {1, …, μ}, and transfers it to
the next reproduction cycle (note that there may appear duplicates!). The best individual within
each subset B_k is selected according to the >_κ relation which was introduced in (24). A
formal definition of the (μ, κ, λ, ρ)-tournament selection follows: Let

    B_k ⊆ Q^(T)    ∀ k ∈ {1, …, μ}                                    (26)

be random subsets of Q^(T), each of size |B_k| = ξ. For each k ∈ {1, …, μ} choose ~a_k ∈ B_k
such that

    ∀ ~b ∈ B_k : ~a_k >_κ ~b .                                         (27)

Finally,

    P^(T+1) := ∪_{k=1}^{μ} { ~a_k^(T+1) } .                            (28)
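A sketch of the tournament variant; the fallback for subsets in which every individual has expired is our own pragmatic choice and is not specified in the report:

```python
import random

def tournament_select(pool, mu, xi):
    """(mu, kappa, lambda)-tournament selection sketch after eqs. (26)-(28):
    mu times, a random subset of size xi is drawn from the union of parents
    and offspring, and its best individual with positive remaining life is
    copied to the next cycle (duplicates may appear). Individuals are
    (fitness, remaining_life) tuples; minimization is assumed."""
    selected = []
    for _ in range(mu):
        subset = random.sample(pool, xi)
        alive = [ind for ind in subset if ind[1] > 0]
        # pragmatic fallback (our choice): if the whole subset has expired,
        # take the fittest expired individual instead
        best = min(alive or subset, key=lambda ind: ind[0])
        selected.append((best[0], best[1] - 1))
    return selected

random.seed(5)
pool = [(1.0, 0), (2.0, 3), (3.0, 3), (4.0, 3)]   # (1.0, 0) has aged out
chosen = tournament_select(pool, mu=3, xi=2)
print(chosen)
```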
As an extension of the traditional recombination operator, the generalized recombination
operator rec : I^μ → I is defined as follows:

    rec := re ∘ co                                                     (29)

where co : I^μ → I^ρ chooses 1 ≤ ρ ≤ μ parent vectors from I^μ with uniform probability, and
re : I^ρ → I creates one offspring vector by mixing characters from the ρ parents.
Let A ⊆ P^(T) of size |A| = ρ be a subset of arbitrary parents chosen by the operator co,
and let ~a ∈ I be the offspring to be generated. If A = {~a_1, ~a_2} holds, ~a_1 and ~a_2 being
two out of μ parents, recombination is called bisexual. If A = {~a_1, …, ~a_ρ} and ρ > 2,
recombination is called multisexual. While recombination in evolution strategies was originally
proposed for the two cases of ρ = 2 and ρ = μ (global recombination), and was restricted to
ρ = 2 in genetic algorithms, Eiben generalized the idea to an arbitrary number of parents
2 ≤ ρ ≤ μ involved in the creation of either one (e.g., in case of scanning crossover) or ρ
(e.g., in case of diagonal crossover) offspring individuals [39, 41, 40]. This generalization is
adapted here for extending discrete and intermediary recombination in evolution strategies to
an arbitrary number of parents, but still generating only one offspring per application of the
recombination operator. First experimental results in parameter optimization indicate that the
optimum value of ρ is problem-dependent, but in many cases ρ = μ is the most efficient setting
for recombination of the object variables [38].
In contrast to traditional evolution strategies, which always apply recombination for the
creation of offspring, we also propose here to introduce recombination probabilities
~p_r ∈ [0, 1]^3 as a further generalization of the algorithm. A recombination probability p_r_i
for one of the three components of individuals that might undergo recombination is realized
algorithmically by sampling a uniform random variable u ~ U(0, 1) and applying no
recombination if u > p_r_i, or the corresponding recombination operator if u ≤ p_r_i.
Finally, an offspring individual created by recombination is equipped with a remaining life
span κ_r = κ.
5 Application-Oriented Extensions

5.1 Noisy Objective Functions

Originally designed for experimental optimization [166, 203], evolution strategies are claimed
to be of general applicability as well as robust in the presence of noise. Whereas the
universality of these algorithms has been validated through numerous applications [13], little is
known about their robustness in case of perturbations. But the ability to deal with noisy
functions is a prerequisite not only for experimental optimization, e.g. because of the limited
precision of observations, but also in the context of numerical optimization, as in the field of
computer simulation.
Despite their simple structure, evolution strategies show a complex dynamic behavior.
Theoretical investigations have up to now been successful only for simplified strategy variants
and convex objective functions like the sphere model f_1(~x) = Σ_{i=1}^{n} x_i².
Here we cite a result from Beyer [24], which describes the dynamics of the (1,λ)-ES on the
noisy objective function f_1(~x) + N(0, σ_ε):

    ΔR/Δg = σ² · ( n/(2R) − 2R·c_{1,λ} / √(σ_ε² + (2Rσ)²) )           (30)
Here R and g denote the remaining distance to the true optimum point (~0) and the current
generation number, respectively. The standard deviations of the mutation and the perturbation
are given by σ and σ_ε. The model is of dimensionality n, and c_{1,λ} denotes the so-called
progress coefficient, which is a slowly increasing function of λ [24]:

    c_{1,λ} ≈ √(2 ln λ)                                                (31)

Expressions (30) and (31) hold for large n and λ, respectively.
Table 1 lists some values of c_{1,λ}, which are analytically derived for λ ≤ 5 and numerically
approximated for λ > 5, from Scheel [187].

    λ        2        3        5        10       50       100
    c_{1,λ}  0.5642   0.8463   1.1630   1.539    2.249    2.508

    Table 1: Some values for c_{1,λ}.
We will make use of equation (30) to investigate the steady state, i.e., R_∞ := lim_{g→∞} R.
Assuming lim_{g→∞} ΔR/Δg = 0, we get

    R_∞ = (1/2) · √(σ_ε · n / c_{1,λ})                                 (32)

and

    f_1(R_∞) = σ_ε · n / (4 · c_{1,λ}) .                               (33)
Equation (33) can be used to validate experimental results for the sphere model.
For the experiments, standard deviations σ_ε ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0} are
utilized to perturb the function values, and the evolution strategy's behavior is compared to the
unperturbed case (σ_ε = 0). The experiments are performed by running a (1,100)-ES as well as
a (15,100)-ES with n_σ = 1 for the convergence velocity test. Each experiment is repeated for a
total of N = 100 independent runs in order to obtain statistically significant results. In contrast
to the standard method, which assesses the quality of an optimization run by concentrating on
the individual of best (in our case, minimal) objective function value, this is not reasonable in
case of perturbed evaluations because the population's extreme values represent outliers.
Instead, the evaluations are based on the average objective function value of the offspring
population, which provides a more robust measure of the true (unperturbed) quality of the
individuals.
The experiments are performed on the sphere model f_1 with n = 30. The initial population
consists of object variables chosen uniformly at random from the interval [−30, 30]. All initial
standard deviations are set to a value of 25.0, and n_σ = 1 is used for all runs. Each of the
N = 100 runs is terminated after 20,000 function evaluations (200 generations), and the
objective function data of all runs is averaged to obtain a result of statistical significance.
(Indeed, the data from 100 runs passes a Kolmogorov-Smirnov test for the hypothesis of
normally distributed data for a significance level of 0.01 and a confidence interval of 1%
around the average.)
Figure 5 shows the behavior of a (1,100)-ES for the set of different perturbation magnitudes
as well as the unperturbed case. The average objective function value is plotted against the
number of generations.
Figure 5: Courses of evolution for the (1,100)-ES on the sphere model and standard deviations
σ_ε ∈ {0, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0} for the perturbation.

The courses of evolution clearly demonstrate the capability of an evolution strategy to proceed
as fast as in the unperturbed case as long as the magnitude of σ_ε is small in comparison to f.
If f decreases beyond a certain level, the selection is based on the perturbation only and the
search process becomes a random walk, thus limiting the convergence precision.
Table 2 shows a remarkable accordance between theoretical and experimental results when
comparing the (1,100)-ES steady states. The difference of a factor of approximately 1.3 can be
explained by the fact that equation (33) is valid for n → ∞ and σ → 0 only.
Increasing the parent population size to a more practical value of μ = 15, we observe a similar
behavior (figure 6). A closer look shows not only a moderate speed-up due to the influence
of recombination, but also a much better localisation of the optimum point in the steady state,
by approximately a factor of 4. This effect is caused by the reduction of selection pressure,
which prevents the outliers from taking over the whole population. A first analysis suggests
an optimal parameter value for μ between 10 and 15 in this configuration.
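Equation (33), together with the progress coefficients of table 1, can be evaluated directly; the small script below (our own illustration) reproduces the theory column of table 2 for n = 30 and λ = 100:

```python
# Progress coefficients c_{1,lam} from table 1
C1 = {2: 0.5642, 3: 0.8463, 5: 1.1630, 10: 1.539, 50: 2.249, 100: 2.508}

def steady_state_f(eps, n, lam):
    # equation (33): f1(R_inf) = sigma_eps * n / (4 * c_{1,lam})
    return eps * n / (4.0 * C1[lam])

for eps in (1.0, 0.5, 0.1):
    print(eps, round(steady_state_f(eps, 30, 100), 3))
```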

5.2 Robust Design

Robustness is an important requirement for almost all kinds of products, i.e., they should keep
a good performance under varying conditions (e.g., temperature or humidity). Furthermore, the
impact of wear, as well as of manufacturing tolerances, should be limited as much as possible.
Consequently, the production process itself as well as the environmental influences after the
product is put to use have to be regarded during the product design. We have shown for
multilayer optical coatings (MOCs) how robust designs can be achieved by using evolutionary
algorithms. MOCs

    σ_ε     (1,100)-ES theory   (1,100)-ES observation   (15,100)-ES observation
    1.0     2.990               3.8                      0.975
    0.5     1.495               1.9                      0.469
    0.1     0.299               0.4                      0.091
    0.05    0.150               0.2                      0.047
    0.01    0.030               0.038                    0.009
    0.005   0.015               0.02                     0.005
    0.001   0.003               0.004                    0.001

    Table 2: f_1(R_∞) for the (1,100)-ES theory, the (1,100)-ES experiment, and the
    (15,100)-ES experiment.

are used to guarantee specific transmission and/or reflection characteristics of optical devices.
The objective of MOC design is to find sequences of layers of particular materials with specific
thicknesses showing the desired characteristics as closely as possible. The MOC design
problem is not analytically solvable.
Let ~x = (x_1, …, x_n) be a vector of parameters of a given design problem, e.g., the refraction
indices and thicknesses of the optical layers. Given a function f(~x) describing the merit of a
design feature, e.g., the color perception of the reflected light, and a target value q for f(~x),
then, if disturbances are neglected, the task is to find an ~x such that the difference between
f(~x) and q is minimized.
On the other hand, the usability of two products, although manufactured under almost identical
conditions, might differ significantly due to external conditions such as temperature and
humidity, or internal factors such as wear, as well as manufacturing tolerances. Some of these
factors are not controllable at all. Others can only be reduced with unjustifiable effort. Thus
they are regarded as disturbances, and it is desired to reduce their influence as much as
possible. Here we focus on manufacturing tolerances, but the approach could easily be
extended.
The disturbances are represented by a vector of random numbers ~δ = (δ_1, …, δ_n). If the
probability distributions of the δ_i are known, as well as their influence on f, we might rewrite
f(~x) as f̃(~x, ~δ). In our example the disturbances are assumed to be normally distributed with
zero mean and to have an additive influence on the parameter values. Thus, we define

    f̃(~x, ~δ) = f(x_1 + δ_1, …, x_n + δ_n) .                           (34)

The task is now to minimize the deviations |f̃(~x, ~δ) − q|.

This leads to the question of how to assess these deviations. The traditional approach regards
all products with |f̃(~x, ~δ) − q| ≤ ε as equally good, for some predefined ε, and all others as
rejects. But this approach is somewhat unrealistic: if such products are assembled into larger
units, such as devices on electronic boards, malfunctions might occur due to aggregations of
the deviations of single elements.
The method of parameter design after Taguchi [218, 93, 179] takes these effects into account
by considering every deviation from the target as a loss. In practical applications, quadratic
Figure 6: Courses of evolution for the (15,100)-ES on the sphere model and standard deviations
σ_ε ∈ {0, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0} for the perturbation.

loss functions of the form

    (f̃(~x, ~δ) − q)²                                                   (35)

have proven to be well suited if no better alternative is known. The expected loss then becomes

    L = k · E((f̃(~x, ~δ) − q)²) ,                                      (36)

where k is some constant and E denotes the expectation value of the quadratic deviation.
In our work we follow the approach of Greiner [61, 62], who defines the objective function as

    E_{(q−f̃)²}(~x) = k ∫ (q − f̃(~x, ~δ))² P(~δ) d~δ ,                  (37)

where P(~δ) denotes the joint probability distribution of the disturbances. Since in most
applications the expectation value E cannot be calculated analytically, it must be approximated.
Here we use

    (1/t) · Σ_{i=1}^{t} (q − f̃(~x, ~δ_i))²                             (38)

as an estimate, where ~δ_i, i = 1, …, t, are vectors of normally distributed random numbers with
mean zero and standard deviation σ_δ. The estimation error scales proportional to 1/√t, and
since in most applications the possible number of evaluations is very limited, this approach
yields a stochastic optimization problem. As evolutionary algorithms have proven their
robustness in case of noisy objective functions [46, 24, 9, 64], they are promising candidates
here.
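The estimator of equation (38) is straightforward to sketch in Python. The test function, target, sample count, and disturbance strength below are illustrative choices of ours; the example merely shows that a design on a flat region of f incurs a much smaller expected loss than an equally accurate design on a steep slope:

```python
import random

def expected_loss(f, x, target, t=100, sd=0.05):
    """Monte-Carlo estimate of the expected quadratic loss, equation (38):
    average squared deviation of f from the target over t normally
    perturbed copies of the design vector x."""
    total = 0.0
    for _ in range(t):
        delta = [random.gauss(0.0, sd) for _ in x]
        total += (target - f([xi + di for xi, di in zip(x, delta)])) ** 2
    return total / t

random.seed(4)
f = lambda v: v[0] ** 2
flat = expected_loss(f, [0.0], target=0.0)      # f'(0) = 0: robust design
steep = expected_loss(f, [10.0], target=100.0)  # f'(10) = 20: sensitive design
print(flat, steep)
```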
In order to clarify the relationship between the original merit function f and the expected loss
L, we investigated a rectangular function. We could show that optimal points of L do not
necessarily correspond to optimal points of |f(~x) − q|. As already mentioned, we considered as
a practical example the design of multilayer optical coatings, most frequently used for optical
filters. During the production process the layer thickness cannot be controlled with arbitrary
precision. Additionally, the refraction indices vary slightly due to pollution of the optical
materials. Thus, we might observe significant variances in the quality of single filters.
Basically, we applied two modified evolution strategies (ES): an extended (25 + 50)-ES for
mixed-integer optimization after [14], and a parallel diffusion model after [199], where the
individuals are located on a regular grid. We used 15 subpopulations with a size of 20x25, a
neighborhood size of 7x7, and an isolation time of 30 generations. The MOC designs found by
the evolutionary algorithms are substantially more robust to parameter variations than a
reference design and therefore perform much better in the average case, although for the
undisturbed case the reference design is significantly better. This observation was expected,
since sensitivity analysis shows that many local optima are not robust under parameter
variations. For more details see [230].

5.3 Dynamic Environments

The principle of self-adaptation promises to be useful not only in case of static optimization
problems, but also for dynamic optimization problems where the objective function changes
over the course of optimization. The dynamic environment requires the evolutionary algorithm
to maintain sufficient diversity for continuous adaptation to the changes of the landscape, which
should be possible by means of self-adaptation of strategy parameters. Recently, it was demonstrated that indeed the self-adaptation principle in evolution strategies provides an effective way
of tracking moving optima in case of dynamic objective functions [6].
In the general case of a dynamic environment, the goal is not only to acquire an optimal
solution but also to track its progression through the search space as closely as possible. In
contrast to the static optimization problem f(~x) → min, ~x ∈ M, the dynamic optimization
problem

    f(~x, t) → min ,  ~x ∈ M ,  t ∈ T

depends on an additional parameter t ∈ T (the time) as well, i.e., the objective function changes
with t. Generally, this implies that, for t_i ≠ t_j, f(~x, t_i) ≠ f(~x, t_j), i.e., the objective
function might be different after each function evaluation, in contrast to a simplified form of
dynamic behavior where the objective function remains constant within specific time intervals
[t_k, t_k + Δt_k], such that

    ∀ t_i, t_j ∈ [t_k, t_k + Δt_k] : f(~x, t_i) = f(~x, t_j) .

For the investigations reported in [6], it was assumed that the dynamics of the objective
function and the dynamics of the evolutionary algorithm are synchronized by identifying t with
the generation index of the algorithm and by keeping f constant within one generation, such
that Δt_k ≥ 1 and t_i, t_j, t_k ∈ {0, 1, 2, …, t_max}. Moreover, Δt_k =: Δg is also assumed to
be constant, such that the objective function changes every Δg generations after completing the
evaluation of the whole population in case of a generational evolutionary algorithm such as the
evolution strategy.

Figure 7: Evolution strategy results for the linear dynamics with update frequency Δg = 1
(left), Δg = 5 (middle), Δg = 10 (right).

Three dynamical environments derived from the sphere model

    f(~x) = Σ_{i=1}^{n} x_i²                                           (39)

are used for the experiments. The dynamical environments are generated by translating the base
function along a linear trajectory according to

    f(~x, t) = Σ_{i=1}^{n} (x_i + δ_i(t))²                             (40)

where t ∈ IN_0 denotes the time counter (equivalent to the generation number in an evolutionary
algorithm).
The trajectory is defined by setting δ_i(0) = 0 ∀ i ∈ {1, …, n}, and

    δ_i(t+1) = δ_i(t) + s    if (t + 1) mod Δg = 0
    δ_i(t+1) = δ_i(t)        else                                      (41)

The algorithm used here is a standard (15,100)-evolution strategy with local discrete
recombination on the object variables x_i and global intermediary recombination on the strategy
parameters σ_i. 100 offspring individuals are generated per generation, n_σ = n variances are
used for self-adaptation (although it is well known that one variance is optimal for the sphere
model), all object variables are uniformly initialized within the range [−50, 50], and 50
independent runs are performed over 500 generations each. The experiments for the linear
dynamics, with update frequencies Δg ∈ {1, 5, 10} and severities s ∈ {0.01, 0.1, 0.5}, are
shown in figure 7.
In this figure, the left, middle, and right subfigures correspond to an update frequency of
1, 5, and 10 generations, respectively, and each subfigure contains the three curves for the
different levels of the severity parameter.
All results reported here give a clear impression that the self-adaptation of variances as
utilized in a (μ,λ)-evolution strategy is an effective method for tracking dynamic environments. In

all cases, the optimization proceeds with a linear rate of convergence, as predicted by the
theory of evolution strategy behavior on the sphere model, until the objective function value
reaches an order of magnitude corresponding to the squared value of the severity parameter s.
With an update frequency of Δg = 1, the algorithm constantly follows the dynamic
environment without any deteriorations. With larger update frequencies Δg ∈ {5, 10}, the
objective function values oscillate with a period of Δg generations between the objective
function value achieved by a continuous update at every generation (left figure) and the further
improvement that can be achieved by holding the environment constant for Δg generations.
This results in a larger amplitude of the oscillation as Δg increases.
The direct conclusion from the three sets of experiments reported here is that the lognormal
self-adaptation rule as used in (μ,λ)-evolution strategies is perfectly able to track the dynamic
optima.
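The translation dynamics of equations (40) and (41) can be sketched directly (dimension, Δg, and s below are arbitrary illustration values):

```python
def delta_update(delta, t, dg, s):
    """Trajectory update of equation (41): every dg time steps each
    coordinate of the translation vector advances by the severity s."""
    if (t + 1) % dg == 0:
        return [d + s for d in delta]
    return list(delta)

def moving_sphere(x, delta):
    # equation (40): sphere model translated by delta(t)
    return sum((xi + di) ** 2 for xi, di in zip(x, delta))

# After 10 time steps with dg = 5 and s = 0.1, the optimum has moved twice:
delta = [0.0, 0.0]
for t in range(10):
    delta = delta_update(delta, t, dg=5, s=0.1)
print(delta)
```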

5.4 Multiple Criteria Decision Making

It has become increasingly obvious that optimization under a single scalar-valued criterion,
often a monetary one, fails to reflect the variety of aspects in a world getting more and more
complex. Often, there are several conflicting optimization criteria (e.g., costs vs. reliability),
such that the objective function is characterized best by a multiple-criteria approach with k > 1
objectives, i.e.:

    ~f : M → IR^k ,  ~f(~x) = (f_1(~x), …, f_k(~x))                    (42)

Under such circumstances, the goal of the search is to identify solutions which cannot be
improved in any combination of the objectives without degradation in the remaining ones, i.e.,
a solution ~x_i is called Pareto-optimal (nondominated) if:

    ¬∃ ~x_j : ~f(~x_j) <_P ~f(~x_i) ,                                  (43)

where

    ~f(~x_j) <_P ~f(~x_i)  :⇔  ∀ p ∈ {1, …, k} : f_p(~x_j) ≤ f_p(~x_i)
                               ∧ ∃ p ∈ {1, …, k} : f_p(~x_j) < f_p(~x_i)
(assuming minimization). The set of all Pareto-optimal solutions is called the Pareto-set.
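The dominance relation <_P and the resulting nondominated filtering can be sketched as follows (the sample points are our own illustration):

```python
def dominates(fa, fb):
    """fa <_P fb: fa is no worse in every objective and strictly better
    in at least one (minimization), as in the relation above."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def pareto_front(points):
    # keep the nondominated points (a finite approximation of the Pareto set)
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 3)]
print(pareto_front(pts))   # (3, 4) is dominated by (2, 3)
```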
A variety of approaches towards handling multiple criteria decision making problems with
evolutionary algorithms have been proposed in the literature; see [48, 87] for overviews. These
can be grouped into the following three categories:

- Plain aggregating approaches, based on combining the objectives into a single objective
  function.
- Population-based non-Pareto approaches, characterized by the fact that different objectives
  affect the selection of different parts of the population in turn, and separate rankings of
  the population are performed according to each objective.
- Pareto-based approaches, using a population ranking according to Pareto dominance.
While all of these approaches can be used in combination with an evolution strategy, we
focus here on a study which falls into the second of the above mentioned categories and
utilizes the concept of polyploidy to deal with different objectives. More precisely, the
following modifications to a (μ,λ)-evolution strategy are made [112, 113]:

- Since the environment now consists of k objectives, the selection step is provided with a
  fixed, user-definable vector that determines the probability of each objective to become
  the sorting criterion in the k iterations of the selection loop. Alternatively, this vector
  may be allowed to change randomly over time.
- Furthermore, the extension of an individual's genes by recessive information turned out
  to be necessary in order to maintain the population's capability of coping with a changing
  environment. The recessive genes enable a fast reaction after a sudden variation of
  the probability vector. One can also observe this behaviour in nature: the younger the
  environment, the higher the portion of polyploid organisms.

Using these principles, the algorithm is able to generate solutions covering the Pareto front,
such that the user is provided with an idea of the tradeoffs between the objectives. It should be
noted that efficient solutions in one generation may become dominated by individuals emerging
in a later generation. This explains the nonefficient points in figure 8 (left) for the two
objectives

    f_1(~x) = Σ_{i=1}^{n−1} ( −10 · exp(−0.2 · √(x_i² + x_{i+1}²)) )   (44)

    f_2(~x) = Σ_{i=1}^{n} ( |x_i|^0.8 + 5 · sin(x_i)³ ) .              (45)

For efficiency reasons the parents of the next generation are stored provisionally in an array
that is cleaned out if there is not enough space left for further individuals. If this operation does
not result in enough free space solutions close to one another are deleted. As an important
side effect the elements of the Pareto set are forced apart thus allowing a good survey with only
a finite number of solutions. Figure 8 (right) displays the situation after tidying up.
When working with diploid individuals, the inclusion of the recessive genes in the selection step turns out to be vital. Otherwise, undisturbed by the outside world, they lead such a life of their own that an individual whose dominant genes have been freshened up with recessive material has no chance of surviving the next selection step. The best results were achieved with a probability of about 1/3 for exchanging dominant and recessive genes. This value also serves as a factor when putting together the overall fitness vector. Only in this way can the additional recessive material serve as a stock of variants. From further test runs one can also conclude that diploid or, in general, polyploid individuals are not worth the additional computing time in a static environment consisting of only one objective function.
Since the algorithm tries to cover the Pareto set as well as possible, a probability distribution forcing certain minimum changes during the mutation step ought to yield better results. Indeed, the (symmetric) Weibull distribution turned out to be better than the Gaussian distribution.
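A symmetric Weibull mutation can be sketched by attaching a random sign to a positive Weibull variate; the scale and shape values below are illustrative assumptions, not those of the cited study:

```python
import random

def weibull_mutation(x, scale=1.0, shape=2.0, rng=random):
    # for shape > 1 the Weibull density vanishes at zero, so very small
    # steps are rare -- a certain minimum change is thereby encouraged
    step = rng.weibullvariate(scale, shape)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return x + sign * step
```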

Figure 8: Graphical visualization of the output of the algorithm.


The stochastic approach towards vector optimization problems via evolution strategies has one major advantage: in contrast to other methods, no subjective decisions are required during the course of the iterations. Instead of narrowing the control variable space or the objective space by deciding about the future direction of the search from an information vacuum, the decision maker can collect as much information as needed before making a choice which of the alternatives should be realized. Moreover, using a population while looking for a set of efficient solutions seems to be more appropriate than just trying to improve one current best solution.
One might exploit the algorithm's capability of self-adapting its parameters even further: the exchange rate between dominant and recessive genetic material can be adjusted online, thus providing the user with a measure of convergence. The self-adaptation property largely depends on a selection scheme that forces the algorithm to forget the good solutions (parents) of one generation. When accepting a possible recession from one generation to the next on the phenotype level, individuals with a better model of their environment, i.e. better step sizes σ_i, are likely to emerge in later generations. This kind of selection seems to be lavish at first sight, but it favours better adapted settings, thus speeding up the search in the long run.

5.5 Constraint Handling

In practical application problems, the feasible region usually is only a subspace of the whole search space, and it is defined by a set of m additional constraints:

    g_j(x) ≥ 0    for j = 1, ..., q           (46)
    h_j(x) = 0    for j = q+1, ..., m .       (47)

During the optimum seeking process of ESs, inequality constraints have so far been handled as barriers, i.e., offspring that violate at least one of the restrictions are lethal mutations. Before the selection operator can be activated, exactly λ non-lethal offspring must have been generated.

In case of a non-feasible start position x(0), a feasible solution must be found first. This can be achieved by means of an auxiliary objective function

    f~(x) = - sum_{j=1}^{m} g_j(x) δ_j(x)          (48)

with δ_j(x) = 1 if g_j(x) < 0 and zero otherwise. Each decrease of the value of f~(x) represents an approach to the feasible region. As soon as f~(x) = 0 can be stated, x satisfies all the constraints and can serve as a starting vector for the optimization proper.
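A minimal sketch of the auxiliary objective (48); representing the constraints as a list of callables g_j, with feasibility meaning g_j(x) ≥ 0, is an assumption made here for illustration:

```python
def aux_objective(x, inequality_constraints):
    # f~(x) = -sum of the violated g_j(x) (those with g_j(x) < 0);
    # the value is non-negative and shrinks to 0 as x approaches feasibility
    return -sum(g(x) for g in inequality_constraints if g(x) < 0)

def is_feasible(x, inequality_constraints):
    return aux_objective(x, inequality_constraints) == 0
```

Minimizing `aux_objective` with an ordinary ES run yields a feasible starting vector as soon as the value 0 is reached.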
Of course, this simple penalty function approach, as well as the repetition of offspring creation until they satisfy all constraints, are in many cases insufficient methods for constraint handling, but advanced techniques such as those discussed by Michalewicz and Schoenauer [126] can easily be incorporated into evolution strategies (see also chapter C5 of the Handbook of Evolutionary Computation for a detailed discussion of constraint handling methods [8]).
Constraints in form of simple bounds

1. x_i ≥ a_i,
2. x_i ≤ b_i, or
3. a_i ≤ x_i ≤ b_i

can be handled in a much simpler way, i.e., by transformation. One replaces variable x_i by a new one, y_i, to be changed by the optimization procedure, and replaces x_i in the objective function by

1. x_i = a_i + y_i^2 in case 1 of the above list,
2. x_i = b_i - y_i^2 in case 2,
3. x_i = a_i + (b_i - a_i) sin^2(y_i) or x_i = a_i + (b_i - a_i) / (1 + exp(-c (d - y_i))) in case 3, with arbitrary constants c > 0 and d.

This way of handling bounds can be used with all optimum seeking methods, provided that they are started within the feasible region. Some may have trouble with the sine term due to the periodicity introduced, however.
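The transformations for case 3 can be written down directly; c and d are the arbitrary constants mentioned above, with default values chosen here purely for illustration:

```python
import math

def to_interval_sin(y, a, b):
    # case 3: x = a + (b - a) * sin^2(y) -- periodic, always inside [a, b]
    return a + (b - a) * math.sin(y) ** 2

def to_interval_sigmoid(y, a, b, c=1.0, d=0.0):
    # case 3 alternative: x = a + (b - a) / (1 + exp(-c (d - y)))
    # monotone in y and strictly inside (a, b), avoiding the periodicity
    return a + (b - a) / (1.0 + math.exp(-c * (d - y)))
```

The optimizer then varies y freely while the objective function only ever sees admissible values of x.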

6 Parallel Evolution Strategies

Due to the fact that all individuals of a population act simultaneously in nature, one can speak of an inherent parallelism in evolution. Although this was already known when the principles of evolutionary algorithms were designed, no one could at that time imagine the power of the parallel computers which are available now. Consequently, evolutionary algorithms have usually been implemented sequentially.
Nowadays we are used to parallel computers, and so in recent years many suggestions to parallelize evolutionary algorithms have been made. The goals of parallelism are simple:
• Speed: get the same results as a sequential algorithm in less time.
• Robustness: get results that are more robust with respect to errors or noisy information.
• Quality: get better results in the same time as a sequential algorithm.

There are at least two different approaches to parallel evolutionary algorithms [57, 5], which are described here together with a mixed-model approach that tries to combine the best of both models. Before that, a very simple but effective way to use parallel hardware is presented which does not fit into the models presented afterwards.

6.1 The Master-Slave Approach

This approach is very effective if the calculation of the fitness function is time-intensive, e.g. when optimizing simulation models whose simulation software runs for a long time, as in [7].
In this case the evolutionary algorithm can be divided into a master process, where the individuals are generated and the genetic operators are applied, and a number of slave processes, where the fitness function is evaluated.
The different processes can then run on different machines, and the fitness calculation for a whole population can be done in parallel. A special kind of steady-state selection [228] with a (μ+1) selection scheme was presented in [7, 11, 97] which nearly avoids any idle time on the processors: every time a fitness value has been calculated, a new individual is sent to the idle processor without waiting for any other results from the slaves.
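The master-slave division can be sketched with a standard worker pool. The sphere model standing in for a long-running simulation, the pool size, and the population setup are illustrative assumptions; in the cited setting the workers would be slave processes on different machines (e.g. via PVM as in [7]):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(x):
    # slave side: a time-intensive objective evaluation
    # (the sphere model stands in for a long-running simulation)
    return sum(xi * xi for xi in x)

def evaluate_population(population, workers=4):
    # master side: the master generates and varies the individuals and
    # only farms the fitness evaluations out to the pool of workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

rng = random.Random(1)
population = [[rng.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
fitness_values = evaluate_population(population)
```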

6.2 Coarse-Grained Parallelism: The Migration Model

In the migration model a population is divided into a number of subpopulations, so-called demes [5]. These subpopulations are still panmictic but exchange genetic information by the migration of individuals. Two concepts are known [57, 215, 5]:

1. In the Island Model there is a random exchange of information between the subpopulations, and
2. in the Stepping Stone Model this exchange is limited to migration paths which connect the subpopulations, which are placed in a topology (e.g. a ring, a torus, etc.).

These algorithms can be scaled to a balanced usage of processing and communication resources by tuning the local population size and the migration frequencies.
Different ways of choosing the individuals that leave the local population are known. Choosing one randomly seems to be a good compromise between the danger of premature stagnation when choosing the best individual and the small chance of surviving in the new subpopulation when choosing the worst one.
Another problem is how to insert immigrants into the new population. A solution which can cause further problems is to simply add them to the population. Another way is to keep them alive for some number of generations, which gives the immigrants a chance to establish themselves in the new population.
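The stepping-stone exchange on a ring of demes can be sketched as follows. Choosing the emigrant at random follows the compromise described above, while the ring topology and the copy semantics are illustrative choices:

```python
import random

def migrate_ring(demes, rng):
    # each deme sends a copy of one randomly chosen individual to its
    # right-hand neighbour on the ring (stepping-stone migration)
    emigrants = [rng.choice(deme) for deme in demes]
    for i, emigrant in enumerate(emigrants):
        demes[(i + 1) % len(demes)].append(emigrant)
    return demes
```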
Experimental results show that short isolation times for the subpopulations and a high connectivity between them lead to a better exploitation of a part of the search space, tuned for speed and accuracy. Long isolation times and low connectivity lead to the exploration of the search space, tuned for global optimisation.
The coarse grained parallel implementation needs only minor changes to the original algorithm, while the model presented in the next section requires more modifications.

6.3 Fine-Grained Parallelism: The Diffusion Model

In contrast to the migration model, where subpopulations are normally bound to one processor, in the diffusion or Neighborhood Model there is normally only a single individual bound to one processor. These processors are placed in a topology, again a ring, a torus or a grid etc., like in the Stepping Stone Model, and there is a neighborhood defined for every processor.
Let a ring topology consist of processors a_j (j = 1, ..., n), for example. A neighborhood for processor a_i can be defined by

    NB_p(i) := { a_(i-p), ..., a_(i+p) }

where the indices are calculated modulo the population size n. On a grid several other neighborhoods like the Moore or the von Neumann neighborhood can be defined [181].
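For a ring of n processors the neighborhood NB_p(i) reduces to simple index arithmetic; the helper below is only a sketch of that definition:

```python
def ring_neighborhood(i, p, n):
    # NB_p(i) = { a_(i-p), ..., a_(i+p) }, indices taken modulo n
    return [(i + k) % n for k in range(-p, p + 1)]
```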
Instead of the migration of individuals there is a local selection of parents defined on the neighborhood to exchange genetic information. The successor of an individual is then generated by recombination and mutation of the selected parents and replaces the individual previously bound to the processor. The genetic operators are applied as if the neighborhood were the whole population.
Different local selection strategies, e.g. local (μ,λ)-selection, local (μ+λ)-selection, and local mating selection, are described by Sprave [215]. One can think of further strategies, e.g. local tournament or local proportional selection [181].
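A single diffusion-model update step with a simple local selection can then be sketched as follows; the best-of-neighborhood parent choice and the minimization convention are illustrative assumptions, not one of the specific schemes of [215]:

```python
def local_step(individuals, i, p, mutate, objective):
    # update for 'processor' i: pick the best individual within NB_p(i)
    # as parent, create an offspring by mutation, and let it replace the
    # individual previously bound to processor i (minimization)
    n = len(individuals)
    neighborhood = [(i + k) % n for k in range(-p, p + 1)]
    parent = min((individuals[j] for j in neighborhood), key=objective)
    individuals[i] = mutate(parent)
    return individuals
```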
Experiments confirm the expected result that local selection causes clustering of individuals heading for the same local optimum. Large neighborhoods lead to the exploitation of parts of the search space, whereas small neighborhoods lead to better exploration of the search space and thus better global optimization properties.
In comparison to the migration model, the parallelism in this model is not scalable, and the local selection differs completely from the traditional one used in evolutionary algorithms.

6.4 A Hybrid Approach

In the hybrid approach presented in [199], a migration model as well as a neighborhood model have been implemented. The subpopulations of the migration model are placed in a ring topology, and after the isolation time certain individuals are exchanged between neighbouring populations. Immigrants remain unchanged in a population until new ones arrive, to give them a chance to establish themselves.
The neighborhood model uses a torus as the population structure, and the neighborhood itself is defined via the maximum norm. Slices of the torus are mapped to processors so that the balance between communication and computation can be biased and the isolation mechanism is introduced.
This results in a mixed-model approach: a migration model with isolation and a huge neighborhood size on the slices of the torus, and no isolation time and a small neighborhood size in the neighborhood model on the torus itself.
This approach shows the following advantages:

• The best of both models is combined into one model, and
• a scalable parallelism is achieved by tuning the isolation times.

On the other hand there are a lot of external parameters, e.g. the local population size, the exchange rates, the isolation time, etc., that have to be set. This requires knowledge of and experience in handling evolutionary algorithms from the user.

7 Applications

The excellent global optimization properties of evolution strategies are demonstrated in the studies by Schwefel by using a test suite of more than 50 objective functions [202, 207]. These are artificial test functions, however, such that one might be tempted to ask for practical applications of the evolution strategy.
In this section, relevant practical applications as reported in the literature are summarized to provide the reader with an overview of the capabilities of these powerful algorithms. These application case studies cover a wide range of disciplines, including artificial intelligence, biotechnology, technical design, chemical engineering, telecommunications, medicine, microelectronics, military, physics, robotics, production planning, and others.
The most recent applications reported here are from the mid-1990s, and the earliest ones go back to the early 1960s. Due to the enormous number of applications that have been reported after about 1994, it was impossible to keep track of the more recent developments in the field, and therefore the collection process was stopped in 1994. The bibliography by Alander is the most complete overview of papers published in the whole field of evolutionary computation and also includes a lot of evolution strategy literature [2].

7.1 Artificial Intelligence

• Intelligent control of autonomous vehicles [49].
• Weight optimization of neural networks [74].
• Configuration of synaptic parameters of a hardware simulator of neural networks [159].
• Determination of the optimal structure of neural networks [190].

7.2 Biotechnology

• Simulation of protein evolution [147].
• Simulated evolution of behaviour patterns in Paraonis fulgens [148].
• Scenarios for explaining the intangible variance in genetically identical populations [106].
• Simulating the evolution of optical lenses [167].
• Parameter optimization for a simulation model of genetic signal transmissions based on RNA transcription [29].
• Parameter identification for a model of synthesis processes inside fermentors [21].
• Parameter optimization for complex models of ecosystems [101].
• Optimization of fermentation processes [171].
• Simulated evolution of the mating behaviour of a dragonfly [157].
7.3 Technical Design Applications

• Shape optimization of a 90° pipe knee with minimal drag [119].
• Minimized drag of a wrist plate [164].
• Optimal structural design of a two-phase nozzle with maximum impulse performance [210].
• Core optimization of sodium cooled fast-breeder reactors w.r.t. the cost of fuel and return on capital [80].
• Determination of design parameters of a pneumatic shock absorber [137].
• Volume optimization of girder constructions in view of instabilities [221, 85].
• Design and parameterization of a 4-joint drive unit [138].
• Determination of shape and parameterization of parts of apparatus [55].
• Weight-optimal positioning of joins for half-timbered constructions [86].
• Optimized geometry of perpendicularly attacked cooling vanes [105].
• Shape optimization of perpendicularly attacked cooling vanes [165].
• Minimization of the maximum circular momentum of beam-type cylindric shell constructions [66].
• Weight optimization of girder constructions [118].
• Optimal transformer circuits for antenna matching units [175].
• Shape optimization of lightweight half-timbered constructions [83].
• Synthesis of a 4-joint drive unit [3].
• Design of flat hyperbolic paraboloid shell constructions with minimal costs [67].
• Optimal shape of lightweight half-timbered constructions with fixed structure [84].
• Design of ball-bearings with maximum load capacity [95].
• Shape optimization of girder constructions with given topology [229].
• Weight optimization of a single-field girder construction [68].
• Optimal design of dynamically loaded girders [81].
• Optimization of the energy efficiency of a thermal water jet propulsion [122].
• Optimization method within the ASDO program package for parameter optimization of computer-aided designs [96].
• Design of a centrifugal compressor with optimal stage power [16].
• Optimized resistance to earthquakes of steel structure buildings based on finite-element modelling [114, 115].
• Optimization of a freely rested girder construction with central load [44].
• Weight optimization of welded cabinets; tension optimization of idealized flat structures: cantilever beam, boltings, crosshead [52].
• Weight optimization of girder constructions; drag optimization of a wrist plate [134].
• Noise reduction of radial fans [135].
• Parameterization of a car suspension w.r.t. optimal comfort (K-value) according to VDI guideline 2057; identification of unmeasurable vehicle parameters for a 1-track simulation model; optimal deformation characteristics for collisions of vehicles of different masses [145].
• Minimized drag of a body of revolution [153, 154, 155].
• Synthesis of controllers for hydraulic setting mechanisms [183].
• Design of boltings with minimal weight [231].
• Synthesis of an active suspension with optimal RMS values [237].
• Parameter adaptation of finite element models for structure optimization [69].
• General purpose optimization method in a sequential multi-criteria optimization system [234].
• General purpose optimization method in a CAD system [70].
• Optimization of the efficiency, sensitivity and bandwidth of ultrasonic converters for medical diagnoses [117].
• Optimized throughput and homogeneity of the dose rate of gamma-radiation equipment for food preservation [232, 233].
• Diagonalization of a stiffness matrix for the decoupling of imposed oscillations of the elastically mounted Porsche aeroplane engine PFM 3200 [35].
• Optimization of spiral springs and damping of vibrations of an engine suspension [102].
• Data analysis of experimental data for the verification of burning-off programs controlling nuclear reactors of type WWER-440 [82].
• General purpose optimization method within the program package CARAT (Computer Aided Research Analysis Tool) for structure optimization [163].
• Optimization of parametric descriptions of systems of optical lenses [224].
• Minimized weight and deflection of girder constructions [37].
• Optimization of electromagnetic flux at the ends of a dipole magnet based on a finite element model [162, 161].
• Optimum design of trussed-beam roof structures [91].

7.4 Chemical Engineering

• Optimization of a two-component nozzle for the material balance in chemical processes [127].
• Design and parameterization of technical catalyzers [174].
• Regression analysis of arbitrary functions; approximate description of the pressure distribution during an explosion; classification of the tendency for firedamps of a powder bag composition [78].
• Optimal composition of electrolytes for galvanization with tin; optimal chrome plating solutions to zinc-plate surfaces [173].
• Minimization of the energy in clusters of Ar_3^+ molecules [79].
• Parameter estimation of dynamic models for the kinetic analysis of absorption spectra [20].
• Determination of pulse successions for magnetic resonance experiments [50].
• Alignment of theoretical spectra with measured EPR spectra [99].
• Determination of clusters of noble gas ions with minimal energy [111].
• Optimal design of flow channels, optimal control, and alignment of casting machinery for processing of plastics [124].
• Determination of optimal working points for the processing of plastics [123, 125].
• Identification of bands in NMR spectra [236].
• Optimal design of magnets for NMR analyses [58].
• Pre-optimization of kinetic parameters of a reaction model for the heterogeneous-catalytic dehydrochlorination of 1,1-difluoro-1-chloro-ethane [130].
• Identification of the competition coefficients for a simplified competition equilibrium adsorption model (SCAM) [130].
• Determination of the structure of globally energy-minimal Ne_n^+ clusters (n ≤ 22) [45].
• Minimization of inhomogeneity in magnetic field devices used for NMR and MRI by means of a FEM of the magnetic poles and the resulting field [160, 59].
• Least-squares fitting of high-resolution electron paramagnetic resonance (EPR) spectra [100].
• Determination of the equilibrium state in chemically reactive systems consisting of up to 25 solid, liquid, and gaseous components [178].

7.5 Telecommunications

• Optimal radiation characteristics of linear unit antenna arrays [238].
• Maximized merit factors of autocorrelated bit sequences [225].
• Optimization of autocorrelated bit sequences [63].

7.6 Dynamic Processes, Modeling, Simulation

• Parameter identification for a model describing a controlled physical system [220].
• Optimization of a complex socioeconomic system [107].
• Optimized model of the economic and social changes in the Federal Republic of Germany [108].
• Optimization of decision parameters in energy models [116].
• Parameter identification for a simulation model of the CO concentration in Madrid [142].
• Top-down computation within budgeting models [216].
• Parameter identification for a model of the progression of a viral disease in mice [170].
• Parameter identification for a model of wide-area cattle movement in Australia [152].
• Multicriteria system identification and synthesis of controllers [94].
7.7 Medicine

• Design of blood pumps with a low degree of blood damage [198].
• Optimal control of full arm prostheses [31].
• Setting the electrical nerve stimulation for the individual medical treatment of angina pectoris [104, 151, 149].
• Identification of the individual frequency and amplitude for the electrical stimulation of the cardiac muscle in case of angina pectoris and hypertension therapy [150, 151].
• Parameter identification of non-linear regression functions applied to the concentration pattern of antibiotics and the integrated Michaelis-Menten equation [194].
• Parameter optimization of a model of ultrasonic adsorption by amino acids [92].
• Adaptive control of the Functional Electrostimulation (FES) of paralyzed limbs [143, 144].

7.8 Microelectronics

• Partitioning of integrated circuits [88].
• Optimized blocking capability of a high voltage planar pn-junction with five-stepped field plate and a channel stopper [19].
• Solving the one-dimensional steady-state basic semiconductor equations (demonstrated for a diode); identification of a virtual interface trap by volume traps in the space charge region of a MOS system [60].

7.9 Military

• Optimized range of operation of ballistic missiles; minimization of drift sensitivity to cross wind of short range tank defense weapons [53].
• Optimized range of operation of a gliding boost missile; stability optimization of a ballistic missile; optimization of the trajectory of a missile with end phase control [54].

7.10 Physics

• Parameter estimation of arbitrary regression functions to describe numerical time series data [156].
• Determining configurations of dislocation groups in neutron-radiated Cu monocrystals [177].
• Determining the optimal configuration of defects in crystalline materials [176].
• Parameter estimation of an ansatz for solving the Thomas-Fermi equation of a 3-atom system [90].
• Parameter estimation of the Thomas-Fermi-Dirac-von Weizsäcker equation of n-atom systems [36].
• Parameter estimation of fluid-dynamic problems [140].
• Parameter identification of a time-dependent stochastic model of floods [141].
• Solving the brachistochrone problem with constraints [158].
• Optimization of microwave circuits [191].
• Parameter estimation for solving the TFDW equation of neutral atoms [217].
• Parameter identification of a TFDW model to determine the density of atoms [188].
• Checking a model of dislocations in Er-doped GaAs [219].
• Identification of local optima in spin-glass-like neural networks for pattern classification and for associative memory purposes [18].
• Determination of stable steady states in dissipative systems [22].
• Determining the ideal control of longitudinal waves in a rod with Voigt damping [110].
• Arrangement of cavities in a linear accelerator (LINAC) for minimal beam break-up [23].
• Adaptation of strategy parameters to reconstruct physical processes of the LEP collider at CERN [75].
• Shape optimization of electrical conductors w.r.t. minimal power losses based on FE modelling [98].

7.11 Pattern Recognition

• Fitting a contour detection algorithm to special contours of pictures [27].
• Restricting the search space for pattern recognition of overlapping workpieces [182].
• Generating reference objects for automatic pattern recognition of unordered, partially hidden workpieces [109].
• Structure optimization of a visual filter to detect the number of coherent areas [120, 89].
7.12 Production Planning

• Solving allocation problems according to Bellman & Dreyfus; solving the Wagner-Whitin storehouse model [193].
• Optimization of the traveling salesman problem, timetable problem, knapsack problem, and clique forming problem [76].
• Optimized throughput of logistic conveying systems [136].
• Optimized placement of working points for plastics processing [72].
• Solving the [m/n/P/C_max] permutation flow-shop problem [227].
• Identification of optimal machine setup for an injection molding device for thermoplastics [30].
• Optimal design of screw conveyors for the extrusion of thermoplastics [73].
• Optimization of order sequence planning and deadline planning problems in engine production by coupling expert systems and evolution strategies [196, 195, 197].

7.13 Robotics

• Solving the nonlinear closure equations for the automatic computation of arbitrary spatial mechanisms for robot control [34].
• Geometric placement of components of assembly cells with minimized processing periods [235].

7.14 Supply and Disposal Systems

• Optimized design of a centralized hot-water heater plant [42].
• Optimization of local water supply systems [32].
• Optimal choice of location of local waste disposal systems [43].
• Optimized decision parameters in energy models [192].
• Design of a cabling system with minimal costs [189].
• Optimized throughput of power plant cycles [214].
• Optimized power flow in electrical energy supply networks [51].
• Optimized power flow in electrical energy supply networks [131, 129, 132].
• Optimal load management in a network of power plants with discrete load allotments [1].
• Optimized load balancing in a network of power plants [223].
• Multi-criteria decision making for the Khlong Chao Phraya inter-basin transfer in Thailand [28].

7.15 Miscellaneous

• Solving an analytically intractable set of equations [139].
• Optimized results from a model of economy to determine future tax volumes [211].
• Identification of the optimal policy within a macroeconomic model [212].
• Optimal policy within an econometric model of the economy of the United States [213].
• Minimized interchange delays in a local traffic network [65].
• Approximate solutions of the set-cover problem [172].
• Optimal design of glass fibre optics, lamps, and heater circuits; optimized tolerances of gear units; optimal recipes for porcelain manufacturing; optimal noise reduction of axial fans [133].
• Finding suitable coefficients to describe by means of regular and irregular continued fractions [17].
• General method for optimizing decision parameters in a Decision Support System (DSS) [103].
• Solving magic squares [168].
• Characterization of fissurations in cross-grain attacked rotors for determining their reusability [71].
• Regression analysis of simulation results for a conveying system [226].

8 Perspectives

As demonstrated in section 7, evolution strategies have proven their effectiveness for solving extremely difficult optimization problems in a variety of applications. The current state of research in evolution strategies is characterized by a solid theoretical base and particularly by the existence of a number of efficient techniques for handling practically relevant aspects of optimization such as

• noisy objective functions (section 5.1),
• robust design techniques (section 5.2),
• dynamic objective functions (section 5.3),
• multicriteria optimization (section 5.4), and
• constraint handling (section 5.5).

It is an immediate conclusion that evolution strategies have reached a state where they are useful for facilitating technological breakthroughs in a variety of application fields for adaptation and optimization techniques. The industrial exploitation of evolution strategies on the level of European industries, however, is still at the very beginning and needs to be increased in order to exploit the economic benefits of these algorithms.
At present, we identify the following activities which can be helpful for further improving the industrial exploitation of evolution strategies:

• Marketing activities promoting evolution strategies as a powerful development which is independent of genetic algorithms and benefits from the feature of strategy parameter self-adaptation. These might include technological fair activities, leaflets with case-study overviews, and seminars for industrial attendees.

• The development and marketing of a commercial evolution strategy toolbox which can be sold to provide a first entry point for companies to the field of evolution strategies and evolutionary algorithms.

• Consulting and software development activities for companies interested in achieving improvements of their technological or logistic operations.

All of these activities are actively pushed by the Informatik Centrum Dortmund, which specializes in evolutionary and other optimization techniques for industrial applications.
Within the next ten years, evolutionary algorithms in general and evolution strategies in particular will certainly experience a strong further growth in their practical exploitation, because international competition forces companies to search for techniques that facilitate a further improvement of their logistic, operational and technical facilities. Especially the following emerging technological fields will require the most efficient techniques for adaptive algorithms due to the enormous complexity of the challenging applications in these fields:

• Telecommunication technologies including wire-based and wireless techniques.
• Molecular biology, including the genome sequencing, data analysis and drug design efforts as well as the treatment of inheritable diseases. Also related are molecular evolution techniques.

• Robotics, dealing with the development of autonomous, learning robots.

• Internet-based information acquisition, filtering and exploitation by using adaptive agent technology equipped with evolutionary algorithms.

• Simulation and optimization in the field of traffic and settlement structures as well as environmental protection.

• Technical design optimization in the automobile and other industries.

We expect that evolutionary computation will be one of the techniques that deliver major support to the technological development in these fields over the next ten years.

References
[1] H.-J. Adermann. Treatment of integral contingent conditions in optimum power plant commitment. In H.J. Wacker, editor, Applied Optimization Techniques in Energy Problems, pages 1–18, Stuttgart, 1985. Teubner.
[2] J. T. Alander. An indexed bibliography of genetic algorithms: Years 1957–1993. Art of CAD Ltd, Espoo, Finland, 1994.
[3] U. Anders. Losung getriebesynthetischer Probleme mit der Evolutionsstrategie. Feinwerktechnik & Messtechnik, 85(2):53–57, March 1977.
[4] P. J. Angeline. The effects of noise on self-adaptive evolutionary optimization. In Fogel et al. [47], pages 433–440.
[5] Th. Back. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York, 1996.
[6] Th. Back. On the behavior of evolutionary algorithms in dynamic environments. In Proceedings of the Fifth IEEE Conference on Evolutionary Computation, pages 446–451. IEEE Press, Piscataway, NJ, 1998.
[7] Th. Back, Th. Beielstein, B. Naujoks, and J. Heistermann. Evolutionary algorithms for the optimization of simulation models using PVM. In J. Dongarra, M. Gengler, B. Tourancheau, and X. Vigouroux, editors, EuroPVM '95: Second European PVM Users' Group Meeting, pages 277–282. Hermes, Paris, 1995.
[8] Th. Back, D. B. Fogel, and Z. Michalewicz, editors. Handbook of Evolutionary Computation. Oxford University Press, New York, and Institute of Physics Publishing, Bristol, 1997.
[9] Th. Back and U. Hammel. Evolution strategies applied to perturbed objective functions. In Proceedings of the IEEE World Congress on Computational Intelligence, pages 40–45, Orlando, Florida, 1994.
[10] Th. Back, U. Hammel, and H.-P. Schwefel. Evolutionary computation: History and current state. IEEE Transactions on Evolutionary Computation, 1(1):3–17, 1997.
[11] Th. Back, J. Heistermann, C. Kappler, and M. Zamparelli. Evolutionary algorithms support refueling of pressurized water reactors. In Proceedings of the Third IEEE Conference on Evolutionary Computation, pages 104–108. IEEE Press, Piscataway, NJ, 1996.
[12] Th. Back, F. Hoffmeister, and H.-P. Schwefel. A survey of evolution strategies. In R.K. Belew and L.B. Booker, editors, Proceedings of the 4th International Conference on Genetic Algorithms, San Mateo, CA, 1991. Morgan Kaufmann Publishers.
[13] Th. Back, F. Hoffmeister, and H.-P. Schwefel. Applications of evolutionary algorithms. Technical Report SYS-2/92, Systems Analysis Research Group, Universitat Dortmund, Fachbereich Informatik, February 1992.

[14] Th. Back and M. Schutz. Evolution strategies for mixed-integer optimization of optical multilayer systems. In J.R. McDonnell, R.G. Reynolds, and D.B. Fogel, editors, Evolutionary Programming IV: Proceedings of the Fourth Annual Conference on Evolutionary Programming, pages 33–51, Cambridge, MA, 1995. MIT Press.
[15] Th. Back and M. Schutz. Intelligent mutation rate control in canonical genetic algorithms. In Z. W. Ras and M. Michalewicz, editors, Foundations of Intelligent Systems, 9th International Symposium, ISMIS '96, volume 1079 of Lecture Notes in Artificial Intelligence, pages 158–167. Springer, Berlin, 1996.
[16] K. Bammert, M. Rautenberg, and W. Wittekindt. Matching of turbocomponents described by the example of impeller and diffuser in a centrifugal compressor. Transactions of the ASME, 102:594–600, July 1980.
[17] W. Banzhaf. Optimization by diploid search strategies. In H. Haken, editor, Neural and Synergetic Computers, volume 42 of Springer Series in Synergetics, pages 155–166, Proceedings of the International Symposium at Schloss Elmau, Bavaria, June 13–17, 1988. Springer.
[18] W. Banzhaf. A network of multistate units capable of associative memory and pattern classification. Physica D, 34:418–426, 1989.
[19] R.C. Bassus, E. Falck, and W. Gerlach. Application of the evolution strategy to optimize multistep field plates for high voltage planar pn-junctions. Archiv fur Elektrotechnik, 75:345–349, 1992.
[20] J. Benz, J. Polster, R. Bar, and G. Gauglitz. Program system SIDYS: Simulation and parameter identification of dynamic systems. Comput. Chem., 11(1):41–48, 1987.
[21] W. Berke. Kontinuierliche Regenerierung von ATP fur enzymatische Synthesen. Dissertation, Fachbereich Lebensmitteltechnologie und Biotechnologie, TU Berlin, September 1984.
[22] H.-G. Beyer. Simulation of steady states in dissipative systems by Darwin's paradigm of evolution. Journal of Non-Equilibrium Thermodynamics, 15(2):45–58, 1990.
[23] H.-G. Beyer. Some aspects of the evolution strategy for solving TSP-like optimization problems appearing at the design studies of a 0.5 TeV e+e- linear collider. In R. Manner and B. Manderick, editors, Parallel Problem Solving from Nature 2, pages 361–370. North-Holland, Amsterdam, 1992.
[24] H.-G. Beyer. Toward a theory of evolution strategies: Some asymptotical results from the (1 + λ)-theory. Evolutionary Computation, 1(2):165–188, 1993.
[25] H.-G. Beyer. How GAs do NOT work: Understanding GAs without schemata and building blocks. Report of the Systems Analysis Research Group SYS-2/95, University of Dortmund, Department of Computer Science, 1995.

[26] H.-G. Beyer. Toward a theory of evolution strategies: Self-adaptation. Evolutionary Computation, 3(3):311–348, 1995.
[27] W.E. Blanz and E.R. Reinhardt. Image segmentation by pixel classification. Pattern Recognition, 13(4):293–298, 1981.
[28] J.J. Bogardi and L. Duckstein. Interactive multiobjective analysis embedding the decision maker's implicit preference function. Water Resources Bulletin, 28(1):75–88, February 1992.
[29] J. Born and K. Bellmann. Numerical adaptation of parameters in simulation models using evolution strategies. In K. Bellmann, editor, Molecular Genetic Information Systems: Modelling and Simulation, pages 291–320. Akademie Verlag, Berlin, 1983.
[30] K. Bourdon, P. Breuer, M. Haupt, D. Hunold, M. Lauterbach, and T. Robers. Prozessmodelle - der Schlussel zur Qualitatsverbesserung beim Spritzgiessen. In 15. Kunststofftechnisches Kolloquium des IKV, pages 332–357. Eurogress Aachen, 14.–16. Marz 1990.
[31] U. Brudermann. Entwicklung und Anpassung eines vollstandigen Ansteuersystems fur fremdenergetisch angetriebene Ganzarmprothesen. Fortschrittsberichte der VDI-Zeitschriften, 17(6), 1977.
[32] R.G. Cembrowicz and G.W. Krauter. Optimization of urban and regional water supply systems. In Conference on Systems Approach for Development, pages 26–28, Cairo, November 1977.
[33] Y. Davidor, H.-P. Schwefel, and R. Manner, editors. Parallel Problem Solving from Nature - PPSN III, International Conference on Evolutionary Computation, volume 866 of Lecture Notes in Computer Science. Springer, Berlin, 1994.
[34] P. Dietmaier. Ein Programmsystem zur automatischen Erstellung der Ubertragungsfunktionen raumlicher Mechanismen. Robotersysteme, 8:85–92, 1992.
[35] H. Dittrich and S. Maier. Entkopplung der erzwungenen Schwingungen elastisch gelagerter Motoren. Technical report, Porsche AG, Stuttgart, 1986.
[36] R.M. Dreizler, E.K.U. Gross, and A. Toepfer. Extended Thomas-Fermi approach to diatomic systems. Physics Letters, 71A(1):49–53, April 1979.
[37] H. Eggert. Gewichts- und Durchbiegeminimierung von Fachwerktragern durch Anwendung der Evolutionsstrategie. Diplomarbeit, Fachbereich Verfahrenstechnik, TU Berlin, 1990.
[38] A. E. Eiben and Th. Back. Multi-parent recombination operators in continuous search spaces. Technical Report 97-01, Rijksuniversiteit te Leiden, Vakgroep Informatica, April 1997.

[39] A. E. Eiben, P.-E. Raue, and Zs. Ruttkay. Genetic algorithms with multi-parent recombination. In Davidor et al. [33], pages 78–87.
[40] A. E. Eiben and C. A. Schippers. Multi-parent's niche: n-ary crossovers on NK-landscapes. In Voigt et al. [222], pages 319–328.
[41] A. E. Eiben, C. H. M. van Kemenade, and J. N. Kok. Orgy in the computer: Multi-parent reproduction in genetic algorithms. In Moran et al. [128], pages 934–945.
[42] H. Esdorn, E. Rabe, and M. Schmidt. Wirtschaftlicher Vergleich von Warmwasser-Heizungs-Systemen bei Betrieb mit zentraler Warmepumpe. In Moglichkeiten und Grenzen der rationellen Energieverwendung, volume 275 of VDI-Berichte, pages 35–41, Dusseldorf, 1976. VDI-Verlag.
[43] K. von Falkenhausen. Optimierung regionaler Entsorgungssysteme mit der Evolutionsstrategie. In Proceedings in Operations Research, volume 9, pages 46–51, DGOR Jahrestagung, Regensburg, 19.–21.09.1979, 1980. Physica-Verlag, Wurzburg.
[44] R. Feichtel. Optimale Dimensionierung technischer Konstruktionen. Diplomarbeit, Institut fur Mathematik, Johannes Kepler Universitat, Linz, September 1982.
[45] M. Fieber, A.M.G. Ding, and P.J. Kuntz. A diatomics-in-molecules model for singly ionized neon clusters. Zeitschrift fur Physik D - Atoms, Molecules and Clusters, 23:171–179, 1992.
[46] J. Michael Fitzpatrick and John J. Grefenstette. Genetic algorithms in noisy environments. Machine Learning, 3:101–120, 1988.
[47] L. J. Fogel, P. J. Angeline, and T. Back, editors. Proceedings of the Fifth Annual Conference on Evolutionary Programming. The MIT Press, Cambridge, MA, 1996.
[48] C. M. Fonseca and P. J. Fleming. Multiobjective optimization. In Back et al. [8], chapter C4.5.
[49] R. Forsyth, editor. Machine Learning: Principles and Techniques, pages 83–103. Chapman and Hall, London, 1989.
[50] R. Freeman and X. Wu. Design of magnetic resonance experiments by genetic evolution. Journal of Magnetic Resonance, 75:184–189, 1987.
[51] F. Fuchs and H.A. Maier. Optimierung des Lastflusses in elektrischen Energie-Versorgungsnetzen mittels Zufallszahlen. Archiv fur Elektrotechnik, 66:85–94, 1983.
[52] W. Funk. Computer Aided Engineering (CAE) - Problemlosungen fur den Maschinenbau. Der Konstrukteur, 6:8–16, 1982.
[53] H. Gaidosch. Optimierungsverfahren fur Probleme der Raketenballistik und der Flugmechanik von Flugkorpern, I. Teil. MBB-Bericht UA-439-78, MBB, Ottobrunn, July 1978.

[54] H. Gaidosch. Optimierungsverfahren fur Probleme der Raketenballistik und der Flugmechanik von Flugkorpern, II. Teil. MBB-Bericht UA-503-79, MBB, Ottobrunn, August 1979.
[55] T. Gast. Sonderdruck aus dem Achema-Jahrbuch 1971/73, Band I. In Achema-Jahrbuch 1971/73, Band I: Europaische Forschung und Lehre im Chemie-Ingenieur-Wesen, pages 134–136. Institut fur Mess- und Regelungstechnik, TU Berlin, 1973.
[56] D. K. Gehlhaar and D. B. Fogel. Tuning evolutionary programming for conformationally flexible molecular docking. In Fogel et al. [47], pages 419–429.
[57] M. Gorges-Schleuter. Genetic Algorithms and Population Structures - A Massively Parallel Algorithm. Dissertation, Universitat Dortmund, 1990.
[58] A. Gottvald. Optimal magnet design for NMR. IEEE Transactions on Magnetics, 26(2):399–401, 1990.
[59] A. Gottvald, K. Preis, C. Magele, O. Biro, and A. Savini. Global optimization methods for computational electromagnetics. IEEE Transactions on Magnetics, 28(2):1537–1540, March 1992.
[60] J. Graf and H.G. Wagemann. Evolutionsstrategie in der Halbleitertechnik fur die Charakterisierung von MOS-Bauelementen. Archiv fur Elektrotechnik, 76:155–160, 1993.
[61] H. Greiner. Robust filter design by stochastic optimization. In F. Abeles, editor, Proc. Soc. Photo-Opt. Instrum. Eng. 2253, pages 150–161, 1994.
[62] H. Greiner. Robust optical coating design with evolutionary strategies. Applied Optics, 35(28):5477–5483, 1996.
[63] C. de Groot, D. Wurtz, and K.H. Hoffmann. Low autocorrelation binary sequences: Exact enumeration and optimization by evolution strategies. IPS Research Report 89-09, Interdisciplinary Project Center for Supercomputing, Eidgenossische Technische Hochschule Zurich, November 1989.
[64] U. Hammel and Th. Back. Evolution strategies on noisy functions: How to improve convergence. In Y. Davidor, H.-P. Schwefel, and R. Manner, editors, Proceedings of the International Conference on Evolutionary Computation, pages 159–168, Jerusalem, 1994.
[65] C. Hampel. Ein Vergleich von Optimierungsverfahren fur die zeitdiskrete Simulation. Dissertation, TU Berlin, 1981.
[66] D. Hartmann. Optimierung balkenartiger Zylinderschalen aus Stahlbeton mit elastischem und plastischem Werkstoffverhalten. Dissertation, Fachbereich Bauwesen, Universitat Dortmund, 1974.
[67] D. Hartmann. Optimierung flacher hyperbolischer Paraboloidschalen. Beton- und Stahlbetonbau, 9:216–222, 1977.

[68] D. Hartmann. Zur Systematik und Methodik in der Tragwerksoptimierung. Habilitation, Fachbereich Bauwesen, Universitat Dortmund, 1978.
[69] D. Hartmann. Structural optimization of discrete systems represented by finite elements. Report UCB/SESM-84/8, University of California, Department of Civil Engineering, Berkeley, California, June 1984.
[70] D. Hartmann. Optimization in CAD - on the applicability of nonlinear evolution strategies for optimization problems in CAD. In J.S. Gero, editor, Optimization in Computer-Aided Design, pages 293–305, Amsterdam, 1985. North-Holland.
[71] D.F. Hartmann. Identifikationsstrategien zur Rissformbestimmung an Rotoren. Zeitschrift fur angewandte Mathematik und Mechanik, 71(4):T139–T141, 1991.
[72] M. Haupt. Computer-assisted optimization of working points in plastics processing. Technical report, Institut fur Kunststoffverarbeitung, RWTH Aachen, 1988.
[73] M. Haupt, P. Heidemeyer, K. Kerres, and J. Turek. Thermoplastextrusion - Schneckenauslegung und Betriebspunktoptimierung. In 15. Kunststofftechnisches Kolloquium des IKV, pages 165–196. Eurogress Aachen, 14.–16. Marz 1990.
[74] J. Heistermann and H. Eckardt. Parallel algorithms for learning in neural networks with evolution strategy. In D.J. Evans, G.R. Joubert, and F.J. Peters, editors, Parallel Computing 89, pages 275–280. Elsevier Science Publishers, 1990.
[75] Andreas Hemker. Ein wissensbasierter genetischer Algorithmus zur Rekonstruktion physikalischer Ereignisse. Doctoral dissertation, WUB-DIS 91-11, Gesamthochschule Wuppertal, September 1992.
[76] A.V. Hense. Adaptionsstrategien zur Losung schwieriger Optimierungsaufgaben. Diplomarbeit, Fachbereich Informatik, Universitat Dortmund, August 1986.
[77] M. Herdy. Reproductive isolation as strategy parameter in hierarchically organized evolution strategies. In Manner and Manderick [121], pages 207–217.
[78] R. Herrmann. Evolutionsstrategische Regressionsanalyse. Nobel-Hefte, 49(1/2):44–54, January–June 1983.
[79] J. Helich and P.J. Kuntz. A diatomics-in-molecules model for singly-ionized argon clusters. Zeitschrift fur Physik D - Atoms, Molecules and Clusters, 2:251–252, 1986.
[80] G. Heusener. Optimierung natrium-gekuhlter schneller Brutreaktoren mit Methoden der nicht-linearen Programmierung. Technical Report KFK 1238, Kernforschungsanlage Karlsruhe, July 1970.
[81] P. Hilgers. Der Einsatz eines Mikrorechners zur hybriden Optimierung und Schwingungsanalyse. Dissertation, Fachbereich Maschinenbau, Ruhruniversitat Bochum, July 1978.

[82] J. Hoehn, E. Spoden, and G. Suschowk. Verifizierung von Abbrandprogrammen fur die Betriebsbetreuung von WWER-440. Kernenergie, 31(9):397–404, September 1988.
[83] A. Hofler. Formoptimierung von Leichtbaufachwerken durch Einsatz einer Evolutionsstrategie. Dissertation, Fachbereich Verkehrswesen, TU Berlin, 1976. ILR-Bericht 17.
[84] A. Hofler. Kraftepfadoptimierung von Leichtbaufachwerken. In E. Bubner, editor, Minimalkonstruktionen, pages 38–51. Rudolf Muller Verlag, 1977.
[85] A. Hofler, U. Leyner, and J. Wiedemann. Untersuchungen zur Anwendung einer grundlegenden Entwurfstheorie auf praktische Probleme der Leichtbaukonstruktion. Zwischenbericht zum Forschungsvorhaben Wi 32/11, Wi 32/16, Institut fur Luftfahrzeugbau, TU Berlin, April 1971.
[86] A. Hofler, U. Leyner, and J. Wiedemann. Optimization of the layout of trusses combining strategies based on Michell's theorem and on the biological principles of evolution. In Second Symposium on Structural Optimization, AGARD Conference Proceedings No. 123, pages A1–A8, Milan, Italy, April 2–4, 1973.
[87] J. Horn. Multicriterion decision making. In Back et al. [8], chapter F1.9.
[88] Martin Hulin. Evolution strategies for circuit partitioning. In Branko Soucek and the IRIS Group, editors, Dynamic, Genetic, and Chaotic Programming, chapter 16, pages 413–436. Wiley, New York, 1992.
[89] Martin Hulin. Structure evolution in neural systems. In Branko Soucek and the IRIS Group, editors, Dynamic, Genetic, and Chaotic Programming, chapter 15, pages 395–411. Wiley, New York, 1992.
[90] B. Jacob, E.K.U. Gross, and R.M. Dreizler. Solution of the Thomas-Fermi equation for triatomic systems. J. Phys. B: Atom. Molec. Phys., 11(22):3795–3802, 1978.
[91] W.M. Jenkins. Towards structural optimization via the genetic algorithm. Computers & Structures, 40(5):1321–1327, May 1991.
[92] K.D. Jurgens and R. Baumann. Ultrasonic absorption studies of protein-buffer interactions - determination of equilibrium parameters of titratable groups. European Biophysics Journal, 12:217–222, 1985.
[93] R.N. Kacker. Taguchi methods. In H.M. Wadsworth, editor, Handbook of Statistical Methods for Engineers and Scientists, New York, 1990. McGraw-Hill.
[94] Jorg Kahlert. Vektorielle Optimierung mit Evolutionsstrategien und Anwendungen in der Regelungstechnik. Fortschrittsberichte VDI, Reihe 8, Nr. 234. VDI-Verlag, Dusseldorf, 1991.
[95] A. Kanarachos. A contribution to the problem of designing optimum performance bearings. Transactions of the ASME, pages 462–468, October 1977.

[96] A. Kanarachos. Zur Anwendung von Parameteroptimierungsverfahren in der rechnergestutzten Konstruktion. Konstruktion, 31(5):177–182, 1979.
[97] C. Kappler, Th. Back, J. Heistermann, A. van de Velde, and M. Zamparelli. Refueling of a nuclear power plant: Comparison of a naive and a specialized mutation operator. In H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature IV, Proceedings of the International Conference on Evolutionary Computation, volume 1141 of Lecture Notes in Computer Science, pages 829–838. Springer, Berlin, 1996.
[98] M. Kasper. Shape optimization by evolutionary strategy. IEEE Transactions on Magnetics, 28(2):1556–1560, March 1992.
[99] B. Kirste. Least-squares fitting of EPR spectra by Monte Carlo methods. Journal of Magnetic Resonance, 73:213–224, 1987.
[100] B. Kirste. Methods for automated analysis and simulation of electron paramagnetic resonance spectra. Analytica Chimica Acta, 265:191–200, 1992.
[101] A. Knijnenburg, E. Matthaus, and V. Wenzel. Concept and usage of the interactive simulation system for ecosystems SONCHES. Ecological Modelling, 26:51–76, 1984.
[102] E. Kobes. Entwicklung von Computer-Algorithmen zur Optimierung von Strukturkomponenten nach der Evolutionstheorie. Diplomarbeit, Institut fur Computer-Anwendungen, Universitat Stuttgart, May 1987.
[103] B. Koch, R. Straubel, and A. Wittmuess. Interaktives Programm-System REH zur rechnergestutzten Entscheidungshilfe. Wissenschaftliche Schriftenreihe 2, TU Karl-Marx-Stadt, 1988. Beitrage zur Mehrkriteriellen Entscheidung.
[104] H.E. Koralewski, T. Peters, and E. Zerbst. Optimization of electrical carotid sinus nerve stimulation in man by means of the evolution strategy. Pflugers Archiv fur die gesamte Physiologie des Menschen und der Tiere, 373:lxxvi, 1978.
[105] W. Korner et al. Optimierung der Geometrie quer angestromter Rohrrippen hinsichtlich des Warmeubergangs. Verfahrenstechnik, 7(4), 1973.
[106] M. Kothe. Untersuchung der evolutionsstrategischen Bedeutung der nicht-genetischen Varianz mit Hilfe der 0 0 ( = )]Evolutionsstrategie. Technical report, Tierarztliche Hochschule Hannover, January 1979. Zwischenbericht eines Teilprojekts im SFB 146.
[107] H. Krallmann. Evolution strategy and social sciences. In G.J. Klir, editor, Applied General Systems Research: Recent Developments and Trends, NATO Conference Series, pages 891–903. Plenum Press, New York, 1978.
[108] H. Krallmann. Heuristische Verfahren zur Optimierung soziookonomischer Systeme. In F.X. Bea, A. Bohnet, and H. Klimesch, editors, Systemmodelle - Anwendungsmoglichkeiten des systemtheoretischen Ansatzes. Oldenbourg, Wien, 1979.

[109] P.B. Krause, R. Freytag, and W. Hattich. Modellgesteuerte Bildanalyse zur Erkennung und Positionsvermessung ubereinanderliegender Werkstucke. Robotersysteme, 1:179–187, 1985.
[110] A.U. Kuhnle and J.H. Williams. Control of longitudinal waves in a rod with Voigt damping. Mechanics of Structures and Machines, 18(3):335–351, 1990.
[111] P.J. Kuntz and J. Valldorf. A DIM model for homogeneous noble gas ionic clusters. Zeitschrift fur Physik D - Atoms, Molecules and Clusters, 8:195–208, 1988.
[112] F. Kursawe. A variant of evolution strategies for vector optimization. In H.-P. Schwefel and R. Manner, editors, Parallel Problem Solving from Nature - Proceedings 1st Workshop PPSN I, volume 496 of Lecture Notes in Computer Science, pages 193–197. Springer, Berlin, 1991.
[113] F. Kursawe. Evolution strategies for vector optimization. In G.-H. Tzeng and P. L. Yu, editors, Preliminary Proceedings of the Tenth International Conference on Multiple Criteria Decision Making, pages 187–193. National Chiao Tung University, Taipei, 1992.
[114] M. Lawo. Automatische Bemessung fur stochastische dynamische Belastung. Dissertation, Fachbereich Bauwesen, Universitat-Gesamthochschule Essen, July 1981.
[115] M. Lawo and G. Thierauf. Optimal design for dynamic stochastic loading - a solution by random search. In H. Eschenauer and N. Olhoff, editors, Optimization Methods in Structural Design, Proceedings of the Euromech-Colloquium 164, Oct. 12–14, 1982, Universitat Siegen, FR Germany, pages 346–352. BI Verlag, 1983.
[116] K. Leimkuhler. Some methodological problems in energy modelling. In F.E. Cellier, editor, Progress in Modelling and Simulation, pages 61–73. Academic Press, London, 1982.
[117] R. Lerch. Simulation von Ultraschall-Wandlern. ACUSTICA, 57:205–217, 1985.
[118] U. Leyner. Uber den Einsatz linearer Programmierung beim Entwurf optimaler Leichtbaustabwerke. PhD thesis, Fachbereich Verkehrswesen, TU Berlin, June 1974.
[119] H.J. Lichtfuss. Evolution eines Rohrkrummers. Diplomarbeit, Institut fur Stromungstechnik, TU Berlin, March 1965.
[120] R. Lohmann. Structure evolution and incomplete induction. In R. Manner and B. Manderick, editors, Parallel Problem Solving From Nature 2, pages 175–185. Elsevier Science Publishers, Amsterdam, 1992.
[121] R. Manner and B. Manderick, editors. Parallel Problem Solving from Nature 2. Elsevier, Amsterdam, 1992.
[122] P. Markwich. Der thermische Wasserstrahlantrieb auf der Grundlage des offenen Clausius-Rankine-Prozesses. Technical Report ILR-Bericht 28, Institut fur Luft- und Raumfahrt, TU Berlin, 1978.

[123] G. Menges, W. Michaeli, and M. Haupt. Computerized process optimization for plastics processing. In 47th Annual Technical Conference of the Society of Plastics Engineers (SPE), New York, April 14, May 1989.
[124] W. Michaeli. Materials processing - a key factor. Angewandte Chemie, Advanced Materials, 28(5):660–665, 1989.
[125] W. Michaeli. Neues aus dem IKV zur Optimierung der Kunststoffverarbeitung. In 15. Kunststofftechnisches Kolloquium des IKV, pages ix–xiv. Eurogress Aachen, 14.–16. Marz 1990.
[126] Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1):1–32, 1996.
[127] A.K. Mitra and H. Brauer. Optimization of a two phase co-current flow nozzle for mass transfer. Verfahrenstechnik, 7(4):92–97, 1973.
[128] F. Moran, A. Moreno, J. J. Merelo, and P. Chacon, editors. Advances in Artificial Life, volume 929 of Lecture Notes in Artificial Intelligence. Springer, Berlin, 1995.
[129] H. Muller. Power flow optimization in electric networks by evolutionary strategic search. In 6th European Congress on Operations Research, Vienna, Austria, July 19–22, 1983.
[130] H. Muller and H. Hofmann. Kinetische Untersuchung zur heterogen-katalytischen Dehydrochlorierung von 1,1-Difluor-1-chlorethan. Chemiker-Zeitung, 114(3):93–100, 1990.
[131] H. Muller and G. Pollhammer. Evolutionsstrategische Lastflussoptimierung. Teilbericht zum Vorhaben P5068, Institut fur Elektrische Anlagen und Hochspannungstechnik, TU Berlin, 1983.
[132] H. Muller, G. Theil, and W. Waldmann. Results of evolutional random search procedure for load flow optimization in electric networks. In System Modelling and Optimization, Lecture Notes in Control and Information Sciences 84, pages 628–636. Springer, 1984.
[133] K.D. Muller. Optimieren mit der Evolutionsstrategie in der Industrie anhand von Beispielen. Dissertation, Fachbereich Verfahrenstechnik, TU Berlin, 1986.
[134] W. Nachtigall. Biotechnik und Bionik - Fachubergreifende Disziplinen der Naturwissenschaft. Akademie der Wissenschaften und der Literatur, Mainz. Franz Steiner Verlag, Wiesbaden, 1982.
[135] W. Neise. Review of noise reduction methods for centrifugal fans. Journal of Engineering for Industry, 104:151–161, March 1982.
[136] B. Noche, R. Kottkamp, G. Lucke, and E. Peters. Optimizing simulators within material flow systems. In G.C. Vansteenkiste et al., editors, Proceedings of the 2nd European Simulation Congress, pages 651–657, Belgium, 1986. Simulation Councils.

[137] W. Noo. Konnen Rechenautomaten durch Optimierungsprogramme Neues entdecken? Burotechnik + Automation, 11:214–221, April 1970.
[138] W. Noo. Automatische Synthese von Viergelenkgetrieben durch Digitalrechner. Feinwerktechnik, 75(4):165–168, 1971.
[139] W. Noo. Ein universell anwendbares Rechner-Unterprogramm fur Entwurf und Optimierung. Angewandte Informatik, 13:123–129, March 1971.
[140] M. North. La simulation des processus hydrologiques intermittents par des modeles alternes inhomogenes. Hydrological Sciences Bulletin - des Sciences Hydrologiques, 25(1):5–12, 1980.
[141] M. North. Time-dependent stochastic model of floods. Journal of the Hydraulics Division, Proceedings of the American Society of Civil Engineers, 106(HY5):649–665, May 1980.
[142] W. North, E. Hernandez, and R. Garcia. Frequency analysis of high CO concentrations in Madrid by stochastic process modelling. Atmospheric Environment, 18(10):2049–2054, 1984.
[143] H.G. Nurnberg and G. Vossius. Evolutionsstrategie - ein Regelkonzept fur die funktionelle Elektrostimulation gelahmter Gliedmassen. Biomedizinische Technik (Erganzungsband), 31:52–53, September 1986.
[144] H.G. Nurnberg and G. Vossius. The applicability of the evolution strategy to the control of paralyzed limbs through FES. In Selected Papers from the 10th Triennial World Congress of the International Federation of Automatic Control, pages 37–43, Munich, July 27–31, 1987.
[145] W. Oberdieck, B. Richter, and P. Zimmermann. Evolutionsstrategie - ein Hilfsmittel bei der Losung fahrzeugtechnischer Aufgaben. Automobiltechnische Zeitschrift, 84(7/8):331–337, 1982.
[146] A. Ostermeier, A. Gawelczyk, and N. Hansen. Step-size adaptation based on non-local use of selection information. In Davidor et al. [33], pages 189–198.
[147] F. Papentin. A Darwinian evolutionary system II: Experiments on protein evolution and evolutionary aspects of the genetic code. Journal of Theoretical Biology, 39:417–430, 1973.
[148] F. Papentin. A Darwinian evolutionary system III: Experiments on the evolution of feeding patterns. Journal of Theoretical Biology, 39:431–445, 1973.
[149] T.K. Peters, H.-E. Koralewski, and E.W. Zerbst. The evolution strategy - a search strategy used in individual optimization of electrical parameters for therapeutic carotid sinus nerve stimulation. IEEE Transactions on Biomedical Engineering, 36(7):668–675, July 1991.

[150] T.K. Peters, H.E. Koralewski, and E. Zerbst. The principle of electrical carotid sinus nerve stimulation: A nerve pacemaker system for angina pectoris and hypertension therapy. Annals of Biomedical Engineering, 8:445–458, 1980.
[151] T.K. Peters, H.E. Koralewski, and E. Zerbst. Search for optimal frequencies and amplitudes of therapeutic electrical carotid sinus nerve stimulation by application of the evolution strategy. Artificial Organs, 13(2):133–143, 1980.
[152] G. Pickup and V.H. Chewings. Estimating the distribution of grazing and patterns of cattle movement in a large arid zone paddock. International Journal of Remote Sensing, 9(9):1469–1490, 1988.
[153] W.E. Pinebrook. Drag Minimization on a Body of Revolution. Dissertation, Department of Mechanical Engineering, University of Houston, Texas, May 1982.
[154] W.E. Pinebrook. Drag minimization on a body of revolution through evolution. Computer Methods in Applied Mechanics and Engineering, 39(2):179–197, 1983.
[155] W.E. Pinebrook. The evolution strategy applied to drag minimization on a body of revolution. Mathematical Modelling, 4:439–450, 1983.
[156] P. Plaschko and K. Wagner. Evolutions-Linearisierungs-Programm zur Darstellung von numerischen Daten durch beliebige Funktionen. Technical Report DLR-FB-73-55, Institut fur Turbulenzforschung, DFVLR, March 1973.
[157] H.J. Poethke and H. Kaiser. A simulation approach to evolutionary game theory: The evolution of time-sharing behaviour in a dragonfly mating system. Behavioral Ecology and Sociobiology, 18:155–163, 1985.
[158] J. Popplau. Die Anwendung einer ( =,)Evolutionsstrategie zur direkten Minimierung eines nicht-linearen Funktionals unter Verwendung von FE-Ansatzfunktionen am Beispiel des Brachistochronenproblems. Zeitschrift fur Angewandte Mathematik und Mechanik, 61:T305–T307, 1981.
[159] St. Prange. Emulation of biology-oriented neural networks. In R. Eckmiller, G. Hartmann, and G. Hauske, editors, Parallel Processing in Neural Systems and Computers, pages 79–82. Elsevier Science Publishers, 1990.
[160] K. Preis, O. Biro, M. Friedrich, A. Gottvald, and C. Magele. Comparison of different optimization strategies in the design of electromagnetic devices. IEEE Transactions on Magnetics, 27(5):4154–4157, September 1991.
[161] K. Preis, G. Vrisk, A. Ziegler, O. Biro, et al. Distributed processing of FEM in a local area network. IEEE Transactions on Magnetics, 26(2):827–830, March 1990.
[162] K. Preis and A. Ziegler. Optimal design of electromagnetic devices with evolution strategies. COMPEL - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, 9(Supplement A):119–122, 1990.

[163] E. Ramm, K.U. Bletzinger, and S. Kimmich. Strukturoptimierung. In Vorstand des SFB 230, editor, Naturliche Konstruktionen - Mitteilungen des SFB 230, volume 1, pages 27–42, Stuttgart, 1988. Sonderforschungsbereich 230, Universitat Stuttgart, Kurz & Co. Reprographie GmbH.
[164] I. Rechenberg. Cybernetic solution path of an experimental problem. Technical Report 1122, Royal Aircraft Establishment, Library Translation, Ministry of Aviation, Farnborough, August 1965.
[165] I. Rechenberg. Bionik, Evolution und Optimierung. Naturwissenschaftliche Rundschau, 11(26):465–472, 1973.
[166] I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart, 1973.
[167] I. Rechenberg. Evolutionsstrategische Bedeutung der Plastizitat biologischer Merkmale (Restvariabilitat) und deren mogliche selektionsgenetische Reduzierung. In Wissenschaftlicher Arbeits- und Ergebnisbericht des SFB 146 "Versuchstierforschung". Fachgebiet Bionik und Evolutionstechnik, TU Berlin, February 1982.
[168] I. Rechenberg. Evolution strategy: Nature's way of optimization. In H.W. Bergmann, editor, Optimization: Methods and Applications, Possibilities and Limitations, pages 106–126. Springer, Berlin, 1989.
[169] I. Rechenberg. Evolutionsstrategie '94, volume 1 of Werkstatt Bionik und Evolutionstechnik. Frommann-Holzboog, Stuttgart, 1994.
[170] R.D. Recknagel, R. Amlacher, and F. Bergter. A survival model for Mengo virus-infected mice. Studia Biophysica, 115(1):45–50, 1986.
[171] R.D. Recknagel and W.A. Knorre. Anwendung biologischer Evolutionsprinzipien zur Optimierung von Fermentationsprozessen. Zeitschrift fur allgemeine Mikrobiologie, 24(7):479–483, 1984.
[172] R. Reppenhagen. Adaptive Suchalgorithmen. Master's thesis, Fachbereich Mathematik/Informatik, Universitat-Gesamthochschule Paderborn, May 1985.
[173] H.-J. Riedel. Einsatz rechnergestutzter Optimierung mittels der Evolutionsstrategie zur Losung galvanotechnischer Probleme. PhD thesis, Fachbereich Verfahrenstechnik, TU Berlin, March 1984.
[174] L. Riekert. Moglichkeiten und Grenzen deduktiven Vorgehens bei der Entwicklung technischer Katalysatoren. Chem.-Ing.-Tech., 53(12):950–954, 1981.
[175] J. Rinderle. Untersuchung uber die Anwendbarkeit der Evolutionsstrategie in der Nachrichtentechnik. Abschlussarbeit, Fachbereich Elektrotechnik, Sektion Hochfrequenztechnik, Hochschule der Bundeswehr Munchen, 1974.

[176] R. Rodloff and H. Neuhauser. Application of an evolution strategy to calculate static


and dynamic dislocation group configurations. Physica Status Solidi A, 37:K93K96,
1976.
[177] R.K. Rodloff. Bestimmung der Geschwindigkeit von Versetzungsgruppen in neutronenbestrahlten Kupfer-Einkristallen.
Dissertation, Naturwissenschaftliche Fakultat,
TU Braunschweig, May 1976.
[178] P. Roosen and F. Meyer. Determination of chemical equilibria by means of an evolution strategy. In R. Manner and B. Manderick, editors, Parallel Problem Solving From
Nature 2, pages 411420. Elsevier Science Publishers, Amsterdam, 1992.
[179] P.J. Ross. Taguchi Techniques for Quality Engineering. Mc Graw-Hill, New York, 1988.
[180] G. Rudolph. On correlated mutations in evolution strategies. In Manner and Manderick
[121], pages 105114.
[181] G. Rudolph and J. Sprave. Significance of locality and selection pressure in the grand deluge evolutionary algorithm. In Voigt et al. [222], pages 686–695.
[182] P. Rummel and W. Beutel. Workpiece recognition and inspection by a model based scene analysis system. Pattern Recognition, 17(1):141–148, 1984.
[183] M. Ruppert. Reglersynthese mit Hilfe der mehrgliedrigen Evolutions-Strategie. Technical Report 51, Fortschrittberichte der VDI-Zeitschriften, Reihe 8: Meß-, Steuerungs- und Regelungstechnik, 1982.
[184] N. Saravanan and D. B. Fogel. Evolving neurocontrollers using evolutionary programming. In Proceedings of the First IEEE Conference on Evolutionary Computation, volume 1, pages 217–222. IEEE Press, Piscataway, NJ, 1994.
[185] N. Saravanan and D. B. Fogel. Learning of strategy parameters in evolutionary programming: An empirical study. In A. V. Sebald and L. J. Fogel, editors, Proceedings of the Third Annual Conference on Evolutionary Programming, pages 269–280. World Scientific, Singapore, 1994.
[186] N. Saravanan, D. B. Fogel, and K. M. Nelson. A comparison of methods for self-adaptation in evolutionary algorithms. BioSystems, 36:157–166, 1995.
[187] A. Scheel. Beitrag zur Theorie der Evolutionsstrategie. Dissertation, Technische Universität Berlin, 1985.
[188] A. Scheidemann and R.M. Dreizler. A parametrization of the TFDW density for atoms and diatomic quasimolecules. Zeitschrift für Physik D – Atoms, Molecules and Clusters, 2:43–47, 1986.
[189] C. Schiemangk. Anwendung einer Evolutionsstrategie zum Auffinden eines optimalen Subgraphen. In Zingert, editor, Numerische Realisierung Mathematischer Modelle, ZfR 81.16. Zentralinstitut für Kybernetik und Informationsprozesse, AdW, 1981.
[190] W. Schiffmann and K. Mecklenburg. Genetic generation of backpropagation trained neural networks. In R. Eckmiller, G. Hartmann, and G. Hauske, editors, Parallel Processing in Neural Systems and Computers, pages 205–208. Elsevier Science Publishers, 1990.
[191] H. Schmiedl. Anwendung der Evolutionsoptimierung bei Mikrowellenschaltungen. Frequenz, 35(11):306–310, 1981.
[192] K. Schmitz. JES – Jülicher Energiemodell-System – Ein Instrumentarium zur Analyse der Entwicklungsmöglichkeiten der Energiewirtschaft in der Bundesrepublik Deutschland. In A. Voß and K. Schmitz, editors, Energiemodelle für die Bundesrepublik Deutschland, pages 69–87. TÜV Rheinland GmbH, 1980.
[193] R. Schoebel. Anwendung der Evolutionsstrategie auf deterministische Modelle der Operations Research. Studienarbeit, Fachbereich Verfahrenstechnik, Fachgebiet Bionik und Evolutionstechnik, TU Berlin, July 1976.
[194] P. Scholz. Die Darwinsche Evolution als Strategiemodell für die numerische Optimierung von Parametern nicht-linearer Regressionsfunktionen. EDV in Medizin und Biologie, 13(2):36–43, 1982.
[195] E. Schöneburg. Auftrags- und Montagereihenfolge-Optimierung mit Expertensystemen und Evolutionsstrategien. In P. Mertens, H.-P. Wiendahl, and H. Wildemann, editors, PPS im Wandel – Kundenorientierung und Wirtschaftlichkeit durch innovative PPS-Lösungen, pages 221–228. gfmt Verlags KG, München, 1992.
[196] E. Schöneburg and F. Heinzmann. Industrielle Planung mit Methoden der Künstlichen Intelligenz. DV-Management, (1):26–29, 1991.
[197] E. Schöneburg and F. Heinzmann. PERPLEX: Produktionsplanung nach dem Vorbild der Evolution. Wirtschaftsinformatik, 2:224–232, April 1992.
[198] R. Schultheis, R. Rautenbach, and G. Bindl. Entwicklung von Ventrikelmodellen nach dem Prinzip der biologischen Evolution. In Fachtagungen Medex, volume 21 of Biomedizinische Technik, pages 197–198, Basel, Schweiz, June 1976. Ergänzungsband.
[199] M. Schütz and J. Sprave. Application of parallel mixed-integer evolution strategies with mutation rate pooling. In L.J. Fogel, P.J. Angeline, and Th. Bäck, editors, Proceedings of the Fifth Annual Conference on Evolutionary Programming, pages 345–354, Cambridge, MA, 1996. MIT Press.
[200] H.-P. Schwefel. Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik. Diplomarbeit, Technische Universität Berlin, 1965.
[201] H.-P. Schwefel. Evolutionsstrategie und numerische Optimierung. Dissertation, Technical University of Berlin, Germany, 1975.
[202] H.-P. Schwefel. Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, volume 26 of Interdisciplinary Systems Research. Birkhäuser, Basel, 1977.
[203] H.-P. Schwefel. Numerical Optimization of Computer Models. Wiley, Chichester, 1981.
[204] H.-P. Schwefel. Collective phenomena in evolutionary systems. In Preprints of the 31st Annual Meeting of the International Society for General System Research, Budapest, volume 2, pages 1025–1033, 1987.
[205] H.-P. Schwefel. Optimum seeking by imitating natural intelligence. In Workshop Adaptive Learning, pages 179–182, Schloß Reisensburg (Günzburg), July 1989. Forschungsinstitut für anwendungsorientierte Wissensverarbeitung (FAW).
[206] H.-P. Schwefel. Imitating evolution: Collective, two-level learning processes. In U. Witt, editor, Explaining Process and Change – Approaches to Evolutionary Economics, pages 49–63. The University of Michigan Press, Ann Arbor, MI, 1992.
[207] H.-P. Schwefel. Evolution and Optimum Seeking. Sixth-Generation Computer Technology Series. Wiley, New York, 1995.
[208] H.-P. Schwefel and Th. Bäck. Artificial evolution: how and why? In D. Quagliarella, J. Périaux, C. Poloni, and G. Winter, editors, Genetic Algorithms in Engineering and Computer Science, chapter 1, pages 1–19. Wiley, Chichester, 1997.
[209] H.-P. Schwefel and G. Rudolph. Contemporary evolution strategies. In F. Morán, A. Moreno, J.J. Merelo, and P. Chacón, editors, Advances in Artificial Life. Third European Conference on Artificial Life, Granada, Spain, 1995. Springer.
[210] H.-P. Schwefel. Projekt MHD-Staustrahlrohr: Experimentelle Optimierung einer Zweiphasendüse, Teil I. 11.034/68, Bericht 35, AEG Forschungsinstitut, Berlin, October 1968.
[211] C. Seeger. Simulation of economic development and the impacts on the public budget. In L. Dekker, editor, Simulation of Systems, pages 683–689. North-Holland, 1976.
[212] C. Seeger. Simulation optimaler Politiken mit linearen Modellen. In Proceedings of the International Symposium Simulation 77, pages 421–427, July 22–24, Montreux, Switzerland, 1977.
[213] C. Seeger and G. Wiegand. Kontrolltheorie und numerische Optimierung. In Proceedings in Operations Research, volume 7, pages 60–70, Würzburg, 1978. Physica-Verlag.
[214] H. Sonnenschein. A modular optimization calculation method of power station energy balance and plant efficiency. Journal of Engineering for Power, 104:255–259, April 1982.
[215] J. Sprave. Linear neighborhood evolution strategy. In A.V. Sebald and L.J. Fogel, editors, Proceedings of the Third Annual Conference on Evolutionary Programming, pages 42–51, Singapore, 1994. World Scientific.
[216] K. Sputek. Anwendung evolutorischer Suchstrategien zur Top-down-Rechnung von Budgetierungsmodellen. Diplomarbeit, Fachbereich Wirtschaftswissenschaften, TU Berlin, October 1984.
[217] W. Stich, P.M. Gross, and R.M. Dreizler. Accurate solution of the Thomas-Fermi-Dirac-Weizsäcker variational equations for the case of neutral atoms and positive ions. Zeitschrift für Physik A – Atoms and Nuclei, 309:5–11, 1982.
[218] G. Taguchi. Introduction to Quality Engineering. American Supplier Institute, 1989.
[219] K. Thonke, H.U. Hermann, and J. Schneider. A Zeeman study of the 1.54 μm transition in molecular beam epitaxial GaAs:Er. J. Phys. C: Solid State Phys., 21:5881–5888, 1988.
[220] G.R. Tremmel. Analyse von Regelstrecken mit Hilfe der Evolutionsstrategie - Ergebnis
der ersten Untersuchungen. Technical report, Fa. Hartmann & Braun, Frankfurt/Main,
1974.
[221] M.-T. Unverfehrt. Untersuchung zur Anwendung eines Verfahrens der Evolutionsstrategie bei der Optimierung von Leichtbaustrukturen. Studienarbeit, Institut für Luft- und Raumfahrttechnik, TU Berlin, November 1970.
[222] H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, editors. Parallel Problem Solving from Nature IV. Proceedings of the International Conference on Evolutionary Computation, volume 1141 of Lecture Notes in Computer Science. Springer, Berlin,
1996.
[223] H. Wagner. Procedures for the solution of the unit commitment problem. In H.J. Wacker, editor, Applied Optimization Techniques in Energy Problems, pages 449–470, Stuttgart, 1985. Teubner.
[224] M. Walk and J. Niklaus. Some remarks on computer-aided design of optical lens systems. Journal of Optimization Theory and Applications, 59(2):173–181, 1988.
[225] Qizhong Wang. Optimization by simulating molecular evolution. Biol. Cybern., 57:95–101, 1987.
[226] F. Weiland. Evolutionsstrategien und Expertensysteme: Ein ratiomorpher Algorithmus (RA). In F.L. Wilke, editor, CAD/CAM und Expertensysteme im deutschen Bergbau, pages 224–243, 16.–17. November, Berlin, 1989. Institut für Bergbauwissenschaften, TU Berlin.
[227] F. Werner. Ein adaptives stochastisches Suchverfahren für spezielle Reihenfolgeprobleme. Ekonomicko-Matematicky Obzor, 24(1):50–67, 1988.
[228] L. D. Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In J. D. Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 116–121. Morgan Kaufmann Publishers, San Mateo, CA, 1989.
[229] I. Wiedemann, A. Höfler, and U. Breitling. Entwurf und Entwicklung von Fachwerken. Interdisziplinäre Forschungsprojekte an der TU Berlin, 9(2/3):222–235, 1977.
[230] Dirk Wiesmann, Ulrich Hammel, and Thomas Bäck. Robust design of multilayer optical coatings by means of evolution strategies. IEEE Transactions on Evolutionary Computation. (submitted).
[231] V. Wilms. Auslegung von Bolzenverbindungen mit minimalem Bolzengewicht. Konstruktion, 34(2):63–70, 1982.
[232] E. Winkler. Optimum design of gamma-irradiation plants by means of mathematical methods. Radiat. Phys. Chem., 26(5):599–601, 1985.
[233] E. Winkler. A mathematical approach to the optimum design of gamma irradiation facilities. Isotopenpraxis, 22(1):7–11, 1986.
[234] A. Wittmus, R. Straubel, and R. Rosenmüller. Interactive multi-criteria decision procedure for macroeconomic planning. Syst. Anal. Model. Simul., 1(5):411–424, 1984.
[235] C. Woenckhaus. Konzeption eines Systems zur automatischen 3D-Layoutoptimierung. Robotersysteme, 8:239–244, 1992.
[236] X.-L. Wu. Darwin's ideas applied to magnetic resonance. The marriage broker. Journal of Magnetic Resonance, 85:414–420, 1989.
[237] D. Zetsche. Die Anwendung moderner regelungstheoretischer Verfahren zur Synthese einer aktiven Federung. Dissertation, Fachbereich Maschinentechnik, Universität-Gesamthochschule Paderborn, February 1982.
[238] A. Ziegler and W. Rucker. Die Optimierung der Strahlungscharakteristik linearer Antennengruppen mit Hilfe der Evolutionsstrategie. AEÜ, 40(1):15–18, 1986.