
Multi-objective Optimization
Review of Techniques and Applications

Sambit Kumar Nandi
Roll: 001011701025
Department of Production Engineering, Jadavpur University

Seminar, 4th Year, 1st Semester
Outline
- What is MOO?
- MOO: Pareto optimality
- Methods: articulation of preferences (classification)
- Meta-heuristic methods
- MOEAs: GAs, DEMO
- Swarm intelligence: PSO, ACO, AIS
- Case studies: peaks function, ZDT1
- Comparison of techniques
- Conclusion and future work
What is Multi-objective Optimization?

min f(x)
s.t. g(x) ≤ 0
     h(x) = 0

where f(x)^T = { f1(x), f2(x), ..., fm(x) }, g(x)^T = { g1(x), g2(x), ..., gk(x) }, h(x)^T = { h1(x), h2(x), ..., hl(x) }, and x^T = { x1, x2, ..., xn } ∈ X.

- One optimum versus multiple optima.
- Requires both search and decision-making.
- Two spaces of interest (decision space and objective space) instead of one.
Pareto Optimality: Concept of Dominance

- Relates to the concept of domination.
- x(1) dominates x(2) if:
  - x(1) is no worse than x(2) in all objectives, and
  - x(1) is strictly better than x(2) in at least one objective.
- Examples (points in the slide's figure): solution 3 dominates solution 2; solution 3 does not dominate solution 5.
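A minimal sketch of this dominance test in Python (for minimization; the function name is illustrative):

```python
def dominates(a, b):
    """Return True if objective vector a dominates b (minimization).

    a dominates b when a is no worse in every objective and strictly
    better in at least one objective.
    """
    no_worse = all(ai <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai < bi for ai, bi in zip(a, b))
    return no_worse and strictly_better
```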
Pareto Optimal Front

- A feasible vector x* is called Pareto optimal if there is no other feasible solution y that would reduce some objective function without causing a simultaneous increase in at least one other objective function.
- Pareto-optimal set = Non-dominated(S).
- A number of solutions are optimal.
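Building on the dominance test above, a minimal sketch of extracting the non-dominated set in Python:

```python
def pareto_front(points):
    """Filter a list of objective vectors down to the non-dominated ones."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: (1, 5) and (2, 2) survive; (3, 4) is dominated by (2, 2).
print(pareto_front([(1, 5), (3, 4), (2, 2)]))
```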
Methods to Perform Multi-objective Optimization

[Figure: classification of MOO methods by articulation of preference information.]
Methods: No Articulation of Preference Information

Min-max formulation:
- No preference information from the decision-maker is necessary.
- Each run yields one point on the Pareto front.
- The Pareto front can be traced by:
  - changing the value of p (in the underlying Lp-metric), or
  - giving the single objectives different weightings.
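A minimal sketch of this scalarization in Python: a weighted Lp-metric whose p → ∞ limit is the min-max formulation. The two objectives, utopia point, and weights are illustrative assumptions, not from the slides:

```python
import numpy as np
from scipy.optimize import minimize

def lp_scalarize(objectives, utopia, weights, p):
    """Weighted Lp distance to the utopia point; p = inf gives the min-max form."""
    def scalar(x):
        devs = [w * abs(f(x) - z) for f, z, w in zip(objectives, utopia, weights)]
        return max(devs) if np.isinf(p) else sum(d ** p for d in devs) ** (1.0 / p)
    return scalar

# Illustrative conflicting objectives with utopia point (0, 0):
objs = [lambda x: float(x[0]) ** 2, lambda x: (float(x[0]) - 2.0) ** 2]
res = minimize(lp_scalarize(objs, [0.0, 0.0], [0.5, 0.5], p=np.inf),
               x0=[1.0], method="Nelder-Mead")   # derivative-free: max() is non-smooth
print(res.x)  # one point on the Pareto front; vary p or the weights for others
```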
Methods: A Priori Articulation of Preference Information

Weighted Sum Approaches:
- The easiest and perhaps the most widely used method.
- The objective function is formulated as a weighted Lp-metric.
- By choosing different weights, the DM arrives at a solution.
- Disadvantage: all objective functions might not be considered.

Fuzzy Logic Approaches:
- Based on a multi-valued logic that expresses the degree of truthfulness.
- A normalized value μi(fi(x)) is associated with each objective.
- Broadly applicable to probabilistic DM situations.

Lexicographic Approaches:
- Objectives are ranked according to importance.
- Solutions are ordered by first evaluating them on the foremost objective.
- Used jointly with other techniques, e.g. goal programming and selection algorithms (a sketch follows below).
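A minimal lexicographic sketch in Python: optimize the foremost objective, then optimize the next one while constraining the first near its optimum. The two objectives and the tolerance are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

f1 = lambda x: float(x[0]) ** 2            # foremost objective (illustrative)
f2 = lambda x: (float(x[0]) - 2.0) ** 2    # second-ranked objective (illustrative)

# Step 1: optimize the foremost objective alone.
best1 = minimize(f1, x0=[1.0])

# Step 2: optimize the next objective, keeping f1 within a tolerance of its optimum.
keep_f1 = NonlinearConstraint(f1, -np.inf, best1.fun + 1e-6)
best2 = minimize(f2, x0=best1.x, constraints=[keep_f1])
print(best2.x)   # lexicographically optimal point
```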
Methods: Progressive Articulation of Preference Information

- No a priori preference information is needed; only local preference information is required.
- It is a learning process in which the DM gains a better understanding of the problem.
- As the DM takes an active part in the search, it is more likely that he/she accepts the final solution.

Disadvantages:
- The solutions depend on how well the DM can articulate his/her preferences.
- A high effort is required from the DM during the whole search process.
- The required computational effort is higher than in the previous methods.

Examples: STEM method, Steuer method.
Methods: A Posteriori Articulation of Preference Information

- In some cases it is difficult for a decision-maker to express an explicit approximation of the preference function.
- It is effective to allow the decision-maker to choose from a palette of solutions generated by an algorithm for the Pareto-optimal set.
- Methods of obtaining the Pareto set include Physical Programming, Normal Boundary Intersection, and the Normal Constraint method.
- Some of these methods suffer from a large computational burden.
- A disadvantage might be that the DM has too many solutions to choose from.
General Optimization Methods

- This study focuses on meta-heuristic methods.
- They are the most suitable for complex engineering problems.
- Also known as black-box methods.
- Advantage: they are more likely to find a global optimum and not get stuck in a local optimum.
Meta-heuristic Methods

Random Search:
- Randomly picked x values; the search converges toward the optimum.
- Easy to implement.
- Distinguishing global from local maxima is easy.

Tabu Search:
- Always move to the best available neighbourhood solution point, even if it is worse than the current solution.
- Maintain a list of visited solution points (the tabu list).
- Update this list based on some memory structure.
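A minimal tabu-search sketch in Python on an integer line; the neighbourhood, tabu tenure, and objective are illustrative assumptions:

```python
from collections import deque

def tabu_search(f, x0, iters=100, tenure=7):
    """Minimize f by always moving to the best non-tabu neighbour."""
    current = best = x0
    tabu = deque(maxlen=tenure)              # short-term memory of visited points
    for _ in range(iters):
        neighbours = [current - 1, current + 1]
        candidates = [x for x in neighbours if x not in tabu] or neighbours
        current = min(candidates, key=f)     # accept even if worse than before
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

print(tabu_search(lambda x: (x - 42) ** 2, x0=0))  # -> 42
```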
Meta-heuristic Methods

Simulated Annealing:
- A random-search technique that exploits an analogy between the way a metal cools and freezes into a minimum-energy crystalline structure and the search for a minimum.
- A temperature control parameter is reduced over time.
- SA's major advantage is its ability to avoid becoming trapped in local minima.
- The algorithm employs a random search that accepts not only improving moves but also worsening moves, with a certain probability.
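A minimal SA sketch in Python; the Gaussian neighbourhood and geometric cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.95, iters=1000):
    """Minimize f, accepting uphill moves with Boltzmann probability exp(-dE/T)."""
    x, t = x0, t0
    for _ in range(iters):
        candidate = x + random.gauss(0.0, 1.0)       # random neighbour
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate                            # accept (possibly worse) move
        t *= cooling                                 # temperature reduces with time
    return x

print(simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0))
```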
Evolutionary Multi-objective Optimization

Inspired by Darwinian evolution: survival of the fittest.

1. Initialize a population with random genomes.
2. Evaluate the objectives.
3. Study features to assign fitness.
4. Select interesting individuals and insert them into the mating pool.
5. Create new individuals using mutation and/or recombination.
6. Integrate the offspring into the population (a sketch of this loop follows).
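A minimal single-objective version of the loop above in Python; the real-valued encoding, binary tournament selection, and Gaussian mutation are illustrative assumptions:

```python
import random

def evolve(f, pop_size=50, generations=100):
    """Minimize f(x) over floats with a tiny mutation-only GA."""
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]   # 1. random genomes
    for _ in range(generations):
        fitness = {x: f(x) for x in pop}                       # 2-3. evaluate, assign fitness
        # 4. binary tournament selection into the mating pool
        pool = [min(random.sample(pop, 2), key=fitness.get) for _ in range(pop_size)]
        # 5. create new individuals by Gaussian mutation
        offspring = [x + random.gauss(0.0, 0.5) for x in pool]
        # 6. integrate offspring: keep the best pop_size of parents + offspring
        pop = sorted(pop + offspring, key=f)[:pop_size]
    return pop[0]

print(evolve(lambda x: (x - 5) ** 2))
```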
Evolutionary Multi-objective Optimization

Genetic Algorithms:
- Parents are stochastically selected and copied to an offspring vector.
- Recombination and mutation operators are applied in order to form new offspring.
- Parents and offspring are joined; the population size varies between 2N and N during truncation.
- Examples: NSGA-II, SPEA.

Differential Evolution:
- The main difference is in how the selection and mating operators are applied: several individuals are combined to create a candidate offspring that competes against its parent.
- DE enhances elitism by applying further rules after having created the candidate offspring.
- Examples: DEMO variants such as DEMO-NS2 and DEMO-SPE.
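A minimal DE/rand/1/bin sketch in Python showing the candidate-vs-parent competition; F, CR, and the sphere objective are illustrative assumptions:

```python
import random

def differential_evolution(f, dim=5, pop_size=20, gens=200, F=0.8, CR=0.9):
    """Minimize f over R^dim with DE/rand/1/bin: each child competes with its parent."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i, parent in enumerate(pop):
            # Combine three other individuals to build the candidate offspring.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # guarantee at least one mutated gene
            child = [a[j] + F * (b[j] - c[j]) if (random.random() < CR or j == j_rand)
                     else parent[j] for j in range(dim)]
            if f(child) <= f(parent):        # selection: child replaces parent only if no worse
                pop[i] = child
    return min(pop, key=f)

print(differential_evolution(lambda x: sum(v * v for v in x)))
```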
Evolutionary Multi-objective Optimization

Non-dominated Sorting GA (NSGA-II)
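A minimal sketch of NSGA-II's non-dominated sorting in Python, reusing the dominates() helper from the dominance slide. This is a simple repeated-filtering variant rather than the bookkeeping implementation of Deb et al. (2002), and crowding distance is omitted:

```python
def fast_non_dominated_sort(points):
    """Group objective vectors into successive Pareto fronts F1, F2, ..."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        # Members of the current front: dominated by nobody still remaining.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

print(fast_non_dominated_sort([(1, 5), (3, 4), (2, 2), (4, 4)]))
```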
Swarm Intelligence Approaches

Particle Swarm Optimization:
- A form of swarm intelligence in which the behaviour of a flock of birds or a school of fish is simulated.
- Positions and velocities of all individuals are randomly initialized.
- For the best position, it can make use of memory or of communication with other particles.
- The communication method slows convergence speed, but reaching the global optimum becomes more likely.

Ant Colony Optimization:
- Finds optimal paths in graphs; ACO is based on the metaphor of ants seeking food.
- First, a set of ants performs randomized walks through the graph.
- Pheromones are laid out on paths in proportion to the goodness of the solutions; over time, the pheromone trail starts to evaporate.
- A short path gets marched over more frequently, and thus its pheromone density becomes higher.
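A minimal global-best PSO sketch in Python, showing the memory (personal best) and communication (global best) terms; the inertia and acceleration coefficients are illustrative assumptions:

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f with global-best PSO: particles track personal and swarm bests."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # memory: each particle's best position
    gbest = min(pbest, key=f)                # communication: the swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso(lambda x: sum(v * v for v in x)))
```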
Swarm Intelligence Approaches

Artificial Immune System:
- Proposed by Dasgupta in 1999.
- The artificial immune algorithm is based on the clonal selection principle and is a population-based algorithm inspired by the human immune system.
- Exhibits the following strengths: immune recognition, reinforcement learning, feature extraction, immune memory, diversity, and robustness.
- The artificial immune system (AIS) combines these strengths and has been gaining significant attention due to its powerful adaptive learning and memory capabilities.

Algorithm:
1. Initialization of antibodies (potential solutions to the problem); the antigen is the objective function f(x) to be optimized.
2. Cloning: the fitness of each antibody is determined, and based on this fitness the antibodies are cloned; the best are cloned the most.
3. Hypermutation: the best antibodies' clones are mutated least, and the worst antibodies' clones are mutated most.
4. Selection: the mutated clones are evaluated along with their original antibodies, out of which the best N antibodies are retained.
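A minimal clonal-selection sketch in Python following steps 1-4; the clone count and inverse-rank mutation scale are illustrative assumptions:

```python
import random

def clonal_selection(f, pop_size=20, gens=100, clones_per=5):
    """Minimize f: clone good antibodies, hypermutate weak clones more strongly."""
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]     # 1. initialize antibodies
    for _ in range(gens):
        ranked = sorted(pop, key=f)                              # 2. fitness: best first
        clones = []
        for rank, antibody in enumerate(ranked):
            scale = 0.1 * (rank + 1)                             # 3. worse rank -> stronger mutation
            clones += [antibody + random.gauss(0.0, scale) for _ in range(clones_per)]
        pop = sorted(ranked + clones, key=f)[:pop_size]          # 4. keep the best N antibodies
    return pop[0]

print(clonal_selection(lambda x: (x + 2) ** 2))
```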
Hybrid Methods

Pipeline:
- Methods are implemented sequentially to speed up optimization.
- First, an algorithm finds promising regions of the solution space; the search then switches to a method with a higher convergence rate in order to speed it up.
- Example: a GA combined with a gradient-based algorithm.

Asynchronous:
- Methods work on different subsets of the solution space.
- If the individuals from such a subset outperform those in the shared population, they are allowed to immigrate into it.
- A method that converges slowly can be combined with one that converges faster, and a method that performs well on one subset of the search space can be combined with one that performs well on another.
- Example: genetic algorithms with multiple populations.
Hybrid Methods

Hierarchical:
- Different techniques are used at different levels.
- At the overall level, a robust optimization strategy finds an optimal layout.
- At the lower level, the system is less complex and less sensitive, so it is appropriate to employ, for instance, purely analytical methods.

Additional Operators:
- Operators from one method are added to, or even replace, the standard operators of another method.
- Examples: hybrids of SA and GA in which, among other things, replacement is done according to a simulated-annealing scheme; a hybrid of Differential Evolution with an immune algorithm (Yildiz, 2012).
Case Study: Peaks Problem

- To establish the utility of multi-objective optimization, consider a bi-objective problem built from two conflicting peaks functions.

Objective function: Z = max[ ... ] (the full peaks expressions were given as an equation on the slide).
Case Study: Peaks Problem

[Figure: surface plots for optimizing J1 alone and J2 alone.]

Both functions are of a conflicting nature and are here optimized individually.
Case Study: Peaks Problem

Optimum of J1 alone:  x1* = (0.0532, 1.5973), J1(x1*) = 8.9280, J2(x1*) = -4.8202
Optimum of J2 alone:  x2* = (-1.5808, 0.0095), J2(x2*) = 8.1118, J1(x2*) = -6.4858

Each point x1* and x2* optimizes objectives J1 and J2 individually. Unfortunately, at these points the other objective exhibits a low objective function value. There is no single point that simultaneously optimizes both objectives J1 and J2!
Case Study: Peaks Problem

For better performance on the concerned MOP, let us consider a combined function:

Jtot = J1 + J2
Case Study: Peaks Problem

Result of optimizing the combined function Jtot = J1 + J2:

xtot* = (0.8731, 0.5664)
J1(xtot*) = 3.0173
J2(xtot*) = 3.1267
Jtot* = 6.1439

Compared with the previous single-objective solutions, this is a trade-off solution.

*An array of optimal points may be obtained by weighting the objectives in the sum according to the user's preference.
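A minimal sketch of that weight sweep in Python. The functions j1 and j2 are hypothetical stand-ins for the slide's J1 and J2 (whose exact peaks expressions are not given here): two Gaussian bumps with maxima at different locations, so they conflict.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for J1 and J2, peaking near the slide's x1* and x2*.
j1 = lambda x: np.exp(-((x[0] - 0.05) ** 2 + (x[1] - 1.60) ** 2))
j2 = lambda x: np.exp(-((x[0] + 1.58) ** 2 + (x[1] - 0.01) ** 2))

trade_offs = []
for w in np.linspace(0.0, 1.0, 21):
    # Maximize the weighted sum w*J1 + (1-w)*J2 by minimizing its negative.
    res = minimize(lambda x: -(w * j1(x) + (1 - w) * j2(x)), x0=[-0.5, 0.8])
    trade_offs.append((j1(res.x), j2(res.x)))
print(trade_offs)  # an array of trade-off points between J1 and J2
```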
Case Study: ZDT1

- Convex test problem for Pareto optimality (Zitzler et al., 2000).
- Parameter domain: xi ∈ [0, 1], with m = 30 decision variables.
- Widely used as an MOO benchmark.
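A sketch of the standard ZDT1 definition in Python (per Zitzler et al., 2000):

```python
import numpy as np

def zdt1(x):
    """ZDT1 benchmark: convex Pareto front, x_i in [0, 1], typically m = 30 variables."""
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

x = np.random.rand(30)   # a random point in the domain [0, 1]^30
print(zdt1(x))           # the true front lies at g = 1, where f2 = 1 - sqrt(f1)
```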
Case Study: ZDT1

GA initial conditions:
- Population: 100
- Generations: 1000
Case Study: ZDT1

With the same GA initial conditions (population 100, generations 1000):

If a single-objective optimization were run on each of the two functions individually, each run would yield only a single extreme point. Hence MOO provides better flexibility.
Comparison of Different Techniques

- A suitable optimization method depends on the nature of the problem.
- Parameters for comparison: speed, computational complexity, and convergence.
- GAs are good for multimodal problems, but at a higher computational cost.
- SA and Tabu Search are efficient for combinatorial problems and have lower computational complexity.
Comparison of Different Techniques

GA vs. DEMO:
- DE mostly shows better results than GA, even on multi-modal, multi-dimensional, and noisy spaces.
- DE's CR and F parameters do not require the same fine-tuning.

GA vs. PSO:
- PSO is easier to implement, and there are fewer parameters to adjust.
- PSO has a more effective memory capability than GA.
- PSO is more efficient at maintaining the diversity of the swarm; in a GA, the population evolves around a subset of the best individuals.
- Similarities: both initialize a population of solutions and update it over generations. In PSO, particles try to reach the optimum by following the current global best instead of using mutation and crossover.
SUMMARY

Goal Programming (Charnes and Cooper, 1961)
- Principle: combines the logic of optimization in mathematical programming with the decision maker's desire to satisfy several goals; a form of multi-objective optimization.
- Applications: optimization problems in data mining, scheduling problems, assignment problems, flight control system design, pattern recognition.

RSM (Box and Draper, 1987)
- Operator: Design Expert software (DX6); design of experiments.
- Principle: based on a machining model developed by mathematical and statistical techniques; fast and simple.

Scatter Search (Chen, 2003)
- Operator: a program designed by Laguna and Marti in C code.
- Principle: a generalized optimization methodology for machining problems, with no restrictive assumptions about the objective function, parameter set, or constraint set.
- Applications: neural networks, multi- and mono-objective routing problems, graph drawing, scheduling, and colouring problems.

Taguchi Technique (Taguchi and Wu, 1979)
- Operator: design of experiments, orthogonal arrays, ANOVA.
- Principle: based on actual experimental work and determination of optimum conditions using statistical tools.
- Applications: engineering optimization; quality control, biotechnology, marketing and advertising.

Genetic Algorithms (Rechenberg, 1973; Holland, 1975)
- Operator: a CGI (common gateway interface) program.
- Principle: based on a machining model developed from theoretical analysis, experimental databases and numerical methods.
- Applications: state assignment problem, TSP, economics, scheduling, CAD.

Simulated Annealing (Kirkpatrick, 1983)
- Operator: Euclidean distance between the solution and a utopian point; Boltzmann probability.
- Principle: simulates the annealing of metals to a minimized atomic-energy state; efficiently avoids local optima.
- Applications: multi-objective optimization of intelligent structures; optimization of optical multilayer thin-film devices.
SUMMARY (continued)

NSGA-II (Deb, 2002)
- Operator: non-dominated sorting, crowding distance.
- Principle: before selection is performed, the population is ranked on the basis of non-domination.
- Applications: manufacturing, machining parameter optimization, complex MOO.

PSO (Kennedy and Eberhart, 1995)
- Operator: D-dimensional vectors for position and speed, best states; initializer, updater and evaluator.
- Principle: computational-intelligence-oriented, stochastic, population-based global optimization technique.
- Applications: multimodal biomedical image registration, edge detection in noisy images, finding optimal machining parameters, assembly-line balancing in production and operations management.

ABC (Teodorovic)
- Operator: solutions represented as D-dimensional vectors; reproduction, replacement of bee, selection.
- Principle: based on the behaviour of bees in nature.
- Applications: scheduling problems, image segmentation, capacitated vehicle routing, wireless sensor networks (WSNs), assembly-line balancing, reliability redundancy allocation.

AIS (Dasgupta, 1999)
- Operator: attribute strings (a real-valued vector, integer string, binary string, or symbolic string); initialization, cloning, hypermutation.
- Principle: the artificial immune algorithm is based on the clonal selection principle and is a population-based algorithm.
- Applications: computer security, anomaly detection, clustering/classification, numeric function optimization, multi-modal optimization, job-shop scheduling.

ACO (Dorigo and Di Caro, 1999)
- Operator: solutions represented on an undirected graph; pheromone update and measure, trail evaporation.
- Principle: a metaheuristic inspired by the foraging behaviour of ants in the wild.
- Applications: TSP, quadratic assignment problem (QAP), job-shop scheduling, dynamic data-network routing.
Conclusion

- The study discusses various methods to perform MOO.
- A variety of techniques have been discussed, along with methods of hybridisation.
- The study shows that the application and growth of natural computing in recent years has been remarkable, and it has been applied to numerous optimization problems.
- However, further investigation is needed to consider more kinds of test problems with different MO optimization difficulties, such as heavy function evaluation, heavy constraints, and dynamic search spaces.
References

[1] Andersson, J., A survey of multiobjective optimization in engineering design. Reports of the Department of Mechanical Engineering, 2000.
[2] Deb, K., Multi-objective optimization using evolutionary algorithms. 2001: John Wiley & Sons.
[3] Deb, K., et al., A fast and elitist multiobjective genetic algorithm: NSGA-II. Evolutionary Computation, IEEE Transactions on, 2002. 6(2): p. 182-197.
[4] Eberhart, R.C., Y. Shi, and J. Kennedy, Swarm intelligence. 2001: Access Online via Elsevier.
[5] Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning. 1989: Addison-Wesley.
[6] Zitzler, E., et al., SPEA2: Improving the strength Pareto evolutionary algorithm. 2001, Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK).
[7] Xu, X. and Y. Li, Comparison between particle swarm optimization, differential evolution and multi-parents crossover. In Computational Intelligence and Security, 2007 International Conference on. 2007: IEEE.
[8] Holland, J.H., Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. 1975: University of Michigan Press.
[9] Dorigo, M. and L.M. Gambardella, Ant colonies for the travelling salesman problem. BioSystems, 1997. 43(2): p. 73-81.