
Numerical Methods for Civil Engineers

Lecture 11 OPTIMIZATION
- Basic Ideas
- One-dimensional Unconstrained Optimization
- Golden-Section Search
- Quadratic Interpolation
- Newton's Method
- Multidimensional Unconstrained Optimization
- Direct Methods
- Gradient Methods

Mongkol JIRAVACHARADET
School of Civil Engineering
Suranaree University of Technology, Nakornratchsima
BASIC IDEAS

At an optimum, f′(x) = 0: the point is a maximum if f″(x) < 0 and a minimum if f″(x) > 0. A root, by contrast, is a point where f(x) = 0.

[Figure: a curve f(x) showing roots where f(x) = 0, a maximum where f′(x) = 0 and f″(x) < 0, and a minimum where f′(x) = 0 and f″(x) > 0.]
Examples of Optimization Problems in Engineering

- Design aircraft for minimum weight and maximum strength.


- Optimal trajectories of space vehicles.
- Design civil engineering structures for minimum cost.
- Predict structural behavior by minimizing potential energy.
- Material-cutting strategy for minimum cost.
- Design pump and heat transfer equipment for maximum efficiency.
- Shortest route for a salesperson visiting various cities during one sales trip.
- Optimal pipeline network.
- Optimal planning and scheduling.
- Statistical analysis and models with minimum error.
- Minimize waiting and idling times.
One-dimensional Unconstrained Optimization
Global & Local Optima

[Figure: a curve f(x) exhibiting a global maximum, a local maximum, a local minimum, and a global minimum.]
GOLDEN-SECTION SEARCH

- Similar to the bisection root-finding method

- Define an interval that contains a single optimum

- Need 3 points to detect a maximum or minimum

- Pick a 4th point, then keep either the first three or the last three points.
Choice of Intermediate Points

[Figure: interval from xL to xU containing a maximum of f(x); the first iteration spans L0, which is divided into subintervals L1 and L2.]

First iteration:   L0 = L1 + L2
Second iteration:  L1/L0 = L2/L1

From the two conditions,

L1/(L1 + L2) = L2/L1

Set R = L2/L1, giving

1 + R = 1/R,  or  R² + R − 1 = 0

Solving for the positive root,

R = (−1 + √(1 + 4(1))) / 2 = (√5 − 1)/2 = 0.61803

R = 0.61803 is the golden ratio; it allows optima to be located efficiently.
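A two-line numeric check in MATLAB (illustrative, not in the original slides) that R satisfies the defining relation:

>> R = (sqrt(5)-1)/2    % 0.61803...
>> R^2 + R - 1          % essentially zero, up to round-off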
The Parthenon in Athens

[Figure: the Parthenon's facade fits a rectangle whose sides are in the ratio 0.61803x : x.]

This golden ratio was considered aesthetically pleasing by the Greeks.
Initial Step of the Golden-section Search
[Figure: bracket [xL, xU] with interior points x2 and x1, each a distance d from an endpoint; comparing f(x1) and f(x2) eliminates part of the bracket.]

1) Guess an initial bracket xL and xU

2) Choose two interior points x1 and x2 according to the golden ratio,

   d = (√5 − 1)/2 · (xU − xL)

   x1 = xL + d,  x2 = xU − d

3) If f(x1) > f(x2), eliminate [xL, x2] and set xL = x2 for the next round
Next Step to Complete the Algorithm
[Figure: the new bracket [xL, xU]; the old x1 becomes the new x2, so only one new interior point is required.]

4) Only the new x1 needs to be determined:

   x1 = xL + (√5 − 1)/2 · (xU − xL)
Example: Golden-Section Search to find maximum
f(x) = 2 sin x − x²/10,  xL = 0 and xU = 4

Solution: (1) Create two interior points,

d = (√5 − 1)/2 · (4 − 0) = 2.472

x1 = 0 + 2.472 = 2.472
x2 = 4 − 2.472 = 1.528

(2) Evaluate the function at the interior points,

f(x1) = f(2.472) = 2 sin(2.472) − 2.472²/10 = 0.63
f(x2) = f(1.528) = 1.765

(3) Because f(x2) > f(x1), eliminate the upper part:

New xU = x1 = 2.472
New x1 = x2 = 1.528

(4) Compute the new x2:

d = (√5 − 1)/2 · (2.472 − 0) = 1.528
x2 = 2.472 − 1.528 = 0.944

(5) Evaluate the function at x2:

New f(x1) = old f(x2) = 1.765
New f(x2) = f(0.944) = 1.531

Because f(x1) > f(x2), eliminate the lower part, . . .
Tabulated results:

i   xL      x2      f(x2)   x1      f(x1)   xU      d
1   0       1.5279  1.7647  2.4721  0.6300  4.0000  2.4721
2   0       0.9443  1.5310  1.5279  1.7647  2.4721  1.5279
3   0.9443  1.5279  1.7647  1.8885  1.5432  2.4721  0.9443
4   0.9443  1.3050  1.7595  1.5279  1.7647  1.8885  0.5836
5   1.3050  1.5279  1.7647  1.6656  1.7136  1.8885  0.3607
6   1.3050  1.4427  1.7755  1.5279  1.7647  1.6656  0.2229
7   1.3050  1.3901  1.7742  1.4427  1.7755  1.5279  0.1378
8   1.3901  1.4427  1.7755  1.4752  1.7732  1.5279  0.0851
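A minimal MATLAB implementation sketch of the algorithm (not part of the original slides; the file name goldsect.m and its signature are hypothetical), which reproduces the tabulated iterations:

function [xopt,fopt] = goldsect(f,xL,xU,niter)
% Golden-section search for a maximum of f on [xL,xU]
R = (sqrt(5) - 1)/2;               % golden ratio, 0.61803...
d  = R*(xU - xL);
x1 = xL + d;  x2 = xU - d;
f1 = f(x1);   f2 = f(x2);
for i = 1:niter
    d = R*d;                       % bracket shrinks by R each pass
    if f1 > f2                     % maximum lies in [x2, xU]
        xL = x2;
        x2 = x1;  f2 = f1;         % old x1 becomes the new x2
        x1 = xL + d;  f1 = f(x1);
    else                           % maximum lies in [xL, x1]
        xU = x1;
        x1 = x2;  f1 = f2;         % old x2 becomes the new x1
        x2 = xU - d;  f2 = f(x2);
    end
end
if f1 > f2, xopt = x1; fopt = f1; else xopt = x2; fopt = f2; end

>> f = @(x) 2*sin(x) - x.^2/10;
>> [xopt,fopt] = goldsect(f,0,4,20)   % xopt = 1.4276, fopt = 1.7757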
Quadratic Interpolation
A 2nd-order polynomial often provides a good approximation of f(x) near an optimum.

[Figure: a quadratic through x0, x1, x2 whose maximum at x3 approximates the true maximum of f(x).]

x3 = [ f(x0)(x1² − x2²) + f(x1)(x2² − x0²) + f(x2)(x0² − x1²) ] /
     [ 2 f(x0)(x1 − x2) + 2 f(x1)(x2 − x0) + 2 f(x2)(x0 − x1) ]
Example: Quadratic Interpolation to find maximum
f(x) = 2 sin x − x²/10,  x0 = 0, x1 = 1 and x2 = 4

Solution: (1) Evaluate the function values,

x0 = 0      f(x0) = 0
x1 = 1      f(x1) = 1.5829
x2 = 4      f(x2) = −3.1136

x3 = [ 0(1² − 4²) + 1.5829(4² − 0²) + (−3.1136)(0² − 1²) ] /
     [ 2(0)(1 − 4) + 2(1.5829)(4 − 0) + 2(−3.1136)(0 − 1) ]  = 1.5055

(2) Because f(1.5055) = 1.7691, employ a golden-section-style elimination and discard the lower guess x0 = 0:

x0 = 1          f(x0) = 1.5829
x1 = 1.5055     f(x1) = 1.7691
x2 = 4          f(x2) = −3.1136

x3 = [ 1.5829(1.5055² − 4²) + 1.7691(4² − 1²) + (−3.1136)(1² − 1.5055²) ] /
     [ 2(1.5829)(1.5055 − 4) + 2(1.7691)(4 − 1) + 2(−3.1136)(1 − 1.5055) ]
   = 1.4903

After 5 iterations, x = 1.4276 gives a maximum value of 1.7757.
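A minimal MATLAB sketch of the iterated procedure (not from the slides; the bracket-update rule in the comments is a simplified version of the elimination strategy described above):

f = @(x) 2*sin(x) - x.^2/10;
x0 = 0;  x1 = 1;  x2 = 4;
for k = 1:5
    x3 = (f(x0)*(x1^2-x2^2) + f(x1)*(x2^2-x0^2) + f(x2)*(x0^2-x1^2)) / ...
         (2*f(x0)*(x1-x2) + 2*f(x1)*(x2-x0) + 2*f(x2)*(x0-x1));
    if x3 > x1          % new estimate above x1: discard the lower point
        x0 = x1;
    else                % new estimate below x1: discard the upper point
        x2 = x1;
    end
    x1 = x3;
end
[x1, f(x1)]             % about 1.4276 and 1.7757 after 5 iterations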


Newton's Method

The Newton-Raphson method finds a root of f(x) = 0 by iterating

x_{i+1} = x_i − f(x_i) / f′(x_i)

Define a new function g(x) = f′(x). At an optimal point x*, f′(x*) = g(x*) = 0, so applying Newton-Raphson to g(x) = 0 locates the optimum:

x_{i+1} = x_i − g(x_i) / g′(x_i)   that is,   x_{i+1} = x_i − f′(x_i) / f″(x_i)
Example: Use Newton's method to find the maximum of

f(x) = 2 sin x − x²/10,  Initial guess: x0 = 2.5

Solution: (1) Evaluate the 1st and 2nd derivatives of the function,

f′(x) = 2 cos x − x/5
f″(x) = −2 sin x − 1/5

(2) Substitute into Newton's formula,

x_{i+1} = x_i − (2 cos x_i − x_i/5) / (−2 sin x_i − 1/5)

(3) Substituting the initial guess,

x1 = 2.5 − (2 cos 2.5 − 2.5/5) / (−2 sin 2.5 − 1/5) = 0.99508,  f(0.99508) = 1.57859

(4) Second iteration,

x2 = 0.995 − (2 cos 0.995 − 0.995/5) / (−2 sin 0.995 − 1/5) = 1.46901,  f(1.46901) = 1.77385

i x f(x)
0 2.5 0.57194
1 0.99508 1.57859
2 1.46901 1.77385
3 1.42764 1.77573
4 1.42755 1.77573

After 4 iterations, the result converges rapidly on the true value.
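A minimal MATLAB sketch of the iteration above (not from the slides):

f   = @(x)  2*sin(x) - x.^2/10;
df  = @(x)  2*cos(x) - x/5;        % f'(x)
d2f = @(x) -2*sin(x) - 1/5;        % f''(x)
x = 2.5;                           % initial guess
for i = 1:4
    x = x - df(x)/d2f(x);          % Newton step on f'(x) = 0
end
[x, f(x)]                          % 1.42755 and 1.77573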


Multidimensional Unconstrained Optimization
Example: 2-D Topographic Map of 3-D Mountain

[Figure: contour map in the x-y plane; each closed curve is a line of constant f.]

2-D searches: ascending a mountain (maximization) or descending into a valley (minimization)
DIRECT METHODS
- Random Search
- Univariate and Pattern Searches

GRADIENT METHODS
- Gradients and Hessians
- Steepest Ascent Method
Random Search
Repeatedly evaluate the function at randomly selected points.

Example: f(x, y) = y − x − 2x² − 2xy − y²

Domain boundaries: x = −2 to 2 and y = 1 to 3

Write an M-file objfun1.m

function f = objfun1(x,y)
f = y-x-2*x^2-2*x*y-y^2;

>> rand   % Uniformly distributed random numbers
          % on the interval (0.0, 1.0)
ans =
    0.2311
Random search algorithm: rndsrch.m

function [maxx,maxy,maxf] = rndsrch(objfun,xmin,xmax,ymin,ymax)
% Random search for the maximum of objfun over the given rectangle
iter = 100;                          % number of random samples
maxf = -inf;                         % best value found so far
for i = 1:iter
    x = xmin + (xmax - xmin)*rand;   % random point in [xmin, xmax]
    y = ymin + (ymax - ymin)*rand;   % random point in [ymin, ymax]
    fn = feval(objfun,x,y);
    if fn > maxf                     % keep the best point seen so far
        maxf = fn;
        maxx = x;
        maxy = y;
    end
end
>> [maxx,maxy,maxf] = rndsrch('objfun1',-2,2,1,3)

Results as the number of samples (iter in rndsrch.m) increases:

iter    maxx      maxy     maxf
50     -0.9024    1.5913   1.2048
100    -0.8050    1.3092   1.2120
200    -1.1409    1.7708   1.2133
500    -1.0006    1.4983   1.2500
1000   -1.0253    1.5030   1.2489

TRUE   -1.0000    1.5000   1.2500

Univariate and Pattern Searches
- More efficient than random search, and still requires no derivative evaluation
- Change one variable at a time while holding the others constant (a MATLAB sketch follows below)

[Figure: zig-zag univariate search path in the x-y plane; moves 1→2, 2→3, 3→4 alternate between the x and y directions, and lines through alternate points define pattern directions.]

From (1), move along the x axis with y held constant to the maximum at (2).
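A minimal MATLAB sketch of such a univariate (coordinate) search, reusing the objective from the random-search example; maximization is done by minimizing −f with the built-in 1-D minimizer fminbnd, and the starting point and iteration count are arbitrary choices:

f = @(x,y) y - x - 2*x^2 - 2*x*y - y^2;
x = 0;  y = 2;                            % arbitrary starting point
for k = 1:20
    x = fminbnd(@(s) -f(s,y), -2, 2);     % vary x, hold y constant
    y = fminbnd(@(s) -f(x,s),  1, 3);     % vary y, hold x constant
end
[x, y, f(x,y)]                            % approaches (-1, 1.5, 1.25)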
Powell's Method

Use pattern directions to find the optimum efficiently.

- Points 1 and 2 are obtained by 1-D searches from different starting points.
- The line formed by points 1 and 2, a conjugate direction, is directed toward the maximum.

[Figure: two 1-D search paths ending at points 1 and 2; the line through them, the conjugate direction, points toward the maximum.]
GRADIENT METHODS
Use derivatives to locate optima.

Gradient:

[Figure: contour plot in the x-y plane with an axis h pointing uphill from a point.]

∂f/∂h = slope along the axis h

∇f = (∂f/∂x) i + (∂f/∂y) j = the steepest direction

The vector ∇f, called "del f," gives the directional derivative of f(x, y) along its steepest direction.
Hessian
To determine whether a stationary point is a maximum, a minimum, or a saddle point:

|H| = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²

If |H| > 0 and ∂²f/∂x² > 0  →  local minimum
If |H| > 0 and ∂²f/∂x² < 0  →  local maximum
If |H| < 0                  →  saddle point
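As an illustration (not in the slides), the test applied to f(x, y) = 2xy + 2x − x² − 2y², the function used in the steepest-ascent example later, whose stationary point is (2, 1):

% Hessian test for f = 2*x*y + 2*x - x^2 - 2*y^2 at its stationary point
fxx = -2;  fyy = -4;  fxy = 2;   % constant second partial derivatives
H = fxx*fyy - fxy^2;             % |H| = 4
if H > 0 && fxx < 0
    disp('local maximum')        % this branch fires
elseif H > 0 && fxx > 0
    disp('local minimum')
elseif H < 0
    disp('saddle point')
end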


Steepest Ascent Method
Climb the hill in the direction of maximum slope:

0: Start at the point (x0, y0).
1: Follow the steepest-ascent path until f(x, y) stops increasing.
Repeat the process until the maximum is reached.

[Figure: contour plot with a zig-zag steepest-ascent path climbing from the starting point 0 toward the summit.]

Problems: 1) Determine the best direction of steepest ascent.
          2) Determine the best value along that search direction.
Example: Steepest ascent method

f(x, y) = x·y²,  Starting point (2, 2)

MATLAB script to evaluate f on a grid and plot it:

x = 0:0.1:4;
y = 0:0.1:4;
fxy = zeros(length(y),length(x));
for i = 1:length(y)
    for j = 1:length(x)
        fxy(i,j) = x(j)*y(i)^2;   % f(x,y) = x*y^2 on the grid
    end
end

>> contour(x,y,fxy)
>> surf(x,y,fxy)
>> mesh(x,y,fxy)

[Figure: contour plot of f(x, y) over 0 ≤ x ≤ 4, 0 ≤ y ≤ 4.]
Determine the elevation: f(2, 2) = 2 · 2² = 8

Evaluate the partial derivatives,

∂f/∂x = y² = 2² = 4
∂f/∂y = 2xy = 2(2)(2) = 8

Determine the gradient:  ∇f = 4i + 8j

Direction: θ = tan⁻¹(8/4) = 63.4° relative to the x axis

Magnitude of ∇f:  √(4² + 8²) = 8.944

We gain 8.944 units of elevation rise per unit distance advanced along this steepest path.

[Figure: contour plot with an arrow at (2, 2) pointing 63.4° from the x axis.]
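A quick numeric check of these values (illustrative, not in the slides):

fx = 2^2;               % df/dx = y^2 at (2,2)
fy = 2*2*2;             % df/dy = 2*x*y at (2,2)
theta = atan2d(fy,fx)   % 63.43 degrees from the x axis
mag = norm([fx fy])     % 8.944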
Relationship between the steepest direction and the x-y coordinates

Starting at (x0, y0), any point along the gradient direction can be computed from

x = x0 + (∂f/∂x) h
y = y0 + (∂f/∂y) h

where h is the distance traveled along the h axis.

[Figure: x-y plane showing the gradient ∇f = 4i + 8j with the h axis marked at h = 1 and h = 2 along the gradient direction.]

Then, move along the h axis to find the maximum:

Define  g(h) = f( x0 + (∂f/∂x) h, y0 + (∂f/∂y) h )

Set g′(h*) = 0 to find h*.
Example: Developing 1-D function along a gradient direction

f(x, y) = 2xy + 2x − x² − 2y²,  Starting point (−1, 1)

Partial derivatives at the starting point:

∂f/∂x = 2y + 2 − 2x = 2(1) + 2 − 2(−1) = 6
∂f/∂y = 2x − 4y = 2(−1) − 4(1) = −6

Function along the h axis:

f( x0 + (∂f/∂x) h, y0 + (∂f/∂y) h ) = f(−1 + 6h, 1 − 6h)
= 2(−1 + 6h)(1 − 6h) + 2(−1 + 6h) − (−1 + 6h)² − 2(1 − 6h)²

g(h) = −180h² + 72h − 7

Locate the maximum,

g′(h*) = −360h* + 72 = 0

h* = 0.2

Locate the (x, y) coordinates corresponding to h* = 0.2,

x = −1 + 6(0.2) = 0.2
y = 1 − 6(0.2) = −0.2

Set the new starting point to (0.2, −0.2) and repeat the process.
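A minimal MATLAB sketch of the full method for this example (not from the slides); each step maximizes g(h) along the gradient using the built-in fminbnd, and the search bound h in [0, 2] is an assumption that comfortably contains the optimal steps:

f = @(x,y) 2*x*y + 2*x - x^2 - 2*y^2;
x = -1;  y = 1;                    % starting point
for k = 1:20
    fx = 2*y + 2 - 2*x;            % df/dx
    fy = 2*x - 4*y;                % df/dy
    h = fminbnd(@(h) -f(x+fx*h, y+fy*h), 0, 2);
    x = x + fx*h;                  % step along the gradient
    y = y + fy*h;
end
[x, y, f(x,y)]                     % approaches the maximum (2, 1), f = 2

The first pass reproduces the hand calculation: fx = 6, fy = −6, h* = 0.2, giving (0.2, −0.2).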
MATLAB Optimization Toolbox
fminunc: the unconstrained optimization function

f(x, y) = e^x (4x² + 2y² + 4xy + 2y + 1),  Starting point (−1, 1)

Write an M-file objfun2.m (fminunc passes the design variables as a single vector, so x(1) plays the role of x and x(2) of y):

function f = objfun2(x)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);

>> x0 = [-1,1];                              % Starting point
>> options = optimset('LargeScale','off');
>> [x,fval,exitflag,output] = fminunc(@objfun2,x0,options);

x =
    0.5000   -1.0000
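An equivalent call that avoids the separate M-file by using an anonymous function (a sketch; same options as above):

>> fun = @(x) exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
>> [x,fval] = fminunc(fun,x0,options)   % x = [0.5000 -1.0000]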
