NONLINEAR PROGRAMMING
CONTENTS

1. Introduction
   1.1 Applications
   1.2 Local and Global Optimum
   1.3 Convex and Concave Functions
   1.4 Classification
2. One-Variable Optimization
   2.1 Unrestricted Search
   2.2 Method of Golden Section
   2.3 Quadratic Interpolation
   2.4 Newton-Raphson Method
3. Unconstrained and Constrained Optimization
   3.1 Dichotomous Search
   3.2 Univariate Method
   3.3 Penalty Function Method
   3.4 Industrial Application
4. Geometric Programming
   4.1 Introduction
   4.2 Mathematical Analysis
   4.3 Examples
5. Multi-Variable Optimization
   5.1 Convex/Concave Function
   5.2 Lagrangean Method
   5.3 Kuhn-Tucker Conditions
   5.4 Profit Analysis
   5.5 Inventory Application
1
INTRODUCTION

1.1 APPLICATIONS
In the industrial/business scenario, there are numerous applications of NLP. Some of these are as follows:
(a) Procurement
Industries procure raw materials or input items/components regularly. These are frequently procured in suitable lot sizes. The relevant total annual cost is the sum of the ordering cost and the inventory holding cost. If the lot size is large, then fewer orders are placed in a year and thus the annual ordering cost is lower; but at the same time, the inventory holding cost increases. Considering a constant demand rate, the total cost function is nonlinear, as shown in Fig. 1.1.
There is an appropriate lot size corresponding to which the total annual cost is at its minimum level. After formulating the nonlinear total cost function in terms of the lot size, it is optimized in order to evaluate the desired procurement quantity. This quantity may be procured periodically as soon as the stock approaches zero level.
Fig. 1.1: Total annual cost as a function of lot size.
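This trade-off can be sketched numerically. The following is a minimal sketch with hypothetical parameter names (demand, order_cost, hold_cost); under the cost structure described above it reproduces the classical square-root lot-size result.

    import math

    def total_annual_cost(q, demand, order_cost, hold_cost):
        # Annual ordering cost falls with lot size q; holding cost rises with it.
        return (demand / q) * order_cost + (q / 2.0) * hold_cost

    def optimal_lot_size(demand, order_cost, hold_cost):
        # Minimizer of the convex total-cost function (classical EOQ formula).
        return math.sqrt(2.0 * demand * order_cost / hold_cost)

    q_star = optimal_lot_size(demand=1200, order_cost=50, hold_cost=2)
    print(q_star, total_annual_cost(q_star, 1200, 50, 2))   # ~244.9, ~489.9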
(b) Manufacturing
If a machine or facility is set up for a particular kind of product, then it is ready to produce that item. The setup cost may include the salaries/wages of engineers/workers for the time period during which they are engaged while the machine is being set up. In addition to this, the cost of a trial run etc., if any, may be taken into consideration. Any number of items may then be produced once the machine is set up.
(Figures: annual setup cost, which decreases with the quantity produced per setup, and inventory holding cost, which increases with it.)
Fig. 1.5: Interaction between nonlinear total cost and sales revenue.
(d) Logistics
Logistics is associated with the timely delivery of goods at desired places. A finished product requires raw materials, purchased and in-house fabricated components, and different input items. All of them are needed at certain specified times, and therefore transportation of items becomes a significant issue. Transportation cost needs to be included in the models explicitly. Similarly, effective shipment of finished items is important for customer satisfaction, with the overall objective of total cost minimization. Incoming of the input items and outgoing of the finished items are shown in Fig. 1.6.
Logistic support plays an important role in supply chain management (SCM). Emphasis of SCM is on integrating several activities, including procurement.
Input items → Industrial organization → Finished item(s) → Customers
Fig. 1.6: Incoming and outgoing of the items.
1.2 LOCAL AND GLOBAL OPTIMUM
(Figure: a curve f(x) with several local optima, numbered 1 to 5; the global maximum occurs at x*.)
Fig. 1.8: Local and global maxima/minima.
At an optimum point,
df(x)/dx = 0   ...(1.1)
and the sign of the second order derivative,
d²f(x)/dx²   ...(1.2)
determines its nature: it is negative at a maximum and positive at a minimum.

1.3 CONVEX AND CONCAVE FUNCTIONS
(Figures: the graph of a convex function, on which the line joining any two points x1 and x2 lies above the curve, and the graph of a concave function, on which such a line lies below the curve.)
The two types of functions may be compared as follows:
Convex function:
(1) The line joining any two points will never be below this function.
(2) λ f(x1) + (1 − λ) f(x2) ≥ f[λ x1 + (1 − λ) x2], 0 < λ < 1.
(3) In order to minimize a convex function, df(x)/dx = 0 yields the optimal solution.

Concave function:
(1) The line joining any two points will never be above this function.
(2) λ f(x1) + (1 − λ) f(x2) ≤ f[λ x1 + (1 − λ) x2], 0 < λ < 1.
(3) In order to maximize a concave function, df(x)/dx = 0 yields the optimal solution.
Example 1.1. Test whether f(x) = (18 × 10⁴)/x + x/2 is a convex function for positive values of x. Also find out the optimal solution by minimizing it.
Solution. Second order derivative,
d²f(x)/dx² = (36 × 10⁴)/x³
which is positive for positive values of x, and therefore the function is convex. Now,
df(x)/dx = −(18 × 10⁴)/x² + 1/2 = 0
or optimal value of x* = 600.
Substituting x* = 600 in f(x), optimal value of
f(x) = (18 × 10⁴)/600 + 600/2 = 600
Example 1.2. Show that f(x) = −(45000/x) − 2x is a concave function for positive values of x and also obtain the optimal value of x.
Solution.
d²f(x)/dx² = −90000/x³
As this is negative for positive x, f(x) is a concave function. In order to maximize this function,
df(x)/dx = 0
or 45000/x² − 2 = 0
or x* = 150
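Both conclusions can be checked numerically; the sketch below estimates the second derivative by central finite differences at the claimed optima.

    def second_derivative(f, x, h=0.01):
        # Central finite-difference estimate of f''(x).
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

    f1 = lambda x: 18e4 / x + x / 2.0       # Example 1.1
    f2 = lambda x: -45000.0 / x - 2.0 * x   # Example 1.2

    print(second_derivative(f1, 600.0) > 0)   # True: convex, x* = 600 is a minimum
    print(second_derivative(f2, 150.0) < 0)   # True: concave, x* = 150 is a maximum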
1.4 CLASSIFICATION
2
ONE-VARIABLE OPTIMIZATION

2.1 UNRESTRICTED SEARCH
The maximum x* of a function may be located by evaluating the function at trial values of x over a suitable range. Consider f(x) = 5.44x − 8x². At x = 0, f(x) = 5.44x − 8x² = 0. Evaluating the function with a fixed step size of 0.1:

x:    0.2    0.3    0.4    0.5
f(x): 0.768  0.912  0.896  0.72

The function value is highest at x = 0.3, so the maximum lies in its neighbourhood. Searching with a reduced step size of 0.01:

x:    0.31    0.32    0.33    0.34    0.35
f(x): 0.9176  0.9216  0.9240  0.9248  0.9240

The maximum is achieved at x = 0.34.
Similarly, consider the minimization of a total cost function f(x) = 1150x + 250/x. Searching with a step size of 0.01:

x:    0.46     0.47     0.48
f(x): 1072.47  1072.41  1072.83

The cost is lowest near x = 0.47. Searching further with a step size of 0.001:

x:    0.469    0.468    0.467      0.466      0.465
f(x): 1072.40  1072.39  1072.3819  1072.3807  1072.384

The minimum cost is achieved at x = 0.466. For this function,
d²f(x)/dx² = 500/x³
which is greater than zero for positive x, and therefore it is a convex function.
Using the convexity property, the optimum may be obtained by differentiating f(x) with respect to x and equating to zero:
df(x)/dx = 1150 − 250/x² = 0
or x* = √(250/1150) = 0.466 year
and optimum value of total cost, f(x*) = Rs. 1072.38. The results are similar to those obtained using the search procedure.
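A minimal sketch of this two-stage search, assuming a unimodal function on the stated range:

    def refined_search(f, x_lo, x_hi, steps=(0.01, 0.001)):
        # Evaluate f on a grid, then narrow the range around the best point
        # and repeat with a smaller step size.
        best = x_lo
        for step in steps:
            n = int(round((x_hi - x_lo) / step))
            grid = [x_lo + i * step for i in range(n + 1)]
            best = min(grid, key=f)
            x_lo, x_hi = best - step, best + step
        return best

    print(refined_search(lambda x: 1150 * x + 250 / x, 0.4, 0.5))   # ~0.466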
2.2 METHOD OF GOLDEN SECTION
In this method, two points xL and xR are selected within the current range [x1, x2], and the function values f(xL) and f(xR) are compared in order to narrow down the range in which the maximum x* lies. In the first case, i.e. when f(xL) < f(xR), the optimum should lie in the range [xL, x2], as shown in Fig. 2.3.
Fig. 2.3: xL and xR on one side of the optimum [f(xL) < f(xR)].
Now take the second case, i.e. when f(xL) > f(xR). This may be true in the three situations shown in Fig. 2.4:
(a) xL and xR on either side of the optimum and f(xL) = f(xR)
(b) xL and xR on either side of the optimum and f(xL) > f(xR)
(c) xL and xR on one side of the optimum and f(xL) > f(xR)
The following statement is applicable in all three situations: the optimum should lie in the range [x1, xR] if f(xL) ≥ f(xR).
Observe this and the previous statement. Out of xL and xR, only one is changing for the range in which an optimum lies.
Fig. 2.4: Various situations for f(xL) > f(xR).
2.2.1 Algorithm
The algorithm for the method of golden section is illustrated by Fig. 2.5, where [x1, x2] is the range in which the maximum x* lies, and
r = 0.5(√5 − 1) = 0.618034 ≈ 0.618
This number has certain properties which may be observed later.
After initialization as mentioned in Fig. 2.5, the following steps are followed:
Initialization: X1 = x1, X2 = x2, M = X2 − X1, xL = X1 + Mr², and xR = X1 + Mr.
Step 1: Is f(xL) < f(xR)?
   Yes: X1 = xL, M = X2 − X1, xL = previous xR, xR = X1 + Mr.
   No:  X2 = xR, M = X2 − X1, xR = previous xL, xL = X1 + Mr².
Step 2: Stop if M is considerably small; otherwise repeat Step 1.
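The flowchart translates directly into code. A minimal sketch for maximizing a unimodal function; the test function is the one searched in Section 2.1:

    import math

    def golden_section_max(f, x1, x2, tol=1e-4):
        # Golden-section search for the maximum of a unimodal f on [x1, x2].
        r = 0.5 * (math.sqrt(5.0) - 1.0)          # 0.618...
        X1, X2 = x1, x2
        M = X2 - X1
        xL, xR = X1 + M * r * r, X1 + M * r
        while M > tol:
            if f(xL) < f(xR):
                X1 = xL                           # maximum lies in [xL, X2]
                M = X2 - X1
                xL, xR = xR, X1 + M * r           # reuse previous xR as new xL
            else:
                X2 = xR                           # maximum lies in [X1, xR]
                M = X2 - X1
                xR, xL = xL, X1 + M * r * r       # reuse previous xL as new xR
        return 0.5 * (X1 + X2)

    print(golden_section_max(lambda x: 5.44 * x - 8 * x * x, 0.0, 1.0))   # ~0.34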
For the example under consideration, Iteration 1 reduces the range in which the maximum lies to [0.12, 0.31].
Iteration 2:
Step 1: X1 = 0.12 and M = X2 − X1 = 0.31 − 0.12 = 0.19
xL = X1 + Mr² = 0.19 and xR = X1 + Mr = 0.12 + (0.19 × 0.618) = 0.24
The subsequent steps and iterations (3, 4 and 5) are carried out in the same manner until M becomes considerably small.
2.3 QUADRATIC INTERPOLATION
In this method the function is approximated by a quadratic,
f(x) = ax² + bx + c   ...(2.1)
Passing this curve through three points x1, x2 and x3,
ax1² + bx1 + c = f(x1)   ...(2.2)
ax2² + bx2 + c = f(x2)   ...(2.3)
ax3² + bx3 + c = f(x3)   ...(2.4)
Solving these equations,
a = −[(x1 − x2) f(x3) + (x2 − x3) f(x1) + (x3 − x1) f(x2)] / [(x1 − x2)(x2 − x3)(x3 − x1)]   ...(2.5)
b = [(x2² − x3²) f(x1) + (x3² − x1²) f(x2) + (x1² − x2²) f(x3)] / [(x1 − x2)(x2 − x3)(x3 − x1)]   ...(2.6)
In order to obtain the minimum of equation (2.1),
df(x)/dx = 0
or 2ax + b = 0
or minimum x* = −b/2a
Substituting the values of a and b from equations (2.5) and (2.6),
x* = (1/2) [(x2² − x3²) f(x1) + (x3² − x1²) f(x2) + (x1² − x2²) f(x3)] / [(x2 − x3) f(x1) + (x3 − x1) f(x2) + (x1 − x2) f(x3)]   ...(2.7)
This minimum x* is used in the iterative process. Three points x1, x2 and x3, as well as their function values, are needed to determine x*. An initial approximate point x1 is given, and x2 = x1 + Δ.
For the example under consideration, the first iteration yields f(x3) = 1200 and f(x*) = 1201.
Iteration 2: x* = 0.48 may replace the initial value x1. Now f(x3) = 1245.47, and from equation (2.7), x* = 0.508 and f(x*) = 1200.15.
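A single application of equation (2.7) can be sketched as follows; in the iterative process the worst of the three points is replaced by x* and the step repeated. The test function here is the convex cost function minimized in Section 2.1:

    def quadratic_min_estimate(x1, x2, x3, f1, f2, f3):
        # Equation (2.7): minimum of the parabola through (x1,f1), (x2,f2), (x3,f3).
        num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
        den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
        return 0.5 * num / den

    f = lambda x: 1150 * x + 250 / x          # minimum near 0.466
    print(quadratic_min_estimate(0.3, 0.5, 0.7, f(0.3), f(0.5), f(0.7)))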
2.4 NEWTON-RAPHSON METHOD
If f′(a) and f″(a) are the first and second order derivatives at an approximate optimum point a, a correction h to this point is given by
h = −f′(a)/f″(a)
Next value of x = a − f′(a)/f″(a)   ...(2.8)
Consider the minimization of f(x) = 1200x + 300/x. Now
f′(x) = 1200 − 300/x²   ...(2.9)
f″(x) = 600/x³   ...(2.10)
Iteration 1: With a = 0.3, from (2.9), f′(a) = −2133.33, and from (2.10), f″(a) = 22222.22.
New value of x = 0.3 + 2133.33/22222.22 = 0.396
This value is used as a in the next iteration.
Iteration 2: Now a = 0.396, f′(a) = −713.07 and f″(a) = 9661.97.
New value of x = 0.396 + 713.07/9661.97 = 0.47
Iteration 3: Now a = 0.47, f′(a) = −158.08 and f″(a) = 5779.07.
New value of x = 0.47 + 0.027 = 0.497
The process is continued to achieve any desired accuracy.
In a complex manufacturing-inventory model, not all the shortage quantities are assumed to be completely backordered; a fraction of the shortage quantity is not backlogged. The total annual cost function is formulated as the sum of procurement cost, setup cost, inventory holding cost and backordering cost. After substituting the maximum shortage quantity (in terms of the manufacturing lot size) into the first order derivative of this function with respect to lot size and equating it to zero, a quartic equation in a single variable, i.e., the lot size, is developed. The Newton-Raphson method may be successfully applied in order to obtain the optimum lot size, using as the initial value the batch size for the situation in which all the shortages are assumed to be completely backordered.
For obtaining the root of an equation f(x) = 0, the corresponding correction is
h = −f(a)/f′(a)   ...(2.11)
and the next value of x = a − f(a)/f′(a)   ...(2.12)
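A minimal sketch of equation (2.8) applied to the function above; replacing f′ and f″ by f and f′ gives the root-finding form of equation (2.12).

    def newton_optimum(f1, f2, a, tol=1e-6, max_iter=50):
        # Newton-Raphson for an optimum, eq. (2.8): x = a - f'(a)/f''(a).
        for _ in range(max_iter):
            step = f1(a) / f2(a)
            a = a - step
            if abs(step) < tol:
                break
        return a

    f1 = lambda x: 1200.0 - 300.0 / x**2   # eq. (2.9)
    f2 = lambda x: 600.0 / x**3            # eq. (2.10)
    print(newton_optimum(f1, f2, 0.3))     # 0.3 -> 0.396 -> 0.47 -> ... -> 0.5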
3
UNCONSTRAINED AND CONSTRAINED OPTIMIZATION
One variable optimization problems were discussed in the previous chapter. There are many situations in which more than one variable occurs in the objective function. In an unconstrained multivariable problem, the purpose is to optimize f(X) and obtain the optimum
X* = (x1, x2, ..., xn)ᵀ
where X is the column vector of the n decision variables.
3.1 DICHOTOMOUS SEARCH
Two points xL and xR are placed symmetrically about the centre of the range [x1, x2], separated by a small distance δ, so that
M = xR − x1 = x2 − xL   ...(3.1)
δ = xR − xL   ...(3.2)
xL = (x1 + x2 − δ)/2   ...(3.3)
xR = (x1 + x2 + δ)/2   ...(3.4)
The value of δ is suitably selected for any problem, and xL as well as xR are obtained. After getting the function values f(xL) and f(xR):
(i) If f(xL) < f(xR), then the maximum will lie in the range [xL, x2].
(ii) If f(xL) > f(xR), then the maximum will lie in the range [x1, xR].
(iii) If f(xL) = f(xR), then the maximum will lie in the range [xL, xR].
Using the appropriate case, the range for the search of a maximum becomes narrower in each iteration in comparison with the previous iteration. The process is continued until the range in which the optimum lies, i.e. (x2 − x1), is considerably small.
Example 3.1. Maximize f(x) = 24x − 4x² using dichotomous search. The initial range [x1, x2] = [2, 5]. Consider δ = 0.01 and stop the process when a considerably small range (in which the maximum lies) is achieved, i.e. (x2 − x1) = 0.1.
Solution.
Iteration 1: From equations (3.3) and (3.4),
xL = (2 + 5 − 0.01)/2 = 3.495
xR = (2 + 5 + 0.01)/2 = 3.505
f(xL) = 35.0199
f(xR) = 34.9799
As f(xL) > f(xR), the maximum lies in the range [x1, xR] = [2, 3.505]. For the next iteration, [x1, x2] = [2, 3.505].
Iteration 2: Again using equations (3.3) and (3.4),
xL = (2 + 3.505 − 0.01)/2 = 2.7475
xR = (2 + 3.505 + 0.01)/2 = 2.7575
f(xL) = 35.7449
f(xR) = 35.7648
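A minimal sketch of the procedure, assuming a unimodal function on the given range:

    def dichotomous_max(f, x1, x2, delta=0.01, tol=0.1):
        # Dichotomous search for the maximum of a unimodal f on [x1, x2].
        while x2 - x1 > tol:
            xL = (x1 + x2 - delta) / 2    # eq. (3.3)
            xR = (x1 + x2 + delta) / 2    # eq. (3.4)
            if f(xL) < f(xR):
                x1 = xL                   # maximum in [xL, x2]
            elif f(xL) > f(xR):
                x2 = xR                   # maximum in [x1, xR]
            else:
                x1, x2 = xL, xR           # maximum in [xL, xR]
        return 0.5 * (x1 + x2)

    print(dichotomous_max(lambda x: 24 * x - 4 * x * x, 2, 5))   # near x* = 3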
3.2 UNIVARIATE METHOD
In this method, only one variable is varied at a time while the others are held fixed, so that a multivariable problem is reduced to a sequence of one-variable searches. Starting from an initial point (x1, x2), the variables are changed in turn until the optimum (x1*, x2*) is approached, as shown in Fig. 3.2.
Fig. 3.2: Changing one variable at a time.
Example. Minimize f(x1, x2) = 3x1²/x2 + 3/x1 + 9x1 + 27x2, starting from the point (x1, x2) = (1, 1).
Solution. For the variation of x1, f(1.05, 1) = 42.61, which is higher than f(1, 1) = 42, and therefore the negative direction is selected. Substituting x1 = (1 − m) and x2 = 1,
f = 3(1 − m)² + 3/(1 − m) + 9(1 − m) + 27
and df/dm = 0 shows m = 0.5, so the new value of x1 = 1 − 0.5 = 0.5.
For the variation of x2, substituting x2 = (1 − n) with x1 = 0.5, the terms containing x2 are
0.75/(1 − n) + 27(1 − n)
and df/dn = 0 shows n = 0.83, so the new value of x2 = 1 − 0.83 = 0.17.
In the next cycle, for the variation of x1, substituting x1 = (0.5 − m) with x2 = 0.17,
f = 3(0.5 − m)²/0.17 + 3/(0.5 − m) + 9(0.5 − m) + 4.59
and df/dm = 0 shows
11.76m + 1/(0.5 − m)² − 8.88 = 0   ...(3.5)
A value of m needs to be obtained which will satisfy the above equation. As we are interested in positive values of the variables, m < 0.5. Considering a suitable fixed step size of 0.1, the values of the L.H.S. of this equation in terms of m are as follows:

m:      0.4     0.3     0.2   0.1    0.11   0.12   0.13   0.14
L.H.S.: 95.824  19.648  4.58  −1.45  −1.01  −0.54  −0.05  0.48

The L.H.S. is closest to zero at m = 0.13, so the new value of x1 = 0.5 − 0.13 = 0.37.
For the variation of x2, substituting x2 = (0.17 − n) with x1 = 0.37, the terms containing x2 are
0.41/(0.17 − n) + 27(0.17 − n)
and df/dn = 0 shows n = 0.05,
and the new value of x2 = 0.17 − 0.05 = 0.12
f(x1, x2) = f(0.37, 0.12) = 18.10
The procedure is continued for any number of cycles as long as improvement in the function value is observed by varying x1 and x2 in turn. The exact optimum is [x1*, x2*] = [1/3, 1/9].
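A minimal sketch of the method, assuming positive variables and hypothetical step-size and shrink parameters:

    def univariate_min(f, x, step=0.05, shrink=0.5, cycles=20):
        # Univariate method: vary one variable at a time while improvement lasts,
        # then reduce the step size and repeat the cycle.
        x = list(x)
        for _ in range(cycles):
            for j in range(len(x)):
                for d in (step, -step):
                    while True:
                        trial = x[:]
                        trial[j] += d
                        if trial[j] > 0 and f(trial) < f(x):
                            x = trial
                        else:
                            break
            step *= shrink
        return x

    f = lambda v: 3 * v[0]**2 / v[1] + 3 / v[0] + 9 * v[0] + 27 * v[1]
    print(univariate_min(f, [1.0, 1.0]))   # approaches [1/3, 1/9]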
3.3 PENALTY FUNCTION METHOD
Consider the minimization of f(X), where X = (x1, x2, ..., xn)ᵀ, subject to the constraints pᵢ(X) ≤ 0, i = 1, 2, ..., p. A new function φ is formed by combining the objective function and the constraints:
φ(X, r) = f(X) − r Σ (i = 1 to p) [1/pᵢ(X)]   ...(3.6)
where r is the penalty parameter.
Example. Minimize f(X) = x1 + (4 + x2)³/3
subject to x1 ≥ 1
x2 ≥ 4
Use the initial value of the penalty parameter r as 1 and a multiplication factor of 0.1 in each successive iteration.
Solution. As the constraints need to be written in pᵢ(X) ≤ 0 form, these are
1 − x1 ≤ 0
4 − x2 ≤ 0
In order to obtain a new function φ, equation (3.6) is used. The sum of the reciprocals of the L.H.S. of the constraints is multiplied by the penalty parameter r, and then this is deducted from the function f(X):
φ(X, r) = x1 + (4 + x2)³/3 − r[1/(1 − x1) + 1/(4 − x2)]   ...(3.7)
Now
∂φ/∂x1 = 1 − r/(1 − x1)² = 0
or x1* = 1 − √r   ...(3.8)
∂φ/∂x2 = (4 + x2)² − r/(4 − x2)² = 0
or x2* = [16 − √r]^(1/2)   ...(3.9)
The results for successive values of r are:

Iteration, i   r        x1*    x2*     φ         f(x1, x2)
1              1        0      3.87    153.79    162.48
2              0.1      0.68   3.96    165.99    168.79
3              0.01     0.9    3.99    169.83    170.93
4              0.001    0.97   3.996   171.10    171.38
5              0.0001   0.99   3.999   171.48    171.59

As r is reduced, φ and f(x1, x2) approach the constrained optimum.
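Since (3.8) and (3.9) give the minimizing point of φ for any r, the iteration table can be reproduced directly; a minimal sketch:

    import math

    def penalty_path(r0=1.0, factor=0.1, iters=5):
        # Follow the analytic minimizers (3.8)-(3.9) of phi(X, r) as r shrinks.
        r = r0
        for _ in range(iters):
            x1 = 1 - math.sqrt(r)                # eq. (3.8)
            x2 = math.sqrt(16 - math.sqrt(r))    # eq. (3.9)
            f = x1 + (4 + x2) ** 3 / 3
            phi = f - r * (1 / (1 - x1) + 1 / (4 - x2))
            print(r, round(x1, 2), round(x2, 3), round(phi, 2), round(f, 2))
            r *= factor

    penalty_path()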
3.4 INDUSTRIAL APPLICATION
Consider an industrial problem in which the relevant total annual cost depends on two variables x1 and x2 and is given by
f(x1, x2) = 2448979.6/x1 + 0.147x1 − 2x2 + 1370.45x2²/x1
subject to the restriction x2 ≤ 4, i.e., x2 − 4 ≤ 0.
Using equation (3.6),
φ(X, r) = 2448979.6/x1 + 0.147x1 − 2x2 + 1370.45x2²/x1 − r[1/(x2 − 4)]   ...(3.10)
The search is started from x1 = 4086, x2 = 1 with r = 1.
Cycle 1: For the variation of x2, φ(4086, x2, 1):

x2:  1.5        2          2.5        3
φ:   1198.1553  1197.8423  1197.7636  1198.0193

The minimum is at x2 = 2.5. For the variation of x1, φ(x1, 2.5, 1):

x1:  4087       4088       4089       4090
φ:   1197.7634  1197.7634  1197.7633  1197.7634

The minimum is at x1 = 4089.
Cycle 2: For the variation of x2,
φ(4089, 2.4, 1) = 1197.7574
φ(4089, 2.6, 1) = 1197.7819
The negative direction is selected, and the values are

x2:  2.4        2.3
φ:   1197.7574  1197.7632

Now φ(4089, 2.4, 1) = 1197.7574. For the variation of x1, no improvement is there in either direction. The search is completed with respect to r = 1. As the multiplication factor = 0.01, the next value of r = 1 × 0.01 = 0.01.
r = 0.01:
Cycle 1: Using r = 0.01, from (3.10), φ(4089, 2.4, 0.01) = 1197.1387. The function value f(x1, x2) is obtained as equal to 1197.1324.
In order to minimize φ, for the variation of x2, the positive direction is chosen and the values decrease through x2 = 2.5, 2.6, 2.7, 2.8, 2.9 and 3, up to φ(4089, 3, 0.01) = 1197.0284, while φ(4089, 3.1, 0.01) = 1197.0339.
Now φ(4089, 3, 0.01) = 1197.0284. The positive direction is selected for the variation of x1 and the values are

x1:  4090       4091       4092      4093
φ:   1197.0282  1197.0281  1197.028  1197.0281

At the end of cycle 1, φ(4092, 3, 0.01) = 1197.028.
Cycle 2: For the variation of x2,
φ(4092, 3.1, 0.01) = 1197.033
and φ(4092, 2.9, 0.01) = 1197.029
As no improvement is available, the search with respect to r = 0.01 is complete at x1 = 4092, x2 = 3.
The overall search is summarized as follows:

r      Starting point          Cycles to minimize φ    Optimum x1, x2        φ*
                               by univariate method
1      x1 = 4086, x2 = 1       2                       x1 = 4089, x2 = 2.4   1197.7574
0.01   x1 = 4089, x2 = 2.4     2                       x1 = 4092, x2 = 3     1197.028

with the corresponding optimum total cost f* ≈ 1197.02.
"
GEOMETRIC PROGRAMMING
Geometric programming is suitable for a specific case of
nonlinear objective function, namely, posynomial function.
4.1 INTRODUCTION
In some of the applications, either open or closed
containers are fabricated from the sheet metal, which are
used for transporting certain volume of goods from one place
to another. Cross-section of the container may be rectangular,
square or circular. A cylindrical vessel or a container with
circular cross-section is shown in Fig. 4.1.
If r is the radius and h the height of the cylindrical container, its volume = πr²h   ...(4.1)
For moving a total volume of 90 units, the number of trips required = 90/(πr²h)   ...(4.2)
and the total transportation cost, being proportional to the number of trips, is
360/(πr²h)   ...(4.3)
The total relevant cost f is the sum of the component costs M1, M2 and M3, i.e.,
f = M1 + M2 + M3   ...(4.4)
4.2 MATHEMATICAL ANALYSIS
In general, if an objective function consists of N components M1, M2, ..., MN in n variables,
f = Σ (i = 1 to N) Mᵢ = Σ (i = 1 to N) Cᵢ · x1^(a1i) · x2^(a2i) ... xn^(ani)   ...(4.5)
Differentiating partially with respect to x1 and equating to zero,
∂f/∂x1 = (1/x1) Σ (i = 1 to N) a1i · Mᵢ = 0
Similarly,
∂f/∂x2 = (1/x2) Σ (i = 1 to N) a2i · Mᵢ = 0
...
∂f/∂xn = (1/xn) Σ (i = 1 to N) ani · Mᵢ = 0
To generalize,
∂f/∂xk = (1/xk) Σ (i = 1 to N) aki · Mᵢ = 0, k = 1, 2, ..., n   ...(4.6)
Let x1*, x2*, ..., xn* be the optimal values of the variables corresponding to the minimization of equation (4.5). M1*, M2*, ..., MN* are the values obtained after substituting the optimum variables in each component of the O.F.
From (4.5), the optimum value of the O.F.,
f* = Σ (i = 1 to N) Mᵢ*
Dividing by f*,
M1*/f* + M2*/f* + ... + MN*/f* = f*/f* = 1
or w1 + w2 + ... + wN = 1
where w1, w2, ..., wN are the positive fractions indicating the relative contribution of each optimal component M1*, M2*, ..., MN* respectively to the minimum f*.
To generalize,
Σ (i = 1 to N) wᵢ = 1   ...(4.7)
where
wᵢ = Mᵢ*/f*   ...(4.8)
or Mᵢ* = wᵢ · f*   ...(4.9)
At the optimum, (4.6) gives Σ (i = 1 to N) aki · Mᵢ* = 0, and substituting (4.9),
f* Σ (i = 1 to N) aki · wᵢ = 0
or Σ (i = 1 to N) aki · wᵢ = 0, k = 1, 2, ..., n   ...(4.10)
Now, since from (4.8) f* = Mᵢ*/wᵢ for every i, i.e.,
f* = M1*/w1 = M2*/w2 = ... = MN*/wN
and the exponents w1, w2, ..., wN sum to unity,
f* = (M1*/w1)^w1 · (M2*/w2)^w2 ... (MN*/wN)^wN
As Mᵢ* = Cᵢ · x1*^(a1i) · x2*^(a2i) ... xn*^(ani), i = 1, 2, ..., N   ...(4.11)
f* = [(C1/w1) · x1*^(a11) · x2*^(a21) ... xn*^(an1)]^w1 · [(C2/w2) · x1*^(a12) · x2*^(a22) ... xn*^(an2)]^w2 ... [(CN/wN) · x1*^(a1N) · x2*^(a2N) ... xn*^(anN)]^wN
or f* = [C1/w1]^w1 [C2/w2]^w2 ... [CN/wN]^wN · (x1*)^(a11w1 + a12w2 + ... + a1NwN) · (x2*)^(a21w1 + a22w2 + ... + a2NwN) ... (xn*)^(an1w1 + an2w2 + ... + anNwN)
Using (4.10), each of the exponents Σᵢ aki·wᵢ is zero, and therefore
f* = [C1/w1]^w1 [C2/w2]^w2 ... [CN/wN]^wN   ...(4.12)
4.3 EXAMPLES
As discussed before, geometric programming can be effectively applied if the problem can be converted to a posynomial function having a positive coefficient in each component.
Example. Minimize f = 0.15 x1 x2^1.1 + 63158 x2/x1 + 5.75 x1^0.6/x2   ...(4.13)
Solution. From (4.10),
Σ (i = 1 to 3) aki · wᵢ = 0, k = 1, 2
For k = 1, Σ (i = 1 to 3) a1i · wᵢ = 0   ...(4.14)
For k = 2, a21w1 + a22w2 + a23w3 = 0   ...(4.15)
Together with the normality condition (4.7), in matrix form,
| 1    1    1   | |w1|   |1|
| a11  a12  a13 | |w2| = |0|   ...(4.16)
| a21  a22  a23 | |w3|   |0|
where a11, a12 and a13 are the exponents of x1 in each component of the O.F. Similarly a21, a22 and a23 are the exponents of x2 in each component.
Now the exponents of x1 are 1, −1 and 0.6 in the first, second and third component respectively. Similarly the exponents of x2 are 1.1, 1 and −1 respectively in the first, second and third component of the function f.
Using (4.13), (4.14) and (4.15), or the set of equations represented by (4.16),
w1 + w2 + w3 = 1   ...(4.17)
w1 − w2 + 0.6w3 = 0   ...(4.18)
1.1w1 + w2 − w3 = 0   ...(4.19)
as
| a11  a12  a13 |   | 1    −1   0.6 |
| a21  a22  a23 | = | 1.1   1   −1  |
On solving, w1 = 0.096, w2 = 0.4 and w3 = 0.504.
From (4.12), as N = 3,
f* = [0.15/0.096]^0.096 [63158/0.4]^0.4 [5.75/0.504]^0.504 = 427.37
From (4.9), Mᵢ* = wᵢ · f*:
M1* = 0.15 x1* x2*^1.1 = w1 · f* = 41.03   ...(4.20)
M2* = 63158 x2*/x1* = w2 · f* = 0.4 × 427.37 = 170.95
or x1* = 369.45 x2*   ...(4.21)
Similarly
M3* = 5.75 x1*^0.6/x2* = w3 · f* = 215.39   ...(4.22)
On solving (4.20)–(4.22), x2* = 0.87 and x1* = 369.45 x2* ≈ 321.
Now consider the cylindrical container of Section 4.1, for which
f = 200π r² + 460π rh + 360/(π r² h)
The exponents of r in the three components are 2, 1 and −2, and those of h are 0, 1 and −1. Therefore
| 1  1   1 | |w1|   |1|
| 2  1  −2 | |w2| = |0|
| 0  1  −1 | |w3|   |0|
or
w1 + w2 + w3 = 1
2w1 + w2 − 2w3 = 0
w2 − w3 = 0
On solving, w1 = 0.2, w2 = 0.4 and w3 = 0.4.
Further, the coefficients of the components of the O.F. are C1 = 200π, C2 = 460π and C3 = 360/π, and
f* = [C1/w1]^w1 [C2/w2]^w2 [C3/w3]^w3 = 1274.69
Now M1* = 200π r² = w1 · f* = 0.2 × 1274.69
or r = 0.637
M2* = 460π rh = w2 · f* = 0.4 × 1274.69
or rh = 0.3528
As r = 0.637, h = 0.554.
Example. An open box of rectangular cross-section is to be fabricated for transporting goods.
Solution.
Let L = length of the box in cm
W = width of the box in cm
H = height of the box in cm
The fabrication costs of the side walls and the bottom are 0.0942 LH, 0.0942 WH and 0.0471 LW respectively, and with 90 × 10⁶/(LWH) trips required, the cost of transportation = 540 × 10⁶/(LWH). Thus
f = 0.0942 LH + 0.0942 WH + 0.0471 LW + 540 × 10⁶/(LWH)
Exponents of L: 1, 0, 1, −1
Exponents of W: 0, 1, 1, −1
Exponents of H: 1, 1, 0, −1
and therefore
| 1  1  1   1 | |w1|   |1|
| 1  0  1  −1 | |w2| = |0|
| 0  1  1  −1 | |w3|   |0|
| 1  1  0  −1 | |w4|   |0|
or
w1 + w2 + w3 + w4 = 1
w1 + w3 − w4 = 0
w2 + w3 − w4 = 0
w1 + w2 − w4 = 0
On solving,
w1 = w2 = w3 = 0.2
and w4 = 0.4
Now C1 = 0.0942, C2 = 0.0942, C3 = 0.0471 and C4 = 540 × 10⁶, and
f* = [C1/w1]^w1 [C2/w2]^w2 [C3/w3]^w3 [C4/w4]^w4
   = [0.0942/0.2]^0.2 [0.0942/0.2]^0.2 [0.0471/0.2]^0.2 [540 × 10⁶/0.4]^0.4
   = 2487.37
Cost components Mᵢ* = wᵢ · f*, i = 1, 2, 3, 4:
M1* = 0.0942 LH = 0.2 × 2487.37
or LH = 5281.04   ...(4.23)
M2* = 0.0942 WH = 0.2 × 2487.37
or WH = 5281.04   ...(4.24)
M3* = 0.0471 LW = 0.2 × 2487.37
or LW = 10562.08   ...(4.25)
M4* = 540 × 10⁶/(LWH) = w4 · f* = 0.4 × 2487.37
or LWH = 542741.93   ...(4.26)
Substituting equation (4.25) in (4.26),
H = 542741.93/10562.08 = 51.39 cm
From (4.24), W = 102.77 cm
From (4.23), L = 102.77 cm
f* = Rs. 2487.37
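The dual weights solve a small linear system (normality plus one orthogonality row per variable), after which (4.12) gives f* without knowing the variables. A sketch for the open-box example, assuming NumPy is available:

    import numpy as np

    # f = 0.0942*L*H + 0.0942*W*H + 0.0471*L*W + 540e6/(L*W*H)
    C = np.array([0.0942, 0.0942, 0.0471, 540e6])
    A = np.array([
        [1, 1, 1, 1],     # normality: weights sum to 1
        [1, 0, 1, -1],    # exponents of L in each component
        [0, 1, 1, -1],    # exponents of W
        [1, 1, 0, -1],    # exponents of H
    ])
    b = np.array([1.0, 0.0, 0.0, 0.0])
    w = np.linalg.solve(A, b)          # -> [0.2, 0.2, 0.2, 0.4]
    f_star = np.prod((C / w) ** w)     # eq. (4.12) -> ~2487
    print(w, f_star)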
5
MULTI-VARIABLE OPTIMIZATION
Convex and concave functions were discussed in Chapter 1 in the context of a single variable problem. For the multivariable problem also, optimum results can be achieved by differentiating the O.F. partially with respect to each variable and equating to zero, if it can be ascertained whether the function is convex or concave.
5.1 CONVEX/CONCAVE FUNCTION
This is examined with the help of the Hessian matrix J, whose elements are ∂²f/∂xᵢ∂xⱼ, where i and j are the row and column indices respectively.
(a) For a function of two variables,
J = | ∂²f/∂x1²      ∂²f/∂x1∂x2 |
    | ∂²f/∂x2∂x1    ∂²f/∂x2²   |
(b) For a function of three variables,
J = | ∂²f/∂x1²      ∂²f/∂x1∂x2    ∂²f/∂x1∂x3 |
    | ∂²f/∂x2∂x1    ∂²f/∂x2²      ∂²f/∂x2∂x3 |
    | ∂²f/∂x3∂x1    ∂²f/∂x3∂x2    ∂²f/∂x3²   |
(c) For a function of four variables,
J = | ∂²f/∂x1²      ∂²f/∂x1∂x2    ∂²f/∂x1∂x3    ∂²f/∂x1∂x4 |
    | ∂²f/∂x2∂x1    ∂²f/∂x2²      ∂²f/∂x2∂x3    ∂²f/∂x2∂x4 |
    | ∂²f/∂x3∂x1    ∂²f/∂x3∂x2    ∂²f/∂x3²      ∂²f/∂x3∂x4 |
    | ∂²f/∂x4∂x1    ∂²f/∂x4∂x2    ∂²f/∂x4∂x3    ∂²f/∂x4²   |
Example. Consider a function f(x1, x2) for which
∂f/∂x1 = 9 − 5/x1²
∂²f/∂x1² = 10/x1³
∂²f/∂x2∂x1 = ∂²f/∂x1∂x2 = 0
∂f/∂x2 = −10 + 8x2 and ∂²f/∂x2² = 8
Then
J = | 10/x1³  0 |
    | 0       8 |
The first principal minor is 10/x1³, which is positive for positive x1. The second principal minor
= (10/x1³)(8) − 0 = 80/x1³
is also positive for positive x1. The function is therefore convex for positive values of x1, and setting the first derivatives to zero yields the minimum.
If
A = ∂²f/∂x1², B = ∂²f/∂x2², C = ∂²f/∂x3²,
H = ∂²f/∂x1∂x2, F = ∂²f/∂x2∂x3, and G = ∂²f/∂x3∂x1
then the Hessian matrix is
    | A  H  G |
J = | H  B  F |   ...(5.1)
    | G  F  C |
For a minimum of f(x1, x2, x3), the principal minors should all be positive, i.e.,
A > 0
| A  H |
| H  B | > 0, or AB > H²   ...(5.2)
and
| A  H  G |
| H  B  F | > 0
| G  F  C |
or
A | B  F | − H | H  F | + G | H  B | > 0   ...(5.3)
  | F  C |     | G  C |     | G  F |
The optimum is then obtained from
∂f/∂x1 = 0, ∂f/∂x2 = 0 and ∂f/∂x3 = 0
Example. Consider a function f(x1, x2) for which
∂f/∂x1 = 7 − 10/x1²
∂²f/∂x1² = 20/x1³
∂²f/∂x2∂x1 = ∂²f/∂x1∂x2 = 0
∂²f/∂x2² = 12
Now
J = | 20/x1³  0  |
    | 0       12 |
The first principal minor = 20/x1³, which is positive for positive x1. The second principal minor = 240/x1³, which is also positive. The function is therefore convex for positive x1, and the optimum is obtained from
∂f/∂x1 = 0 and ∂f/∂x2 = 0
For a maximum of f(x1, x2, x3), with
A = ∂²f/∂x1², B = ∂²f/∂x2², C = ∂²f/∂x3²,
H = ∂²f/∂x1∂x2, F = ∂²f/∂x2∂x3, and G = ∂²f/∂x3∂x1
and the Hessian matrix
    | A  H  G |
J = | H  B  F |
    | G  F  C |
the principal minors should alternate in sign:
A < 0   ...(5.4)
| A  H |
| H  B | > 0, or AB > H²   ...(5.5)
and
| A  H  G |
| H  B  F | < 0   ...(5.6)
| G  F  C |
5.2 LAGRANGEAN METHOD
For optimizing f(x1, x2, ..., xn) subject to the equality constraints hᵢ(x1, x2, ..., xn) = 0, i = 1, 2, ..., m, the Lagrange function is formed as
L = f(x1, x2, ..., xn) + Σ (i = 1 to m) λᵢ hᵢ(x1, x2, ..., xn)   ...(5.7)
where λ1, λ2, ..., λm are the Lagrange multipliers.
Example. Minimize f = 250/x1 + 1090x1 − 7x2 − 5x3 + 7x2²/(40x1) + 5x3²/(16x1)
subject to x1 + x3 = 4
and x2 + x3 = 12
i.e., x1 + x3 − 4 = 0 and x2 + x3 − 12 = 0.
Lagrange function,
L(x1, x2, x3, λ1, λ2) = 250/x1 + 1090x1 − 7x2 − 5x3 + 7x2²/(40x1) + 5x3²/(16x1) + λ1(x1 + x3 − 4) + λ2(x2 + x3 − 12)
Now,
∂L/∂x1 = −250/x1² + 1090 − 7x2²/(40x1²) − 5x3²/(16x1²) + λ1 = 0   ...(5.8)
∂L/∂x2 = −7 + 14x2/(40x1) + λ2 = 0   ...(5.9)
∂L/∂x3 = −5 + 10x3/(16x1) + λ1 + λ2 = 0   ...(5.10)
∂L/∂λ1 = x1 + x3 − 4 = 0   ...(5.11)
∂L/∂λ2 = x2 + x3 − 12 = 0   ...(5.12)
From (5.12), x2 = 12 − x3   ...(5.13)
From (5.11), x3 = 4 − x1   ...(5.14)
Substituting (5.13) and (5.14) into (5.8)–(5.10) and eliminating λ1 and λ2, equations in x1 alone are obtained, from which the optimum values of the variables follow.
68
f ( x1 , x2 ) = 6 x1 +
96 4x2 x1
+
+
x1
x1
x2
subject to x1 + x2 = 6
...(5.17)
...(5.18)
L (x1 , x2 , ) = 6 x1 +
Now
96 4 x2 x1
+
+
+ ( x1 + x2 6)
x1
x1
x2
L
96 4 x
1
+ = 0
= 6 2 22 +
x1
x1
x1
x2
...(5.19)
L
4 x1
+ = 0
=
x2
x1 x22
...(5.20)
L
= x1 + x2 6 = 0
...(5.21)
form (5.21)
4
x1
(6 x1 )2 x1
or
x1
96 4 ( 6 x1 )
1
4
+
+
=0
x12
x12
(6 x1 ) (6 x1 )2 x1
6
x1
120
1
+
+
=0
x12
(6 x1 ) (6 x1 )2
MULTI-VARIABLE OPTIMIZATION
120(6 x1 )2
+ (6 x1 ) + x1 = 0
x12
6(6 x1 )2
or
69
(6 x1 )2
or
20(6 x1 )2
+1 = 0
x12
Similarly, consider the minimization of f(x1, x2) = 6x1 + 96/x1 + 4x2/x1 + x1/x2
subject to x1 + x2 = 7
Following the similar procedure, an equation in terms of x1 is obtained as follows:
6 − 124/x1² + 1/(7 − x1) + x1/(7 − x1)² = 0
or
6 − 124/x1² + [1/(7 − x1)][1 + x1/(7 − x1)] = 0
or
6 − 124/x1² + 7/(7 − x1)² = 0
On solving, x1 ≈ 4.23 and x2 ≈ 2.77, with λ* = −0.39 and f = 52.38.
5.3 KUHN-TUCKER CONDITIONS
Consider the problem:
Minimize f = 250/x1 + 1090x1 − 7x2 − 5x3 + 7x2²/(40x1) + 5x3²/(16x1)
subject to x1 + x3 − 4 ≤ 0
x2 + x3 − 12 ≤ 0
The constraints may be converted into equality form as follows:
x1 + x3 − 4 + S1² = 0
x2 + x3 − 12 + S2² = 0
where S1² and S2² are the slack variables. Squares of S1 and S2 are used in order to ensure nonnegative slack variables.
Lagrange function,
L = 250/x1 + 1090x1 − 7x2 − 5x3 + 7x2²/(40x1) + 5x3²/(16x1) + λ1(x1 + x3 − 4 + S1²) + λ2(x2 + x3 − 12 + S2²)
with λ1 ≥ 0
λ2 ≥ 0
The optimum solution is obtained by differentiating L partially with respect to each variable and equating to zero. These variables are
(i) decision variables x1, x2, x3
(ii) Lagrange multipliers λ1, λ2
(iii) slack variables S1, S2
∂L/∂S1 = 2λ1S1 = 0
or λ1S1 = 0
Similarly, λ2S2 = 0.
In order to generalize, let the problem be:
Minimize f(x1, x2, ..., xn)
subject to hᵢ(x1, x2, ..., xn) ≤ 0, i = 1, 2, ..., m
Now, using the slack variables,
hᵢ(x1, x2, ..., xn) + Sᵢ² = 0, i = 1, 2, ..., m
L = f(x1, x2, ..., xn) + Σ (i = 1 to m) λᵢ[hᵢ(x1, x2, ..., xn) + Sᵢ²]
∂L/∂xⱼ = ∂f/∂xⱼ + Σ (i = 1 to m) λᵢ ∂hᵢ/∂xⱼ, j = 1, 2, ..., n
The necessary conditions are:
∂f/∂xⱼ + Σ (i = 1 to m) λᵢ ∂hᵢ/∂xⱼ = 0, j = 1, 2, ..., n   ...(5.22)
λᵢhᵢ = 0, i = 1, 2, ..., m   ...(5.23)
hᵢ ≤ 0, i = 1, 2, ..., m   ...(5.24)
λᵢ ≥ 0, i = 1, 2, ..., m   ...(5.25)
72
and
iSi = 0
...(5.26)
Si2
...(5.27)
=0
4
x
5 x1
+ 2 +
x1 40 x1 8 x2
L = x1 +
and
4
5x
x
+ 2 + 1 + ( x1 + x2 11)
x1 40 x1 8 x2
x
4
5
2 +
+ =0
x12 40 x12 8 x2
...(5.28)
1
5x
1 + = 0
40 x1 8 x22
...(5.29)
( x1 + x2 11) = 0
...(5.30)
x1 + x2 11 0
...(5.31)
> 0
...(5.32)
MULTI-VARIABLE OPTIMIZATION
73
(b) = 0 and x1 + x2 11 = 0
(c) 0 and x1 + x2 11 = 0
Each of them are analyzed
(a) Substituting = 0 in (5.29),
1
5x
12 = 0
40 x1 8 x2
or
x2 = 5x1
Putting this and = 0 in (5.28),
x1 = 2
x2 = 5x1 = 10
and
= 0 and x2 = 11 x1
Substituting in (5.29),
x1 = 11/6 and x2 = 11 x1 = 55/6
Using these values, as the (5.28) is not satisfied, it is not
an optimal solution.
(c)
x2 = 11 x1
From (5.29), =
5 x1
1
8(11 x1 )2 40 x1
...(5.33)
From (5.28),
1
or
4 (11 x1 )
5
5 x1
1
+
+
=0
2
2
2
x1
40 x1
8 (11 x1 ) 8 (11 x1 )
40 x1
40 x12 171 +
275x12
=0
(11 x1 )2
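The single-variable equation of case (c) can be handed to any root-finding routine; a bisection sketch:

    def bisect(g, lo, hi, tol=1e-6):
        # Bisection for a root of g in [lo, hi]; g(lo), g(hi) have opposite signs.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    g = lambda x1: 40 * x1**2 - 171 + 275 * x1**2 / (11 - x1)**2
    x1 = bisect(g, 1.5, 2.5)
    x2 = 11 - x1
    lam = 5 * x1 / (8 * x2**2) - 1 / (40 * x1)        # eq. (5.33)
    print(round(x1, 3), round(x2, 3), round(lam, 4))  # lam > 0: case (c) holds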
5.4 PROFIT ANALYSIS
(Figure: total revenue and total cost as functions of quantity; the curves intersect at the breakeven quantities Q1 and Q2.)
The profit is given by
P = R − C
where R is the total revenue and C the total cost. For example, with R = 25Q and C = 45 + Q + 2Q², where Q is the quantity,
P = 25Q − [45 + Q + 2Q²]
or P = 24Q − 45 − 2Q²
At the breakeven points, P = 0:
2Q² − 24Q + 45 = 0
or Q = [24 ± √(24² − 360)]/4 = [24 ± √216]/4
or Q = 2.33 or 9.67
For maximum profit,
dP/dQ = 0
or 24 − 4Q = 0
or optimum Q = 6 units
Optimality may be ensured by getting a negative second order derivative.
Example 5.10. Let the revenue function R = 55Q − 3Q² and the total cost function C = 30 + 25Q. Solve the problem by obtaining the breakeven points and maximizing the profit.
Solution. The present case refers to a linear total cost and a nonlinear revenue function.
Now profit P = R − C = 30Q − 3Q² − 30
To compute the breakeven points,
30Q − 3Q² − 30 = 0
or 3Q² − 30Q + 30 = 0
or Q = [30 ± √540]/6
The breakeven points are 1.13 and 8.87.
For maximum profit,
dP/dQ = 0
or 30 − 6Q = 0
or optimum Q = 5 units.
Similarly, for R = 71Q − 3Q² and C = 80 + Q + 2Q²,
P = R − C = 70Q − 5Q² − 80
At P = 0,
5Q² − 70Q + 80 = 0
or Q = [70 ± √3300]/10
giving the breakeven points 1.26 and 12.74. Further,
dP/dQ = 70 − 10Q = 0
and the quantity corresponding to maximum profit is equal to 7 units.
A multivariable problem occurs when the profit function consists of more variables related to multiple items.
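For any profit function of the quadratic form P(Q) = −aQ² + bQ − c with a > 0, the breakeven points and the profit-maximizing quantity follow directly; a small sketch:

    import math

    def breakeven_and_optimum(a, b, c):
        # Roots of -a*Q^2 + b*Q - c = 0 and the optimum from dP/dQ = -2aQ + b = 0.
        disc = b * b - 4 * a * c
        roots = sorted([(b - math.sqrt(disc)) / (2 * a),
                        (b + math.sqrt(disc)) / (2 * a)])
        return roots, b / (2 * a)

    # Example 5.10: P = 30Q - 3Q^2 - 30
    print(breakeven_and_optimum(3, 30, 30))   # ([1.13..., 8.87...], 5.0)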
5.5 INVENTORY APPLICATION
Consider an inventory model with backordering, in which Q is the procurement quantity per cycle, J the maximum shortage quantity, D the annual demand, I the annual inventory holding cost per unit, K the annual shortage (backordering) cost per unit, and C the ordering cost per order. Each cycle of duration Q/D consists of a period (Q − J)/D with positive inventory and a period J/D during which shortages accumulate:
(Q − J)/D + J/D = Q/D   ...(5.34)
The fraction of the cycle time during which inventory is carried = [(Q − J)/D]/[Q/D] = (Q − J)/Q, and therefore the annual inventory holding cost
= [(Q − J)/2] · [(Q − J)/Q] · I = (Q − J)²I/(2Q)   ...(5.35)
The fraction of the cycle time during which shortages occur = [J/D]/[Q/D] = J/Q.
Annual shortage (or backordering) cost,
= (J/2) · (J/Q) · K = KJ²/(2Q)   ...(5.36)
With D/Q orders placed in a year, the annual ordering cost = (D/Q) · C   ...(5.37)
Total annual cost,
E = (Q − J)²I/(2Q) + KJ²/(2Q) + DC/Q   ...(5.38)
Now the objective is to minimize equation (5.38), and to
obtain the optimum values of Q, J, and finally E. The problem
can be solved by using any of the suitable methods discussed
in the present book.
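For instance, setting the partial derivatives of E with respect to J and Q to zero gives a closed-form optimum; a minimal sketch with hypothetical data values:

    import math

    def backorder_policy(D, C, I, K):
        # Minimize E(Q, J) of eq. (5.38): dE/dJ = 0 gives J = Q*I/(I+K);
        # dE/dQ = 0 then gives the optimum lot size Q.
        Q = math.sqrt((2 * D * C / I) * ((I + K) / K))
        J = Q * I / (I + K)
        E = ((Q - J) ** 2 * I + K * J ** 2 + 2 * D * C) / (2 * Q)
        return Q, J, E

    # hypothetical data: D = 1200 units/yr, C = Rs. 50/order, I = Rs. 2, K = Rs. 8
    print(backorder_policy(1200, 50, 2, 8))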