
Weighted Residual Methods

Consider the problem

    d²T/dx² + f(x) = 0    (1)
with boundary conditions T = T₁ at x = 0 and T = T₂ at x = 1,
for which an approximate solution is required. Assume the
solution will be found by using a sequence of functions of the
form:

    T_a = ψ + α₁ φ₁ + α₂ φ₂ + ... + α_N φ_N    (2)

in which ψ is a known function, included to satisfy non-zero
boundary conditions, α_i (i = 1, ..., N) are parameters to be
determined and φ_i (i = 1, ..., N) are known functions
corresponding to the adopted approximation. The functions φ_i
are identically zero at the boundary points x = 0 and x = 1 and
are also linearly independent, i.e.

    α₁ φ₁ + α₂ φ₂ + ... + α_N φ_N = 0

only if all α_i are equal to zero.
Since the solution given by expression (2) is approximate, its
substitution into equation (1) will produce an error or residual
given by

    R = d²T_a/dx² + f(x) ≠ 0
The basic idea of all weighted residual methods is to minimise
the residual R by distributing it within the region of definition
of the problem. The way the residual is distributed gives rise to
different weighted residual methods.
The methods to be discussed here satisfy the boundary conditions
exactly and distribute the residual along the region according
to the expression

    ∫_0^1 R W_i dx = 0    (3)

for i = 1, ..., N. The functions W_i are called weighting
functions.
The most common types of weighted residual methods will be
discussed in what follows.
Point Collocation Method
In the point collocation method, we choose as many points to
collocate the equation as there are unknown coefficients. The
coefficients are then calculated so that the residual vanishes at
each of these locations.
The collocation method amounts to using as weighting functions

    W_i = δ(x − x_i)

for i = 1, ..., N, where N is the number of unknown coefficients
and δ(x − x_i) is the Dirac delta function, which vanishes
everywhere except at x = x_i, where it is infinite. The Dirac
delta function also presents the following property:

    ∫_0^1 R W_i dx = ∫_0^1 R(x) δ(x − x_i) dx = R(x_i)

Therefore, the point collocation method implies that

    R(x_i) = 0

for i = 1, ..., N.
Example 1
Solve the differential equation

    d²T/dx² + T + x = 0

with boundary conditions T = 0 at x = 0 and T = 0 at x = 1,
using the point collocation method.
Solution
The following second-order approximating function will be
adopted:

    T_a = ψ + α₁ φ₁ + α₂ φ₂

Taking ψ = 0 (since all boundary conditions are zero),
φ₁ = x(1 − x) and φ₂ = x²(1 − x) (since all φ_i are zero at the
boundary points), we can write:

    T_a = x(1 − x) α₁ + x²(1 − x) α₂

It can be seen that the above approximation satisfies the
boundary conditions for any values of α₁ and α₂.
The derivatives of T_a are as follows:

    dT_a/dx = (1 − 2x) α₁ + (2x − 3x²) α₂

    d²T_a/dx² = −2 α₁ + (2 − 6x) α₂
The mathematical expression of the residual function is given by

    R = d²T_a/dx² + T_a + x
      = (−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x
Applying the above expression at the points x = 1/4 and x = 1/2
gives:

    −(29/16) α₁ + (35/64) α₂ + 1/4 = 0

    −(7/4) α₁ − (7/8) α₂ + 1/2 = 0
Solving the above equations for α₁ and α₂ gives:

    α₁ = 0.19355    α₂ = 0.18433
Substituting these values into the expression for T_a gives the
approximate solution

    T_a = 0.19355 x(1 − x) + 0.18433 x²(1 − x)

The approximate solution at some points can be compared to
the exact solution, given by T = (sin x / sin 1) − x, in the
table below.
x Exact Approximate
0.2 0.0361 0.0369
0.4 0.0628 0.0641
0.6 0.0710 0.0730
0.8 0.0525 0.0546
The accuracy of the solution is affected by the choice of
collocation points. It is generally better to choose collocation
points which are symmetric within the region of definition of
the problem. Taking, for example, points x = 1/4 and x = 3/4
gives:

    −(29/16) α₁ + (35/64) α₂ + 1/4 = 0

    −(29/16) α₁ − (151/64) α₂ + 3/4 = 0
the solution of which is

    α₁ = 0.18984    α₂ = 0.17204
The new approximate solution is compared to the exact solution
in the table below.
x Exact Approximate
0.2 0.0361 0.0359
0.4 0.0628 0.0621
0.6 0.0710 0.0703
0.8 0.0525 0.0524
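
For readers who wish to reproduce these numbers, the short Python
sketch below builds the 2×2 collocation system R(x_i) = 0 for any
pair of collocation points and compares the result with the exact
solution. It is only an illustration of the calculation above; the
function and variable names are not part of the original text.

import numpy as np

def collocation_system(points):
    # Each row enforces R(x_i) = 0, i.e.
    # (-2 + x - x^2) a1 + (2 - 6x + x^2 - x^3) a2 = -x
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for i, xi in enumerate(points):
        A[i, 0] = -2.0 + xi - xi**2
        A[i, 1] = 2.0 - 6.0*xi + xi**2 - xi**3
        b[i] = -xi
    return A, b

def exact(x):
    # Exact solution T = sin(x)/sin(1) - x
    return np.sin(x) / np.sin(1.0) - x

for points in ([0.25, 0.5], [0.25, 0.75]):
    a1, a2 = np.linalg.solve(*collocation_system(points))
    print("points", points, "-> a1 = %.5f, a2 = %.5f" % (a1, a2))
    for xv in (0.2, 0.4, 0.6, 0.8):
        Ta = a1*xv*(1 - xv) + a2*xv**2*(1 - xv)
        print("  x = %.1f  exact = %.4f  approx = %.4f" % (xv, exact(xv), Ta))

Running the sketch reproduces both sets of coefficients
(α₁ = 0.19355, α₂ = 0.18433 and α₁ = 0.18984, α₂ = 0.17204) and
the two tables above.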
Subdomain Collocation Method
In the subdomain collocation method, the interval is divided into
as many segments or subdomains as there are unknown coefficients.
The coefficients are then calculated so that the average value of
the residual is zero in each subdomain. Thus, for each subdomain
the weighting function is equal to 1 and equation (3) becomes
    ∫_{x_{i−1}}^{x_i} R dx = 0

for i = 1, ..., N, where x_{i−1} and x_i are the end points of
the subdomain.
Example 2
Solve the problem described in Example 1 using the subdomain
collocation method. Use the same approximating function as
before.
Solution
The approximating function is of the form

    T_a = x(1 − x) α₁ + x²(1 − x) α₂

and the expression of the residual function is given by

    R = (−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x
Dividing the interval into two subdomains, the following
equations are obtained:

    ∫_0^{0.5} R dx = ∫_0^{0.5} [(−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x] dx
                   = −(11/12) α₁ + (53/192) α₂ + 1/8 = 0

    ∫_{0.5}^1 R dx = ∫_{0.5}^1 [(−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x] dx
                   = −(11/12) α₁ − (229/192) α₂ + 3/8 = 0
Solving the above equations gives:

    α₁ = 0.18762    α₂ = 0.17021
The approximate solution using the subdomain collocation method
is compared to the exact solution in the table below.
x Exact Approximate
0.2 0.0361 0.0355
0.4 0.0628 0.0614
0.6 0.0710 0.0695
0.8 0.0525 0.0518
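
The same result can be obtained symbolically. The sketch below,
which assumes the SymPy library is available and uses illustrative
variable names, forms the residual from the trial functions and
integrates it over each subdomain, reproducing the coefficients
−11/12, 53/192 and −229/192 and the values of α₁ and α₂ quoted
above.

import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')
phi1, phi2 = x*(1 - x), x**2*(1 - x)    # trial functions, zero at x = 0 and x = 1
Ta = a1*phi1 + a2*phi2                  # approximating function
R = sp.diff(Ta, x, 2) + Ta + x          # residual of d^2T/dx^2 + T + x = 0

# Require the integrated residual to vanish over [0, 1/2] and [1/2, 1]
half = sp.Rational(1, 2)
eq1 = sp.integrate(R, (x, 0, half))     # -11/12 a1 + 53/192 a2 + 1/8
eq2 = sp.integrate(R, (x, half, 1))     # -11/12 a1 - 229/192 a2 + 3/8
sol = sp.solve([eq1, eq2], [a1, a2])
print({k: v.evalf(5) for k, v in sol.items()})   # a1 = 0.18762, a2 = 0.17021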
Galerkin's Method

Galerkin's method uses the approximating functions as weighting
functions,

    W_i = φ_i

for i = 1, ..., N. This means that the residual is distributed
according to

    ∫_0^1 R φ_i dx = 0
Example 3
Solve the problem described in Example 1 using the Galerkin
method with the same approximating function as before.
Solution
The approximating function is of the form

    T_a = α₁ φ₁ + α₂ φ₂ = x(1 − x) α₁ + x²(1 − x) α₂

and the expression of the residual function is given by

    R = (−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x
The equations for the Galerkin method are generated as follows:

    ∫_0^1 R φ₁ dx = ∫_0^1 [(−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x] x(1 − x) dx
                  = −(3/10) α₁ − (3/20) α₂ + 1/12 = 0

    ∫_0^1 R φ₂ dx = ∫_0^1 [(−2 + x − x²) α₁ + (2 − 6x + x² − x³) α₂ + x] x²(1 − x) dx
                  = −(3/20) α₁ − (13/105) α₂ + 1/20 = 0
Solving the above equations gives:

    α₁ = 0.19241    α₂ = 0.17073
The approximate solution using the Galerkin method is com-
pared to the exact solution in the table below.
x Exact Approximate
0.2 0.0361 0.0362
0.4 0.0628 0.0626
0.6 0.0710 0.0708
0.8 0.0525 0.0526
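
As with the previous example, the Galerkin equations can be
generated symbolically. The following sketch assumes SymPy and
uses illustrative names; the weighting functions are simply the
trial functions themselves.

import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')
phi = [x*(1 - x), x**2*(1 - x)]         # trial functions
Ta = a1*phi[0] + a2*phi[1]              # approximating function
R = sp.diff(Ta, x, 2) + Ta + x          # residual

# Galerkin: weight the residual by each trial function over (0, 1)
eqs = [sp.integrate(R*p, (x, 0, 1)) for p in phi]
# eqs -> -3/10 a1 - 3/20 a2 + 1/12  and  -3/20 a1 - 13/105 a2 + 1/20
sol = sp.solve(eqs, [a1, a2])
print({k: v.evalf(5) for k, v in sol.items()})   # a1 = 0.19241, a2 = 0.17073

Note that the coefficient matrix of the two equations is symmetric
(both off-diagonal terms are −3/20), which illustrates the symmetry
property mentioned below.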
Two important points should be noticed from the above results.
The first is that the accuracy of the Galerkin method is superior
to that of both collocation methods; in fact, the Galerkin method
is the most accurate of all weighted residual methods. The second
point is that the system matrix generated by the Galerkin method
is symmetric. Because of the above points, Galerkin's method is
the method normally employed for the formulation of the finite
element method.
Weak Formulations
In the previous example, the equations for the Galerkin method
were of the form

    ∫_0^1 R φ₁ dx = ∫_0^1 [d²T_a/dx² + T_a + x] φ₁ dx = 0    (4)

    ∫_0^1 R φ₂ dx = ∫_0^1 [d²T_a/dx² + T_a + x] φ₂ dx = 0    (5)
Since T_a is also given in terms of the approximating functions
φ₁ and φ₂, it is necessary that these functions are at least of
second order for the second derivative of T_a in the above
equations to be non-zero. However, this requirement can be
relaxed by using integration by parts.
Integrating the above equations by parts gives

    ∫_0^1 [d²T_a/dx² + T_a + x] φ₁ dx
        = ∫_0^1 [−(dT_a/dx)(dφ₁/dx) + (T_a + x) φ₁] dx + [(dT_a/dx) φ₁]_0^1 = 0
Recalling that, by definition, the functions φ_i are identically
zero at the boundary points, the above equation reduces to

    ∫_0^1 [−(dT_a/dx)(dφ₁/dx) + (T_a + x) φ₁] dx = 0
and similarly for equation (5)

    ∫_0^1 [−(dT_a/dx)(dφ₂/dx) + (T_a + x) φ₂] dx = 0
The above expressions, which are mathematically equivalent to
equations (4) and (5), are called the weak form of the weighted
residual statement for the problem under consideration. Their
use has the advantage that linear approximating functions are
now admissible.
Example 4
Solve the problem described in Example 1 using the weak
formulation and Galerkin's method with the same approximating
function as before.
Solution
Substituting the expressions for T_a, φ₁, φ₂ and their
derivatives gives

    ∫_0^1 R φ₁ dx = ∫_0^1 { −[(1 − 2x) α₁ + (2x − 3x²) α₂](1 − 2x)
                            + [(x − x²) α₁ + (x² − x³) α₂ + x](x − x²) } dx
                  = −(3/10) α₁ − (3/20) α₂ + 1/12 = 0

    ∫_0^1 R φ₂ dx = ∫_0^1 { −[(1 − 2x) α₁ + (2x − 3x²) α₂](2x − 3x²)
                            + [(x − x²) α₁ + (x² − x³) α₂ + x](x² − x³) } dx
                  = −(3/20) α₁ − (13/105) α₂ + 1/20 = 0
The above equations are the same as in Example 3.
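
The equivalence of the two formulations can also be checked
symbolically. The sketch below assumes SymPy and illustrative
names; it evaluates both the original (strong-form) Galerkin
integrals and their weak-form counterparts for the trial functions
of this example and confirms that they coincide, since the
boundary term vanishes.

import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')
phi = [x*(1 - x), x**2*(1 - x)]         # both vanish at x = 0 and x = 1
Ta = a1*phi[0] + a2*phi[1]

# Strong-form Galerkin integrals and their weak-form counterparts
strong = [sp.integrate((sp.diff(Ta, x, 2) + Ta + x)*p, (x, 0, 1)) for p in phi]
weak = [sp.integrate(-sp.diff(Ta, x)*sp.diff(p, x) + (Ta + x)*p, (x, 0, 1))
        for p in phi]

# The boundary term [dTa/dx * phi_i] from 0 to 1 is zero, so the forms agree
print([sp.simplify(w - s) for w, s in zip(weak, strong)])   # [0, 0]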
The main difficulty of applying weighted residual methods to
more complex practical problems is the requirement of global
approximating functions. These functions have to identically
satisfy the boundary conditions of the problem (given values of
the dependent variable), and to adequately represent the
geometry, the physical properties of the medium and the variation
of the dependent variable over the region of definition of the
problem. In computational terms, in order that the concept can
be used systematically, it is necessary to employ local, rather
than global, approximating functions.

Using local functions, the region is initially divided into a
certain number of sub-regions or elements and a local
approximation to the dependent variable is employed within each
element. Assembling all elements at a later stage generates a
system of equations which is related to the discrete system, with
a finite number of unknowns. Specification of boundary conditions
allows the system to be solved for the remaining unknowns.

The above constitutes the basis of the finite element method, to
be discussed next.