
Lectures 8-10

Dr A.I. Delis TUC 2012


CURVE FITTING
Part 4 (Chapter 5)
Describes techniques to fit curves (curve fitting) to discrete
data to obtain intermediate estimates.
There are two general approaches to curve fitting:
- Data exhibit a significant degree of scatter. The strategy is to derive a
  single curve that represents the general trend of the data.
- Data are very precise. The strategy is to pass a curve or a series of
  curves through each of the points.
In engineering two types of applications are encountered:
- Trend analysis. Predicting values of the dependent variable; may include
  extrapolation beyond data points or interpolation between data points.
- Hypothesis testing. Comparing an existing mathematical model with
  measured data.
Curve Fitting
- Least-Squares Regression: linear regression, polynomial regression,
  multiple linear regression
- Interpolation: Newton polynomial, Lagrange polynomial, splines
  interpolation
Motivation
In all practical engineering cases, the sampling data are acquired at
discrete points. That means the function values at points other than these
sampling points are undefined, but they are wanted in many applications.
[Figure: sampling points and an interpolation point x = c on a y vs. x plot]
Curve fitting tries to fit a continuous curve through the sampling data
that can then define the function value at any point by interpolation.
In many cases, it is not required to find a curve that fits every sampling
point exactly; instead a curve (e.g. the blue line) that shows the trend of
the function is wanted. This is called regression.
Least-squares regression:
  Visually sketch a line that conforms to the data (inaccurate).
Linear interpolation:
  Connect the data points consecutively by line segments (significant
  errors if the data are not evenly spaced or the underlying relationship
  is highly curvilinear).
Polynomial interpolation:
  Connect the data points consecutively by simple curves (too tedious and
  difficult to do manually).
Mathematical Background
Simple Statistics
In the course of an engineering study, if several measurements are made of
a particular quantity, additional insight can be gained by summarizing the
data in one or more well-chosen statistics that convey as much information
as possible about specific characteristics of the data set.
These descriptive statistics are most often selected to represent
- The location of the center of the distribution of the data,
- The degree of spread of the data.
Simple Statistics
Worked example for a sample of n = 24 measurements:

arithmetic mean:
$\bar{y} = \frac{\sum y_i}{n} = \frac{158.4}{24} = 6.6$

standard deviation:
$s_y = \sqrt{\frac{S_t}{n-1}} = \sqrt{\frac{0.217000}{24-1}} = 0.097133$,
where $S_t = \sum (y_i - \bar{y})^2$

variance:
$s_y^2 = \frac{\sum (y_i - \bar{y})^2}{n-1} = (0.097133)^2 = 0.009435$

coefficient of variation:
$c.v. = \frac{s_y}{\bar{y}} \times 100\% = \frac{0.097133}{6.6} \times 100\% = 1.47\%$
Arithmetic mean. The sum of the individual data points ($y_i$) divided by
the number of points ($n$):
$\bar{y} = \frac{\sum y_i}{n}, \quad i = 1, \ldots, n$

Standard deviation. The most common measure of spread for a sample:
$s_y = \sqrt{\frac{S_t}{n-1}}, \quad S_t = \sum (y_i - \bar{y})^2$
or, in a form convenient for computation,
$s_y^2 = \frac{\sum y_i^2 - \left(\sum y_i\right)^2 / n}{n-1}$
Variance. Representation of spread by the square of the standard
deviation:
$s_y^2 = \frac{\sum (y_i - \bar{y})^2}{n-1}$    ($n-1$ = degrees of freedom)

Coefficient of variation. Has the utility to quantify the spread of data:
$c.v. = \frac{s_y}{\bar{y}} \times 100\%$
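The statistics above can be sketched in a few lines of Python (the slides themselves use MATLAB later on; Python is used here only for illustration). The data below are hypothetical, chosen just to exercise the formulas:

```python
# A minimal sketch of the simple statistics above: arithmetic mean,
# sample standard deviation, variance, and coefficient of variation.
import math

y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical measurements

n = len(y)
mean = sum(y) / n                               # ybar = sum(y_i)/n
St = sum((yi - mean) ** 2 for yi in y)          # total sum of squares about the mean
s_y = math.sqrt(St / (n - 1))                   # standard deviation
variance = St / (n - 1)                         # s_y^2
cv = s_y / mean * 100                           # coefficient of variation, %

print(mean, s_y, variance, cv)
```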
The Normal Distribution
In most engineering applications, the sampling data set conforms to the
normal distribution if the size of the data set is sufficiently large.
For the normal distribution, the range defined by $\bar{y} - s_y$ and
$\bar{y} + s_y$ will encompass approximately 68 percent of the total
measurements. Similarly, the range between $\bar{y} - 2s_y$ and
$\bar{y} + 2s_y$ will encompass approximately 95%.
[Figure: histogram of the data with a superimposed normal distribution]
What is Regression?
Given n data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, best fit
$y = f(x)$ to the data. The best fit is generally based on minimizing the
sum of the squares of the residuals, $S_r$.

Residual at a point is $e_i = y_i - f(x_i)$.

Sum of the squares of the residuals:
$S_r = \sum_{i=1}^{n} \left(y_i - f(x_i)\right)^2$

Figure. Basic model for regression.
Least-Squares Regression
Section 5.2

Linear Regression
Fitting a straight line to a set of paired observations:
$(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$.
$y = a_0 + a_1 x + e$
$a_1$ - slope
$a_0$ - intercept
$e$ - error, or residual, between the model and the observations
(measured y):
$e = y - a_0 - a_1 x$
(Inadequate) Criteria for a "Best" Fit
Minimize the sum of the residual errors for all available data
(criterion #1):
$\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)$
n = total number of points
However, this is an inadequate criterion; so is the sum of the absolute
values (criterion #2):
$\sum_{i=1}^{n} |e_i| = \sum_{i=1}^{n} |y_i - a_0 - a_1 x_i|$
Minimax criterion: minimizing the maximum distance that a point falls from
the line is inadequate as well.
Linear Regression - Criterion #1
Given n data points $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, best fit
$y = a_0 + a_1 x$ to the data.
Figure. Linear regression of y vs. x data showing residuals at a typical
point, $x_i$.
Does minimizing $\sum_{i=1}^{n} e_i$ work as a criterion, where
$e_i = y_i - (a_0 + a_1 x_i)$?
Example for Criterion #1
Example: Given the data points (2,4), (3,6), (2,6) and (3,8), best fit the
data to a straight line using Criterion #1.

Table. Data points.
x     y
2.0   4.0
3.0   6.0
2.0   6.0
3.0   8.0

Figure. Data points for y vs. x data.
Linear Regression - Criterion #1
Using y = 4x - 4 as the regression curve:

Table. Residuals at each point for the regression model y = 4x - 4.
x     y     y_predicted   e = y - y_predicted
2.0   4.0   4.0            0.0
3.0   6.0   8.0           -2.0
2.0   6.0   4.0            2.0
3.0   8.0   8.0            0.0

$\sum_{i=1}^{4} e_i = 0$

Figure. Regression curve y = 4x - 4 for the y vs. x data.
Linear Regression - Criterion #1
Using y = 6 as the regression curve:

Table. Residuals at each point for y = 6.
x     y     y_predicted   e = y - y_predicted
2.0   4.0   6.0           -2.0
3.0   6.0   6.0            0.0
2.0   6.0   6.0            0.0
3.0   8.0   6.0            2.0

$\sum_{i=1}^{4} e_i = 0$

Figure. Regression curve y = 6 for the y vs. x data.
Linear Regression - Criterion #1
$\sum_{i=1}^{4} e_i = 0$ for both regression models, y = 4x - 4 and y = 6.
The sum of the residuals is as small as possible (zero), but the
regression model is not unique. Hence the above criterion of minimizing
the sum of the residuals is a bad criterion.
Linear Regression - Criterion #2
Will minimizing $\sum_{i=1}^{n} |e_i|$ work any better, where
$e_i = y_i - a_0 - a_1 x_i$?
Figure. Linear regression of y vs. x data showing residuals at a typical
point, $x_i$.
Linear Regression - Criterion #2
Using y = 4x - 4 as the regression curve:

Table. Absolute residuals at each point, employing the y = 4x - 4
regression model.
x     y     y_predicted   |e| = |y - y_predicted|
2.0   4.0   4.0           0.0
3.0   6.0   8.0           2.0
2.0   6.0   4.0           2.0
3.0   8.0   8.0           0.0

$\sum_{i=1}^{4} |e_i| = 4$

Figure. Regression curve y = 4x - 4 for the y vs. x data.
Linear Regression - Criterion #2
Using y = 6 as the regression curve:

Table. Absolute residuals at each point, employing the y = 6 model.
x     y     y_predicted   |e| = |y - y_predicted|
2.0   4.0   6.0           2.0
3.0   6.0   6.0           0.0
2.0   6.0   6.0           0.0
3.0   8.0   6.0           2.0

$\sum_{i=1}^{4} |e_i| = 4$

Figure. Regression curve y = 6 for the y vs. x data.
Linear Regression - Criterion #2
$\sum_{i=1}^{4} |e_i| = 4$ for both regression models, y = 4x - 4 and
y = 6.
The sum of the absolute errors has been made as small as possible (4), but
the regression model is not unique. Hence the above criterion of
minimizing the sum of the absolute values of the residuals is also a bad
criterion.
Can you find a regression line for which $\sum_{i=1}^{4} |e_i| < 4$ that
has unique regression coefficients?
The best strategy is to minimize the sum of the squares of the residuals
between the measured y and the y calculated with the linear model:
$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_{i,\mathrm{measured}} - y_{i,\mathrm{model}})^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$
This criterion yields a unique line for a given set of data.
Least-Squares Fit of a Straight Line
Setting the partial derivatives of $S_r$ to zero:
$\frac{\partial S_r}{\partial a_0} = -2 \sum (y_i - a_0 - a_1 x_i) = 0$
$\frac{\partial S_r}{\partial a_1} = -2 \sum \left[(y_i - a_0 - a_1 x_i) x_i\right] = 0$
Expanding, and noting that $\sum a_0 = n a_0$:
$0 = \sum y_i - n a_0 - a_1 \sum x_i$
$0 = \sum x_i y_i - a_0 \sum x_i - a_1 \sum x_i^2$
Normal equations, which can be solved simultaneously:
$n a_0 + \left(\sum x_i\right) a_1 = \sum y_i$
$\left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 = \sum x_i y_i$
Solving:
$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}$
$a_0 = \bar{y} - a_1 \bar{x}$    (mean values)
See Example 5-1.
The error or residual represents the vertical distance between the
measured data and the straight line.
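As a quick check of the normal-equation formulas, here is a short Python sketch (Python rather than the slides' MATLAB, purely for illustration) applied to the four points (2,4), (3,6), (2,6), (3,8) from the criterion examples above:

```python
# Least-squares straight-line fit via the normal equations above,
# applied to the example points (2,4), (3,6), (2,6), (3,8).
x = [2.0, 3.0, 2.0, 3.0]
y = [4.0, 6.0, 6.0, 8.0]

n = len(x)
sx = sum(x); sy = sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)

a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope
a0 = sy / n - a1 * sx / n                        # intercept = ybar - a1*xbar

print(a0, a1)  # → 1.0 2.0, i.e. the unique least-squares line y = 1 + 2x
```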
The reduction in the spread of the data indicated by the bell-shaped
curves shows the improvement due to the linear regression.
(a) Spread of data around the mean. (b) Spread of data around the best-fit
line.

Figure. Linear regression with small and large errors.
The common measure of the spread of data is the standard deviation:
$s_y = \sqrt{\frac{S_t}{n-1}}$
To calculate the standard error of the estimate, which quantifies the
spread of the data around the regression line:
$s_{y/x} = \sqrt{\frac{S_r}{n-2}}$
where
$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_{i,\mathrm{measured}} - y_{i,\mathrm{model}})^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$
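A minimal Python sketch of the standard error of the estimate, again on the four example points; the least-squares line for those points works out to y = 1 + 2x (an assumption stated here, computed from the normal equations above, not given explicitly on the slides):

```python
# Standard error of the estimate s_{y/x} = sqrt(S_r/(n-2)) for a fitted
# straight line, on the example points with their least-squares line.
import math

x = [2.0, 3.0, 2.0, 3.0]
y = [4.0, 6.0, 6.0, 8.0]
a0, a1 = 1.0, 2.0                      # least-squares coefficients for these points

n = len(x)
Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))  # residual sum of squares
s_yx = math.sqrt(Sr / (n - 2))         # spread of the data around the regression line

print(Sr, s_yx)  # → 4.0 1.4142135623730951
```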
If:
a) The magnitude of the spread of the points around the straight line is
similar along the line, and
b) The distribution is normal,
then least-squares regression will give the best estimates of $a_0$ and
$a_1$.
Under these conditions, compare the standard deviation $s_y$ (spread
around the mean) with the standard error of estimate $s_{y/x}$ (spread
around the regression line).
$S_t$ is the total sum of squares of the errors/residuals between the
data points and the mean of the data points:
$S_t = \sum (y_i - \bar{y})^2$
While $S_r$ is the sum of the squares of the residuals (errors) between
the measured y and the y calculated with the linear model:
$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_{i,\mathrm{measured}} - y_{i,\mathrm{model}})^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$
Goodness of Our Fit
If
- the total sum of the squares around the mean for the dependent variable,
  y, is $S_t$,
- the sum of the squares of the residuals around the regression line is
  $S_r$,
then $S_t - S_r$ quantifies the improvement or error reduction due to
describing the data in terms of a straight line rather than as an average
value.
$r^2 = \frac{S_t - S_r}{S_t}$
$r^2$ - coefficient of determination
$r$ - correlation coefficient
For a perfect fit $S_r = 0$ and $r = r^2 = 1$, signifying that the line
explains 100 percent of the variability of the data.
For $r = r^2 = 0$, $S_r = S_t$ and the fit represents no improvement.
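The coefficient of determination can be sketched in Python for the same four example points and their least-squares line y = 1 + 2x (the line is an assumption computed from the earlier normal equations, not stated on the slides):

```python
# Coefficient of determination r^2 = (S_t - S_r)/S_t for the example
# points with the least-squares line y = 1 + 2x.
x = [2.0, 3.0, 2.0, 3.0]
y = [4.0, 6.0, 6.0, 8.0]
a0, a1 = 1.0, 2.0

ybar = sum(y) / len(y)
St = sum((yi - ybar) ** 2 for yi in y)                       # spread around the mean
Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))   # spread around the line
r2 = (St - Sr) / St

print(St, Sr, r2)  # → 8.0 4.0 0.5
```

Here the line explains half of the variability of the data.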
Example 1
The torque T needed to turn the torsion spring of a mousetrap through an
angle $\theta$ is given below. Find the constants for the model
$T = k_1 + k_2 \theta$.

Table. Torque vs. angle for a torsional spring.
Angle, theta    Torque, T
(radians)       (N-m)
0.698132        0.188224
0.959931        0.209138
1.134464        0.230052
1.570796        0.250965
1.919862        0.313707

Figure. Data points for angle vs. torque data.
Example 1 cont.
The following table shows the summations needed for the calculation of
the constants in the regression model.

Table. Tabulation of data for calculation of the important summations.
theta        T          theta^2       T*theta
(radians)    (N-m)      (radians^2)   (N-m-radians)
0.698132     0.188224   0.487388      0.131405
0.959931     0.209138   0.921468      0.200758
1.134464     0.230052   1.2870        0.260986
1.570796     0.250965   2.4674        0.394215
1.919862     0.313707   3.6859        0.602274
sum: 6.2831  1.1921     8.8491        1.5896

With n = 5, using the equation described for the slope:
$k_2 = \frac{n \sum \theta_i T_i - \sum \theta_i \sum T_i}{n \sum \theta_i^2 - \left(\sum \theta_i\right)^2}
     = \frac{5(1.5896) - (6.2831)(1.1921)}{5(8.8491) - (6.2831)^2}
     = 9.6091 \times 10^{-2}$ N-m/rad
Example 1 cont.
Use the average torque and average angle to calculate $k_1$:
$\bar{T} = \frac{\sum T_i}{n} = \frac{1.1921}{5} = 2.3842 \times 10^{-1}$
$\bar{\theta} = \frac{\sum \theta_i}{n} = \frac{6.2831}{5} = 1.2566$
Using $k_1 = \bar{T} - k_2 \bar{\theta}$:
$k_1 = 2.3842 \times 10^{-1} - (9.6091 \times 10^{-2})(1.2566)
     = 1.1767 \times 10^{-1}$ N-m

Example 1 Results
Using linear regression, a trend line is found from the data.
Figure. Linear regression of torque versus angle data.
Can you find the energy in the spring if it is twisted from 0 to 180
degrees?
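Example 1 can be reproduced with a short Python sketch of the same normal-equation formulas (Python rather than the slides' MATLAB, for illustration):

```python
# Reproducing Example 1: straight-line fit T = k1 + k2*theta to the
# torque vs. angle data via the normal-equation formulas.
theta = [0.698132, 0.959931, 1.134464, 1.570796, 1.919862]  # radians
T = [0.188224, 0.209138, 0.230052, 0.250965, 0.313707]      # N-m

n = len(theta)
s_t = sum(theta); s_T = sum(T)
s_tT = sum(a * b for a, b in zip(theta, T))
s_tt = sum(a * a for a in theta)

k2 = (n * s_tT - s_t * s_T) / (n * s_tt - s_t ** 2)  # slope, N-m/rad
k1 = s_T / n - k2 * s_t / n                          # intercept, N-m

print(k1, k2)  # approximately 0.11767 and 0.096091, as on the slides
```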
Example 2
To find the longitudinal modulus of a composite, the following data are
collected. Find the longitudinal modulus E using the regression model
$\sigma = E \epsilon$ and the sum of the squares of the residuals.

Table. Stress vs. strain data.
Strain (%)   Stress (MPa)
0            0
0.183        306
0.36         612
0.5324       917
0.702        1223
0.867        1529
1.0244       1835
1.1774       2140
1.329        2446
1.479        2752
1.5          2767
1.56         2896

Figure. Data points for stress vs. strain data.
Example 2 cont.
The residual at each point is given by
$e_i = \sigma_i - E \epsilon_i$
The sum of the squares of the residuals then is
$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (\sigma_i - E \epsilon_i)^2$
Differentiate with respect to E:
$\frac{\partial S_r}{\partial E} = \sum_{i=1}^{n} 2 (\sigma_i - E \epsilon_i)(-\epsilon_i) = 0$
Therefore
$E = \frac{\sum_{i=1}^{n} \sigma_i \epsilon_i}{\sum_{i=1}^{n} \epsilon_i^2}$
Example 2 cont.

Table. Summation data for the regression model (strain converted to m/m,
stress to Pa).
i     epsilon       sigma        epsilon^2     sigma*epsilon
1     0.0000        0.0000       0.0000        0.0000
2     1.8300e-3     3.0600e8     3.3489e-6     5.5998e5
3     3.6000e-3     6.1200e8     1.2960e-5     2.2032e6
4     5.3240e-3     9.1700e8     2.8345e-5     4.8821e6
5     7.0200e-3     1.2230e9     4.9280e-5     8.5855e6
6     8.6700e-3     1.5290e9     7.5169e-5     1.3256e7
7     1.0244e-2     1.8350e9     1.0494e-4     1.8798e7
8     1.1774e-2     2.1400e9     1.3863e-4     2.5196e7
9     1.3290e-2     2.4460e9     1.7662e-4     3.2507e7
10    1.4790e-2     2.7520e9     2.1874e-4     4.0702e7
11    1.5000e-2     2.7670e9     2.2500e-4     4.1505e7
12    1.5600e-2     2.8960e9     2.4336e-4     4.5178e7

With
$\sum_{i=1}^{12} \epsilon_i^2 = 1.2764 \times 10^{-3}$
and
$\sum_{i=1}^{12} \sigma_i \epsilon_i = 2.3337 \times 10^{8}$
Using
$E = \frac{\sum_{i=1}^{12} \sigma_i \epsilon_i}{\sum_{i=1}^{12} \epsilon_i^2}
   = \frac{2.3337 \times 10^{8}}{1.2764 \times 10^{-3}} = 182.84$ GPa
Example 2 Results
The equation $\sigma = 182.84\,\epsilon$ GPa describes the data.
Figure. Linear regression for stress vs. strain data.
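Example 2 can also be reproduced directly from the derived formula E = sum(sigma*eps)/sum(eps^2), sketched here in Python:

```python
# Reproducing Example 2: fit sigma = E*epsilon through the origin,
# with strain converted from % to m/m and stress from MPa to Pa.
eps = [0.0, 1.83e-3, 3.6e-3, 5.324e-3, 7.02e-3, 8.67e-3,
       1.0244e-2, 1.1774e-2, 1.329e-2, 1.479e-2, 1.5e-2, 1.56e-2]   # strain, m/m
sigma = [0.0, 306e6, 612e6, 917e6, 1223e6, 1529e6,
         1835e6, 2140e6, 2446e6, 2752e6, 2767e6, 2896e6]            # stress, Pa

E = sum(s * e for s, e in zip(sigma, eps)) / sum(e * e for e in eps)

print(E / 1e9)  # longitudinal modulus in GPa, about 182.84
```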
Adequacy of the Linear Regression Models
[Figure: the y vs. x data plotted alone]
[Figure: the same y vs. x data with a fitted straight line]
Is this adequate?
Quality of Fitted Data
Does the model describe the data adequately?
How well does the model predict the response variable?
3 Checks:
1. Plot the data and the model.
2. Find the standard error of estimate.
3. Calculate the coefficient of determination.
Example: Check the adequacy of the straight-line model
$\alpha = a_0 + a_1 T$
for the given data.

T (F)   alpha (in/in/F)
-340    2.45
-260    3.58
-180    4.52
-100    5.28
-20     5.86
 60     6.36

1. Plot data and model
$\alpha(T) = 6.0325 + 0.0096964\, T$
[Figure: data points and the fitted straight line]
2. Find the standard error of estimate
$s_{\alpha/T} = \sqrt{\frac{S_r}{n-2}}$, where
$S_r = \sum_{i=1}^{n} (\alpha_i - a_0 - a_1 T_i)^2$
With $\alpha(T) = 6.0325 + 0.0096964\, T$:

T_i     alpha_i   a_0 + a_1 T_i   alpha_i - a_0 - a_1 T_i
-340    2.45      2.7357          -0.28571
-260    3.58      3.5114           0.068571
-180    4.52      4.2871           0.23286
-100    5.28      5.0629           0.21714
-20     5.86      5.8386           0.021429
 60     6.36      6.6143          -0.25429

$S_r = 0.25283$
$s_{\alpha/T} = \sqrt{\frac{0.25283}{6-2}} = 0.25141$
Standard Error of Estimate
[Figure: the data, the fitted line, and the residuals scaled by
$s_{\alpha/T}$]

Scaled Residuals
$\mathrm{Scaled\ Residual} = \frac{\mathrm{Residual}}{\mathrm{Standard\ Error\ of\ Estimate}} = \frac{\alpha_i - a_0 - a_1 T_i}{s_{\alpha/T}}$
95% of the scaled residuals need to be in [-2, 2].
With $s_{\alpha/T} = 0.25141$:

T_i     Residual    Scaled Residual
-340    -0.28571    -1.1364
-260     0.068571    0.27275
-180     0.23286     0.92622
-100     0.21714     0.86369
-20      0.021429    0.085235
 60     -0.25429    -1.0115
3. Coefficient of determination
$S_t = \sum_{i=1}^{n} (\alpha_i - \bar{\alpha})^2$
$S_r = \sum_{i=1}^{n} (\alpha_i - a_0 - a_1 T_i)^2$
$r^2 = \frac{S_t - S_r}{S_t}$

$S_t$ is the sum of the squares of the residuals between the data and the
mean:
$S_t = \sum (y_i - \bar{y})^2$
[Figure: data points $(x_i, y_i)$ and the horizontal mean line
$y = \bar{y}$]

$S_r$ is the sum of the squares of the residuals between the observed and
the predicted values:
$S_r = \sum_{i=1}^{n} (\alpha_i - a_0 - a_1 T_i)^2$
[Figure: data points $(x_i, y_i)$ and the regression line
$y = a_0 + a_1 x$]
Limits of Coefficient of Determination
$r^2 = \frac{S_t - S_r}{S_t}, \quad 0 \le r^2 \le 1$

Calculation of $S_t$ (with $\bar{\alpha} = 4.6750$):

T_i     alpha_i   alpha_i - alpha_bar
-340    2.45      -2.2250
-260    3.58      -1.0950
-180    4.52      -0.15500
-100    5.28       0.60500
-20     5.86       1.1850
 60     6.36       1.6850

$S_t = 10.783$
Calculation of $S_r$:

T_i     alpha_i   a_0 + a_1 T_i   alpha_i - a_0 - a_1 T_i
-340    2.45      2.7357          -0.28571
-260    3.58      3.5114           0.068571
-180    4.52      4.2871           0.23286
-100    5.28      5.0629           0.21714
-20     5.86      5.8386           0.021429
 60     6.36      6.6143          -0.25429

$S_r = 0.25283$

Coefficient of determination:
$r^2 = \frac{S_t - S_r}{S_t} = \frac{10.783 - 0.25283}{10.783} = 0.97655$
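The adequacy checks 2 and 3 can be reproduced with a short Python sketch (Python for illustration only):

```python
# Reproducing the adequacy checks: standard error of estimate and r^2
# for the straight-line model alpha = 6.0325 + 0.0096964*T.
import math

T = [-340.0, -260.0, -180.0, -100.0, -20.0, 60.0]       # deg F
alpha = [2.45, 3.58, 4.52, 5.28, 5.86, 6.36]            # thermal expansion coeff.
a0, a1 = 6.0325, 0.0096964

n = len(T)
abar = sum(alpha) / n
St = sum((a - abar) ** 2 for a in alpha)                      # around the mean
Sr = sum((a - a0 - a1 * t) ** 2 for a, t in zip(alpha, T))    # around the line
s_at = math.sqrt(Sr / (n - 2))                                # standard error
r2 = (St - Sr) / St                                           # coeff. of determination

print(St, Sr, s_at, r2)  # approximately 10.783, 0.25283, 0.25141, 0.97655
```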
Caution in the Use of r^2
- An increase in the spread of the regressor variable (x) in y vs. x
  increases $r^2$.
- A large regression slope artificially yields a high $r^2$.
- A large $r^2$ does not measure the appropriateness of the linear model.
- A large $r^2$ does not imply that the regression model will predict
  accurately.
Data and Model
$\alpha(T) = 6.0248 + 0.0093868\, T$
[Figure: data points and the fitted straight line]

Data that are ill-suited for linear least-squares regression (a).
Data indicate that a parabola is preferable (b).
What polynomial model to choose, if one needs to be chosen?
The general model is
$y = a_0 + a_1 x + a_2 x^2 + \ldots + a_m x^m$

First-order polynomial:
[Figure: polynomial regression of order 1 fitted to the y vs. x data]

Second-order polynomial:
[Figure: polynomial regression of order 2 fitted to the y vs. x data]

Which model to choose?
[Figure: the y vs. x data alone]

Optimum polynomial:
[Figure: $S_r / (n - (m+1))$ plotted against the order of the
polynomial, m]
Polynomial Regression
(Section 5.4)
Some engineering data are poorly represented by a straight line. For these
cases a curve is better suited to fit the data. The least-squares method
can readily be extended to fit the data to higher-order polynomials
(Sec. 5.4).

[Figure: the same set of data points fitted with polynomials of different
degrees]
The least-squares procedure is extended to a higher-order polynomial. For
the second-order polynomial, or quadratic,
$y = a_0 + a_1 x + a_2 x^2 + e$
Hence, the sum of the squares of the residuals (error), $S_r$:
$S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i - a_2 x_i^2\right)^2$
From $\partial S_r / \partial a_0 = \partial S_r / \partial a_1 = \partial S_r / \partial a_2 = 0$:
$n a_0 + \left(\sum x_i\right) a_1 + \left(\sum x_i^2\right) a_2 = \sum y_i$
$\left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 + \left(\sum x_i^3\right) a_2 = \sum x_i y_i$
$\left(\sum x_i^2\right) a_0 + \left(\sum x_i^3\right) a_1 + \left(\sum x_i^4\right) a_2 = \sum x_i^2 y_i$
In matrix form:
$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix}
 \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} =
 \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$
Calculate $a_0$, $a_1$ and $a_2$ using Gauss elimination.
Similarly, the total sum of squares of the errors between the data points
and the mean of the data points is
$S_t = \sum (y_i - \bar{y})^2$
while the standard error, which estimates the spread of the data, is
$s_{y/x} = \sqrt{\frac{S_r}{n - (m+1)}}$
n = total number of points
m = order of the polynomial

[Figure: algorithm for implementation of polynomial regression]
Example - Polynomial Model
Regress the thermal expansion coefficient vs. temperature data to a
second-order polynomial.

Table. Data points for temperature vs. coefficient of thermal expansion.
Temperature, T   Coefficient of thermal
(F)              expansion, alpha (in/in/F)
 80              6.47e-6
 40              6.24e-6
-40              5.72e-6
-120             5.09e-6
-200             4.30e-6
-280             3.33e-6
-340             2.45e-6

Figure. Data points for thermal expansion coefficient vs. temperature.
Example - Polynomial Model cont.
We are to fit the data to the polynomial regression model
$\alpha = a_0 + a_1 T + a_2 T^2$
The coefficients $a_0, a_1, a_2$ are found by differentiating the sum of
the squares of the residuals with respect to each variable and setting the
values equal to zero, which gives the normal equations
$\begin{bmatrix} n & \sum T_i & \sum T_i^2 \\ \sum T_i & \sum T_i^2 & \sum T_i^3 \\ \sum T_i^2 & \sum T_i^3 & \sum T_i^4 \end{bmatrix}
 \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} =
 \begin{bmatrix} \sum \alpha_i \\ \sum T_i \alpha_i \\ \sum T_i^2 \alpha_i \end{bmatrix}$
with all sums taken over $i = 1, \ldots, n$.
Example - Polynomial Model cont.
The necessary summations are as follows (n = 7):
$\sum_{i=1}^{7} T_i = -8.6000 \times 10^{2}$
$\sum_{i=1}^{7} T_i^2 = 2.5800 \times 10^{5}$
$\sum_{i=1}^{7} T_i^3 = -7.0472 \times 10^{7}$
$\sum_{i=1}^{7} T_i^4 = 2.1363 \times 10^{10}$
$\sum_{i=1}^{7} \alpha_i = 3.3600 \times 10^{-5}$
$\sum_{i=1}^{7} T_i \alpha_i = -2.6978 \times 10^{-3}$
$\sum_{i=1}^{7} T_i^2 \alpha_i = 8.5013 \times 10^{-1}$
Example - Polynomial Model cont.
Using these summations, we can now calculate $a_0, a_1, a_2$:
$\begin{bmatrix} 7.0000 & -8.6000 \times 10^{2} & 2.5800 \times 10^{5} \\ -8.6000 \times 10^{2} & 2.5800 \times 10^{5} & -7.0472 \times 10^{7} \\ 2.5800 \times 10^{5} & -7.0472 \times 10^{7} & 2.1363 \times 10^{10} \end{bmatrix}
 \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} =
 \begin{bmatrix} 3.3600 \times 10^{-5} \\ -2.6978 \times 10^{-3} \\ 8.5013 \times 10^{-1} \end{bmatrix}$
Solving the above system of simultaneous linear equations, we have
$a_0 = 6.0217 \times 10^{-6}$
$a_1 = 6.2782 \times 10^{-9}$
$a_2 = -1.2218 \times 10^{-11}$
The polynomial regression model is then
$\alpha = a_0 + a_1 T + a_2 T^2
 = 6.0217 \times 10^{-6} + 6.2782 \times 10^{-9}\, T - 1.2218 \times 10^{-11}\, T^2$
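The quadratic fit can be reproduced in a Python sketch that builds the 3x3 normal equations from the data and solves them by Gauss elimination, as the slides suggest (Python used for illustration; a naive elimination without pivoting, which is adequate for this small well-conditioned system):

```python
# Sketch of the example's quadratic fit alpha = a0 + a1*T + a2*T^2:
# assemble the normal equations and solve by Gauss elimination.
T = [80.0, 40.0, -40.0, -120.0, -200.0, -280.0, -340.0]
alpha = [6.47e-6, 6.24e-6, 5.72e-6, 5.09e-6, 4.30e-6, 3.33e-6, 2.45e-6]

S = lambda p: sum(t ** p for t in T)                        # sum of T^p
Sy = lambda p: sum((t ** p) * a for t, a in zip(T, alpha))  # sum of T^p * alpha

A = [[S(0), S(1), S(2)],
     [S(1), S(2), S(3)],
     [S(2), S(3), S(4)]]
b = [Sy(0), Sy(1), Sy(2)]

# Forward elimination.
for k in range(3):
    for i in range(k + 1, 3):
        f = A[i][k] / A[k][k]
        for j in range(k, 3):
            A[i][j] -= f * A[k][j]
        b[i] -= f * b[k]
# Back substitution.
a = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, 3))) / A[i][i]

print(a)  # approximately [6.0217e-6, 6.2782e-9, -1.2218e-11]
```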
General Linear Least Squares
$y = a_0 z_0 + a_1 z_1 + a_2 z_2 + \ldots + a_m z_m + e$
$\{Y\} = [Z]\{A\} + \{E\}$
$z_0, z_1, \ldots, z_m$ are the m+1 basis functions
$[Z]$ - matrix of the calculated values of the basis functions at the
measured values of the independent variable
$\{Y\}$ - observed values of the dependent variable
$\{A\}$ - unknown coefficients
$\{E\}$ - residuals
$S_r = \sum_{i=1}^{n} \left( y_i - \sum_{j=0}^{m} a_j z_{ji} \right)^2$
$S_r$ is minimized by taking its partial derivative with respect to each
of the coefficients and setting the resulting equation equal to zero.
In multiple linear regression, y is a linear function of two or more
independent variables $(x_1, x_2, x_3, \ldots)$. For a function of $x_1$
and $x_2$ (2-D),
$y = a_0 + a_1 x_1 + a_2 x_2 + e$
Hence, the sum of the squares of the residuals (error), $S_r$:
$S_r = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_{1i} - a_2 x_{2i}\right)^2$
Calculate $a_0$, $a_1$ and $a_2$ using Gauss elimination:
$\begin{bmatrix} n & \sum x_{1i} & \sum x_{2i} \\ \sum x_{1i} & \sum x_{1i}^2 & \sum x_{1i} x_{2i} \\ \sum x_{2i} & \sum x_{1i} x_{2i} & \sum x_{2i}^2 \end{bmatrix}
 \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} =
 \begin{bmatrix} \sum y_i \\ \sum x_{1i} y_i \\ \sum x_{2i} y_i \end{bmatrix}$
Similarly, the standard error, which estimates the spread of the data, is
$s_{y/x} = \sqrt{\frac{S_r}{n - (m+1)}}$

Multiple Linear Regression
[Figure: a plane fitted to $(x_1, x_2, y)$ data]
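A Python sketch of multiple linear regression via the 3x3 normal equations above; the data here are hypothetical, generated to lie exactly on the plane y = 5 + 4*x1 - 3*x2 so the fit can be checked by eye:

```python
# Multiple linear regression y = a0 + a1*x1 + a2*x2 via the normal
# equations, solved by Gauss elimination with back substitution.
x1 = [0.0, 2.0, 2.5, 1.0, 4.0, 7.0]
x2 = [0.0, 1.0, 2.0, 3.0, 6.0, 2.0]
y  = [5.0, 10.0, 9.0, 0.0, 3.0, 27.0]   # hypothetical, on y = 5 + 4*x1 - 3*x2

n = len(y)
s12 = sum(a * b for a, b in zip(x1, x2))
A = [[n,        sum(x1),                 sum(x2)],
     [sum(x1),  sum(a * a for a in x1),  s12],
     [sum(x2),  s12,                     sum(b * b for b in x2)]]
r = [sum(y),
     sum(a * c for a, c in zip(x1, y)),
     sum(b * c for b, c in zip(x2, y))]

for k in range(3):                       # forward elimination
    for i in range(k + 1, 3):
        f = A[i][k] / A[k][k]
        for j in range(k, 3):
            A[i][j] -= f * A[k][j]
        r[i] -= f * r[k]
coef = [0.0] * 3
for i in (2, 1, 0):                      # back substitution
    coef[i] = (r[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]

print(coef)  # approximately [5.0, 4.0, -3.0]
```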
Nonlinear Regression
Section 5.3
Some popular nonlinear regression models:
1. Exponential model: $y = a e^{bx}$
2. Power model: $y = a x^b$
3. Saturation growth model: $y = \frac{ax}{b + x}$
4. Reciprocal function: $y = \frac{1}{mx + b}$
Exponential Model
Given $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, best fit
$y = a e^{bx}$ to the data.
Figure. Exponential model of nonlinear regression for y vs. x data.
Finding Constants of Exponential Model
The sum of the squares of the residuals is defined as
$S_r = \sum_{i=1}^{n} \left(y_i - a e^{b x_i}\right)^2$
Differentiate with respect to a and b:
$\frac{\partial S_r}{\partial a} = \sum_{i=1}^{n} 2\left(y_i - a e^{b x_i}\right)\left(-e^{b x_i}\right) = 0$
$\frac{\partial S_r}{\partial b} = \sum_{i=1}^{n} 2\left(y_i - a e^{b x_i}\right)\left(-a x_i e^{b x_i}\right) = 0$
Finding Constants of Exponential Model
Rewriting the equations, we obtain
$\sum_{i=1}^{n} y_i e^{b x_i} - a \sum_{i=1}^{n} e^{2 b x_i} = 0$
$\sum_{i=1}^{n} y_i x_i e^{b x_i} - a \sum_{i=1}^{n} x_i e^{2 b x_i} = 0$
Finding Constants of Exponential Model
Solving the first equation for a yields
$a = \frac{\sum_{i=1}^{n} y_i e^{b x_i}}{\sum_{i=1}^{n} e^{2 b x_i}}$
Substituting a back into the second equation:
$\sum_{i=1}^{n} y_i x_i e^{b x_i} - \frac{\sum_{i=1}^{n} y_i e^{b x_i}}{\sum_{i=1}^{n} e^{2 b x_i}} \sum_{i=1}^{n} x_i e^{2 b x_i} = 0$
The constant b can be found through numerical methods such as the
bisection method.
Example 1 - Exponential Model
Many patients get concerned when a test involves injection of a
radioactive material. For example, for scanning a gallbladder, a few drops
of the Technetium-99m isotope are used. Half of the technetium-99m would
be gone in about 6 hours. It, however, takes about 24 hours for the
radiation levels to reach what we are exposed to in day-to-day activities.
Below is given the relative intensity of radiation as a function of time.

Table. Relative intensity of radiation as a function of time.
t (hrs)   0       1       3       5       7       9
gamma     1.000   0.891   0.708   0.562   0.447   0.355
Example - Exponential Model cont.
The relative intensity is related to time by the equation
$\gamma = A e^{\lambda t}$
Find:
a) The value of the regression constants A and $\lambda$
b) The half-life of Technetium-99m
c) The radiation intensity after 24 hours

[Figure: plot of the data]
Constants of the Model
$\gamma = A e^{\lambda t}$
The value of $\lambda$ is found by solving the nonlinear equation
$f(\lambda) = \sum_{i=1}^{n} \gamma_i t_i e^{\lambda t_i} - \frac{\sum_{i=1}^{n} \gamma_i e^{\lambda t_i}}{\sum_{i=1}^{n} e^{2 \lambda t_i}} \sum_{i=1}^{n} t_i e^{2 \lambda t_i} = 0$
and then
$A = \frac{\sum_{i=1}^{n} \gamma_i e^{\lambda t_i}}{\sum_{i=1}^{n} e^{2 \lambda t_i}}$
Setting up the Equation in MATLAB
$f(\lambda) = \sum_{i=1}^{n} \gamma_i t_i e^{\lambda t_i} - \frac{\sum_{i=1}^{n} \gamma_i e^{\lambda t_i}}{\sum_{i=1}^{n} e^{2 \lambda t_i}} \sum_{i=1}^{n} t_i e^{2 \lambda t_i} = 0$
t (hrs)   0       1       3       5       7       9
gamma     1.000   0.891   0.708   0.562   0.447   0.355
Setting up the Equation in MATLAB
t=[0 1 3 5 7 9]
gamma=[1 0.891 0.708 0.562 0.447 0.355]
syms lamda
sum1=sum(gamma.*t.*exp(lamda*t));
sum2=sum(gamma.*exp(lamda*t));
sum3=sum(exp(2*lamda*t));
sum4=sum(t.*exp(2*lamda*t));
f=sum1-sum2/sum3*sum4;
Solving f = 0 gives $\lambda = -0.1151$.
Calculating the Other Constant
The value of A can now be calculated:
$A = \frac{\sum_{i=1}^{6} \gamma_i e^{\lambda t_i}}{\sum_{i=1}^{6} e^{2 \lambda t_i}} = 0.9998$
The exponential regression model then is
$\gamma = 0.9998\, e^{-0.1151 t}$
Plot of data and regression curve
$\gamma = 0.9998\, e^{-0.1151 t}$
[Figure: data points and the fitted exponential curve]
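The same fit can be sketched in Python (instead of the MATLAB setup above): solve f(lambda) = 0 by bisection, as the slides suggest, then compute A. The bracket [-0.2, -0.05] is an assumption; f changes sign over it for this data set:

```python
# Solving for lambda in gamma = A*exp(lambda*t) by bisection on
# f(lambda) = 0, then computing A from the first normal equation.
import math

t = [0.0, 1.0, 3.0, 5.0, 7.0, 9.0]
g = [1.000, 0.891, 0.708, 0.562, 0.447, 0.355]

def f(lam):
    s1 = sum(gi * ti * math.exp(lam * ti) for gi, ti in zip(g, t))
    s2 = sum(gi * math.exp(lam * ti) for gi, ti in zip(g, t))
    s3 = sum(math.exp(2 * lam * ti) for ti in t)
    s4 = sum(ti * math.exp(2 * lam * ti) for ti in t)
    return s1 - s2 / s3 * s4

lo, hi = -0.2, -0.05          # assumed bracket; f changes sign over it
for _ in range(60):           # plain bisection
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam = (lo + hi) / 2

A = (sum(gi * math.exp(lam * ti) for gi, ti in zip(g, t))
     / sum(math.exp(2 * lam * ti) for ti in t))

print(lam, A)  # approximately -0.1151 and 0.9998, as on the slides
```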
Relative Intensity After 24 hrs
The relative intensity of radiation after 24 hours is
$\gamma = 0.9998\, e^{-0.1151 (24)} = 6.3160 \times 10^{-2}$
This result implies that only
$\frac{6.316 \times 10^{-2}}{0.9998} \times 100\% = 6.317\%$
of the radioactive intensity is left after 24 hours.
Linearization of Nonlinear Relationships [exponential (a), power
equation (b)]
(a) Untransformed data with the power equation.
(b) Transformed data and the linearized regression.
(Previous) Example - Linearization of Data
Revisiting the gallbladder-scan example: the relative intensity of
radiation is given as a function of time.

Table. Relative intensity of radiation as a function of time.
t (hrs)   0       1       3       5       7       9
gamma     1.000   0.891   0.708   0.562   0.447   0.355

Figure. Data points of relative radiation intensity vs. time.
Example - Linearization of data cont.

Find:
a) The values of the regression constants A and λ
b) The half-life of Technetium-99m
c) Radiation intensity after 24 hours

The relative intensity is related to time by the equation

  γ = A e^(λt)
Example - Linearization of data cont.

Exponential model given as γ = A e^(λt). Linearization gives

  ln γ = ln A + λt

Assuming z = ln γ, a₀ = ln A and a₁ = λ, we obtain

  z = a₀ + a₁t

This is a linear relationship between z and t.
Example - Linearization of data cont.

Using this linear relationship, we can calculate a₀, a₁ from the least-squares formulas

  a₁ = ( n Σ tᵢzᵢ − Σ tᵢ Σ zᵢ ) / ( n Σ tᵢ² − (Σ tᵢ)² )

and

  a₀ = z̄ − a₁ t̄

(all sums from i = 1 to n). Then

  λ = a₁,   A = e^(a₀)
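The two formulas above can be checked with a short script. This sketch (an addition, not from the slides) fits the straight line z = a₀ + a₁t to the points (tᵢ, ln γᵢ):

```python
import math

t = [0, 1, 3, 5, 7, 9]
gam = [1.000, 0.891, 0.708, 0.562, 0.447, 0.355]
z = [math.log(g) for g in gam]   # linearized data z = ln(gamma)
n = len(t)

St  = sum(t)
Sz  = sum(z)
Stz = sum(ti * zi for ti, zi in zip(t, z))
Stt = sum(ti * ti for ti in t)

a1 = (n * Stz - St * Sz) / (n * Stt - St ** 2)   # slope = lambda
a0 = Sz / n - a1 * St / n                        # intercept = ln A
A = math.exp(a0)
print(a1, A)   # a1 ≈ -0.11505, A ≈ 0.99974
```

The result matches the hand computation on the following slides.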
Example - Linearization of Data cont.

Summations for data linearization are as follows, with n = 6:

Table. Summation data for linearization of data model
  i    tᵢ    γᵢ       zᵢ = ln γᵢ    tᵢ zᵢ      tᵢ²
  1    0     1.000     0.00000      0.0000    0.0000
  2    1     0.891    −0.11541     −0.11541   1.0000
  3    3     0.708    −0.34531     −1.0359    9.0000
  4    5     0.562    −0.57625     −2.8813    25.000
  5    7     0.447    −0.80520     −5.6364    49.000
  6    9     0.355    −1.0356      −9.3207    81.000
  Σ         25.000    −2.8778     −18.990    165.00

So Σtᵢ = 25.000, Σzᵢ = −2.8778, Σtᵢzᵢ = −18.990, Σtᵢ² = 165.00.
Example - Linearization of Data cont.

Calculating a₀, a₁:

  a₁ = [ 6(−18.990) − (25)(−2.8778) ] / [ 6(165.00) − (25)² ] = −0.11505

  a₀ = −2.8778/6 − (−0.11505)(25/6) = −2.6150×10⁻⁴

Since a₀ = ln A,

  A = e^(a₀) = e^(−2.6150×10⁻⁴) = 0.99974

and also

  λ = a₁ = −0.11505
Example - Linearization of Data cont.

Resulting model is

  γ = 0.99974 e^(−0.11505 t)

Figure. Relative intensity of radiation as a function of time using linearization of data model.
Example - Linearization of Data cont.

The regression formula is then

  γ = 0.99974 e^(−0.11505 t)

b) The half-life of Technetium-99m is reached when γ = ½ γ(t = 0):

  0.99974 e^(−0.11505 t) = ½ (0.99974)
  e^(−0.11505 t) = 0.5
  −0.11505 t = ln(0.5)
  t = 6.0248 hours
Example - Linearization of Data cont.

c) The relative intensity of radiation after 24 hours is then

  γ = 0.99974 e^(−0.11505×24) = 0.063200

This implies that only

  (6.3200×10⁻² / 0.99974) × 100 = 6.3216%

of the radioactive material is left after 24 hours.
Comparison

Comparison of the exponential model with and without data linearization:

Table. Comparison for exponential model with and without data linearization.
                                With data linearization   Without data linearization
  A                             0.99974                   0.99983
  λ                             −0.11505                  −0.11508
  Half-life (hrs)               6.0248                    6.0232
  Relative intensity after
  24 hrs                        6.3200×10⁻²               6.3160×10⁻²

The values are very similar, so data linearization was suitable to find the constants of the nonlinear exponential model in this case.
Interpolation
Section 5.5

Estimation of intermediate values between precise data points. The most common method is using a polynomial:

  fₙ(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ = pₙ(x)

Although there is one and only one nth-order polynomial that fits n+1 points (Lagrange theorem), there are a variety of mathematical formats in which this polynomial can be expressed:
- The Newton polynomial
- The Lagrange polynomial
The Lagrange polynomial
Linear    Parabola    Cubic
What is Interpolation?

Given (x₀, y₀), (x₁, y₁), …, (xₙ, yₙ), find the value of y at a value of x that is not given.

Figure: Interpolation of discrete values.
Interpolants

Polynomials are the most common choice of interpolants because they are easy to:
- Evaluate
- Differentiate, and
- Integrate
Direct Method

Given n+1 data points (x₀, y₀), (x₁, y₁), …, (xₙ, yₙ), pass a polynomial of order n through the data as given below:

  y = a₀ + a₁x + … + aₙxⁿ

where a₀, a₁, …, aₙ are real constants.
Set up n+1 equations to find the n+1 constants.
To find the value y at a given value of x, simply substitute the value of x in the above polynomial.
Example 1

The upward velocity of a rocket is given as a function of time in Table 1. Find the velocity at t = 16 seconds using the direct method for linear interpolation.

Table 1. Velocity as a function of time.
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Linear Interpolation

  v(t) = a₀ + a₁t

  v(15) = a₀ + a₁(15) = 362.78
  v(20) = a₀ + a₁(20) = 517.35

Solving the above two equations gives

  a₀ = −100.93,   a₁ = 30.914

Hence

  v(t) = −100.93 + 30.914 t,   15 ≤ t ≤ 20
  v(16) = −100.93 + 30.914(16) = 393.7 m/s

Figure. Linear interpolation.
Example 2

The upward velocity of a rocket is given as a function of time in Table 2. Find the velocity at t = 16 seconds using the direct method for quadratic interpolation.

Table 2. Velocity as a function of time.
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Quadratic Interpolation

  v(t) = a₀ + a₁t + a₂t²

  v(10) = a₀ + a₁(10) + a₂(10)² = 227.04
  v(15) = a₀ + a₁(15) + a₂(15)² = 362.78
  v(20) = a₀ + a₁(20) + a₂(20)² = 517.35

Solving the above three equations gives

  a₀ = 12.05,   a₁ = 17.733,   a₂ = 0.3766

Figure. Quadratic interpolation.
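The three equations above form a small Vandermonde system. As a sketch (added here, not from the slides), it can be solved by Gaussian elimination and used to reproduce v(16):

```python
ts = [10.0, 15.0, 20.0]
vs = [227.04, 362.78, 517.35]

# build rows [1, t, t^2] for v(t) = a0 + a1*t + a2*t^2
A = [[1.0, t, t * t] for t in ts]
b = vs[:]
n = len(A)

# Gaussian elimination with partial pivoting
for k in range(n):
    p = max(range(k, n), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for r in range(k + 1, n):
        m = A[r][k] / A[k][k]
        for c in range(k, n):
            A[r][c] -= m * A[k][c]
        b[r] -= m * b[k]

# back substitution
a = [0.0] * n
for k in range(n - 1, -1, -1):
    a[k] = (b[k] - sum(A[k][c] * a[c] for c in range(k + 1, n))) / A[k][k]

v16 = a[0] + a[1] * 16 + a[2] * 16 ** 2
print(a, v16)   # a ≈ [12.05, 17.733, 0.3766], v16 ≈ 392.19
```

The same code works for the linear and cubic cases by changing the number of rows; as the slides note later, this direct approach becomes ill-conditioned for large n.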
Quadratic Interpolation (cont.)

  v(t) = 12.05 + 17.733 t + 0.3766 t²,   10 ≤ t ≤ 20

  v(16) = 12.05 + 17.733(16) + 0.3766(16)² = 392.19 m/s

The absolute relative approximate error |εₐ| obtained between the results from the first- and second-order polynomials is

  |εₐ| = | (392.19 − 393.70) / 392.19 | × 100 = 0.38410%
Example 3

The upward velocity of a rocket is given as a function of time in Table 3. Find the velocity at t = 16 seconds using the direct method for cubic interpolation.

Table 3. Velocity as a function of time.
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Cubic Interpolation

  v(t) = a₀ + a₁t + a₂t² + a₃t³

  v(10)   = a₀ + a₁(10) + a₂(10)² + a₃(10)³     = 227.04
  v(15)   = a₀ + a₁(15) + a₂(15)² + a₃(15)³     = 362.78
  v(20)   = a₀ + a₁(20) + a₂(20)² + a₃(20)³     = 517.35
  v(22.5) = a₀ + a₁(22.5) + a₂(22.5)² + a₃(22.5)³ = 602.97

Solving the above four equations gives

  a₀ = −4.2540,   a₁ = 21.266,   a₂ = 0.13204,   a₃ = 0.0054347

Figure. Cubic interpolation.
Cubic Interpolation (cont'd)

  v(t) = −4.2540 + 21.266 t + 0.13204 t² + 0.0054347 t³,   10 ≤ t ≤ 22.5

  v(16) = −4.2540 + 21.266(16) + 0.13204(16)² + 0.0054347(16)³ = 392.06 m/s

The absolute percentage relative approximate error |εₐ| between the second- and third-order polynomials is

  |εₐ| = | (392.06 − 392.19) / 392.06 | × 100 = 0.033269%
Comparison Table

Table 4. Comparison of different orders of the polynomial.
  Order of polynomial                    1        2          3
  v(t=16) (m/s)                          393.7    392.19     392.06
  Absolute relative approximate error    ----     0.38410%   0.033269%
Distance from Velocity Profile

Find the distance covered by the rocket from t = 11 s to t = 16 s, using the cubic polynomial

  v(t) = −4.2540 + 21.266 t + 0.13204 t² + 0.0054347 t³,   10 ≤ t ≤ 22.5

  s(16) − s(11) = ∫₁₁¹⁶ v(t) dt
                = ∫₁₁¹⁶ (−4.2540 + 21.266 t + 0.13204 t² + 0.0054347 t³) dt
                = [ −4.2540 t + 21.266 t²/2 + 0.13204 t³/3 + 0.0054347 t⁴/4 ]₁₁¹⁶
                = 1605 m
Acceleration from Velocity Profile

Find the acceleration of the rocket at t = 16 s given that

  v(t) = −4.2540 + 21.266 t + 0.13204 t² + 0.0054347 t³,   10 ≤ t ≤ 22.5

  a(t) = d/dt v(t) = 21.266 + 0.26408 t + 0.016304 t²,   10 ≤ t ≤ 22.5

  a(16) = 21.266 + 0.26408(16) + 0.016304(16)² = 29.665 m/s²
Newton's Interpolating Polynomial

Also known as Newton's divided-difference interpolating polynomial.
The simplest versions are:
(a) Linear interpolation
(b) Quadratic interpolation
Newton's Interpolating Polynomials: Section 5.5.2

Linear Interpolation

Is the simplest form of interpolation, connecting two data points with a straight line:

  f₁(x) = f(x₀) + [ (f(x₁) − f(x₀)) / (x₁ − x₀) ] (x − x₀)   (linear-interpolation formula)

The slope (f(x₁) − f(x₀)) / (x₁ − x₀) is a finite divided-difference approximation to the 1st derivative. f₁(x) designates that this is a first-order interpolating polynomial.
Linear interpolation

The simplest form of interpolation, connecting two data points with a straight line.

Estimate ln 2 (figure)
Quadratic Interpolation

If three data points are available, the estimate is improved by introducing some curvature into the line connecting the points:

  f₂(x) = b₀ + b₁(x − x₀) + b₂(x − x₀)(x − x₁)

A simple procedure can be used to determine the values of the coefficients:

  x = x₀:  b₀ = f(x₀)
  x = x₁:  b₁ = (f(x₁) − f(x₀)) / (x₁ − x₀)
  x = x₂:  b₂ = [ (f(x₂) − f(x₁)) / (x₂ − x₁) − (f(x₁) − f(x₀)) / (x₁ − x₀) ] / (x₂ − x₀)
General Form of Newton's Interpolating Polynomials

  fₙ(x) = f(x₀) + f[x₁, x₀](x − x₀) + f[x₂, x₁, x₀](x − x₀)(x − x₁) + … + f[xₙ, …, x₀](x − x₀)(x − x₁)…(x − xₙ₋₁)

where

  b₀ = f(x₀)
  b₁ = f[x₁, x₀]
  b₂ = f[x₂, x₁, x₀]
  ⋮
  bₙ = f[xₙ, xₙ₋₁, …, x₁, x₀]

The bracketed function evaluations are finite divided differences:

  f[xᵢ, xⱼ] = (f(xᵢ) − f(xⱼ)) / (xᵢ − xⱼ)
  f[xᵢ, xⱼ, xₖ] = (f[xᵢ, xⱼ] − f[xⱼ, xₖ]) / (xᵢ − xₖ)
  f[xₙ, xₙ₋₁, …, x₀] = (f[xₙ, xₙ₋₁, …, x₁] − f[xₙ₋₁, xₙ₋₂, …, x₀]) / (xₙ − x₀)
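The recursive divided differences above map directly onto a short routine. This sketch (an addition, not from the slides) builds the coefficients in place and evaluates the Newton form with nested multiplication; the data are the four rocket points used in the example that follows:

```python
def newton_divided(xs, ys):
    """Return Newton divided-difference coefficients b0..bn."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # working backwards, coef[i] becomes f[x_i, ..., x_{i-j}]
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate b0 + (x-x0)(b1 + (x-x1)(b2 + ...)) by Horner-like nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# four rocket-data points closest to t = 16
ts = [10, 15, 20, 22.5]
vs = [227.04, 362.78, 517.35, 602.97]
b = newton_divided(ts, vs)
v16 = newton_eval(ts, b, 16)
print(b, v16)   # b ≈ [227.04, 27.148, 0.37660, 0.0054...], v16 ≈ 392.06
```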
General form (divided differences table)

The third-order polynomial, given (x₀, y₀), (x₁, y₁), (x₂, y₂) and (x₃, y₃), is

  f₃(x) = f(x₀) + f[x₁, x₀](x − x₀) + f[x₂, x₁, x₀](x − x₀)(x − x₁) + f[x₃, x₂, x₁, x₀](x − x₀)(x − x₁)(x − x₂)

Divided-difference table:

  x₀  f(x₀)  (= b₀)
               f[x₁, x₀]  (= b₁)
  x₁  f(x₁)               f[x₂, x₁, x₀]  (= b₂)
               f[x₂, x₁]                  f[x₃, x₂, x₁, x₀]  (= b₃)
  x₂  f(x₂)               f[x₃, x₂, x₁]
               f[x₃, x₂]
  x₃  f(x₃)
Example

The upward velocity of a rocket is given as a function of time in Table 1. Find the velocity at t = 16 seconds using the Newton divided-difference method for cubic interpolation.

Table. Velocity as a function of time
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Example (cont.)

The velocity profile is chosen as

  v(t) = b₀ + b₁(t − t₀) + b₂(t − t₀)(t − t₁) + b₃(t − t₀)(t − t₁)(t − t₂)

We need to choose the four data points that are closest to t = 16:

  t₀ = 10,    v(t₀) = 227.04
  t₁ = 15,    v(t₁) = 362.78
  t₂ = 20,    v(t₂) = 517.35
  t₃ = 22.5,  v(t₃) = 602.97

The values of the constants are found as:

  b₀ = 227.04;   b₁ = 27.148;   b₂ = 0.37660;   b₃ = 5.4347×10⁻³
Example (cont.) - divided-difference table

  t₀ = 10    227.04  (= b₀)
                      27.148  (= b₁)
  t₁ = 15    362.78            0.37660  (= b₂)
                      30.914             5.4347×10⁻³  (= b₃)
  t₂ = 20    517.35            0.44453
                      34.248
  t₃ = 22.5  602.97

  b₀ = 227.04;   b₁ = 27.148;   b₂ = 0.37660;   b₃ = 5.4347×10⁻³
Example (cont.)

Hence

  v(t) = b₀ + b₁(t − t₀) + b₂(t − t₀)(t − t₁) + b₃(t − t₀)(t − t₁)(t − t₂)
       = 227.04 + 27.148(t − 10) + 0.37660(t − 10)(t − 15) + 5.4347×10⁻³(t − 10)(t − 15)(t − 20)

At t = 16:

  v(16) = 227.04 + 27.148(16 − 10) + 0.37660(16 − 10)(16 − 15) + 5.4347×10⁻³(16 − 10)(16 − 15)(16 − 20)
        = 392.06 m/s

The absolute relative approximate error |εₐ| obtained is

  |εₐ| = | (392.06 − 392.19) / 392.06 | × 100 = 0.033427%
Comparison Table

  Order of polynomial                    1         2          3
  v(t=16) (m/s)                          393.69    392.19     392.06
  Absolute relative approximate error    ----      0.38502%   0.033427%

(see also example 5-5)
Errors of Newton's Interpolating Polynomials

The structure of interpolating polynomials is similar to the Taylor series expansion in the sense that finite divided differences are added sequentially to capture the higher-order derivatives.
For an nth-order interpolating polynomial, an analogous relationship for the error is:

  Rₙ = [ f⁽ⁿ⁺¹⁾(ξ) / (n+1)! ] (x − x₀)(x − x₁)…(x − xₙ)

where ξ is somewhere in the interval containing the unknown x and the data.
For non-differentiable functions, if an additional point f(xₙ₊₁) is available, an alternative formula can be used that does not require prior knowledge of the function:

  Rₙ ≈ f[xₙ₊₁, xₙ, …, x₀] (x − x₀)(x − x₁)…(x − xₙ)

Cubic estimate f₃(x) (figure)
Lagrange Interpolating Polynomials
Section 5.5.1

The Lagrange interpolating polynomial is simply a reformulation of the Newton polynomial that avoids the computation of divided differences:

  fₙ(x) = Σᵢ₌₀ⁿ Lᵢ(x) f(xᵢ)

  Lᵢ(x) = Πⱼ₌₀, ⱼ≠ᵢⁿ (x − xⱼ) / (xᵢ − xⱼ)
The first- and second-order versions are

  f₁(x) = [ (x − x₁) / (x₀ − x₁) ] f(x₀) + [ (x − x₀) / (x₁ − x₀) ] f(x₁)

  f₂(x) = [ (x − x₁)(x − x₂) / ((x₀ − x₁)(x₀ − x₂)) ] f(x₀)
        + [ (x − x₀)(x − x₂) / ((x₁ − x₀)(x₁ − x₂)) ] f(x₁)
        + [ (x − x₀)(x − x₁) / ((x₂ − x₀)(x₂ − x₁)) ] f(x₂)

As with Newton's method, the Lagrange version has an estimated error of:

  Rₙ = f[x, xₙ, xₙ₋₁, …, x₀] Πᵢ₌₀ⁿ (x − xᵢ)
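The double-product formula translates into a few lines of code. This sketch (an addition, not from the slides) evaluates the Lagrange form directly; applied to the rocket data it reproduces the quadratic and cubic results worked out later in the slides:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        Li = 1.0
        for j in range(n):
            if j != i:
                # weight L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
                Li *= (x - xs[j]) / (xs[i] - xs[j])
        total += Li * ys[i]
    return total

# quadratic: three rocket points closest to t = 16
print(lagrange([10, 15, 20], [227.04, 362.78, 517.35], 16))          # ≈ 392.19
# cubic: four rocket points closest to t = 16
print(lagrange([10, 15, 20, 22.5], [227.04, 362.78, 517.35, 602.97], 16))  # ≈ 392.06
```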
Visualization of a 2nd-order interpolation polynomial in Lagrange form:

  f₂(x) = Σᵢ₌₀² Lᵢ(x) f(xᵢ),   Lᵢ(x) = Πⱼ₌₀, ⱼ≠ᵢ² (x − xⱼ) / (xᵢ − xⱼ)
Polynomial Interpolation at the same data points
(a) 4th order  (b) 3rd order  (c) 2nd order  (d) 1st order

Higher-order polynomials may turn out to be ill-conditioned.

Estimation of ln 2 (figure)
Example

The upward velocity of a rocket is given as a function of time in Table 1. Find the velocity at t = 16 seconds using the Lagrangian method for quadratic interpolation.

Table. Velocity as a function of time
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Quadratic Interpolation (cont'd)

Choose the three points closest to t = 16:

  t₀ = 10,  v(t₀) = 227.04
  t₁ = 15,  v(t₁) = 362.78
  t₂ = 20,  v(t₂) = 517.35

  L₀(t) = Πⱼ₌₀, ⱼ≠₀² (t − tⱼ)/(t₀ − tⱼ) = [ (t − t₁)/(t₀ − t₁) ] [ (t − t₂)/(t₀ − t₂) ]
  L₁(t) = Πⱼ₌₀, ⱼ≠₁² (t − tⱼ)/(t₁ − tⱼ) = [ (t − t₀)/(t₁ − t₀) ] [ (t − t₂)/(t₁ − t₂) ]
  L₂(t) = Πⱼ₌₀, ⱼ≠₂² (t − tⱼ)/(t₂ − tⱼ) = [ (t − t₀)/(t₂ − t₀) ] [ (t − t₁)/(t₂ − t₁) ]
Quadratic Interpolation (cont'd)

  v(t) = [ (t − t₁)(t − t₂) / ((t₀ − t₁)(t₀ − t₂)) ] v(t₀) + [ (t − t₀)(t − t₂) / ((t₁ − t₀)(t₁ − t₂)) ] v(t₁) + [ (t − t₀)(t − t₁) / ((t₂ − t₀)(t₂ − t₁)) ] v(t₂)

  v(16) = [ (16−15)(16−20) / ((10−15)(10−20)) ] 227.04 + [ (16−10)(16−20) / ((15−10)(15−20)) ] 362.78 + [ (16−10)(16−15) / ((20−10)(20−15)) ] 517.35
        = (−0.08)(227.04) + (0.96)(362.78) + (0.12)(517.35)
        = 392.19 m/s

The absolute relative approximate error |εₐ| obtained between the results from the first- and second-order polynomials is

  |εₐ| = | (392.19 − 393.70) / 392.19 | × 100 = 0.38410%
Cubic Interpolation

For the third-order polynomial (also called cubic interpolation), we choose the velocity given by

  v(t) = Σᵢ₌₀³ Lᵢ(t) v(tᵢ) = L₀(t)v(t₀) + L₁(t)v(t₁) + L₂(t)v(t₂) + L₃(t)v(t₃)
Example

The upward velocity of a rocket is given as a function of time in Table 1. Find the velocity at t = 16 seconds using the Lagrangian method for cubic interpolation.

Table. Velocity as a function of time
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Cubic Interpolation (cont'd)

  t₀ = 10,    v(t₀) = 227.04
  t₁ = 15,    v(t₁) = 362.78
  t₂ = 20,    v(t₂) = 517.35
  t₃ = 22.5,  v(t₃) = 602.97

  L₀(t) = (t − t₁)(t − t₂)(t − t₃) / [ (t₀ − t₁)(t₀ − t₂)(t₀ − t₃) ]
  L₁(t) = (t − t₀)(t − t₂)(t − t₃) / [ (t₁ − t₀)(t₁ − t₂)(t₁ − t₃) ]
  L₂(t) = (t − t₀)(t − t₁)(t − t₃) / [ (t₂ − t₀)(t₂ − t₁)(t₂ − t₃) ]
  L₃(t) = (t − t₀)(t − t₁)(t − t₂) / [ (t₃ − t₀)(t₃ − t₁)(t₃ − t₂) ]
Cubic Interpolation (cont'd)

  v(16) = [ (16−15)(16−20)(16−22.5) / ((10−15)(10−20)(10−22.5)) ] 227.04
        + [ (16−10)(16−20)(16−22.5) / ((15−10)(15−20)(15−22.5)) ] 362.78
        + [ (16−10)(16−15)(16−22.5) / ((20−10)(20−15)(20−22.5)) ] 517.35
        + [ (16−10)(16−15)(16−20) / ((22.5−10)(22.5−15)(22.5−20)) ] 602.97
        = (−0.0416)(227.04) + (0.832)(362.78) + (0.312)(517.35) + (−0.1024)(602.97)
        = 392.06 m/s

The absolute relative approximate error |εₐ| obtained between the results from the second- and third-order polynomials is

  |εₐ| = | (392.06 − 392.19) / 392.06 | × 100 = 0.033269%
Comparison Table

  Order of polynomial                    1         2          3
  v(t=16) (m/s)                          393.69    392.19     392.06
  Absolute relative approximate error    ----      0.38410%   0.033269%
Coefficients of an Interpolating Polynomial

Although both the Newton and Lagrange polynomials are well suited for determining intermediate values between points, they do not provide a polynomial in conventional form:

  f(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ

Since n+1 data points are required to determine the n+1 coefficients, simultaneous linear systems of equations can be used to calculate the a's (called the method of undetermined coefficients):

  f(x₀) = a₀ + a₁x₀ + a₂x₀² + … + aₙx₀ⁿ
  f(x₁) = a₀ + a₁x₁ + a₂x₁² + … + aₙx₁ⁿ
  ⋮
  f(xₙ) = a₀ + a₁xₙ + a₂xₙ² + … + aₙxₙⁿ

This system is notoriously ill-conditioned (especially for large n).
Spline Interpolation
Section 5.6

There are cases where polynomials can lead to erroneous results because of round-off error and overshoot.
An alternative approach is to apply lower-order polynomials to subsets of data points. Such connecting polynomials are called spline functions.
Why Splines?

Consider the function

  f(x) = 1 / (1 + 25x²)

Table: Six equidistantly spaced points in [−1, 1]
  x       y = 1/(1+25x²)
  −1.0    0.038461
  −0.6    0.1
  −0.2    0.5
   0.2    0.5
   0.6    0.1
   1.0    0.038461

Figure: 5th-order polynomial vs. exact function.
Figure: Higher-order polynomial interpolation is a bad idea (19th-order polynomial, f(x), and 5th-order polynomial compared).
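The figures above can be reproduced numerically. This sketch (an addition, not from the slides) fits the 5th-order polynomial through the six tabulated points and measures how badly it misses the true curve near the ends of the interval, the classic Runge behavior the slides are illustrating:

```python
def lagrange(xs, ys, x):
    # evaluate the interpolating polynomial through (xs, ys) at x
    total = 0.0
    for i in range(len(xs)):
        Li = 1.0
        for j in range(len(xs)):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])
        total += Li * ys[i]
    return total

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)
xs = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]
ys = [f(x) for x in xs]

p09 = lagrange(xs, ys, 0.9)   # interpolant near the end of the interval
err = abs(p09 - f(0.9))
print(p09, f(0.9), err)
```

The polynomial matches the data exactly at the six nodes, yet between the two rightmost nodes it wanders well away from f(x), which is what motivates the piecewise (spline) approach below.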
How do splines work?

(a)-(c) interpolating polynomials; (d) linear spline.

A natural spline (figure)
Spline interpolation comes in three versions:
(a) Linear splines
(b) Quadratic splines
(c) Cubic splines
Linear splines

Connect two data points with a straight line. Define a group of data points by a set of linear functions:

  f(x) = f(x₀) + m₀(x − x₀),      x₀ ≤ x ≤ x₁
  f(x) = f(x₁) + m₁(x − x₁),      x₁ ≤ x ≤ x₂
  ⋮
  f(x) = f(xₙ₋₁) + mₙ₋₁(x − xₙ₋₁),  xₙ₋₁ ≤ x ≤ xₙ

where mᵢ is the slope of the straight line:

  mᵢ = (f(xᵢ₊₁) − f(xᵢ)) / (xᵢ₊₁ − xᵢ)
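A linear spline needs no system of equations at all, each segment is determined locally. A minimal sketch (added here, not from the slides), applied to the rocket data of the running example:

```python
def linear_spline(xs, ys, x):
    """Piecewise-linear interpolation on sorted knots xs."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            m = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])   # slope m_i
            return ys[i] + m * (x - xs[i])
    raise ValueError("x outside the data range")

# rocket data from the running example
ts = [0, 10, 15, 20, 22.5, 30]
vs = [0, 227.04, 362.78, 517.35, 602.97, 901.67]
print(linear_spline(ts, vs, 16))   # ≈ 393.69 m/s
```

This matches the first-order direct/Newton/Lagrange results at t = 16, as it must: on a single interval all of them reduce to the same straight line.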
Quadratic splines

Quadratic splines: Section 5.6.2

To derive a 2nd-order polynomial for each interval between data points:

  fᵢ(x) = aᵢx² + bᵢx + cᵢ    (equation 5.66)

With n intervals there are 3n unknowns in total. By assuming that the second derivative is zero at the first point (i.e. a₁ = 0), the following conditions suffice to evaluate the unknowns:
1. The function values of adjacent polynomials must be equal at the interior knots, for i = 2,…,n (Total: 2n − 2 conditions)
2. The first and last functions must pass through the end points (equations 5.67 & 5.68) (Total: 2n − 2 + 2 = 2n conditions)
3. The first derivatives at the interior knots must be equal (equation 5.70) (Total: 2n + n − 1 = 3n − 1 conditions)

Together with a₁ = 0 this gives the 3n conditions needed.

Try out Example 5-7.
Quadratic Spline Example

The upward velocity of a rocket is given as a function of time. Using quadratic splines:
a) Find the velocity at t = 16 seconds
b) Find the acceleration at t = 16 seconds
c) Find the distance covered between t = 11 and t = 16 seconds

Table. Velocity as a function of time
  t (s)    v(t) (m/s)
  0        0
  10       227.04
  15       362.78
  20       517.35
  22.5     602.97
  30       901.67

Figure. Velocity vs. time data for the rocket example.
Solution

  v(t) = a₁t² + b₁t + c₁,    0 ≤ t ≤ 10
       = a₂t² + b₂t + c₂,   10 ≤ t ≤ 15
       = a₃t² + b₃t + c₃,   15 ≤ t ≤ 20
       = a₄t² + b₄t + c₄,   20 ≤ t ≤ 22.5
       = a₅t² + b₅t + c₅,   22.5 ≤ t ≤ 30

Let us set up the equations.
Each Spline Goes Through Two Consecutive Data Points

  v(t) = a₁t² + b₁t + c₁,   0 ≤ t ≤ 10

  a₁(0)² + b₁(0) + c₁ = 0
  a₁(10)² + b₁(10) + c₁ = 227.04
Each Spline Goes Through Two Consecutive Data Points (cont.)

  a₂(10)² + b₂(10) + c₂ = 227.04
  a₂(15)² + b₂(15) + c₂ = 362.78

  a₃(15)² + b₃(15) + c₃ = 362.78
  a₃(20)² + b₃(20) + c₃ = 517.35

  a₄(20)² + b₄(20) + c₄ = 517.35
  a₄(22.5)² + b₄(22.5) + c₄ = 602.97

  a₅(22.5)² + b₅(22.5) + c₅ = 602.97
  a₅(30)² + b₅(30) + c₅ = 901.67
Derivatives are Continuous at Interior Data Points

  v(t) = a₁t² + b₁t + c₁,   0 ≤ t ≤ 10
       = a₂t² + b₂t + c₂,  10 ≤ t ≤ 15

  d/dt (a₁t² + b₁t + c₁) |ₜ₌₁₀ = d/dt (a₂t² + b₂t + c₂) |ₜ₌₁₀
  (2a₁t + b₁) |ₜ₌₁₀ = (2a₂t + b₂) |ₜ₌₁₀
  2a₁(10) + b₁ = 2a₂(10) + b₂
  20a₁ + b₁ − 20a₂ − b₂ = 0
Derivatives are continuous at Interior Data Points

  At t = 10:   2a₁(10) + b₁ − 2a₂(10) − b₂ = 0
  At t = 15:   2a₂(15) + b₂ − 2a₃(15) − b₃ = 0
  At t = 20:   2a₃(20) + b₃ − 2a₄(20) − b₄ = 0
  At t = 22.5: 2a₄(22.5) + b₄ − 2a₅(22.5) − b₅ = 0
Last Equation

  a₁ = 0
Final Set of Equations

The 15 conditions above assemble into the linear system A x = b with unknown vector

  x = [a₁ b₁ c₁ a₂ b₂ c₂ a₃ b₃ c₃ a₄ b₄ c₄ a₅ b₅ c₅]ᵀ

and right-hand side

  b = [0  227.04  227.04  362.78  362.78  517.35  517.35  602.97  602.97  901.67  0  0  0  0  0]ᵀ

Each row of A holds the coefficients of the corresponding interpolation, derivative-continuity, or end-condition equation; for example, the row for the first spline at t = 10 is [100  10  1  0 … 0].
Coefficients of the Spline

  i    aᵢ         bᵢ         cᵢ
  1    0          22.704      0
  2    0.8888     4.928       88.88
  3    −0.1356    35.66      −141.61
  4    1.6048    −33.956      554.55
  5    0.20889    28.86      −152.13
Final Solution

  v(t) = 22.704 t,                         0 ≤ t ≤ 10
       = 0.8888 t² + 4.928 t + 88.88,     10 ≤ t ≤ 15
       = −0.1356 t² + 35.66 t − 141.61,   15 ≤ t ≤ 20
       = 1.6048 t² − 33.956 t + 554.55,   20 ≤ t ≤ 22.5
       = 0.20889 t² + 28.86 t − 152.13,   22.5 ≤ t ≤ 30
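The 15×15 system can be assembled and solved programmatically. This sketch (an addition, written in Python rather than the course's MATLAB) rebuilds the quadratic-spline system for the rocket data and recovers the coefficients tabulated above:

```python
# knots and velocities from the rocket example
knots = [0.0, 10.0, 15.0, 20.0, 22.5, 30.0]
vals  = [0.0, 227.04, 362.78, 517.35, 602.97, 901.67]
n = len(knots) - 1              # 5 intervals -> 3n = 15 unknowns
N = 3 * n
A = [[0.0] * N for _ in range(N)]
rhs = [0.0] * N
row = 0
# each quadratic a_i t^2 + b_i t + c_i passes through its interval endpoints
for i in range(n):
    for t, v in ((knots[i], vals[i]), (knots[i + 1], vals[i + 1])):
        A[row][3 * i:3 * i + 3] = [t * t, t, 1.0]
        rhs[row] = v
        row += 1
# first derivatives 2 a t + b match at the interior knots
for i in range(n - 1):
    t = knots[i + 1]
    A[row][3 * i], A[row][3 * i + 1] = 2 * t, 1.0
    A[row][3 * i + 3], A[row][3 * i + 4] = -2 * t, -1.0
    row += 1
A[row][0] = 1.0                 # a_1 = 0 (second derivative zero at first point)

# Gaussian elimination with partial pivoting, then back substitution
for k in range(N):
    p = max(range(k, N), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    rhs[k], rhs[p] = rhs[p], rhs[k]
    for r in range(k + 1, N):
        m = A[r][k] / A[k][k]
        for c in range(k, N):
            A[r][c] -= m * A[k][c]
        rhs[r] -= m * rhs[k]
x = [0.0] * N
for k in range(N - 1, -1, -1):
    x[k] = (rhs[k] - sum(A[k][c] * x[c] for c in range(k + 1, N))) / A[k][k]

a3, b3, c3 = x[6:9]             # third spline covers 15 <= t <= 20
v16 = a3 * 16 ** 2 + b3 * 16 + c3
print(v16)   # ≈ 394.24 m/s
```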
Velocity at a Particular Point

(a) Velocity at t = 16. The spline valid there is

  v(t) = −0.1356 t² + 35.66 t − 141.61,   15 ≤ t ≤ 20

  v(16) = −0.1356(16)² + 35.66(16) − 141.61 = 394.24 m/s
Acceleration from Velocity Profile

(b) The quadratic spline valid at t = 16 is given by

  v(t) = −0.1356 t² + 35.66 t − 141.61,   15 ≤ t ≤ 20

  a(t) = d/dt v(t) = −0.2712 t + 35.66,   15 ≤ t ≤ 20

  a(16) = −0.2712(16) + 35.66 = 31.321 m/s²
Distance from Velocity Profile

(c) Find the distance covered by the rocket from t = 11 s to t = 16 s:

  s(16) − s(11) = ∫₁₁¹⁶ v(t) dt

with

  v(t) = 0.8888 t² + 4.928 t + 88.88,     10 ≤ t ≤ 15
  v(t) = −0.1356 t² + 35.66 t − 141.61,   15 ≤ t ≤ 20

  s(16) − s(11) = ∫₁₁¹⁵ (0.8888 t² + 4.928 t + 88.88) dt + ∫₁₅¹⁶ (−0.1356 t² + 35.66 t − 141.61) dt
                = 1595.9 m
Cubic splines: Section 5.6.3

Cubic equation for each interval:

  fᵢ(x) = aᵢx³ + bᵢx² + cᵢx + dᵢ    (equation 5.73)

With n intervals there are 4n unknowns. There are 5 types of conditions required to evaluate the unknowns, which give the cubic equations (equations 5.74-5.80).

Try out example 5-8.
Assignment 3

Practice exercises:
Regression: 5.1, 5.3, 5.4, 5.9
Polynomial interpolation: 5.12, 5.13

Assignment #3
Regression: 5.5, 5.7, 5.8
Interpolation: 5.16, 5.17
Due Monday 21/05/2012