EL 625 Lecture 4
Solution of the dynamic state equations

ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)        (1)

Homogeneous equation (unforced system):

ẋ = A(t)x        (2)

Consider the matrix differential equation Q̇(t) = A(t)Q(t). If this can be solved, the solution of the homogeneous equation is

x(t) = Q(t)Q^{-1}(t0)x(t0)        (3)

Differentiating,

ẋ(t) = Q̇(t)Q^{-1}(t0)x(t0)
     = A(t)Q(t)Q^{-1}(t0)x(t0)
     = A(t)x(t)        (4)

Also, evaluating the right side of (3) at t = t0, we have

Q(t0)Q^{-1}(t0)x(t0) = x(t0)        (5)

So the solution satisfies the initial condition.

Transition matrix:

Φ(t, t0) ≜ Q(t)Q^{-1}(t0)        (6)

x(t) = Φ(t, t0)x(t0)        (7)

The transition matrix characterizes the flow of the differential equation.


Properties of the transition matrix:

1. x(t0) = Φ(t0, t0)x(t0)

   Φ(t0, t0) = I        (8)

2. Φ(t2, t0)x(t0) = x(t2)
                  = Φ(t2, t1)x(t1)
                  = Φ(t2, t1)Φ(t1, t0)x(t0)

   Φ(t2, t1)Φ(t1, t0) = Φ(t2, t0)        (9)

3. x(t2) = Φ(t2, t1)x(t1) = Φ(t2, t1)Φ(t1, t2)x(t2)

   Φ(t2, t1)Φ(t1, t2) = I        (10)

   Φ(t1, t2) = Φ^{-1}(t2, t1)        (11)

Given a transition matrix Φ(t, t0), A(t) can be evaluated as follows.

ẋ(t) = Φ̇(t, t0)x(t0)

Also,

ẋ(t) = A(t)x(t)
     = A(t)Φ(t, t0)x(t0)

Φ̇(t, t0) = A(t)Φ(t, t0)        (12)

Φ̇(t, t0)|_{t0 = t} = A(t)        (13)

We can also determine an expression for the derivative of Φ^{-1}(t, t0).

Φ(t, t0)Φ^{-1}(t, t0) = I        (14)

(d/dt Φ(t, t0)) Φ^{-1}(t, t0) + Φ(t, t0) (d/dt Φ^{-1}(t, t0)) = 0        (15)

Φ(t, t0) (d/dt Φ^{-1}(t, t0)) = -(d/dt Φ(t, t0)) Φ^{-1}(t, t0)

d/dt Φ^{-1}(t, t0) = -Φ^{-1}(t, t0) (d/dt Φ(t, t0)) Φ^{-1}(t, t0)
                   = -Φ^{-1}(t, t0) A(t) Φ(t, t0) Φ^{-1}(t, t0)
                   = -Φ^{-1}(t, t0) A(t)        (16)

Solution of the forced system equations:

Assume:

x(t) = Φ(t, t0)f(t)        (17)

ẋ(t) = Φ̇(t, t0)f(t) + Φ(t, t0)ḟ(t)
     = A(t)Φ(t, t0)f(t) + Φ(t, t0)ḟ(t)
     = A(t)x(t) + Φ(t, t0)ḟ(t)

⇒ Φ(t, t0)ḟ(t) = B(t)u(t)        (18)

Thus, ḟ(t) = Φ^{-1}(t, t0)B(t)u(t) = Φ(t0, t)B(t)u(t), and

f(t) = f(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ        (19)

Since x(t0) = Φ(t0, t0)f(t0) = f(t0),

x(t) = Φ(t, t0)f(t0) + ∫_{t0}^{t} Φ(t, t0)Φ(t0, τ)B(τ)u(τ) dτ        (20)

x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ        (21)
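Equation (21) can be checked numerically (an illustration added here, not part of the original notes): for a constant-coefficient system, evaluate the variation-of-constants formula with a quadrature rule and compare it against direct numerical integration of ẋ = Ax + Bu. The matrices, initial state, and input below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Illustrative constant-coefficient system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([np.sin(t)])   # scalar input u(t) = sin(t)

t0, tf = 0.0, 2.0

# Formula (21): x(tf) = Phi(tf,t0) x0 + integral of Phi(tf,tau) B u(tau) dtau
taus = np.linspace(t0, tf, 2001)
vals = np.array([expm(A * (tf - tau)) @ B @ u(tau) for tau in taus])
dt = taus[1] - taus[0]
integral = dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)  # trapezoid rule
x_formula = expm(A * (tf - t0)) @ x0 + integral

# Direct numerical integration of xdot = A x + B u for comparison
sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, tf), x0,
                rtol=1e-9, atol=1e-12)
x_direct = sol.y[:, -1]

assert np.allclose(x_formula, x_direct, atol=1e-4)
```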

y(t) = C(t)Φ(t, t0)x(t0) + ∫_{t0}^{t} C(t)Φ(t, τ)B(τ)u(τ) dτ + D(t)u(t)        (22)

The first term, C(t)Φ(t, t0)x(t0), is the zero-input response; the remaining terms form the zero-state response.

Using

u(t) = ∫_{-∞}^{∞} u(τ)δ(t - τ) dτ        (23)

and

∫_{t0}^{t} C(t)Φ(t, τ)B(τ)u(τ) dτ = ∫_{t0}^{∞} C(t)Φ(t, τ)B(τ)u(τ)1(t - τ) dτ        (24)

With t0 = -∞ and x(t0) = 0,

∫_{t0}^{∞} C(t)Φ(t, τ)B(τ)u(τ)1(t - τ) dτ = ∫_{-∞}^{∞} C(t)Φ(t, τ)B(τ)u(τ)1(t - τ) dτ        (25)

Hence,

Zero-state response = ∫_{-∞}^{∞} [C(t)Φ(t, τ)B(τ)1(t - τ) + D(t)δ(t - τ)] u(τ) dτ
                    = ∫_{-∞}^{∞} H(t, τ)u(τ) dτ        (26)

where

H(t, τ) = C(t)Φ(t, τ)B(τ)1(t - τ) + D(t)δ(t - τ)

For fixed systems, A, B, C and D are constant matrices and

H(t, τ) = H(t - τ)        (27)

H(t) = CΦ(t, 0)B 1(t) + Dδ(t)        (28)

A different approach to the derivation of the forced response:

ẋ(t) = A(t)x(t) + B(t)u(t)        (29)

Φ(t0, t)ẋ(t) = Φ(t0, t)A(t)x(t) + Φ(t0, t)B(t)u(t)

Φ(t0, t)ẋ(t) - Φ(t0, t)A(t)x(t) = Φ(t0, t)B(t)u(t)        (30)

Since Φ̇(t0, t) = -Φ(t0, t)A(t),

Φ(t0, t)ẋ(t) + Φ̇(t0, t)x(t) = Φ(t0, t)B(t)u(t)        (31)

d/dt (Φ(t0, t)x(t)) = Φ(t0, t)B(t)u(t)        (32)

Integrating from t0 to t,

Φ(t0, τ)x(τ)|_{τ = t0}^{τ = t} = ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ

Φ(t0, t)x(t) - x(t0) = ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ

Φ(t0, t)x(t) = x(t0) + ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ        (33)

x(t) = Φ(t, t0)x(t0) + Φ(t, t0) ∫_{t0}^{t} Φ(t0, τ)B(τ)u(τ) dτ        (34)

⇒ x(t) = Φ(t, t0)x(t0) + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ        (35)

Computing the transition matrix

The physical meaning of Φ(t, t0):

[x1(t)]   [φ11(t, t0)  φ12(t, t0)  ...  φ1n(t, t0)] [x1(t0)]
[x2(t)] = [φ21(t, t0)  φ22(t, t0)  ...  φ2n(t, t0)] [x2(t0)]        (36)
[ ...  ]   [   ...         ...      ...     ...    ] [ ...  ]
[xn(t)]   [φn1(t, t0)  φn2(t, t0)  ...  φnn(t, t0)] [xn(t0)]

⇒ xi(t) = Σ_{j=1}^{n} φij(t, t0) xj(t0)        (37)

If xk(t0) = 1 with xj(t0) = 0 for all j ≠ k,

xi(t) = φik(t, t0)        (38)

φij(t, t0) is the response observed at the output of the ith integrator at time t when a unit initial condition is placed on the jth integrator at t = t0 and all other integrators have zero initial conditions (with all inputs being zero).        (39)

Example:

ẋ1 = x2
ẋ2 = -(1/(t+1)) x2        (40)

[x1(t)]   [φ11(t, t0)  φ12(t, t0)] [x1(t0)]
[x2(t)] = [φ21(t, t0)  φ22(t, t0)] [x2(t0)]        (41)

1. With x1(t0) = 1 and x2(t0) = 0,

   ẋ2 = 0  ⇒  x2(t) = 0  ⇒  ẋ1 = x2 = 0

   ⇒ x1(t) = 1,  x2(t) = 0
   ⇒ φ11(t, t0) = 1,  φ21(t, t0) = 0

2. With x1(t0) = 0 and x2(t0) = 1,

   ẋ2 = -(1/(t+1)) x2  with x2(t0) = 1

   Solving,

   x2(t) = ((t0 + 1)/(t + 1)) x2(t0) = (t0 + 1)/(t + 1)

   ⇒ φ22(t, t0) = (t0 + 1)/(t + 1)

   ẋ1(t) = x2(t) = (t0 + 1)/(t + 1)

x1(t) = ∫_{t0}^{t} (t0 + 1)/(τ + 1) dτ
      = (t0 + 1) ln((t + 1)/(t0 + 1))

⇒ φ12(t, t0) = (t0 + 1) ln((t + 1)/(t0 + 1))

Φ(t, t0) = [1   (t0 + 1) ln((t + 1)/(t0 + 1))]
           [0   (t0 + 1)/(t + 1)             ]
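This closed-form Φ(t, t0) can be checked numerically (an added illustration, not part of the original notes): each column of Φ(t, t0) is the state response to a unit initial condition on the corresponding state, so integrating the unforced equations from each unit initial condition should reproduce the columns. The sample times t0 = 1, t = 3 are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Time-varying example from the notes: xdot1 = x2, xdot2 = -x2/(t+1)
def f(t, x):
    return [x[1], -x[1] / (t + 1.0)]

t0, t = 1.0, 3.0   # arbitrary sample times

# Closed-form transition matrix derived above
Phi = np.array([
    [1.0, (t0 + 1) * np.log((t + 1) / (t0 + 1))],
    [0.0, (t0 + 1) / (t + 1)],
])

# Column j of Phi(t, t0) = response to a unit initial condition on state j
for j in range(2):
    x_init = np.eye(2)[:, j]
    sol = solve_ivp(f, (t0, t), x_init, rtol=1e-10, atol=1e-12)
    assert np.allclose(sol.y[:, -1], Phi[:, j], atol=1e-6)
```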

Disadvantage of this method: We must integrate the unforced system equations to calculate the transition matrix. For fixed systems,
simpler solutions which do not require this integration are available.

Fixed systems: A is a constant matrix.

First-order case:

ẋ(t) = a x(t)
⇒ x(t) = e^{a(t - t0)} x(t0)
⇒ Φ(t, t0) = e^{a(t - t0)}        (42)

Is the general solution Φ(t, t0) = e^{A(t - t0)}? Yes.

e^{At} ≜ I + At + A²t²/2! + ... = Σ_{i=0}^{∞} A^i t^i / i!        (43)

d/dt e^{At} = A[I + At + A²t²/2! + ...] = A e^{At}        (44)

x(t) = e^{A(t - t0)} x(t0)        (45)

Evaluating (45) at t = t0,

x(t)|_{t = t0} = e^{A·0} x(t0) = x(t0)        (46)

ẋ(t) = A e^{A(t - t0)} x(t0) = A x(t)        (47)

Thus,

Φ(t, t0) = e^{A(t - t0)}        (48)
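The power-series definition (43) can be evaluated directly; truncating it gives a (numerically naive but instructive) way to compute e^{At}. The sketch below, with an arbitrary illustrative matrix, compares a truncated series against scipy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # illustrative example matrix
t = 0.5

def expm_series(A, t, N=30):
    """Truncated Taylor series: e^{At} ~ sum_{i=0}^{N} A^i t^i / i!  (eq. 43)."""
    result = np.zeros_like(A)
    term = np.eye(A.shape[0])                 # i = 0 term: I
    for i in range(N + 1):
        result = result + term
        term = term @ (A * t) / (i + 1)       # next term: A^{i+1} t^{i+1} / (i+1)!
    return result

assert np.allclose(expm_series(A, t), expm(A * t))
```

For well-scaled matrices the series converges quickly; production code should prefer `scipy.linalg.expm`, which uses a scaling-and-squaring method.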

Properties of e^{At}:

e^{A·0} = I + A·0 + A²·0²/2! + ... = I        (49)

(e^{At})^{-1} = e^{-At}        (50)

d/dt e^{At} = A e^{At}        (51)

Solution of the forced system equations

x(t) = e^{A(t - t0)} x(t0) + ∫_{t0}^{t} e^{A(t - τ)} B u(τ) dτ        (52)

Impulse response:

h(t, τ) = h(t - τ) = C e^{A(t - τ)} B 1(t - τ) + Dδ(t - τ)        (53)

h(t) = C e^{At} B 1(t) + Dδ(t)        (54)

Calculating e^{At}:

Classical approach for finding the solutions of the unforced system: guess an exponential solution,

x(t) = ξ e^{λt}   (ξ ≠ 0)        (55)

ẋ(t) = λ ξ e^{λt}
A x(t) = A ξ e^{λt}
⇒ A ξ e^{λt} = λ ξ e^{λt}
⇒ A ξ = λ ξ        (56)

This is the eigenvalue problem.

λ : eigenvalue
ξ : eigenvector (ξ ≠ 0)

(λI - A)ξ = 0
det[λI - A] = 0        (57)

the characteristic equation of the matrix A.

p(λ) ≜ det[λI - A] : an nth-degree polynomial in λ, called the characteristic polynomial.

The characteristic equation p(λ) = 0 has n solutions, the eigenvalues λ1, λ2, ..., λn, not necessarily distinct. For each eigenvalue, the associated eigenvector equation

[λi I - A] ξi = 0        (58)

has a nontrivial solution ξi.


Useful theorems from matrix theory:

1. Eigenvectors corresponding to distinct eigenvalues are linearly independent.

   Reminder: Vectors v1, v2, ..., vn are linearly independent if Σ_{i=1}^{n} αi vi = 0 ⇒ αi = 0, i = 1, 2, ..., n. In the figure below, v1 and v2 are linearly independent, but v1 and v3 are linearly dependent.

   [Figure: vectors v1, v2, v3 drawn from the origin 0; v3 lies along v1.]

2. An n × n matrix can be diagonalized if it has n distinct eigenvalues.

x(t) = k1 e^{λ1 t} ξ1 + k2 e^{λ2 t} ξ2 + ... + kn e^{λn t} ξn        (59)

x(0) = k1 ξ1 + k2 ξ2 + ... + kn ξn        (60)

Let

T = [ξ1, ξ2, ..., ξn]        (61)

and

k = [k1 k2 ... kn]^T        (62)

Thus,

x(0) = T k = x0
k = T^{-1} x0        (63)

x(t) = [e^{λ1 t} ξ1, e^{λ2 t} ξ2, ..., e^{λn t} ξn] k        (64)

     = [ξ1, ξ2, ..., ξn] [e^{λ1 t}     0     ...     0    ] [k1]
                         [   0     e^{λ2 t}  ...     0    ] [k2]        (65)
                         [  ...       ...    ...    ...   ] [...]
                         [   0         0     ... e^{λn t} ] [kn]

     = T e^{Λt} k        (66)

     = T e^{Λt} T^{-1} x0        (67)

where

Λ = [λ1   0  ...   0 ]
    [ 0  λ2  ...   0 ]        (68)
    [... ...  ...  ...]
    [ 0   0  ...  λn ]

and e^{Λt} = diag(e^{λ1 t}, e^{λ2 t}, ..., e^{λn t}).

d/dt e^{Λt} = [λ1 e^{λ1 t}      0       ...      0      ]
              [    0       λ2 e^{λ2 t}  ...      0      ]        (69)
              [   ...          ...      ...     ...     ]
              [    0            0       ...  λn e^{λn t}]

            = Λ e^{Λt}        (70)

x(t) = T e^{Λt} T^{-1} x0        (71), (72)

ẋ(t) = T Λ e^{Λt} T^{-1} x0
     = T Λ T^{-1} T e^{Λt} T^{-1} x0
     = T Λ T^{-1} x(t)
     = A x(t)        (73)

A = T Λ T^{-1}        (74)

T^{-1} A T = Λ        (75)

Thus, A is diagonalized by the similarity transformation T.
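Equations (74) and (75), and the resulting expression e^{At} = T e^{Λt} T^{-1}, can be checked numerically. The sketch below (an added illustration) uses the same 2 × 2 matrix as the worked example later in the notes; `numpy.linalg.eig` returns the eigenvalues and a modal matrix whose columns are eigenvectors.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # same A as the notes' example

# Columns of T are eigenvectors; lam holds the eigenvalues
lam, T = np.linalg.eig(A)
Lam = np.diag(lam)

# A = T Lam T^{-1} and T^{-1} A T = Lam  (eqs. 74-75)
assert np.allclose(A, T @ Lam @ np.linalg.inv(T))
assert np.allclose(np.linalg.inv(T) @ A @ T, Lam)

# e^{At} = T e^{Lam t} T^{-1}
t = 0.8
assert np.allclose(expm(A * t), T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T))
```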


If f(λ) is any function which can be expanded in a power series,

f(λ) = Σ_{i=0}^{∞} ai λ^i        (76)

Define: f(A) ≜ Σ_{i=0}^{∞} ai A^i

Thus,

f(A) = Σ_{i=0}^{∞} ai (T Λ T^{-1})^i        (77)
     = Σ_{i=0}^{∞} ai T Λ^i T^{-1}        (78)
     = T (Σ_{i=0}^{∞} ai Λ^i) T^{-1}        (79)

     = T [Σ ai λ1^i      0      ...     0    ]
         [    0      Σ ai λ2^i  ...     0    ] T^{-1}        (80)
         [   ...        ...     ...    ...   ]
         [    0          0      ... Σ ai λn^i]

where the bracketed matrix is diag(f(λ1), f(λ2), ..., f(λn)) ≜ f(Λ).

f(A) = T f(Λ) T^{-1}        (81)

Example:

A = [ 0   1]
    [-2  -3]

The characteristic equation is

p(λ) = det[λI - A] = det [λ   -1   ] = λ^2 + 3λ + 2 = 0        (82)
                         [2   λ + 3]

The eigenvalues are λ1 = -1 and λ2 = -2.

(λ1 I - A)ξ1 = 0

[-1  -1] [ξ11]   [0]
[ 2   2] [ξ12] = [0]

⇒ ξ11 = -ξ12

Thus, the first eigenvector is

ξ1 = [ 1]
     [-1]        (83)

(λ2 I - A)ξ2 = 0

[-2  -1] [ξ21]   [0]
[ 2   1] [ξ22] = [0]

⇒ ξ22 = -2 ξ21

Thus, the second eigenvector is

ξ2 = [ 1]
     [-2]        (84)

T = [ξ1, ξ2] = [ 1   1]
               [-1  -2]        (85)

T^{-1} = [ 2   1]
         [-1  -1]        (86)

Λ = [-1   0]
    [ 0  -2]        (87)

e^{At} = T e^{Λt} T^{-1}        (88)

       = [ 1   1] [e^{-t}    0    ] [ 2   1]
         [-1  -2] [  0    e^{-2t} ] [-1  -1]        (89)

       = [ 2e^{-t} - e^{-2t}      e^{-t} - e^{-2t} ]
         [-2e^{-t} + 2e^{-2t}   -e^{-t} + 2e^{-2t}]        (90)
But this requires finding the eigenvectors. Can we avoid this? Yes.

f(Λ) = Σ_{k=1}^{n} f(λk) Ek        (91)

Ek is a matrix with a 1 in the (k, k) position and zeros everywhere else:

          kth column
             ↓
Ek = [0 ... 0 ... 0]
     [.     .     .]
     [0 ... 1 ... 0]  ← kth row        (92)
     [.     .     .]
     [0 ... 0 ... 0]

f(A) = T f(Λ) T^{-1}
     = Σ_{k=1}^{n} f(λk) T Ek T^{-1}
     = Σ_{k=1}^{n} f(λk) Zk0        (93)

where

Zk0 = T Ek T^{-1}        (94)

The matrices Zk0 are independent of the function f. Once these matrices are evaluated, they can be used to find any function f(A).

Can we find Zk0 without finding T? Yes.

Method of trial functions: choose a number of trial functions for which f(A) is easy to calculate and use these to calculate the Zk0.
Example:

A = [ 0   1]
    [-2  -3]

The eigenvalues of this matrix are λ1 = -1 and λ2 = -2. Using the method of trial functions, the calculation of the eigenvectors can be avoided.

f(A) = f(λ1)Z10 + f(λ2)Z20        (95)

Choose as the first trial function

f1(λ) = (λ - λ2) = (λ + 2)        (96)

f1(A) = A + 2I = [ 2   1]
                 [-2  -1]

f1(A) = f1(λ1)Z10 + f1(λ2)Z20 = (1)Z10 = Z10

⇒ Z10 = [ 2   1]
        [-2  -1]        (97)

Choose as the second trial function

f2(λ) = (λ - λ1) = (λ + 1)

f2(A) = A + I = [ 1   1]
                [-2  -2]

f2(A) = f2(λ1)Z10 + f2(λ2)Z20 = (-1)Z20 = -Z20        (98)

⇒ Z20 = [-1  -1]
        [ 2   2]        (99)

Thus, for any function f(λ),

f(A) = f(-1) [ 2   1] + f(-2) [-1  -1]
             [-2  -1]         [ 2   2]        (100)

With f(λ) = e^{λt},

e^{At} = e^{-t} [ 2   1] + e^{-2t} [-1  -1]
                [-2  -1]           [ 2   2]        (101)

       = [ 2e^{-t} - e^{-2t}      e^{-t} - e^{-2t} ]
         [-2e^{-t} + 2e^{-2t}   -e^{-t} + 2e^{-2t}]        (102)
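The trial-function results can be verified numerically (an added check, not part of the original notes): Z10 and Z20 are built directly from A, and the resulting e^{At} in (101) matches a library matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
I = np.eye(2)

# Trial-function results: Z10 = f1(A) = A + 2I, Z20 = -f2(A) = -(A + I)
Z10 = A + 2 * I
Z20 = -(A + I)

assert np.allclose(Z10, [[2, 1], [-2, -1]])   # matches (97)
assert np.allclose(Z20, [[-1, -1], [2, 2]])   # matches (99)

# f(A) = f(-1) Z10 + f(-2) Z20 with f(lambda) = e^{lambda t}  (eq. 101)
t = 1.2
eAt = np.exp(-t) * Z10 + np.exp(-2 * t) * Z20
assert np.allclose(eAt, expm(A * t))
```

Note that Z10 + Z20 = I, as it must, since taking f(λ) = 1 gives f(A) = I.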

In general, if A has n distinct eigenvalues λ1, λ2, ..., λn, choose as the kth trial function

fk(λ) = ∏_{i=1, i≠k}^{n} (λ - λi)        (103)

fk(λj) = { 0                               for j ≠ k
         { ∏_{i=1, i≠k}^{n} (λk - λi)     for j = k        (104)

Zk0 = fk(A) / fk(λk)        (105)

    = [∏_{i=1, i≠k}^{n} (A - λi I)] / [∏_{i=1, i≠k}^{n} (λk - λi)]        (106)

f(A) = Σ_{k=1}^{n} f(λk) [∏_{i=1, i≠k}^{n} (A - λi I)] / [∏_{i=1, i≠k}^{n} (λk - λi)]        (107)
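Formula (107) is a matrix form of Lagrange interpolation through the eigenvalues, and it translates directly into code. The sketch below (an added illustration; the function name is ours) implements it for distinct eigenvalues and checks it on the running example.

```python
import numpy as np
from scipy.linalg import expm

def func_of_matrix(A, eigvals, f):
    """f(A) via eq. (107): sum_k f(lam_k) prod_{i != k} (A - lam_i I)/(lam_k - lam_i).
    Assumes the eigenvalues in eigvals are distinct."""
    n = A.shape[0]
    I = np.eye(n)
    result = np.zeros_like(A, dtype=float)
    for k in range(n):
        Zk0 = I.copy()
        for i in range(n):
            if i != k:
                Zk0 = Zk0 @ (A - eigvals[i] * I) / (eigvals[k] - eigvals[i])
        result += f(eigvals[k]) * Zk0
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.9
eAt = func_of_matrix(A, [-1.0, -2.0], lambda lam: np.exp(lam * t))
assert np.allclose(eAt, expm(A * t))
```

As a further consistency check, taking f(λ) = λ reproduces A itself.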

If the matrix A has repeated eigenvalues, i.e. the characteristic equation has repeated roots,

p(λ) = (λ - λ1)^{n1} (λ - λ2)^{n2} ··· (λ - λs)^{ns}        (110)

where ni is the algebraic multiplicity of the eigenvalue λi. In this case,

f(A) = Σ_{i=1}^{s} Σ_{j=0}^{ni - 1} [d^j f(λ)/dλ^j]|_{λ = λi} Zij        (111)

Example:

A = [1  1]
    [0  1]        (112)

λI - A = [λ - 1    -1  ]
         [  0    λ - 1]

p(λ) = det(λI - A) = λ^2 - 2λ + 1 = 0        (113)

Thus, the matrix has a repeated eigenvalue at 1, i.e. λ1 = 1 with multiplicity n1 = 2.


f(A) = f(1)Z10 + f′(1)Z11        (114)

Choosing

f1(λ) = (λ - 1)        (115)

f1(A) = A - I = [0  1]
                [0  0]

f1(A) = f1(1)Z10 + f1′(λ)|_{λ=1} Z11 = Z11

⇒ Z11 = [0  1]
        [0  0]        (116)

Choosing

f2(λ) = λ        (117)

f2(A) = A = f2(1)Z10 + f2′(λ)|_{λ=1} Z11
          = Z10 + Z11

⇒ Z10 = A - Z11 = [1  1] - [0  1] = [1  0]
                  [0  1]   [0  0]   [0  1]        (118)

Thus, for any function f(λ) (which is differentiable at λ = 1),

f(A) = f(1) [1  0] + f′(λ)|_{λ=1} [0  1]
            [0  1]                [0  0]        (119)

With f(λ) = e^{λt}, f′(λ)|_{λ=1} = t e^{λt}|_{λ=1} = t e^{t}.

e^{At} = e^{t} [1  0] + t e^{t} [0  1]
               [0  1]           [0  0]        (120)

       = [e^{t}   t e^{t}]
         [ 0       e^{t} ]        (121)
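The repeated-eigenvalue result (121) can also be checked against a library matrix exponential (an added numerical check, with an arbitrary sample time):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [0.0, 1.0]])
t = 0.7  # arbitrary sample time

# Closed-form result (121) for the repeated eigenvalue at lambda = 1
eAt = np.array([[np.exp(t), t * np.exp(t)],
                [0.0,       np.exp(t)]])

assert np.allclose(eAt, expm(A * t))
```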
