
Discrete Kalman Filter

M. Sami Fadali
Professor, EE
University of Nevada

Outline

- What is the Discrete Kalman Filter (DKF)?
- Derivation of the DKF.
- Implementation of the DKF.
- Example.

What is the DKF?

- Algorithm for the optimal recursive estimation of the state of a system.
- Needs:
  - Initial state estimate and error covariance.
  - Noisy measurements with known properties.
  - System state-space model.

Derivation of DKF: Process and Measurement Models

x_{k+1} = Φ_k x_k + w_k
z_k = H_k x_k + v_k

- x_k = state vector at t_k
- Φ_k = state-transition matrix at t_k
- z_k = measurement vector at t_k
- H_k = measurement matrix at t_k

Notation

- w_k = zero-mean white Gaussian process noise vector at t_k
- v_k = zero-mean white Gaussian measurement noise vector at t_k
- ^ = estimate
- ~ = perturbation (estimation error)
- x̂_k^- = a priori estimate of x_k (before the measurement at t_k)
- x̂_k^+ = a posteriori estimate of x_k (after the measurement at t_k)

Measurement Noise

- Measurement noise v_k is white:
  E[v_k] = 0,  E[v_k v_j^T] = R_k δ_kj
- The a priori estimate at time t_k (i.e. x̂_k^-) is uncorrelated with the measurement noise at time t_k.

Unbiased Linear Estimator

- A priori estimate (assume unbiased): x̂_k^-
- A posteriori estimate (linear correction):
  x̂_k^+ = x̂_k^- + K_k (z_k - H_k x̂_k^-)
- Errors:
  x̃_k^- = x_k - x̂_k^-,  x̃_k^+ = x_k - x̂_k^+
- The expectation of the error must be zero for an unbiased estimate:
  E[x̃_k^-] = 0,  E[x̃_k^+] = 0

Derivation of DKF

- Recursively correct the estimate:
  x̂_k^+ = x̂_k^- + K_k (z_k - H_k x̂_k^-)
- Choose the gain K_k (blending factor) to minimize the mean-square error.
- Assume unbiased estimates.

Error Covariance Matrices

- A priori error: x̃_k^- = x_k - x̂_k^-
- A priori error covariance matrix: P_k^- = E[x̃_k^- (x̃_k^-)^T]
- A posteriori error: x̃_k^+ = x_k - x̂_k^+
- A posteriori error covariance matrix: P_k^+ = E[x̃_k^+ (x̃_k^+)^T]

Minimum Mean-square Error

- Minimize the mean-square error E[(x̃_k^+)^T x̃_k^+] = tr(P_k^+) over all possible choices of K_k.
- Substitute for the a priori state estimate:
  x̃_k^+ = (I - K_k H_k) x̃_k^- - K_k v_k

Error Covariance Matrix

- The measurement noise is uncorrelated with the a priori error:
  E[x̃_k^- v_k^T] = 0
- Hence:
  P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T

Minimization

Derivative of Trace

- For any scalar s = tr(AB):  d tr(AB)/dA = B^T
- Use d tr(ABA^T)/dA = 2AB for symmetric B.
- Same trace for two terms: tr(K_k H_k P_k^-) = tr(P_k^- H_k^T K_k^T).
- Apply the trace formulas to d tr(P_k^+)/dK_k = 0:
  -2 P_k^- H_k^T + 2 K_k (H_k P_k^- H_k^T + R_k) = 0
- Solve for the Kalman gain:
  K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}
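The minimization above can be checked numerically. A minimal sketch (NumPy rather than the deck's MATLAB; the matrices below are illustrative assumptions, not from the slides) evaluates the Joseph-form covariance at the optimal gain and at random perturbations of it:

```python
import numpy as np

# Hypothetical 2-state, 1-measurement example (illustrative values)
rng = np.random.default_rng(0)
P_prior = np.array([[2.0, 0.5], [0.5, 1.0]])   # a priori covariance
H = np.array([[1.0, 0.0]])                      # measurement matrix
R = np.array([[0.5]])                           # measurement noise covariance

def joseph(P, K, H, R):
    """A posteriori covariance in Joseph form, valid for any gain K."""
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T

# Optimal Kalman gain
S = H @ P_prior @ H.T + R
K_opt = P_prior @ H.T @ np.linalg.inv(S)

# Perturbing the gain can only increase the trace (mean-square error)
t_opt = np.trace(joseph(P_prior, K_opt, H, R))
for _ in range(100):
    K_pert = K_opt + 0.1 * rng.standard_normal(K_opt.shape)
    assert np.trace(joseph(P_prior, K_pert, H, R)) >= t_opt - 1e-12
```

The loop confirms empirically that K_k is the trace-minimizing (minimum mean-square error) gain.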

Error Covariance Matrix Forms

Joseph Form

- Joseph form (valid for any gain K_k):
  P_k^+ = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T
- Four expressions for the error covariance.
- Expand the Joseph form to obtain the other forms.
- The Joseph form has the best numerical computation properties.
- Use the Joseph form to reduce numerical errors.
- Numerical computation: the forms behave differently in finite precision; the Joseph form performs well.
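At the optimal gain the four covariance expressions coincide, which can be verified directly. A short check (NumPy, with the same kind of illustrative matrices assumed above):

```python
import numpy as np

# Illustrative a priori covariance and measurement model (assumed values)
P = np.array([[2.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
I = np.eye(2)

S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)           # optimal Kalman gain

joseph = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
form2 = P - K @ S @ K.T                  # cancel terms using K*S = P*H^T
form3 = (I - K @ H) @ P                  # most common short form
form4 = P - P @ H.T @ K.T                # transposed short form

# All four expressions agree at the optimal gain
assert np.allclose(joseph, form2)
assert np.allclose(joseph, form3)
assert np.allclose(joseph, form4)
```

Only the Joseph form stays valid (and symmetric positive semidefinite) when K deviates from the optimal gain, which is why it is preferred numerically.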

Derivation of Other Forms

- Derived earlier:
  K_k (H_k P_k^- H_k^T + R_k) = P_k^- H_k^T

Derivation (Cont.)

- Expand the Joseph form:
  P_k^+ = P_k^- - K_k H_k P_k^- - P_k^- H_k^T K_k^T + K_k (H_k P_k^- H_k^T + R_k) K_k^T
- Three equal terms:
  K_k (H_k P_k^- H_k^T + R_k) K_k^T = P_k^- H_k^T K_k^T = K_k H_k P_k^-
- Cancel two equal terms (two forms):
  P_k^+ = (I - K_k H_k) P_k^-,  P_k^+ = P_k^- (I - H_k^T K_k^T)
- Common factor:
  P_k^+ = P_k^- - K_k (H_k P_k^- H_k^T + R_k) K_k^T

DKF Loop

Enter the initial state estimate and its error covariance: x̂_0^-, P_0^-.

Measurements z_0, z_1, ... enter the loop; state estimates x̂_0, x̂_1, ... leave it.

1. Compute the Kalman gain:
   K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}
2. Update the estimate with measurement z_k:
   x̂_k^+ = x̂_k^- + K_k (z_k - H_k x̂_k^-)
3. Compute the error covariance:
   P_k^+ = (I - K_k H_k) P_k^-
4. Project ahead:
   x̂_{k+1}^- = Φ_k x̂_k^+
   P_{k+1}^- = Φ_k P_k^+ Φ_k^T + Q_k

A Priori Estimate

- Fundamental theorem of estimation theory.
- Sum of two orthogonal terms.
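The loop above can be sketched in code. A minimal Python/NumPy version (the deck's own implementation is MATLAB) for an assumed scalar random-walk system with illustrative noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scalar model: x_{k+1} = x_k + w_k, z_k = x_k + v_k
Phi, H, Q, R = 1.0, 1.0, 0.01, 1.0
x_true, x_hat, P = 0.0, 0.0, 1.0   # true state, initial estimate, covariance

for k in range(200):
    z = H * x_true + rng.normal(0.0, np.sqrt(R))   # noisy measurement
    K = P * H / (H * P * H + R)                    # 1. Kalman gain
    x_hat = x_hat + K * (z - H * x_hat)            # 2. measurement update
    P = (1.0 - K * H) * P                          # 3. covariance update
    x_hat = Phi * x_hat                            # 4a. project estimate ahead
    P = Phi * P * Phi + Q                          # 4b. project covariance ahead
    x_true = Phi * x_true + rng.normal(0.0, np.sqrt(Q))
```

Note that the gain and covariance recursion do not depend on the measurements, so K and P converge to steady-state values regardless of the data.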

Example: Wiener Process

- Scalar example.
- Known standard deviation of the measurement error.

[Block diagram: unity Gaussian white noise u(t) → integrator 1/s → x(t); measurement z(t) = x(t) + v(t); x(0) = 0.]

Example: Discretization

- Discretize the CT system.
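Discretizing the integrator driven by unit-PSD white noise gives Φ = 1 and Q = W·T, so the process-noise variance grows linearly with the sampling period. A quick numerical check (NumPy; the sampling period T = 0.1 s is an assumption for illustration):

```python
import numpy as np

W = 1.0    # PSD of the continuous-time white noise (unity)
T = 0.1    # assumed sampling period, seconds

# Integrator x' = u: state transition and discrete process-noise variance
Phi = 1.0                  # exp(A*T) with A = 0
Q = W * T                  # integral of Phi(T,tau)*W*Phi(T,tau) over one period

# Monte-Carlo check: variance of the Wiener increment over one period
rng = np.random.default_rng(0)
substeps = 100
dt = T / substeps
increments = np.sqrt(W * dt) * rng.standard_normal((100_000, substeps))
var_est = np.var(increments.sum(axis=1))
assert abs(var_est - Q) / Q < 0.05
```

The Monte-Carlo estimate of the one-step increment variance matches Q = W·T, confirming the discretization.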

Kalman Loop: k = 0

- Calculate the gain.
- Update the estimate.
- Update the error covariance.
- Project ahead.

Kalman Loop: k = 1

- Calculate the gain.
- Update the estimate.
- Update the error covariance.
- Project ahead.

MATLAB DKF Implementation

% Across Measurements:
K=P*H'/(H*P*H'+R);
xhat=xhat+K*(z-H*xhat);
P=(eye(n)-K*H)*P;
P=(P+P')/2;  % enforce symmetry
% Between Measurements:
xhat=phi*xhat;
P=phi*P*phi'+Q;

Example: Gauss-Markov Process

[Block diagram: unity Gaussian white noise through a first-order shaping filter gives X(s).]
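The MATLAB loop above translates directly to Python/NumPy for the matrix case; the sketch below uses an assumed 2-state model (values illustrative, not from the slides) and stand-in measurements, since only the covariance recursion is being exercised:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2

# Assumed 2-state model (illustrative values)
phi = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(n)
R = np.array([[1.0]])

xhat = np.zeros((n, 1))
P = np.eye(n)

for k in range(100):
    z = rng.normal(0.0, 1.0, size=(1, 1))          # stand-in measurement
    # Across measurements:
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    xhat = xhat + K @ (z - H @ xhat)
    P = (np.eye(n) - K @ H) @ P
    P = (P + P.T) / 2                               # enforce symmetry
    # Between measurements:
    xhat = phi @ xhat
    P = phi @ P @ phi.T + Q

assert np.allclose(P, P.T)                          # symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)            # positive definite
```

The explicit symmetrization step mirrors the `P=(P+P')/2` line in the MATLAB code, which guards against round-off drift.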

Example: Discretization

- Discretize the CT Gauss-Markov system.

Initial Conditions

- Process mean and variance.
- Use the process mean and variance to initialize the estimate and its error covariance.
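For a first-order Gauss-Markov process ẋ = -x/τ + u with stationary variance σ², the exact discretization and the stationarity-based initialization can be sketched as follows (τ, σ², and T below are assumed values):

```python
import numpy as np

tau, sigma2, T = 1.0, 1.0, 0.1   # assumed time constant, variance, period

Phi = np.exp(-T / tau)                     # state-transition scalar
Q = sigma2 * (1 - np.exp(-2 * T / tau))    # discrete process-noise variance

# Initialize with the stationary statistics of the process
xhat0 = 0.0                                # process mean
P0 = sigma2                                # process variance

# Check: propagating the stationary variance leaves it unchanged
assert abs(Phi * P0 * Phi + Q - P0) < 1e-12
```

Initializing P_0 at the stationary variance means the projection step reproduces it exactly until measurements start reducing it.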

Simulation Results

- Unity variance and unity time constant.
- Sampling period: … s.
- Steady-state Kalman gain K = 0.165.
- Steady-state error variance and RMS error: close to steady state after 20 steps.
- Suboptimal filter: use K = 0.165 for a simple implementation.

Discrete Lyapunov Equation

- Substitute for P_k^+ in the projection equation:
  P_{k+1}^- = Φ_k (I - K H_k) P_k^- (I - K H_k)^T Φ_k^T + Φ_k K R_k K^T Φ_k^T + Q_k
- Lyapunov equation: applies for any gain K (not just the optimal Kalman gain).

Solution of Lyapunov Eqn.

- Proof by induction.

Discrete Riccati Equation

- For the Kalman (optimal) gain (slide 14):
  P_{k+1}^- = Φ_k [P_k^- - P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^-] Φ_k^T + Q_k
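Iterating the Riccati recursion drives the a priori variance to a fixed point. A scalar sketch (Python; Φ, H, Q, R are assumed illustrative values, chosen so the fixed point has a closed form):

```python
import numpy as np

# Assumed scalar model parameters (illustrative)
Phi, H, Q, R = 1.0, 1.0, 1.0, 1.0

P = 1.0                                        # initial a priori variance
for _ in range(100):
    K = P * H / (H * P * H + R)                # optimal gain
    P = Phi * (1 - K * H) * P * Phi + Q        # Riccati recursion

# Fixed point satisfies P = P*R/(P + R) + Q; for Q = R = 1 this gives
# P = (1 + sqrt(5))/2, the golden ratio.
P_ss = (1 + np.sqrt(5)) / 2
assert abs(P - P_ss) < 1e-9
```

Because the Riccati map is a contraction near the fixed point, the variance (and hence the gain) settles quickly, which is what makes fixed-gain suboptimal filters practical.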

Wiener Example: Lyapunov Equation

Wiener Example: Riccati Equation

Deterministic Inputs

- Linear system: use superposition.
  a. Add the zero-state deterministic response to the KF estimate.
  b. (i) Subtract the deterministic output from the measurement.
     (ii) Compute the deterministic state separately and add it to the KF estimate to obtain the state estimate.

Real-time Implementation

- Data latency: delay between data time and current time due to sensor, computation, and information delivery.
- Processor loading: how much is the processor used?
  - Throughput (bits/s) analysis.
  - Dedicated vs. shared processor.
  - Specialized exploitation to reduce computation, e.g. exploit sparse matrices.
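The superposition scheme for deterministic inputs can be sketched as follows (Python; the scalar model and the constant input u are assumptions for illustration): run the KF on the measurement with the deterministic output removed, then add the separately computed deterministic state back.

```python
import numpy as np

rng = np.random.default_rng(3)
Phi, H, Q, R = 0.9, 1.0, 0.1, 1.0
u = 0.5                                # assumed known deterministic input

xd = 0.0                               # deterministic (zero-state) response
xhat, P = 0.0, 1.0                     # KF estimate of the stochastic part
x = 0.0                                # true state
est = []
for k in range(50):
    z = H * x + rng.normal(0.0, np.sqrt(R))
    zs = z - H * xd                    # (i) subtract deterministic output
    K = P * H / (H * P * H + R)
    xhat = xhat + K * (zs - H * xhat)  # KF update on the stochastic part
    P = (1 - K * H) * P
    est.append(xhat + xd)              # (ii) add deterministic state back
    # Project ahead:
    xhat = Phi * xhat
    P = Phi * P * Phi + Q
    xd = Phi * xd + u                  # deterministic state, computed separately
    x = Phi * x + u + rng.normal(0.0, np.sqrt(Q))
```

Because the system is linear, the covariance recursion is unaffected by the deterministic input; only the estimate needs the correction.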

Round-off Errors

- Use high precision off line.
- Choose the step carefully.
- Propagate only the n(n+1)/2 unique terms of the symmetric P matrix.
- Use array algorithms that propagate the square root of the matrix P (see Kailath; covered later).
- Use a suboptimal filter (fix K).

The Separation Principle

- Linear system with state-estimator feedback.
- Design the controller and the state estimator separately.
- True for an observer or a Kalman filter.
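The storage saving from symmetry, propagating only the n(n+1)/2 unique terms of P, can be illustrated with a short sketch (Python; packing via upper-triangular indices is one of several equivalent conventions):

```python
import numpy as np

n = 4
P = np.arange(1.0, n * n + 1).reshape(n, n)
P = (P + P.T) / 2                      # make symmetric

iu = np.triu_indices(n)                # indices of the unique terms
packed = P[iu]                         # n(n+1)/2 values instead of n^2
assert packed.size == n * (n + 1) // 2

# Reconstruct the full matrix from the packed form
P2 = np.zeros((n, n))
P2[iu] = packed
P2 = P2 + P2.T - np.diag(np.diag(P2))  # mirror, without doubling the diagonal
assert np.allclose(P2, P)
```

For n = 4 this stores 10 values instead of 16; the saving approaches a factor of 2 as n grows.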

Conclusion

- Popular recursive algorithm.
- Minimizes the mean-square error.
- Suitable for real-time implementation.
- Usable for estimator feedback.