[Figure: the Kalman filter loop. A prediction step, based on e.g. a physical model and prior knowledge of the state, is compared to measurements in an update step, which outputs an estimate of the state that is carried forward to the next timestep.]
The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of the estimate. The estimate is updated using a state transition model and measurements. x̂_k|k−1 denotes the estimate of the system's state at time step k before the k-th measurement y_k has been taken into account; P_k|k−1 is the corresponding uncertainty.
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common application is for guidance, navigation and control of vehicles, particularly aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics. Kalman filters also are one of the main topics in the field of robotic motion planning and control, and they are sometimes included in trajectory optimization. The Kalman filter has also found use in modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, use of the Kalman filter provides the needed model for making estimates of the current state of the motor system and issuing updated commands.[1]

[Figure: Rudolf Emil Kálmán, co-inventor and developer of the Kalman filter.]
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It can run in real time, using only the present input measurements and the previously calculated state and its uncertainty; no additional past information is required.

The filter is named after Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele[3][4] and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the University of Southern California contributed to the theory, leading to it often being called the Kalman–Bucy filter. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[5] It was during a visit by Kálmán to the NASA Ames Research Center that he saw the applicability of his ideas to the problem of trajectory estimation for the Apollo program, leading to its incorporation in the Apollo navigation computer. The Kalman filter was first described and partially developed in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. It is also used in the guidance and navigation systems of the NASA Space Shuttle and the attitude control and navigation systems of the International Space Station.

This digital filter is sometimes called the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, non-linear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich.[6][7][8][9] In fact, some of the special case linear filter's equations appeared in these papers by Stratonovich that were published before summer 1960, when Kalman met with Stratonovich during a conference in Moscow.

2 Overview of the calculation

The Kalman filter uses a system's dynamics model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using any one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.

All measurements and calculations based on models are estimated to some degree. Noisy sensor data, approximations in the equations that describe how a system changes, and external factors that are not accounted for introduce some uncertainty about the inferred values for a system's state. The Kalman filter averages a prediction of a system's state with a new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.

Because the certainty of the measurements is often difficult to measure precisely, it is common to discuss the filter's behavior in terms of gain. The Kalman gain is a function of the relative certainty of the measurements and current state estimate, and can be tuned to achieve particular performance. With a high gain, the filter places more weight on the measurements, and thus follows them more closely. With a low gain, the filter follows the model predictions more closely, smoothing out noise but decreasing the responsiveness. At the extremes, a gain of one causes the filter to ignore the state estimate entirely, while a gain of zero causes the measurements to be ignored.

When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices to handle the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.

3 Example application

As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though always remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.

In this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model) plus any changes produced by the accelerator pedal and steering wheel. Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate when moving slowly. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, if the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back towards the real position but not disturb it to the point of becoming rapidly changing and noisy.
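The predict–update cycle described above can be sketched in a few lines of code. The following is a minimal illustrative example, not taken from the article: a one-dimensional filter tracking the truck's position, where dead reckoning supplies the prediction and a noisy GPS reading supplies the update. All numbers (process variance, GPS variance, readings) are invented for the demonstration.

```python
# Minimal 1-D Kalman filter: scalar state (position along the road).
# Process noise q models dead-reckoning drift; r models GPS noise.
# These specific numbers are assumed purely for illustration.

def predict(x, p, u, q):
    """Prediction phase: move by the dead-reckoned displacement u."""
    return x + u, p + q          # uncertainty grows by the process noise

def update(x, p, z, r):
    """Update phase: blend the prediction with a GPS reading z."""
    k = p / (p + r)              # Kalman gain: 0 = trust model, 1 = trust GPS
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0                  # initial position estimate and its variance
q, r = 0.25, 9.0                 # process and measurement variances (assumed)
gps = [1.2, 2.7, 3.9, 5.4, 6.1] # noisy readings, truck moving ~1.25 per step

for z in gps:
    x, p = predict(x, p, u=1.25, q=q)
    x, p = update(x, p, z, r)

print(x, p)
```

Because the GPS variance r is large relative to p, the gain stays small: the estimate follows the smooth dead-reckoning prediction, with the GPS readings gently correcting drift, which is exactly the behavior described above.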
In Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and the Kalman filter is a special case of combining linear belief functions on a join-tree or Markov tree. Additional approaches include belief filters which use Bayes or evidential updates to the state equations.
A wide variety of Kalman filters have now been developed, from Kalman's original formulation, now called the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters that were developed by Bierman, Thornton and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.
5 Underlying dynamic system model

[Figure: Model underlying the Kalman filter. Squares represent matrices. Ellipses represent multivariate normal distributions (with the mean and covariance matrix enclosed). Unenclosed values are vectors. In the simple case, the various matrices are constant with time, and thus the subscripts are dropped, but the Kalman filter allows any of them to change each time step.]

The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to

x_k = F_k x_{k−1} + B_k u_k + w_k

where

F_k is the state transition model which is applied to the previous state x_{k−1};
B_k is the control-input model which is applied to the control vector u_k;
w_k is the process noise, which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Q_k:

w_k ∼ N(0, Q_k)

At time k an observation (or measurement) z_k of the true state x_k is made according to

z_k = H_k x_k + v_k

where H_k is the observation model which maps the true state space into the observed space and v_k is the observation noise which is assumed to be zero mean Gaussian white noise with covariance R_k:
v_k ∼ N(0, R_k)

The initial state, and the noise vectors at each step {x_0, w_1, …, w_k, v_1, …, v_k} are all assumed to be mutually independent.

Many real dynamical systems do not exactly fit this model. In fact, unmodelled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodelled dynamics depends on the input and can therefore bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of separating between measurement noise and unmodelled dynamics is a difficult one, and is treated in control theory under the framework of robust control.[14][15]

6 Details

Typically, the prediction and update phases alternate. However, if an observation is unavailable for some reason, the update may be skipped and multiple prediction steps performed. Likewise, if multiple independent observations are available at the same time, multiple update steps may be performed (typically with different observation matrices H_k).[16][17]

6.1 Predict

Predicted (a priori) state estimate: x̂_k|k−1 = F_k x̂_{k−1|k−1} + B_k u_k
Predicted (a priori) estimate covariance: P_k|k−1 = F_k P_{k−1|k−1} F_k^T + Q_k

6.2 Update

Innovation or measurement residual: ỹ_k = z_k − H_k x̂_k|k−1
Innovation (or residual) covariance: S_k = H_k P_k|k−1 H_k^T + R_k
Optimal Kalman gain: K_k = P_k|k−1 H_k^T S_k^{−1}
Updated (a posteriori) state estimate: x̂_k|k = x̂_k|k−1 + K_k ỹ_k
Updated (a posteriori) estimate covariance: P_k|k = (I − K_k H_k) P_k|k−1
The formula for the updated estimate covariance above is
only valid for the optimal Kalman gain. Usage of other
gain values requires a more complex formula found in the
derivations section.
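Written out in code, the predict and update steps map one-to-one onto a pair of functions. The sketch below is illustrative only: it assumes a two-dimensional state with a single scalar measurement (H = [1 0]), so the innovation covariance S_k is a scalar and no general matrix inverse is needed; all model values are invented for the demonstration.

```python
# Predict/update for a 2-state filter (e.g., position and velocity) with a
# scalar position measurement. The model values here are assumed, not from
# the article. Matrices are plain nested lists to keep the sketch dependency-free.

def predict(x, P, F, Q):
    # x_k|k-1 = F x_k-1|k-1        (no control input in this sketch)
    xp = [F[0][0]*x[0] + F[0][1]*x[1],
          F[1][0]*x[0] + F[1][1]*x[1]]
    # P_k|k-1 = F P F^T + Q
    FP = [[sum(F[i][k]*P[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    Pp = [[sum(FP[i][k]*F[j][k] for k in range(2)) + Q[i][j] for j in range(2)] for i in range(2)]
    return xp, Pp

def update(x, P, z, R):
    # With H = [1, 0]: innovation y = z - x[0], S = P[0][0] + R (a scalar)
    y = z - x[0]
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]          # K_k = P H^T S^-1
    xn = [x[0] + K[0]*y, x[1] + K[1]*y]     # x_k|k = x_k|k-1 + K y
    # P_k|k = (I - K H) P_k|k-1
    Pn = [[(1 - K[0])*P[0][0], (1 - K[0])*P[0][1]],
          [P[1][0] - K[1]*P[0][0], P[1][1] - K[1]*P[0][1]]]
    return xn, Pn

dt = 1.0
F = [[1.0, dt], [0.0, 1.0]]                 # constant-velocity transition (assumed)
Q = [[0.05, 0.0], [0.0, 0.05]]              # process noise (assumed)
R = 4.0                                     # measurement variance (assumed)
x, P = [0.0, 1.0], [[1.0, 0.0], [0.0, 1.0]]

for z in [1.1, 1.9, 3.2, 4.1]:
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, z, R)
print(x[0], P[0][0])
```

Note that because the optimal gain is used, the simple covariance update keeps P symmetric; with any other gain the Joseph-form update from the derivations section would be needed.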
6.3 Invariants

If the model is accurate, and the values for x̂_0|0 and P_0|0 accurately reflect the distribution of the initial state values, then the following invariants are preserved (all estimates have a mean error of zero):

E[x_k − x̂_k|k] = E[x_k − x̂_k|k−1] = 0
E[ỹ_k] = 0

and the covariance matrices accurately reflect the covariance of the estimates:

P_k|k = cov(x_k − x̂_k|k)
P_k|k−1 = cov(x_k − x̂_k|k−1)
S_k = cov(ỹ_k)
x_k = F x_{k−1} + G a_k

where

G = [ Δt²/2 ; Δt ]

so that

x_k = F x_{k−1} + w_k

where w_k ∼ N(0, Q) and

Q = G G^T σ_a² = [ Δt⁴/4  Δt³/2 ; Δt³/2  Δt² ] σ_a²

At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise v_k is also normally distributed, with mean 0 and standard deviation σ_z:

z_k = H x_k + v_k

where

H = [ 1  0 ]

and

R = E[v_k v_k^T] = [ σ_z² ]

The initial starting state of the truck is known with perfect precision, so we initialize

x̂_0|0 = [ 0 ; 0 ]

8 Derivations
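As a quick numeric check of the process-noise construction above, the outer product G Gᵀ σ_a² can be expanded directly; the Δt and σ_a² values below are arbitrary, chosen only for the check.

```python
# Build Q = G G^T * sigma_a^2 for G = [dt^2/2, dt]^T and compare it with the
# closed form [[dt^4/4, dt^3/2], [dt^3/2, dt^2]] * sigma_a^2.
dt, sigma_a2 = 0.5, 2.0            # arbitrary values for the check
G = [dt**2 / 2, dt]
Q = [[G[i] * G[j] * sigma_a2 for j in range(2)] for i in range(2)]
Q_closed = [[dt**4 / 4 * sigma_a2, dt**3 / 2 * sigma_a2],
            [dt**3 / 2 * sigma_a2, dt**2 * sigma_a2]]
print(Q)
```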
8.1 Deriving the a posteriori estimate covariance matrix

Starting with our invariant on the error covariance P_k|k:

P_k|k = cov(x_k − x̂_k|k)

substitute in the definition of x̂_k|k

P_k|k = cov(x_k − (x̂_k|k−1 + K_k ỹ_k))

and substitute ỹ_k

P_k|k = cov(x_k − (x̂_k|k−1 + K_k (z_k − H_k x̂_k|k−1)))

and z_k

P_k|k = cov(x_k − (x̂_k|k−1 + K_k (H_k x_k + v_k − H_k x̂_k|k−1)))

and by collecting the error vectors we get

P_k|k = cov((I − K_k H_k)(x_k − x̂_k|k−1) − K_k v_k)

Since the measurement error v_k is uncorrelated with the other terms, this becomes

P_k|k = cov((I − K_k H_k)(x_k − x̂_k|k−1)) + cov(K_k v_k)

By the properties of vector covariance this becomes

P_k|k = (I − K_k H_k) cov(x_k − x̂_k|k−1) (I − K_k H_k)^T + K_k cov(v_k) K_k^T

which, using our invariant on P_k|k−1 and the definition of R_k, becomes

P_k|k = (I − K_k H_k) P_k|k−1 (I − K_k H_k)^T + K_k R_k K_k^T

This formula (sometimes known as the "Joseph form" of the covariance update equation) is valid for any value of K_k. If K_k is the optimal Kalman gain, it can be simplified further, as shown below.

8.2 Kalman gain derivation

The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is

x_k − x̂_k|k

We seek to minimize the expected value of the square of the magnitude of this vector, E[‖x_k − x̂_k|k‖²]. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix P_k|k. Setting the derivative of the trace with respect to the gain matrix to zero gives

∂ tr(P_k|k) / ∂ K_k = −2 (H_k P_k|k−1)^T + 2 K_k S_k = 0

Solving this for K_k yields the optimal Kalman gain:

K_k S_k = (H_k P_k|k−1)^T = P_k|k−1 H_k^T
K_k = P_k|k−1 H_k^T S_k^{−1}

8.3 Simplification of the a posteriori error covariance formula

The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by S_k K_k^T, it follows that

K_k S_k K_k^T = P_k|k−1 H_k^T K_k^T

Referring back to our expanded formula for the a posteriori error covariance,

P_k|k = P_k|k−1 − K_k H_k P_k|k−1 − P_k|k−1 H_k^T K_k^T + K_k S_k K_k^T

the last two terms cancel out, giving

P_k|k = P_k|k−1 − K_k H_k P_k|k−1 = (I − K_k H_k) P_k|k−1

This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (the Joseph form) must be used.
9 Sensitivity analysis

The Kalman filtering equations provide an estimate of the state x̂_k|k and its error covariance P_k|k recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.[21] In the absence of reliable statistics or the true values of the noise covariance matrices Q_k and R_k, the expression

P_k|k = (I − K_k H_k) P_k|k−1 (I − K_k H_k)^T + K_k R_k K_k^T

no longer provides the actual error covariance. In other words, P_k|k ≠ E[(x_k − x̂_k|k)(x_k − x̂_k|k)^T]. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices F_k and H_k that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.

This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by Q_k^a and R_k^a respectively, whereas the design values used in the estimator are Q_k and R_k respectively. The actual error covariance is denoted by P_k|k^a, and P_k|k as computed by the Kalman filter is referred to as the Riccati variable. When Q_k ≡ Q_k^a and R_k ≡ R_k^a, this means that P_k|k = P_k|k^a. While computing the actual error covariance using P_k|k^a = E[(x_k − x̂_k|k)(x_k − x̂_k|k)^T], substituting for x̂_k|k and using the fact that E[w_k w_k^T] = Q_k^a and E[v_k v_k^T] = R_k^a, results in the following recursive equations for P_k|k^a:

P_k|k−1^a = F_k P_k−1|k−1^a F_k^T + Q_k^a

and

P_k|k^a = (I − K_k H_k) P_k|k−1^a (I − K_k H_k)^T + K_k R_k^a K_k^T

While computing P_k|k, by design the filter implicitly assumes that E[w_k w_k^T] = Q_k and E[v_k v_k^T] = R_k. Note that the recursive expressions for P_k|k^a and P_k|k are identical except for the presence of Q_k^a and R_k^a in place of the design values Q_k and R_k respectively.

10 Square root form

One problem with the Kalman filter is its numerical stability. If the process noise covariance Q_k is small, round-off error often causes a small positive eigenvalue to be computed as a negative number. This renders the numerical representation of the state covariance matrix P indefinite, while its true form is positive-definite.

Positive definite matrices have the property that they have a triangular matrix square root P = S S^T. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U D U^T, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix.

Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[22]:69 while on 21st-century computers they are only slightly more expensive.)

Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.

11 Relationship to recursive Bayesian estimation

The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.[27]
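Returning to the square-root form for a moment: the guarantee that P = S Sᵀ can never have a negative diagonal or lose symmetry is easy to demonstrate. Below is a minimal 2×2 Cholesky factorization in pure Python (the covariance entries are arbitrary); it is a sketch of the idea, not a production square-root filter.

```python
import math

def cholesky2(P):
    """Lower-triangular S with S S^T = P, for a 2x2 positive-definite P."""
    s00 = math.sqrt(P[0][0])
    s10 = P[1][0] / s00
    s11 = math.sqrt(P[1][1] - s10 * s10)
    return [[s00, 0.0], [s10, s11]]

P = [[4.0, 2.0], [2.0, 3.0]]            # arbitrary positive-definite covariance
S = cholesky2(P)
# Reconstruct P from its factor: by construction the result is exactly
# symmetric and has non-negative diagonal entries.
P_rec = [[S[i][0]*S[j][0] + S[i][1]*S[j][1] for j in range(2)] for i in range(2)]
print(S, P_rec)
```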
In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM). Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state:

p(x_k | x_0, …, x_{k−1}) = p(x_k | x_{k−1})

The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state:

p(x_k | Z_k) = p(z_k | x_k) p(x_k | Z_{k−1}) / p(z_k | Z_{k−1})

The denominator

p(z_k | Z_{k−1}) = ∫ p(z_k | x_k) p(x_k | Z_{k−1}) dx_k

is a normalization term, where Z_t = {z_1, …, z_t} denotes the set of measurements up to time t. The remaining probability density functions are

p(x_k | x_{k−1}) = N(F_k x_{k−1}, Q_k)
p(z_k | x_k) = N(H_k x_k, R_k)

12 Marginal likelihood
In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over (marginalizes out) the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.
The marginal likelihood factors into a product of one-step predictive densities:

p(z) = ∏_{k=0}^{T} p(z_k | z_{k−1}, …, z_0)
     = ∏_{k=0}^{T} ∫ p(z_k | x_k) p(x_k | z_{k−1}, …, z_0) dx_k
     = ∏_{k=0}^{T} ∫ N(z_k; H_k x_k, R_k) N(x_k; x̂_k|k−1, P_k|k−1) dx_k
     = ∏_{k=0}^{T} N(z_k; H_k x̂_k|k−1, S_k)

i.e., a product of Gaussian densities, each corresponding to the density of one observation z_k under the current filtering distribution N(H_k x̂_k|k−1, S_k). This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood ℓ = log p(z) instead. Adopting the convention ℓ^(−1) = 0, this can be done via the recursive update rule

ℓ^(k) = ℓ^(k−1) − (1/2) ( ỹ_k^T S_k^{−1} ỹ_k + log |S_k| + d_y log 2π )

where d_y is the dimension of the measurement vector.[28]

An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking.

13 Information filter

In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as:

Y_k|k = P_k|k^{−1}
ŷ_k|k = P_k|k^{−1} x̂_k|k

Similarly, the predicted covariance and state have equivalent information forms, defined as:

Y_k|k−1 = P_k|k−1^{−1}
ŷ_k|k−1 = P_k|k−1^{−1} x̂_k|k−1

as have the measurement covariance and measurement vector, which are defined as:

I_k = H_k^T R_k^{−1} H_k
i_k = H_k^T R_k^{−1} z_k

The information update now becomes a trivial sum:

Y_k|k = Y_k|k−1 + I_k
ŷ_k|k = ŷ_k|k−1 + i_k

The main advantage of the information filter is that N measurements can be filtered at each timestep simply by summing their information matrices and vectors:

Y_k|k = Y_k|k−1 + Σ_{j=1}^{N} I_{k,j}
ŷ_k|k = ŷ_k|k−1 + Σ_{j=1}^{N} i_{k,j}
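A scalar sanity check (all values invented) that the information-form update is the same operation as the standard covariance-form update:

```python
# Scalar comparison: standard Kalman update vs. information-filter update.
P, x = 2.0, 1.0                 # predicted variance and state (assumed values)
H, R, z = 1.0, 1.0, 3.0         # observation model, noise variance, measurement

# Standard (covariance) form
S = H * P * H + R
K = P * H / S
x_std = x + K * (z - H * x)
P_std = (1 - K * H) * P

# Information form: Y = 1/P, y = x/P; the update is a plain sum.
Y, y = 1.0 / P, x / P
Y += H * (1.0 / R) * H          # Y_k|k = Y_k|k-1 + H^T R^-1 H
y += H * (1.0 / R) * z          # y_k|k = y_k|k-1 + H^T R^-1 z
x_info, P_info = y / Y, 1.0 / Y

print(x_std, x_info, P_std, P_info)
```

Both routes give the same posterior, which is why summing information contributions from N simultaneous measurements works.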
To predict the information filter, the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used:

M_k = [F_k^{−1}]^T Y_{k−1|k−1} F_k^{−1}
C_k = M_k [M_k + Q_k^{−1}]^{−1}
L_k = I − C_k
Y_k|k−1 = L_k M_k L_k^T + C_k Q_k^{−1} C_k^T
ŷ_k|k−1 = L_k [F_k^{−1}]^T ŷ_{k−1|k−1}

Note that F_k and Q_k must be invertible for the information space prediction.

14 Fixed-lag smoother

The optimal fixed-lag smoother provides the optimal estimate of x̂_{t−N|t} for a given fixed lag N using the measurements from z_1 to z_t. It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:

[ x̂_t|t ; x̂_t−1|t ; … ; x̂_t−N+1|t ] =
  [ I ; 0 ; … ; 0 ] x̂_t|t−1
  + [ 0 … 0 ; I … 0 ; … ; 0 … I ] [ x̂_t−1|t−1 ; x̂_t−2|t−1 ; … ; x̂_t−N+1|t−1 ]
  + [ K^(0) ; K^(1) ; … ; K^(N−1) ] y_t|t−1

where:

x̂_t|t−1 is estimated via a standard Kalman filter;
y_t|t−1 = z_t − H x̂_t|t−1 is the innovation produced considering the estimate of the standard Kalman filter;
the various x̂_t−i|t with i = 1, …, N − 1 are new variables, i.e. they do not appear in the standard Kalman filter;
the gains are computed via the following scheme:

K^(i) = P^(i) H^T [ H P H^T + R ]^{−1}

and

P^(i) = P [ (F − K H)^T ]^i

where P and K are the prediction error covariance and the gains of the standard Kalman filter (i.e., P_t|t−1).

15 Fixed-interval smoothers

The optimal fixed-interval smoother provides the optimal estimate of x̂_k|n (k < n) using the measurements from a fixed interval z_1 to z_n. This is also called "Kalman Smoothing". There are several smoothing algorithms in common use.

15.1 Rauch–Tung–Striebel

The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.[29]

The forward pass is the same as the regular Kalman filter algorithm. These filtered a priori and a posteriori state estimates x̂_k|k−1, x̂_k|k and covariances P_k|k−1, P_k|k are saved for use in the backwards pass.

In the backwards pass, we compute the smoothed state estimates x̂_k|n and covariances P_k|n. We start at the last time step and proceed backwards in time using the following recursive equations:

x̂_k|n = x̂_k|k + C_k (x̂_k+1|n − x̂_k+1|k)
P_k|n = P_k|k + C_k (P_k+1|n − P_k+1|k) C_k^T

where

C_k = P_k|k F_k+1^T P_k+1|k^{−1}
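The two-pass structure can be sketched with a scalar model. Everything below (F, Q, R, the measurements) is invented for illustration; the point is only that the backward recursion never increases the variance of the filtered estimates.

```python
# Scalar RTS smoother sketch: forward Kalman pass, then backward smoothing.
F, Q, R = 1.0, 0.5, 2.0                  # assumed scalar model (H = 1)
zs = [1.0, 0.4, 1.6, 0.9, 1.2]           # invented measurements

x, P = 0.0, 1.0
xp_l, Pp_l, xf_l, Pf_l = [], [], [], []  # a priori / a posteriori histories
for z in zs:
    xp, Pp = F * x, F * P * F + Q        # predict
    K = Pp / (Pp + R)                    # update
    x, P = xp + K * (z - xp), (1 - K) * Pp
    xp_l.append(xp); Pp_l.append(Pp); xf_l.append(x); Pf_l.append(P)

# Backward pass: start from the last filtered estimate.
xs, Ps = xf_l[-1], Pf_l[-1]
xs_l, Ps_l = [xs], [Ps]
for k in range(len(zs) - 2, -1, -1):
    C = Pf_l[k] * F / Pp_l[k + 1]        # C_k = P_k|k F^T / P_k+1|k
    xs = xf_l[k] + C * (xs - xp_l[k + 1])
    Ps = Pf_l[k] + C * (Ps - Pp_l[k + 1]) * C
    xs_l.insert(0, xs); Ps_l.insert(0, Ps)

print(xs_l, Ps_l)
```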
15.2 Modified Bryson–Frazier smoother

The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. The recursive equations are

Λ̃_k = H_k^T S_k^{−1} H_k + Ĉ_k^T Λ̂_k Ĉ_k
Λ̂_k−1 = F_k^T Λ̃_k F_k
Λ̂_n = 0

λ̃_k = −H_k^T S_k^{−1} y_k + Ĉ_k^T λ̂_k
λ̂_k−1 = F_k^T λ̃_k
λ̂_n = 0

where S_k is the residual covariance and Ĉ_k = I − K_k H_k. The smoothed state and covariance can then be found by substitution in the equations

P_k|n = P_k|k − P_k|k Λ̂_k P_k|k
x̂_k|n = x̂_k|k − P_k|k λ̂_k

or

P_k|n = P_k|k−1 − P_k|k−1 Λ̃_k P_k|k−1
x̂_k|n = x̂_k|k−1 − P_k|k−1 λ̃_k

An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.

15.3 Minimum-variance smoother

The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely.[30] This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter.

The forward pass makes use of the quantity

α_k = −S_k^{−1/2} H_k x̂_k|k−1 + S_k^{−1/2} z_k

and the filtered output estimate is recovered as

ŷ_k|k = z_k − R_k S_k^{−1/2} α_k

which is identical to the minimum-variance Kalman filter. A backward pass on α_k then yields the smoothed output estimate

ŷ_k|N = z_k − R_k β_k

where β_k denotes the result of the backward recursion. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly. A continuous-time version of the above smoother is described in [31][32].

In cases where the models are nonlinear, step-wise linearizations may be applied within the minimum-variance filter and smoother recursions (extended Kalman filtering).
17 Frequency-weighted Kalman filters

Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.

Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let y − ŷ denote the output estimation error exhibited by a conventional Kalman filter. Also, let W denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of W(y − ŷ) arises by simply constructing W^{−1} ŷ.

The design of W remains an open question. One way of proceeding is to identify a system which generates the estimation error and setting W equal to the inverse of that system.[35] This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.

18 Non-linear filters

The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The non-linearity can be associated either with the process model or with the observation model or with both.

18.1 Extended Kalman filter

In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be non-linear functions:

x_k = f(x_k−1, u_k) + w_k
z_k = h(x_k) + v_k

When the state transition and observation models are highly non-linear, the extended Kalman filter can give particularly poor performance.[36] This is because the covariance is propagated through linearization of the underlying non-linear model.

18.2 Unscented Kalman filter

The unscented Kalman filter (UKF)[36] uses a deterministic sampling technique known as the unscented transform to pick a minimal set of sample points (called sigma points) around the mean. These sigma points are then propagated through the non-linear functions, from which the mean and covariance of the estimate are then recovered. The result is a filter which more accurately captures the true mean and covariance. (This can be verified using Monte Carlo sampling or through a Taylor series expansion of the posterior statistics.) In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically).

Predict

As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.

The estimated state and covariance are augmented with the mean and covariance of the process noise:

x_k−1|k−1^a = [ x̂_k−1|k−1^T  E[w_k^T] ]^T

P_k−1|k−1^a = [ P_k−1|k−1  0 ; 0  Q_k ]

A set of 2L + 1 sigma points is derived from the augmented state and covariance, where L is the dimension of the augmented state:

χ_k−1|k−1^0 = x_k−1|k−1^a
χ_k−1|k−1^i = x_k−1|k−1^a + ( √((L + λ) P_k−1|k−1^a) )_i ,  i = 1, …, L
χ_k−1|k−1^i = x_k−1|k−1^a − ( √((L + λ) P_k−1|k−1^a) )_i−L ,  i = L + 1, …, 2L
The matrix square root should be calculated using numerically efficient and stable methods such as the Cholesky decomposition.

The sigma points are propagated through the transition function f:

χ_k|k−1^i = f(χ_k−1|k−1^i),  i = 0, …, 2L

where f : R^L → R^|x|. The weighted sigma points are recombined to produce the predicted state and covariance:

x̂_k|k−1 = Σ_{i=0}^{2L} W_s^i χ_k|k−1^i
P_k|k−1 = Σ_{i=0}^{2L} W_c^i [χ_k|k−1^i − x̂_k|k−1][χ_k|k−1^i − x̂_k|k−1]^T

where the weights for the state and covariance are given by:

W_s^0 = λ / (L + λ)
W_c^0 = λ / (L + λ) + (1 − α² + β)
W_s^i = W_c^i = 1 / (2(L + λ)),  i = 1, …, 2L
λ = α²(L + κ) − L

α and κ control the spread of the sigma points; β is related to the distribution of x. Normal values are α = 10⁻³, κ = 0 and β = 2. If the true distribution of x is Gaussian, β = 2 is optimal.[37]

Update

The predicted state and covariance are augmented as before, except now with the mean and covariance of the measurement noise:

x_k|k−1^a = [ x̂_k|k−1^T  E[v_k^T] ]^T

P_k|k−1^a = [ P_k|k−1  0 ; 0  R_k ]

As before, a set of 2L + 1 sigma points is derived from the augmented state and covariance:

χ_k|k−1^0 = x_k|k−1^a
χ_k|k−1^i = x_k|k−1^a + ( √((L + λ) P_k|k−1^a) )_i ,  i = 1, …, L
χ_k|k−1^i = x_k|k−1^a − ( √((L + λ) P_k|k−1^a) )_i−L ,  i = L + 1, …, 2L

Alternatively, if the UKF prediction has been used, the sigma points themselves can be augmented along the following lines:

χ_k|k−1 := [ χ_k|k−1^T  E[v_k^T] ]^T ± √((L + λ) R_k^a)

where

R_k^a = [ 0  0 ; 0  R_k ]

The sigma points are projected through the observation function h:

γ_k^i = h(χ_k|k−1^i),  i = 0, …, 2L

The weighted sigma points are recombined to produce the predicted measurement and predicted measurement covariance:

ẑ_k = Σ_{i=0}^{2L} W_s^i γ_k^i
P_z_k z_k = Σ_{i=0}^{2L} W_c^i [γ_k^i − ẑ_k][γ_k^i − ẑ_k]^T

The state-measurement cross-covariance matrix,

P_x_k z_k = Σ_{i=0}^{2L} W_c^i [χ_k|k−1^i − x̂_k|k−1][γ_k^i − ẑ_k]^T

is used to compute the UKF Kalman gain,

K_k = P_x_k z_k P_z_k z_k^{−1}

As with the Kalman filter, the updated state is the predicted state plus the innovation weighted by the Kalman gain,

x̂_k|k = x̂_k|k−1 + K_k (z_k − ẑ_k)

And the updated covariance is the predicted covariance, minus the predicted measurement covariance, weighted by the Kalman gain:

P_k|k = P_k|k−1 − K_k P_z_k z_k K_k^T
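The heart of the UKF, recovering a transformed mean from weighted sigma points, can be shown in one dimension. The sketch below (prior mean and variance assumed) pushes a Gaussian through f(x) = x²; for this particular f, the unscented estimate of the transformed mean matches the exact value E[f(x)] = μ² + P.

```python
import math

# 1-D unscented transform through f(x) = x^2 (so L = 1, no augmentation).
mu, P = 1.0, 0.5                        # assumed prior mean and variance
alpha, kappa = 1e-3, 0.0                # standard spread parameters
L = 1
lam = alpha**2 * (L + kappa) - L

c = math.sqrt((L + lam) * P)
sigma = [mu, mu + c, mu - c]            # sigma points around the mean
Ws = [lam / (L + lam)] + [1.0 / (2 * (L + lam))] * 2

f = lambda x: x * x
mean_ut = sum(w * f(s) for w, s in zip(Ws, sigma))
mean_exact = mu**2 + P                  # E[x^2] for a Gaussian
print(mean_ut, mean_exact)
```

Note that no Jacobian of f is ever formed: the mean is recovered purely from propagated samples, which is the property the section describes.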
19 Kalman–Bucy filter
The Kalman–Bucy filter (named after Richard Snowden Bucy) is a continuous time version of the Kalman filter.[38][39] It is based on a continuous-time state space model, and the state estimate evolves according to the differential equation

d/dt x̂(t) = F(t) x̂(t) + B(t) u(t) + K(t) ( z(t) − H(t) x̂(t) )

where K(t) is the continuous-time Kalman gain.

20 Hybrid Kalman filter

Most physical systems are represented as continuous-time models while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by

ẋ(t) = F(t) x(t) + B(t) u(t) + w(t),  w(t) ∼ N(0, Q(t))
z_k = H_k x_k + v_k,  v_k ∼ N(0, R_k)

where x_k = x(t_k).

Initialize:

x̂_0|0 = E[x(t_0)],  P_0|0 = Var[x(t_0)]

Predict, by integrating the continuous-time model between measurements:

d/dt x̂(t) = F(t) x̂(t) + B(t) u(t),  with x̂(t_k−1) = x̂_k−1|k−1,  giving x̂_k|k−1 = x̂(t_k)
d/dt P(t) = F(t) P(t) + P(t) F(t)^T + Q(t),  with P(t_k−1) = P_k−1|k−1,  giving P_k|k−1 = P(t_k)

Update: the update equations are identical to those of the discrete-time Kalman filter, e.g.

P_k|k = (I − K_k H_k) P_k|k−1

22 Applications

Attitude and heading reference systems
Autopilot
Battery state of charge (SoC) estimation[44][45]
Brain-computer interface
Chaotic signals
Tracking and vertex fitting of charged particles in particle detectors[46]
Tracking of objects in computer vision
Dynamic positioning
Economics, in particular macroeconomics, time series analysis, and econometrics[47]
Nuclear medicine single photon emission computed tomography image restoration[48]
Orbit determination
Power system state estimation
Radar tracker
Satellite navigation systems
Seismology[49]
23 See also
Schmidt–Kalman filter
Separation principle
Sliding mode control
Stochastic differential equations
Volterra series
Wiener filter
Zakai equation
24 References

[1] Wolpert, Daniel; Ghahramani, Zoubin (2000). "Computational principles of movement neuroscience". Nature Neuroscience 3: 1212–7. doi:10.1038/81497. PMID 11127840.

[2] Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering 82: 35. doi:10.1115/1.3662552.

[3] Steen L. Lauritzen. "Time series analysis in 1880. A discussion of contributions made by T.N. Thiele". International Statistical Review 49, 1981, 319–333. JSTOR 1402616

[4] Steen L. Lauritzen, Thiele: Pioneer in Statistics, Oxford University Press, 2002. ISBN 0-19-850972-3.
[5] Mohinder S. Grewal and Angus P. Andrews
[37] Wan, E.A.; Van Der Merwe, R. (2000). "The unscented Kalman filter for nonlinear estimation". Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373) (PDF). p. 153. doi:10.1109/ASSPCC.2000.882463. ISBN 0-7803-5800-7.

[38] Bucy, R.S. and Joseph, P.D., Filtering for Stochastic Processes with Applications to Guidance, John Wiley & Sons, 1968; 2nd Edition, AMS Chelsea Publ., 2005. ISBN 0-8218-3782-6
25 Further reading
Einicke, G.A. (2012). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future. Rijeka, Croatia: Intech. ISBN 978-953-307-752-9.
Jinya Su; Baibing Li; Wen-Hua Chen (2015).
On existence, optimality and asymptotic
stability of the Kalman lter with partially
observed inputs.
Automatica 53: 149154.
doi:10.1016/j.automatica.2014.12.044.
Gelb, A. (1974). Applied Optimal Estimation. MIT Press.
Kalman, R.E. (1960). "A new approach to linear filtering and prediction problems" (PDF). Journal of Basic Engineering 82 (1): 35–45. doi:10.1115/1.3662552. Retrieved 2008-05-03.
Kalman, R.E.; Bucy, R.S. (1961). "New Results in Linear Filtering and Prediction Theory".
Harvey, A.C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.
Roweis, S.; Ghahramani, Z. (1999). "A Unifying Review of Linear Gaussian Models". Neural Computation 11 (2): 305–345. doi:10.1162/089976699300016674. PMID 9950734.
Simon, D. (2006). Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley-Interscience.
Stengel, R.F. (1994). Optimal Control and Estimation. Dover Publications. ISBN 0-486-68200-5.
Warwick, K. (1987). "Optimal observers for ARMA models" (PDF). International Journal of Control 46 (5): 1493–1503. doi:10.1080/00207178708933989. Retrieved 2008-05-03.
Bierman, G.J. (1977). Factorization Methods for Discrete Sequential Estimation. Mathematics in Science and Engineering 128 (Mineola, N.Y.: Dover Publications). ISBN 978-0-486-44981-4.
Bozic, S.M. (1994). Digital and Kalman Filtering. Butterworth–Heinemann.
Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall.
Liu, W.; Principe, J.C. and Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. John Wiley.
Manolakis, D.G. (1999). Statistical and Adaptive Signal Processing. Artech House.
26 External links
Extended Kalman Filters explained in the context of Simulation, Estimation, Control, and Optimization
Online demo of the Kalman Filter. Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments.
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe (2011). "FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision". Sensors 11 (12): 1251–1259. doi:10.3390/s110808164.
Hooke's Law and the Kalman Filter. A little spring theory emphasizing the connection between statistics and physics.
Examples and how-to on using Kalman Filters with MATLAB
A Tutorial on Filtering and Estimation
Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence by Yu-Chi Ho
27 Text and image sources, contributors, and licenses
27.1 Text
Kalman filter Source: https://en.wikipedia.org/wiki/Kalman_filter?oldid=707275061 Contributors: AxelBoldt, Marj Tiefert, The Anome,
Fnielsen, Michael Hardy, Kku, Ee79, Egil, Ellywa, Angela, Mark Foskey, Ciphergoth, Cyan, Novum, Catskul, Robbot, Benwing, Jredmond, Aetheling, Giftlite, Seabhcan, Michael Devore, Alexander.stohr, Eoghan, Eregli bob, Ablewisuk, Rdsmith4, Cihan, Sam Hocevar,
Qef, Flex, N328KF, Rich Farmbrough, TedPavlic, Caesar, Bender235, Neko-chan, Violetriga, MisterSheik, Sietse Snel, Peter M Gerdes,
Kghose, Meggar, I9Q79oL78KiL0QTFHgyc, Giraedata, Warnet, Crust, Don Reba, Arthena, Kotasik, PAR, Nullstein~enwiki, BRW,
Cburnett, Alai, Jehrlich, Forderud, Oleg Alexandrov, Firsfron, GregorB, Chrislloyd, Joe Beaudoin Jr., Rjwilmsi, Strait, Gareth McCaughan,
Amelio Vzquez, Mathbot, Chobot, YurikBot, Ashsong, Gaius Cornelius, VogonPoet, Nils Grimsmo, Voidxor, Light current, The imp,
Ccwen, Mebden, Amit man, SmackBot, PEHowland, Ekschuller, Number 8, Eike Welk, Mcld, Benjaminevans82, Thumperward, Nbarth,
Torzsmokus, Joeyo, Memming, SeanAhern, Forrestv, Cronholm144, Kvng, Kurtan~enwiki, Ioannes Pragensis, BoH, Myasuda, Cydebot,
Krauss, Ryan, Alan1507, Ztolstoy, Gogo Dodo, Ksood22, Skittleys, Oruanaidh, A.K., Thijs!bot, Nick Number, Binarybits, Robbbb, Erxnmedia, Drizzd~enwiki, .anacondabot, OM, Livermouse, Albmont, Netzer moriya, Michel ouiki~enwiki, Nikevich, User A1, Martynas Patasius, Vilwarin, Chris G, Gcranston, Khbkhb, Stochastics, Jiuguang Wang, Tdadamemd, JParise, Wgarn, Molly-in-md, Epiphany Johnson,
Jevansen, VolkovBot, AlnoktaBOT, Julian I Do Stu, Slaunger, Damien d, Marcosaedro, Bsrancho, Kovianyo, PaulTanenbaum, Jmath666,
Hmms, Forwardmeasure, Petteri Aimonen, StevenBell, Cwkmail, Stelleg151, KoenDelaere, Melcombe, Headlessplatter, Rinconsoleao,
Tigergb, Clarkmoody, Gvw007, Ezavarei, JimInTheUSA, Niemeyerstein en, Bradka, Niceguyedc, Excirial, Simonmckenzie, PixelBot,
Dilumb, Brews ohare, Butala, Chaosdruid, AlexCornejo, M.B.Shadmehr, Qwfp, XLinkBot, Stickee, Chanakal, Mikejulietvictor, Paulginz,
Daveros2008, Addbot, Jjunger, DOI bot, Olli Niemitalo, Keithbierman, Fgnievinski, PaleS2, MrOllie, EconoPhysicist, Publichealthguru,
SpBot, Oldlab, Ettrig, CountryBot, Luckas-bot, Yobot, AnomieBOT, Jose278, Fiacobelli, Citation bot, DayDuck1981, TheAMmollusc, Anschmid13, Omnipaedista, Kaslanidi, MultiPoly, 55604PP, FrescoBot, Fetchmaster, Fortdj33, Olexa Riznyk, Sanpitch, Anterior1,
Briardew, OgreBot, Citation bot 1, Zacio, Kiefer.Wolfowitz, Havocgb, Martinvl, Jerha202, YWD, Houber1, Serols, Trappist the monk,
Emmanuel Battesti, Reach Out to the Truth, Vishalv2050, Wogud86, Obankston, RjwilmsiBot, Iulian.serbanoiu, Fblasqueswiki, EmausBot,
Kronick, Drrggibbs, Mmeijeri, Brent Perreault, Yoepp, , Vgiri88, Mineotto6, AManWithNoPlan, Cblambert, Ray Czaplewski, Ptsneves, Yunfei Chu, Chris857, Ebehn, Kkddkkdd, Thundersh24, KlappCK, Lakshminarayanan r, Frietjes, Jerrykorulla, Danim, Hikenstu,
Mdutch2001, Helpful Pixie Bot, Qzertywsx, Mrajaa, Strike Eagle, Schnicki, BG19bot, Davmre, Stuartmacgregor, Eitt, Manoguru, Newstateofme, Dlituiev, Kfriston, Pshark sk, BattyBot, ChrisGualtieri, Gohchunfan, Brybot, Me, Myself, and I are Here, Pearl92, IheartDA,
Liwangyan, Reculet, Smirglis, Pdecalculus, Babitaarora, Stamptrader, Xenxax, Dalemcl, Avi.nehemiah, Impsswoon, Monkbot, Luigidigiacomo, Potwang, Pezcore2014, Loraof, Purefel, Navraj42, KasparBot, Lipkenszaal, Wikitidy, Myraeri, Jamal Abdalla and Anonymous:
399
27.2 Images
File:Basic_concept_of_Kalman_filtering.svg Source:
https://upload.wikimedia.org/wikipedia/commons/a/a5/Basic_concept_of_
Kalman_filtering.svg License: CC0 Contributors: Own work Original artist: Petteri Aimonen
File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although
minimally).
File:HMM_Kalman_Filter_Derivation.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/HMM_Kalman_Filter_
Derivation.svg License: Public domain Contributors: Own work Original artist: Qef
File:Kalman.png Source: https://upload.wikimedia.org/wikipedia/commons/d/da/Kalman.png License: CC BY-SA 4.0 Contributors:
Own work Original artist: Lipkenszaal
File:Kalman_filter_model_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a0/Kalman_filter_model_2.svg License:
Public domain Contributors: Own work (Original text: I made this image myself, and I release it to the public domain.) Original artist:
User:Headlessplatter
File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0
Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
File:Rudolf_Kalman.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ac/Rudolf_Kalman.jpg License: CC BY-SA 2.0
de Contributors: http://owpdb.mfo.de/detail?photo_id=11513 Original artist: Greuel, Gert-Martin
File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License:
CC-BY-SA-3.0 Contributors: This file was derived from Wiki letter w.svg.
Original artist: Derivative work by Thumperward
27.3 Content license