
Probabilistic Robotics

Bayesian filtering
Martin Magnusson
April 3, 2012


Agenda

Intro and recap

Robot environment interaction

Bayes filters

Outro



Recap of last lecture

Important relations
Conditional probability
(the prob that X = x if we know, or assume, that Y = y):

    p(x | y) = p(x, y) / p(y)

i.e., the joint prob, scaled to fit y. Note that p(x, y) ≤ p(y).

Joint probability of x and y
(the prob that both X = x and Y = y):

    p(x, y) = p(x | y) p(y) = p(y | x) p(x)

i.e., a conditional prob, scaled to fit.

Conditional independence: x and y are independent, given z, iff

    p(x, y | z) = p(x | z) p(y | z)


Causal vs. diagnostic reasoning


Environment state X: open or closed.
Robot sensor reading Y: open or closed.
Assume we know p(Y = y | X = open)
(i.e., the quality of the sensor: causal knowledge)
and need to know p(X = open | Y = y)
(i.e., the probability that the door is open: diagnostic knowledge).

Bayes rule lets us use causal knowledge to infer diagnostic
knowledge:

    p(open | y) = p(y | open) p(open) / p(y)

(How to compute p(y)? We'll see that later.)


Bayes formula
    p(x | y) = p(y | x) p(x) / p(y)

Compare the definition of conditional probability:

    p(x, y) = p(x | y) p(y) = p(y | x) p(x)

Theorem (for discrete RV):

    p(x | y) = p(y | x) p(x) / p(y)
             = p(y | x) p(x) / Σ_{x'} p(y | x') p(x')

where the denominator is rewritten using the law of total probability.


Bayes formula, explained


Prior: p(x) (probability before sensor input).
Posterior: p(x | y) (probability after input = diagnosis).
Bayes rule: the probability that x is true given y (the posterior)
increases with
    the prior of x (i.e., the prob of x before the test),
    and the prob of finding y in a world where x is true;
decreases with
    the prior prob of finding y (i.e., the prob of getting test result y
    without knowing the state of x).

The denominator doesn't depend on x, so it's the same for
both p(cancer | pos) and p(¬cancer | pos), and is used to
make the posterior p(x | y) integrate to 1.


Bayes formula, robotics example


X: world state, Z: robot measurements.
Noisy sensors:

    p(Z = open   | X = open)   = 0.6    (hard to sense an open door)
    p(Z = closed | X = open)   = 0.4
    p(Z = open   | X = closed) = 0.2
    p(Z = closed | X = closed) = 0.8    (easy to sense a closed door)

Prior probabilities:

    p(X = open)   = 0.5
    p(X = closed) = 0.5


State estimation example

Suppose the robot senses Z = open.
What is the probability that the door is actually open; that is,
p(X = open | Z = open)?
Apply Bayes formula:

    p(X = open | Z = open)
      = p(Z = open | X = open) p(X = open)
        / [p(Z = open | X = open) p(X = open) + p(Z = open | X = closed) p(X = closed)]
      = (0.6 · 0.5) / (0.6 · 0.5 + 0.2 · 0.5)
      = 0.75
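The computation above can be checked with a few lines of code (a minimal sketch; the variable names are mine, the numbers are from the slides):

```python
# Sensor model p(z | x) and prior p(x), values from the slides.
p_z_given_x = {("open", "open"): 0.6, ("open", "closed"): 0.2}
prior = {"open": 0.5, "closed": 0.5}

# Numerator of Bayes rule: p(Z=open | X=open) p(X=open).
numerator = p_z_given_x[("open", "open")] * prior["open"]
# Denominator via the law of total probability: p(Z=open).
evidence = sum(p_z_given_x[("open", x)] * prior[x] for x in prior)

posterior_open = numerator / evidence
print(round(posterior_open, 4))  # 0.75
```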


Law of total probability


Where does the denominator come from? If all y are pairwise
disjoint and fill up all of Ω, then
Theorem (Discrete case)

    p(x) = Σ_y p(x | y) p(y) = Σ_y p(x, y)

This follows from the definition of conditional probability and
Kolmogorov's axioms.
Robot state variables fulfil the requirements: the robot can only be in
one state at a time, and the union of all outcomes = Ω.
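As a quick numerical check, the theorem yields exactly the denominator used in the door example (a sketch; variable names are mine, values from the slides):

```python
# Sensor model p(z | x) and prior p(x) for the door example.
p_z_given_x = {("open", "open"): 0.6, ("open", "closed"): 0.2}
p_x = {"open": 0.5, "closed": 0.5}

# Law of total probability: p(Z = open) = sum over x of p(Z = open | x) p(x).
p_z_open = sum(p_z_given_x[("open", x)] * p_x[x] for x in p_x)
print(round(p_z_open, 4))  # 0.4
```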


Law of total probability, illustration

    p(x) = Σ_{i=1}^{n} p(x, y_i) = Σ_{i=1}^{n} p(x | y_i) p(y_i)

[Figure: the sample space Ω partitioned into disjoint regions y_1, . . . , y_7, with the event x overlapping several of them.]


Law of total probability, proof


If x occurs, then one of the y_i must also occur (since the y_i are
disjoint and fill Ω).
So "x occurs" and "both x and one y_i occur" are equivalent,
i.e., equivalent to "∪_{i=1}^{n} (x ∩ y_i) occurs".
Hence

    p(x) = p(∪_{i=1}^{n} (x ∩ y_i))          (equivalent events, axiom 1)
         = Σ_{i=1}^{n} p(x, y_i)             (axiom 3: additivity over disjoint events)
         = Σ_{i=1}^{n} p(x | y_i) p(y_i)     (def. of joint prob.)


State
A description of what the robot needs to know.
State at time t is denoted x_t.
State transitions over time:
    x_0 → x_1 → . . .
The set of all states from time t_1 to time t_2:
    x_{t_1:t_2} = x_{t_1}, x_{t_1+1}, . . . , x_{t_2}
Internal state: typically the pose [x, y, θ].
External state: map, other agents, etc.


Markov state

The Markov property


The conditional probability distribution of
future states depends only upon the present
state, not on the sequence of events that
preceded it.
In other words, past (x_{0:t-1}) and future
(x_{t+1:∞}) states are conditionally independent,
given the present state x_t.


Markov state, example

The positions of the chess pieces are a Markov
state (a complete state) in idealised chess. . .

. . . but not in real-world chess!

In reality, complete state descriptions are infeasible.


Interaction

Measurements

Sensor input from the environment.
Measurement at time t is denoted z_t.
Measurements decrease uncertainty.



Actions

Action at time t is denoted u_t.
Typical actions:
    the robot turns its wheels to move,
    the robot uses its manipulator to grasp an object,
    do nothing (and let time pass by).
Note that
    actions are never carried out with absolute certainty,
    actions generally increase uncertainty.


Modelling actions
The outcome of an action u is modelled by the conditional
probability distribution

    p(x | u, x')

That is, the probability that executing action u in state x'
changes the state to x.

Example:
    state x' = [10 m, 5 m, 0°]
    action u = "move 1 m forward"
    what is, for example, p(x = [11 m, 5 m, 0°])?
    (p < 1 because of wheel slip, etc.)


Belief
We never know the true state of the robot.
All we have is the belief.
Represent belief through a conditional probability distribution:

    bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

A belief distribution assigns a probability density (or mass)
to each possible outcome, given a sequence of actions and
measurements.
Belief distributions are posterior probabilities over state
variables, conditioned on the available data.


Prediction vs. belief


Represent belief through a conditional probability distribution:

    bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

Prediction: the belief distribution before incorporating the latest
measurement,

    bel⁻(x_t) = p(x_t | z_{1:t-1}, u_{1:t})

Belief: the belief distribution after a measurement,

    bel(x_t) = p(x_t | z_{1:t}, u_{1:t})


The algorithm

Bayes filters: framework


Given:

    1. a stream of observations z and action data u:

           {z_{1:t}, u_{1:t}} = {u_1, z_1, . . . , u_t, z_t}

    2. the sensor model p(z | x) (how accurate the sensors are),
    3. the action model p(x | u, x') (how reliable the actuators are),
    4. the prior probability of the system state, p(x).

Wanted:
    an estimate of the state x (the belief),

        bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

Update the belief recursively: bel(x_t) is computed from
bel(x_{t-1}).


Bayes filters: assumptions

[Figure: dynamic Bayesian network. The states x_0, . . . , x_{t-2}, x_{t-1}, x_t form a chain; each transition is driven by a control u_{t-2}, u_{t-1}, u_t, and each state emits a measurement z_{t-2}, z_{t-1}, z_t.]

The Markov assumption implies
    a static world,
    independent controls,
    a perfect model (no approximation errors):

    p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)    (state transition probability)
    p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t})   = p(z_t | x_t)             (measurement probability)


Example

State estimation, example

Robot observing a door


Given a sensor reading open from
the camera, what is the
probability that the door is
actually open?
p(X = open | Z = open)


State estimation example, sensor model


X_t ∈ {open, closed}: world state.
Z_t ∈ {sense_open, sense_closed}: robot measurements.
Noisy sensors:

    p(Z_t = sense_open   | X_t = open)   = 0.6    (hard to sense an open door)
    p(Z_t = sense_closed | X_t = open)   = 0.4
    p(Z_t = sense_open   | X_t = closed) = 0.2
    p(Z_t = sense_closed | X_t = closed) = 0.8    (easy to sense a closed door)


State estimation example, actions


Actions U_t ∈ {push, null}:

    p(X_t = open   | U_t = push, X_{t-1} = open)   = 1      (door stays open)
    p(X_t = closed | U_t = push, X_{t-1} = open)   = 0
    p(X_t = open   | U_t = push, X_{t-1} = closed) = 0.8    (can't always open the door)
    p(X_t = closed | U_t = push, X_{t-1} = closed) = 0.2

    p(X_t = open   | U_t = null, X_{t-1} = open)   = 1
    p(X_t = closed | U_t = null, X_{t-1} = open)   = 0      (no other agents)
    p(X_t = open   | U_t = null, X_{t-1} = closed) = 0
    p(X_t = closed | U_t = null, X_{t-1} = closed) = 1
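The sensor and action models above can be encoded directly as lookup tables; a minimal sketch (the dictionary layout is my choice, the numbers are from the slides). A useful sanity check is that every conditional distribution sums to one:

```python
# Sensor model p(z | x):
sensor_model = {
    "open":   {"sense_open": 0.6, "sense_closed": 0.4},
    "closed": {"sense_open": 0.2, "sense_closed": 0.8},
}
# Action model p(x | u, x_prev), indexed by (action, previous state):
action_model = {
    ("push", "open"):   {"open": 1.0, "closed": 0.0},
    ("push", "closed"): {"open": 0.8, "closed": 0.2},
    ("null", "open"):   {"open": 1.0, "closed": 0.0},
    ("null", "closed"): {"open": 0.0, "closed": 1.0},
}
# Each row is a probability distribution over outcomes, so it must sum to 1.
for dist in list(sensor_model.values()) + list(action_model.values()):
    assert abs(sum(dist.values()) - 1.0) < 1e-9
print("all conditional distributions sum to 1")
```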


State estimation example, t = 1


Suppose at time t = 1, the robot takes action U_1 = null and
senses Z_1 = open.
We want to compute an updated belief distribution bel(X_1).
With the Bayes filter, we can do that using the prior belief
bel(X_0). (Since U_1 = null, the control update leaves the prior
unchanged, so only the measurement update matters.)

    bel(X_1 = open) = p(X = open | Z = open)
      = p(Z = open | X = open) p(X = open)
        / [p(Z = open | X = open) p(X = open) + p(Z = open | X = closed) p(X = closed)]
      = (0.6 · 0.5) / (0.6 · 0.5 + 0.2 · 0.5)
      = 0.75

    bel(X_1 = closed) = (0.2 · 0.5) / (0.6 · 0.5 + 0.2 · 0.5) = 0.25 = 1 - bel(X_1 = open)


State transitions

p(x | u, x') for u = push:

[Figure: two-state Markov chain over X ∈ {closed, open}. From X = closed, push leads to X = open with probability 0.8 and stays closed with probability 0.2; from X = open, the door stays open.]

This is a simple two-state Markov chain.
If the door is closed, the action push succeeds in 80% of the
cases.


Integrating the outcome of actions


We know p(x | u, x') (that's our action model).
How do we compute the posterior p(x | u), i.e., the resulting
belief after the action?
Integrate over all prior states x'.
The law of total probability gives us

    p(x | u) = Σ_{x'} p(x | u, x') p(x')         (discrete case)

    p(x | u) = ∫ p(x | u, x') p(x') dx'          (continuous case)


State estimation example, executing an action


Suppose at time t = 2, the robot takes action u_2 = push.

    p(X_2 = open | u_2)
      = Σ_{x'} p(X_2 = open | u_2, x') p(x')
      = p(X_2 = open | u_2, X_1 = open) p(X_1 = open)
        + p(X_2 = open | u_2, X_1 = closed) p(X_1 = closed)
      = 1 · 0.75 + 0.8 · 0.25 = 0.95

    p(X_2 = closed | u_2)
      = Σ_{x'} p(X_2 = closed | u_2, x') p(x')
      = p(X_2 = closed | u_2, X_1 = open) p(X_1 = open)
        + p(X_2 = closed | u_2, X_1 = closed) p(X_1 = closed)
      = 0 · 0.75 + 0.2 · 0.25 = 0.05
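The same control update in a few lines of code (a sketch; `bel` holds the belief after t = 1, numbers from the slides):

```python
# Action model p(x | push, x_prev), from the slides.
push_model = {
    "open":   {"open": 1.0, "closed": 0.0},   # door stays open
    "closed": {"open": 0.8, "closed": 0.2},   # push opens a closed door 80% of the time
}
bel = {"open": 0.75, "closed": 0.25}  # belief after t = 1

# Control update: bel_bar(x) = sum over x_prev of p(x | push, x_prev) bel(x_prev).
bel_bar = {x: sum(push_model[xp][x] * bel[xp] for xp in bel)
           for x in ("open", "closed")}
print({x: round(p, 2) for x, p in bel_bar.items()})  # {'open': 0.95, 'closed': 0.05}
```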


Combining evidence

How can we integrate the next observation Z_2?

More generally, how can we estimate p(X | Z_1, . . . , Z_n)?


Bayes rule, with background knowledge

    p(x | y) = p(y | x) p(x) / p(y)

We can also condition Bayes rule on additional RVs
(background knowledge):

    p(x | y, z) = p(y | x, z) p(x | z) / p(y | z)


Recursive Bayesian updating

    p(x | z_1, . . . , z_t) = p(z_t | x, z_1, . . . , z_{t-1}) p(x | z_1, . . . , z_{t-1})
                              / p(z_t | z_1, . . . , z_{t-1})

Markov assumption: z_t is independent of z_{1:t-1} if we know x.
Then we can simplify:

    p(x | z_1, . . . , z_t) = p(z_t | x) p(x | z_1, . . . , z_{t-1}) / p(z_t | z_1, . . . , z_{t-1})

where p(z_t | x) is the sensor model, p(x | z_1, . . . , z_{t-1}) is the
prior, and the denominator is the normaliser.


State estimation example, t = 2


After taking action u2 = push, it senses z2 = open.
bel(X2 = open)
= p(X2 = open | z1 , z2 ) =
p(z2 | X1 = open)p(X1 = open | z1 )
p(z2 | X1 = open)p(X1 = open | z1 ) + p(z2 | X1 = closed)p(X1 = closed | z1 )
0.6 0.75
=
0.6 0.75 + 0.2 0.25
= 0.90

bel(X2 = closed) =

0.2 0.25
= 0.10 = 1 bel(X2 = open)
0.6 0.75 + 0.2 0.25
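The recursion is easy to express in code. A minimal sketch (function and variable names are mine) that applies the measurement update twice, starting from the uniform prior:

```python
p_z_open_given_x = {"open": 0.6, "closed": 0.2}  # p(Z = open | x), from the slides

def measurement_update(prior, likelihood):
    """One step of recursive Bayesian updating: multiply and renormalise."""
    unnormalised = {x: likelihood[x] * prior[x] for x in prior}
    eta = sum(unnormalised.values())  # the normaliser p(z_t | z_{1:t-1})
    return {x: p / eta for x, p in unnormalised.items()}

bel1 = measurement_update({"open": 0.5, "closed": 0.5}, p_z_open_given_x)  # after z1
bel2 = measurement_update(bel1, p_z_open_given_x)                          # after z2
print(round(bel1["open"], 2), round(bel2["open"], 2))  # 0.75 0.9
```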


The Bayes filter algorithm


Given
    the previous belief distribution,
    the latest action,
    and the latest sensor measurement,
compute an updated belief distribution for time t.

    1: function BayesFilter(bel(X_{t-1}), u_t, z_t)
    2:     for all x_t do
    3:         bel⁻(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}    ▷ control update
    4:         bel(x_t) = p(z_t | x_t) bel⁻(x_t) p(z_t)⁻¹                   ▷ measurement update
    5:     end for
    6:     return bel(X_t)
    7: end function
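For a finite state space the integral becomes a sum, and the algorithm fits in a few lines. A sketch for the door example (all identifiers are mine; the model values are from the slides):

```python
SENSOR = {  # p(z | x)
    "open":   {"sense_open": 0.6, "sense_closed": 0.4},
    "closed": {"sense_open": 0.2, "sense_closed": 0.8},
}
ACTION = {  # p(x | u, x_prev)
    ("push", "open"):   {"open": 1.0, "closed": 0.0},
    ("push", "closed"): {"open": 0.8, "closed": 0.2},
    ("null", "open"):   {"open": 1.0, "closed": 0.0},
    ("null", "closed"): {"open": 0.0, "closed": 1.0},
}
STATES = ("open", "closed")

def bayes_filter(bel, u, z):
    # Control update: bel_bar(x) = sum over x_prev of p(x | u, x_prev) bel(x_prev).
    bel_bar = {x: sum(ACTION[(u, xp)][x] * bel[xp] for xp in STATES)
               for x in STATES}
    # Measurement update: bel(x) = eta * p(z | x) * bel_bar(x).
    unnorm = {x: SENSOR[x][z] * bel_bar[x] for x in STATES}
    eta = sum(unnorm.values())
    return {x: p / eta for x, p in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}             # prior belief bel(X_0)
bel = bayes_filter(bel, "null", "sense_open")  # t = 1: do nothing, sense open
print(round(bel["open"], 2))  # 0.75
```

Calling the same function with u = "push" performs the control update from the earlier slides before folding in the next measurement.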


The Bayes filter algorithm explained

The control update comes from the law of total probability:
    for all prior states x_{t-1}, sum up (integrate)
    the product of the prior for x_{t-1}
    and the prob that u_t makes the transition from x_{t-1} to x_t.

The measurement update comes from Bayes rule:
    the prob of getting z_t in x_t,
    times the prior for x_t (after the control update),
    divided by the prior of z_t, in order to make the total mass of
    bel(x_t) equal 1.


Why can't we use the Bayes filter in reality?

Because we can't compute the update rule for continuous state
spaces!
    Because of the integral in the denominator (normaliser) of
    Bayes rule.
    Because of the integral in the control update.

Outro


Summary
Markov assumptions: we don't need the history of all previous
states.
Sensor measurements Z decrease uncertainty, robot actions
U increase uncertainty.
Belief is represented as posterior PDF over possible state
outcomes, conditioned on sensor data and actions.
Bayes rule allows us to compute probabilities that are hard
to assess otherwise.
Under the Markov assumption, recursive Bayesian updating
can be used to efficiently combine evidence.
Bayes filters are a probabilistic tool for estimating the state
of dynamic systems.
The Bayes filter cannot be implemented for realistic,
continuous, state spaces. (The remainder of the course will
discuss approximations.)


Next lecture

Time and space

    10.15-12.00, Wednesday April 11
    T-111

Reading material

    Thrun et al., Chapters 5 and 6
