
EGN 3060c:

Introduction to Robotics
Lecture 7:
Background to Planning;
Planning as Search
Instructor: Dr. Gita Sukthankar
Email: gitars@eecs.ucf.edu
Announcements
  Background reading on planning: Chapter 10
  Lab report due Friday by email (midnight).
  Questions due next Wed (hardcopy in class is
easier).
  For this lab in particular:
  Present a table with the measurements that you
made during lab. Calculate the mean and standard
deviation of the error for both the clockwise and
counterclockwise robot squares (distance only,
angular error not required).
Lab Report Format
General guidelines (also on webcourses):
  1 paragraph (~1/2 page) describing overall
operation of code
  Javadoc style listing of classes and methods with
a text description of what each method does
  A couple of sentences describing any problems
that you experienced doing the lab (which won’t
count toward your grade but will keep me
informed)

Where Planning Fits In…..
[Architecture diagram: Sensing, Localization (following 2 weeks), World Model (toward the end of the semester), and Planning (next 2 weeks)]
How to select a planner?
Can the robot…
  Identify objects and their absolute locations with confidence?
  Determine its absolute location in the world?
  Execute actions reliably?
Does the environment…
  Require a large number of states to describe it (lots of objects, locations, actions)?
  Change in ways that aren’t well described by the robot’s model (exogenous events)?
Localization
  Why is it difficult for the robot to figure out
where it is in the world?
Localization
  Limitations of encoders
  Slippage!
  Limitations of sensing technology

Optical Encoders (absolute)
•  Detecting motor shaft orientation
•  Binary encoding of shaft rotation via light patterns
•  Potential problems?
Gray Code

 #   Binary   Gray
 0   000      000
 1   001      001
 2   010      011
 3   011      010
 4   100      110
 5   101      111
 6   110      101
 7   111      100
 8   1000     1100
 9   1001     1101

Neighboring representations differ by only 1 bit.
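The binary-to-Gray conversion in the table can be computed directly: XOR a value with itself shifted right by one bit. A minimal sketch in Java (class and method names are illustrative, not from the lecture):

```java
public class GrayCode {
    // Standard binary-to-Gray conversion: n XOR (n >> 1).
    static int toGray(int n) {
        return n ^ (n >> 1);
    }

    // Number of bits in which two codes differ
    // (should be exactly 1 for consecutive Gray codes).
    static int bitDifference(int a, int b) {
        return Integer.bitCount(a ^ b);
    }
}
```

This single-bit-change property is what makes Gray code attractive for encoders: a misread during a transition is off by at most one position.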
Absolute Optical Encoders
•  Complexity of distinguishing many different states -- high resolution is expensive!
•  Something simpler?
Relative Encoders (Create)
•  Relative position
[Figure: a light emitter shines through a grating onto a light sensor, feeding decode circuitry; two offset channels give a quadrature encoding, and “A leads B” indicates the direction of rotation]
Probabilistic Kinematics
Key question: We may know where our robot is supposed to be, but in reality it might be somewhere else…
[Figure: from the starting position, wheel velocities VL(t) and VR(t) drive the robot toward the supposed final pose, but there are lots of possibilities for the actual final pose]
What should we do?
MODEL the error in order to reason about it!

Running around in squares
•  Create a program that will run your robot in a square (~2m to a side), pausing after each side before turning and proceeding.
•  For 10 runs, collect both the odometric estimates of where the robot thinks it is and where the robot actually is after each side.
•  You should end up with two sets of 30 angle measurements and 40 length measurements: one set from odometry and one from “ground-truth.”
•  Find the mean and the standard deviation of the differences between odometry and ground truth for the angles and for the lengths: this is the robot’s motion uncertainty model.
[Figure: square path with sides labeled 1-4, from “start” to “end”]
This provides a probabilistic kinematic model.
Robot models: how-to
p( o | r, m )  sensor model        p( rnew | rold, a, m )  action model

(0) Model the physics of the sensors/actuators (with error estimates): theoretical modeling

(1) Measure lots of sensing/action results and create a model from them: empirical modeling
•  take N measurements, find mean (m) and st. dev. (σ) and then use a Gaussian model
•  or, some other easily-manipulated (probability?) model:

   p( x ) = 0 if |x-m| > s          p( x ) = 0 if |x-m| > s
          = 1 otherwise                    = 1 - |x-m|/s otherwise
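The empirical modeling step (take N measurements, compute the mean and standard deviation, fit a Gaussian) can be sketched in Java; class and method names here are illustrative:

```java
public class ErrorModel {
    // Sample mean of N measurements.
    static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    // Sample standard deviation (N - 1 in the denominator).
    static double stdDev(double[] xs) {
        double m = mean(xs);
        double ss = 0;
        for (double x : xs) ss += (x - m) * (x - m);
        return Math.sqrt(ss / (xs.length - 1));
    }

    // Gaussian density with the fitted mean m and std. dev. sigma.
    static double gaussian(double x, double m, double sigma) {
        double z = (x - m) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
    }
}
```

Feeding in the lab's angle and length error measurements would yield exactly the motion uncertainty model described above.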
Types of World Models
Once you know where the robot is (localization), how do you model the world (representation)?
Types of World Models

What does the metric map allow you to do that the topological map does not?
Hint: it’s all about distance!
Old-Style: Using Landmarks
[Figure: the robot’s path moves into a neighborhood (bounded by the neighborhood boundary) and to a distinctive place within the corner]
Use one behavior until the robot sees the distinctive place (exteroceptive cueing), then swap to a landmark localization behavior.
Relational Graphs
•  Convert floorplan into the relational graph before
giving it to the planner
•  Label each edge with the appropriate local control
schema: move through door, follow hall
•  Label each node with the type of gateway: dead-end,
junction, room
•  Search with Dijkstra’s graph search (single-source shortest path)
[Figure: relational graph of a floorplan. Nodes r1-r4 for Rooms 1-4, hall nodes t1-t3, and dead-ends de1-de3, with edges labeled fh (follow hall) and mtd (move through door)]
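A sketch of Dijkstra's single-source shortest-path search over such a relational graph, using an adjacency list keyed by node name (the node names and edge costs in the test are hypothetical, not taken from the floorplan figure; the graph map must contain every node as a key):

```java
import java.util.*;

public class Dijkstra {
    // Single-source shortest path over an adjacency list:
    // graph.get(u) maps each neighbour v to the cost of edge u -> v.
    static Map<String, Double> shortestPaths(
            Map<String, Map<String, Double>> graph, String source) {
        Map<String, Double> dist = new HashMap<>();
        for (String v : graph.keySet()) dist.put(v, Double.POSITIVE_INFINITY);
        dist.put(source, 0.0);

        // Priority queue ordered by current best-known distance.
        PriorityQueue<String> pq =
                new PriorityQueue<>(Comparator.comparingDouble(dist::get));
        pq.add(source);

        while (!pq.isEmpty()) {
            String u = pq.poll();
            for (Map.Entry<String, Double> e : graph.get(u).entrySet()) {
                double alt = dist.get(u) + e.getValue();
                if (alt < dist.get(e.getKey())) {
                    dist.put(e.getKey(), alt);
                    pq.remove(e.getKey()); // re-insert with improved priority
                    pq.add(e.getKey());
                }
            }
        }
        return dist;
    }
}
```

With edge costs reflecting the local control schema (e.g. a longer hall-follow costing more than a door traversal), this yields the cheapest route through the floorplan.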
Occupancy Grids
  Most commonly used metric map representation
  Discretize the world
  Make a relational graph by treating each element as a
node, connecting neighbors (4-connected or 8-
connected)
  Requires more memory than the other
representations
Connectivity

8-point connectivity will generate diagonal shortest paths


which might not actually be traversable by the robot.
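Neighbor generation for an occupancy grid under 4- and 8-connectivity can be sketched as follows (the class and grid representation are illustrative; `true` marks an occupied cell):

```java
import java.util.*;

public class GridNeighbors {
    // Offsets for 4-connectivity (N, S, W, E), plus the 4 diagonals
    // that 8-connectivity adds.
    static final int[][] CARDINAL = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};
    static final int[][] DIAGONAL = {{-1, -1}, {-1, 1}, {1, -1}, {1, 1}};

    // Return the traversable neighbours of cell (r, c).
    static List<int[]> neighbors(boolean[][] occupied, int r, int c,
                                 boolean eightConnected) {
        List<int[]> result = new ArrayList<>();
        List<int[]> offsets = new ArrayList<>(Arrays.asList(CARDINAL));
        if (eightConnected) offsets.addAll(Arrays.asList(DIAGONAL));
        for (int[] d : offsets) {
            int nr = r + d[0], nc = c + d[1];
            // Keep the neighbour only if it is in bounds and free.
            if (nr >= 0 && nr < occupied.length
                    && nc >= 0 && nc < occupied[0].length
                    && !occupied[nr][nc]) {
                result.add(new int[]{nr, nc});
            }
        }
        return result;
    }
}
```

A diagonal move between two occupied cells would still be generated by this sketch, which is exactly the traversability caveat noted above; a practical planner would add a corner-cutting check.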
Types of Planners
  Planning with a map
  Search-based (A*)
  Wavefront
  Navigation without a map
  Bug planner
  Potential fields
  Other issues:
  Non-navigational planning
  Repairing plans
  Repeated Forward A*
  D* Lite
  High-dimensional spaces
  Planning for non-navigational tasks
Planning as Search
  Find a sequence/set of actions that can
transform an initial state of the world to a
goal state

  Planning involves search through a search space
  How to represent the search space? (occupancy
grid)
  How to conduct the search? (search heuristics)
  How to evaluate the solutions? (cost functions)
Search Recap
  A sampling of search strategies … there are many more ….

  Uninformed search: breadth-first search, uniform-cost search, depth-first search
  Informed search: greedy best-first search, A* search

To use informed search, you need to have some way of evaluating the nearness of a state to the goal state (possible with metric representations)
Search State Space
  A state space is a graph, (V, E), where V is a
set of nodes and E is a set of arcs, where each
arc is directed from one node to another node
  Each node is a data structure that contains a
state description plus other information
relevant to your search algorithm
  Each arc corresponds to a transition from a
node to its successor
  Each arc has a fixed, positive cost associated
with that transition
Search State Space

  Each node has a set of successor nodes


  The process of expanding a node means to generate all
of the successor nodes and add them and their
associated arcs to the state-space graph
  One or more nodes are designated as start nodes
  A goal test determines if a node is a goal node
  A solution is a path in a state space from a start node to
a goal node
  The cost of a solution is the sum of the arc costs on the
solution path
Once we have our state space (and action
space, and cost function...)
●  Perform tree-based search
–  Construct the root of the tree as the start state, and give it
value 0
–  While there are unexpanded leaves in the tree
●  Find the leaf x with the lowest value

●  For each action, create a new child leaf of x

●  Set the value of each child as:

g(x) = g(parent(x))+c(parent(x),x)
where c(x, y) is the cost of moving from
x to y (distance, in this case)
Planning by Searching a Tree
[Figure sequence over several slides: the search tree grows step by step as nodes are expanded, one expansion per slide]
Simple Search Algorithm
Let Q be a list of partial paths,
Let S be the start node and
Let G be the Goal node.
1.  Initialize Q with partial path (S) as only entry; set Visited = {}
2.  If Q is empty, fail. Else, pick some partial path N from Q
3.  If head(N) = G, return N (goal reached!)
4.  Else
a)  Remove N from Q
b)  Find all children of head(N) not in Visited and create all the one-step extensions of N to each child.
c)  Add to Q all the extended paths
d)  Add children of head(N) to Visited
e)  Go to step 2.

[Worked example, built up over three slides: a table traces Q and Visited on a grid. Starting with Q = {(3, 11)} and Visited = {}, the first expansion leaves Q holding paths headed by (2, 11), (2, 10), (3, 10), … with Visited = {(3, 11)}; a later expansion leaves Q holding paths headed by (1, 9), (2, 9), (3, 9), … with Visited = {(3, 11), (2, 10), …}]
Simple Search Algorithm
public class Search {
    static Path search(State start, State goal) {
        Deque<Path> q = new ArrayDeque<Path>();
        HashSet<State> visited = new HashSet<State>();
        q.add(new Path(start));
        while (!q.isEmpty()) {
            Path partialPath = q.pop();
            State head = partialPath.head();
            if (head.matches(goal))
                return partialPath; // Goal reached!
            for (int i = 0; i < head.numNeighbours(); i++) {
                if (visited.contains(head.neighbour(i)))
                    continue;
                // Create a new path to a node we haven’t seen before
                // by adding the neighbour of the head to the current path
                Path extension = new Path(head.neighbour(i), partialPath);
                visited.add(head.neighbour(i));
                q.push(extension);
            }
        } // End of while (!q.isEmpty())
        return null; // No path found
    }
}

Java has lots of in-built data structures (e.g. priority queues) that make the
problem of implementing the tree-based search relatively easy
Move Generation
●  How to determine the lowest-cost child to
consider next?
●  Shallowest next
–  aka: Breadth-first search
–  Guaranteed shortest
–  Storage intensive
●  Deepest next
–  aka: Depth-first search
–  Can be storage cheap
–  No optimality guarantees
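The difference between the two strategies above comes down to which end of the frontier supplies the next node: FIFO gives breadth-first, LIFO gives depth-first. A minimal sketch (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Frontier {
    // Pick the next node to expand.
    // Breadth-first takes the shallowest (oldest) entry: FIFO.
    // Depth-first takes the deepest (newest) entry: LIFO.
    static String next(Deque<String> frontier, boolean depthFirst) {
        return depthFirst ? frontier.pollLast() : frontier.pollFirst();
    }
}
```

The same queue structure serves both strategies; only the removal end changes.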
Informed Search – A*
●  Use domain knowledge to bias the search
●  Favour actions that might get closer to the goal
●  Each state gets a value
       f(x) = g(x) + h(x)
   where g(x) is the cost incurred so far from the start state, and h(x) is the estimated (“heuristic”) cost from here to the goal
●  For example
–  g(x) = 3
–  h(x) = ||x − g|| = sqrt(8² + 18²) = 19.7
–  f(x) = 22.7
Informed Search – A*
●  Each state gets a value f(x) = g(x) + h(x)
●  Another example
–  g(x) = 4
–  h(x) = ||x − g|| = sqrt(11² + 18²) = 21.1
–  f(x) = 25.1
How to choose heuristics
●  The closer h(x) is to the true cost to the goal,
h*(x), the more efficient your search
BUT
●  h(x) ≤ h*(x) to guarantee that A* finds the
lowest-cost path
●  In this case, h is an “admissible” heuristic
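The f(x) = g(x) + h(x) computation with a Euclidean heuristic, which is admissible on a metric map since no path can be shorter than the straight line, can be sketched as follows (class and method names are illustrative):

```java
public class AStarCost {
    // Straight-line (Euclidean) distance from (x, y) to the goal (gx, gy):
    // an admissible heuristic, since it never overestimates the true cost.
    static double h(double x, double y, double gx, double gy) {
        return Math.hypot(gx - x, gy - y);
    }

    // A* node value: cost incurred so far plus the heuristic estimate.
    static double f(double g, double x, double y, double gx, double gy) {
        return g + h(x, y, gx, gy);
    }
}
```

With the slide's numbers (g = 3 and a goal offset of 8 and 18 cells), this reproduces h ≈ 19.7 and f ≈ 22.7.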
Once the search is done, and we have found the
goal
●  We have a tree that contains a path from the start
(root) to the goal (some leaf)
●  Follow the parent pointers in the tree and trace back
from the goal to the root, keeping track of which
states you pass through
●  This set of states constitutes your plan

●  To execute the plan, use your


controller to face the first state in
the plan, and then drive to it
●  Once at the state, face and drive
to the next state
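Tracing back through the parent pointers can be sketched as follows, with the tree stored as a child-to-parent map (representation and names are illustrative):

```java
import java.util.*;

public class PlanExtraction {
    // Walk parent pointers from the goal back to the root (whose parent
    // is absent from the map), then reverse to get a start-to-goal plan.
    static List<String> extractPlan(Map<String, String> parent, String goal) {
        List<String> plan = new ArrayList<>();
        for (String s = goal; s != null; s = parent.get(s)) {
            plan.add(s);
        }
        Collections.reverse(plan);
        return plan;
    }
}
```

The resulting list is exactly the sequence of states the controller then faces and drives to, one after another.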
A problem with plans
●  We have a plan that
gets us from the
start to the goal
●  What happens if we
take an action that
causes us to leave
the plan?

1)  It’s a problem with planners!


We should use reactive behaviors!
2)  We can replan
3)  We can keep a cached conditional plan
4)  We can keep a policy
Progression vs. Regression
●  Progression (forward-chaining):
–  Choose action whose preconditions are satisfied
–  Continue until goal state is reached
●  Regression (backward-chaining):
–  Choose action that has an effect that matches an unachieved
subgoal
–  Add unachieved preconditions to set of subgoals
–  Continue until set of unachieved subgoals is empty
●  Progression: + Simple algorithm (“forward simulation”)
–  Often large branching factor
●  Regression: + Focused on achieving goals
–  Need to reason about actions
–  Regression is incomplete, in general, for functional effects
Design Choices
●  How is your map described? This may have an impact on the
state space for your planner
–  Is it a grid map?
–  Is it a list of polygons?
–  The critical choice for motion planning is state space
–  The other choices tend to affect computational performance, not
robot performance
●  What kind of controller do you have?
–  Do you just have controllers on distance and orientation?
–  Do you have behaviours that will let you do things like follow
walls?
●  What do you care about?
–  The shortest path?
–  The fastest path?
●  What kind of search to use?
–  Do you have a good heuristic?
–  If so, then maybe A* is a good idea.
Evaluating Search Algorithms
●  Completeness
–  Guarantees finding a solution whenever one exists
●  Time Complexity
–  How long (worst or average case) does it take to find a
solution? Usually measured in terms of the number of nodes
expanded.
●  Space Complexity
–  How much memory is used by the algorithm? Usually
measured in terms of the maximum size that the "nodes" list
becomes during the search.
●  Optimality
–  If a solution is found, is it guaranteed to be an optimal one?
That is, is it the one with minimum cost?
Tips about FSAs
  States: correspond to robot behavior modes
  Directed links between states: triggered by
sensory events or internal state variables (e.g. a
counter)
  Directed links can loop back to the same state
  Termination state is useful to include
  Consider all possible state transitions
