
Artificial Intelligence
ICS461, Fall 2010
Nancy E. Reed
nreed@hawaii.edu

Outline – Beyond Classical Search

▪ Informed Searches
  • Best-first search
  • Greedy best-first search
  • A* search
  • Heuristics
▪ Local search algorithms
  • Hill-climbing search
  • Simulated annealing search
  • Local beam search
▪ Genetic algorithms
▪ Chapter 4

Review: Tree search

▪ A search strategy is defined by picking the order of node expansion
▪ Breadth-first
▪ Depth-first
▪ Iterative deepening

Implementation of search algorithms

Function General-Search(problem, Queuing-Fn) returns a solution, or failure
  nodes ← make-queue(make-node(initial-state[problem]))
  loop do
    if nodes is empty then return failure
    node ← Remove-Front(nodes)
    if Goal-Test[problem] applied to State(node) succeeds then return node
    nodes ← Queuing-Fn(nodes, Expand(node, Operators[problem]))
  end

Queuing-Fn(queue, elements) is a queuing function that inserts a set of elements into the queue and determines the order of node expansion. Varieties of the queuing function produce varieties of the search algorithm.
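A minimal Common Lisp sketch of this general loop, parameterized by a queuing function. The function names here (general-search, fifo-queuing-fn, and the goal-state-p and expand arguments) are illustrative assumptions, not part of the course code:

;; Sketch only: a generic search loop parameterized by a queuing function.
;; GOAL-STATE-P and EXPAND are assumed helpers supplied by the caller.
(defun general-search (start goal-state-p expand queuing-fn)
  "Return a goal state reachable from START, or NIL on failure."
  (do ((nodes (list start)))         ; the fringe, initially just the start state
      ((null nodes) nil)             ; empty fringe means failure
    (let ((node (pop nodes)))        ; remove the front node
      (if (funcall goal-state-p node)
          (return node)              ; success
          (setq nodes (funcall queuing-fn nodes (funcall expand node)))))))

;; Breadth-first behavior: append new nodes at the back (FIFO).
(defun fifo-queuing-fn (nodes new-nodes)
  (append nodes new-nodes))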

Breadth First Search

Enqueue nodes in FIFO (first-in, first-out) order.

Intuition: Expand all nodes at depth i before expanding nodes at depth i + 1.
• Complete? Yes.
• Optimal? Yes, if path cost is a nondecreasing function of depth.
• Time Complexity: O(b^d)
• Space Complexity: O(b^d); note that every node in the fringe is kept in the queue.

Uniform Cost Search

Enqueue nodes in order of cost.

Intuition: Expand the cheapest node, where the cost is the path cost g(n).
• Complete? Yes.
• Optimal? Yes, if path cost is a nondecreasing function of depth.
• Time Complexity: O(b^d)
• Space Complexity: O(b^d); note that every node in the fringe is kept in the queue.

Note that breadth-first search can be seen as a special case of uniform cost search, where the path cost is just the depth.

Depth First Search

Enqueue nodes in LIFO (last-in, first-out) order.

Intuition: Expand the node at the deepest level (breaking ties left to right).
• Complete? No (Yes on finite trees with no loops).
• Optimal? No
• Time Complexity: O(b^m), where m is the maximum depth.
• Space Complexity: O(bm), where m is the maximum depth.

Depth-Limited Search

Enqueue nodes in LIFO (last-in, first-out) order, but limit depth to L (L is 2 in the slide's example).

Intuition: Expand the node at the deepest level, but limit depth to L.
• Complete? Yes, if there is a goal state at a depth less than L.
• Optimal? No
• Time Complexity: O(b^L), where L is the cutoff.
• Space Complexity: O(bL), where L is the cutoff.

Picking the right value for L is difficult. Suppose we chose 7 for FWGC; we would fail to find a solution...

Iterative Deepening Search I

Do depth-limited search starting at L = 0, and keep incrementing L by 1.

Intuition: Combine the optimality and completeness of breadth-first search with the low space complexity of depth-first search.
• Complete? Yes
• Optimal? Yes
• Time Complexity: O(b^d), where d is the depth of the solution.
• Space Complexity: O(bd), where d is the depth of the solution.

Iterative Deepening Search II

Iterative deepening looks wasteful because we re-explore parts of the search space many times...

Consider a problem with a branching factor of 10 and a solution at depth 5.

A single search to depth 5 expands 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111 nodes.

Iterative deepening expands
1
1 + 10
1 + 10 + 100
1 + 10 + 100 + 1,000
1 + 10 + 100 + 1,000 + 10,000
1 + 10 + 100 + 1,000 + 10,000 + 100,000
= 123,456 nodes in total, only slightly more.
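A small sketch of the idea in Common Lisp, assuming a depth-limited-search helper that returns a solution or NIL (both names are illustrative, not from the course code):

;; Sketch only: iterative deepening as repeated depth-limited search.
;; DEPTH-LIMITED-SEARCH is an assumed helper returning a solution or NIL.
(defun iterative-deepening-search (start &optional (max-depth 50))
  "Run depth-limited search with limits 0, 1, 2, ... until a solution is found."
  (do ((limit 0 (1+ limit)))
      ((> limit max-depth) nil)          ; give up past MAX-DEPTH
    (let ((result (depth-limited-search start limit)))
      (when result
        (return result)))))              ; the first solution found is the shallowest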
Example – Problem Design

▪ State-space search problem
  • World can be in one of several states
  • Moves can change the state of the world
▪ Design/create data structures to represent the "world"
▪ Design/create functions representing "moves" in the world

Example: Farmer, Wolf, Goat, and Cabbage Problem

▪ A farmer with his wolf, goat, and cabbage comes to the edge of a river they wish to cross.
▪ There is a boat at the river's edge, but, of course, only the farmer can row.
▪ The boat can carry a maximum of the farmer and one other animal/vegetable.
▪ If the wolf is ever left alone with the goat, the wolf will eat the goat.
▪ Similarly, if the goat is left alone with the cabbage, the goat will eat the cabbage.
▪ Devise a sequence of crossings of the river so that all four of them arrive safely on the other side of the river.

Code in sssearch directory

▪ http://www2.hawaii.edu/~nreed/lisp/sssearch/

FWGC State Creation and Access

▪ Functions to create and access states

;; Use a list to represent the state of the world,
;; i.e. the side of the river that each of the 4
;; characters is on.  East and West are the sides.

(defun make-state (f w g c) (list f w g c))

(defun farmer-side (state)
  (nth 0 state))   ; or (car state)

(defun wolf-side (state)
  (nth 1 state))   ; or (cadr state)

(defun goat-side (state)
  (nth 2 state))   ; or (caddr state)

(defun cabbage-side (state)
  (nth 3 state))   ; or (cadddr state)

FWGC Start, Goal and Moves

▪ Global variables

;; Represent start and goal states and the list of possible moves

; Start state *start*
(setq *start* (make-state 'e 'e 'e 'e))

; Goal state *goal*
(setq *goal* (make-state 'w 'w 'w 'w))

; Possible moves *moves*
(setq *moves* '(farmer-takes-self farmer-takes-wolf
                farmer-takes-goat farmer-takes-cabbage))

Utility Functions

▪ Determine the opposite side and whether a state is safe

;; Note – these are very simple, however they make it easy to
;; change – e.g. north and south instead of east and west.

(defun opposite (side)          ; Return the opposite side of the river
  (cond ((equal side 'e) 'w)
        ((equal side 'w) 'e)))
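The safe predicate itself is cut off on the slide; a plausible sketch using the accessors above (this is a reconstruction, not the original course code):

;; Sketch only: a state is unsafe if the goat is left with the wolf, or with
;; the cabbage, without the farmer.  Returns the state itself when safe, else
;; NIL, which is how the move functions below use it.
(defun safe (state)
  (cond ((and (equal (goat-side state) (wolf-side state))      ; wolf eats goat
              (not (equal (farmer-side state) (wolf-side state))))
         nil)
        ((and (equal (goat-side state) (cabbage-side state))   ; goat eats cabbage
              (not (equal (farmer-side state) (goat-side state))))
         nil)
        (t state)))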

Move Function Code

(defun farmer-takes-self (state)
  (cond
    ((safe (make-state (opposite (farmer-side state))  ; if safe, move
                       (wolf-side state)
                       (goat-side state)
                       (cabbage-side state))))         ; return new state
    (t nil)))                                          ; otherwise return nil

(defun farmer-takes-wolf (state)
  (cond
    ((equal (farmer-side state) (wolf-side state))
     (safe (make-state (opposite (farmer-side state))  ; if safe, move
                       (opposite (wolf-side state))
                       (goat-side state)
                       (cabbage-side state))))         ; return new state
    (t nil)))                                          ; otherwise return nil

(Slide diagrams: before farmer-takes-self, everybody/everything is on the same side of the river (F W G C together); after farmer-takes-wolf, the wolf has somehow been moved to the other side (F G C | W).)
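The remaining two move functions follow the same pattern; for example, a sketch of farmer-takes-goat (farmer-takes-cabbage is analogous):

;; Sketch only: move the farmer and the goat if they are on the same side
;; and the resulting state is safe; otherwise return NIL.
(defun farmer-takes-goat (state)
  (cond
    ((equal (farmer-side state) (goat-side state))
     (safe (make-state (opposite (farmer-side state))
                       (wolf-side state)
                       (opposite (goat-side state))
                       (cabbage-side state))))         ; return new state (or NIL)
    (t nil)))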

Search Tree for "Farmer, Wolf, Goat, Cabbage"

(Figure slides: successive portions of the search tree, with states marked as Illegal State, Repeated State, or Goal State.)

The solution path found in the tree:
1. Farmer takes goat to left bank
2. Farmer returns alone
3. Farmer takes wolf to left bank
4. Farmer returns with goat
5. Farmer takes cabbage to left bank
6. Farmer returns alone
7. Farmer takes goat to left bank
Success! Goal state reached.

Summary – Uninformed Search

The search techniques we have seen so far...
• Breadth first search
• Uniform cost search
• Depth first search
• Depth limited search
• Iterative deepening
...are all uninformed (blind) search, and all too slow for most real-world problems.

Informed Search Methods

▪ Heuristics
▪ Best-first search
▪ Greedy best-first search
▪ A* search

Heuristic Search (informed)

• A heuristic is a function that, when applied to a state, returns a number that is an estimate of the merit of the state with respect to the goal. We can use this knowledge of the relative merit of states to guide search.
• In other words, the heuristic tells us approximately how far the state is from the goal state*.
• Note we said "approximately". Heuristics might underestimate or overestimate the merit of a state. But for reasons we will see, heuristics that only underestimate are very desirable, and are called admissible.
* I.e., smaller numbers are better.

Best-first search

▪ Idea: use an evaluation function f(n) for each node
  • estimate of "desirability"; expand the most desirable unexpanded node
▪ Implementation: order the nodes in the open list (fringe) in decreasing order of desirability
▪ Special cases:
  • greedy best-first search
  • A* search
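A minimal sketch of the best-first queuing idea in Common Lisp: merge the new nodes into the fringe and keep it sorted by an evaluation function F. The names are illustrative; F would be h(n) for greedy search or g(n) + h(n) for A*:

;; Sketch only: keep the fringe sorted so the most desirable node
;; (lowest F value) is always at the front.
(defun best-first-queuing-fn (nodes new-nodes f)
  "Insert NEW-NODES into NODES, ordered by increasing value of F."
  (sort (append nodes new-nodes) #'< :key f))

;; Hypothetical use: expand the front of the fringe, then re-queue.
;; (setq fringe (best-first-queuing-fn (rest fringe)
;;                                     (expand (first fringe))
;;                                     #'my-heuristic))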

Some states may appear better than others...

(Figure: an 8-puzzle state such as 7 8 4 / 3 5 1 / 6 2 _ compared with the goal 1 2 3 / 4 5 6 / 7 8 _, and two FWGC river states; some states are clearly closer to the goal than others.)

Greedy best-first search

▪ Evaluation function f(n) = h(n) (heuristic)
▪ = estimate of cost from n to goal
▪ e.g., hSLD(n) = straight-line distance from n to Bucharest
▪ Greedy best-first search expands the node that appears to be closest to the goal

Example: Romania

▪ You're in Romania, on vacation in Arad.
▪ Flight leaves tomorrow from Bucharest.
▪ Formulate goal: be in Bucharest
▪ Formulate problem: states: various cities; operators: drive between cities
▪ Find solution: sequence of cities, such that total driving distance is minimized.

Romania Map -- Distances Between Cities

(Figure: road map of Romania with driving distances; Start = Arad, Goal = Bucharest.)

Romania Straight-Line Distances to Bucharest

(Figure: table of straight-line distances hSLD from each city to Bucharest.)

Greedy best-first search example

(Figure slides: successive expansions from Arad, always choosing the city with the smallest straight-line distance to Bucharest.)

Properties of Greedy Best-First Search

▪ Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...
▪ Time? O(b^m), but a good heuristic can give dramatic improvement
▪ Space? O(b^m) -- keeps all nodes in memory
▪ Optimal? No

The A* Search Algorithm ("A-Star")

▪ Idea: avoid expanding paths that are already expensive
▪ Keep track of a) the cost to get to the node as well as b) the cost to the goal
▪ Evaluation function f(n) = g(n) + h(n)
▪ g(n) = cost so far to reach n
▪ h(n) = estimated cost from n to goal
▪ f(n) = estimated total cost of path through n to goal

The A* Algorithm

f(n) is the estimated cost of the cheapest solution that goes through node n.

Use the general search algorithm with a priority queue queuing strategy.

If the heuristic is optimistic, that is to say, it never overestimates the distance to the goal, then A* is optimal and complete!
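A small sketch of the A* evaluation in Common Lisp. It assumes, purely for illustration, that a node is represented as a list (state g-cost path) and that an h function such as heuristic-eval from the n-puzzle code later in these slides is available:

;; Sketch only: f(n) = g(n) + h(n) for a node represented as (state g-cost path).
(defun node-state (node) (first node))
(defun node-g     (node) (second node))

(defun a-star-f (node goal)
  "Estimated total cost of the cheapest solution through NODE."
  (+ (node-g node)                              ; g(n): cost so far
     (heuristic-eval (node-state node) goal)))  ; h(n): estimated cost to goal

;; A* is then best-first search that orders the fringe by A-STAR-F.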

Romania Map Starting in Arad

(Figure: the Romania road map with Start = Arad and Goal = Bucharest.)

A* search example

(Figure slides: successive A* expansions from Arad, ordering the fringe by f(n) = g(n) + h(n) until Bucharest is reached.)

Example: 8-puzzle

(Figure: a start state and the goal state of the 8-puzzle.)

▪ State: integers on each tile
▪ Operators: moving blank left, right, up, down (move must be possible)
▪ Goal test: does state match goal state?
▪ Path cost: 1 per move

Encapsulating state information in nodes

(Figure slide.)

;;; HEURISTIC evaluation functions for the n-puzzle problem

; This evaluation function can be used in the absence of a better one.
; It looks at the elements of a state and counts how many differ
; between that state and the goal.
; HEURISTIC-EVAL-0 - Return the number of tiles out of place.
(defun heuristic-eval-0 (state goal)
  (cond ((null state) 0)                         ; all tiles processed
        ((equal (car state) *blank*)             ; the blank doesn't count as a tile
         (heuristic-eval-0 (cdr state) (cdr goal)))   ; return value of rest
        ((equal (car state) (car goal))          ; the tiles are the same
         (heuristic-eval-0 (cdr state) (cdr goal)))   ; return value of rest
        (t                                       ; otherwise, add 1 (for this tile)
         (1+ (heuristic-eval-0 (cdr state) (cdr goal)))))) ; to value of rest
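A hypothetical usage example, assuming a 3x3 puzzle stored as a flat list with 0 as *blank* (these particular values are illustrative only, not from the course code):

;; Sketch only: with the blank represented as 0 and states as flat lists.
(setq *blank* 0)
(setq *n* 3)

(heuristic-eval-0 '(2 8 3 1 6 4 7 0 5)     ; a sample start state
                  '(1 2 3 8 0 4 7 6 5))    ; a sample goal state
;; => 4  (tiles 2, 8, 1, and 6 are out of place; the blank is not counted)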

Heuristic Evaluation Function #1

;; HEURISTIC-EVAL-1 - Return the sum of the distances tiles are out of place.
; A better heuristic for the n-puzzle problem than # of tiles out of place.
(defun heuristic-eval-1 (state goal)
  "Sum the distance for each tile from its goal position."
  (do                                       ; loop adding distances
      ((sum 0) (tile 1 (+ tile 1)))         ; for tiles 1 to (n^2 - 1)
      ((equal tile (* *n* *n*)) sum)        ; return sum when done
    (setq sum (+ sum (distance tile state goal)))))

Heuristic Evaluation Function #2

;; HEURISTIC-EVAL-2 - Return the sum of the distances out of place
;; plus two times the number of direct tile reversals.
(defun heuristic-eval-2 (state goal)
  "Sum of distances plus 2 times the number of tile reversals"
  (+ (heuristic-eval-1 state goal)
     (* 2 (tile-reversals state goal))))

; Call the desired heuristic here from heuristic-eval
(defun heuristic-eval (state goal)
  (heuristic-eval-2 state goal))

; TILE-REVERSALS - Return the number of tiles directly reversed between state and goal.
(defun tile-reversals (state goal)
  "Calculate the number of tile reversals between two states"
  (+
   (do                                      ; loop checking row adjacencies
       ((sum 0) (i 0 (1+ i)))
       ((equal i *n*) sum)                  ; until i = n-1, return sum
     (do ((j 0 (1+ j)))
         ((equal j (- *n* 1)))              ; until j = n-2, add to sum
       (setq sum
             (+ sum
                (tile-reverse (+ (* i *n*) j) (+ (* i *n*) j 1)
                              state goal)))))
   (do                                      ; loop checking column adjacencies
       ((sum 0) (i 0 (1+ i)))
       ((equal i (- *n* 1)) sum)            ; until i = n-2, return sum
     (do ((j 0 (1+ j)))
         ((equal j *n*))                    ; until j = n-1, add to sum
       (setq sum
             (+ sum
                (tile-reverse (+ (* i *n*) j) (+ (* (1+ i) *n*) j)
                              state goal)))))))

; TILE-REVERSE - Return 1 if the tiles are reversed in the two states, else
; return 0.  If one of the tiles is the blank, it doesn't count (return 0).
(defun tile-reverse (pos1 pos2 state1 state2)
  "Return 1 if the tiles in the two positions are reversed, else 0"
  (cond
    ((or
      (equal (nth pos1 state1) *blank*)     ; blanks don't count
      (equal (nth pos2 state1) *blank*))
     0)                                     ; return 0
    ((and                                   ; the tiles are reversed
      (equal (nth pos1 state1) (nth pos2 state2))
      (equal (nth pos2 state1) (nth pos1 state2)))
     1)                                     ; return 1
    (t 0)))                                 ; else return 0

Distance Function

;; DISTANCE - calculate the distance a tile is from its goal position.
(defun distance (tile state goal)
  "Calculate the Manhattan distance a tile is from its goal position."
  (+
   (abs (- (row tile state)
           (row tile goal)))
   (abs (- (column tile state)
           (column tile goal)))))

8 Puzzle Problem

(Figure slide.)
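The row and column helpers used by distance are not shown; a plausible sketch for a flat-list state representation (this is a reconstruction, not the original course code):

;; Sketch only: with states as flat lists of length n*n,
;; a tile's row and column follow from its position in the list.
(defun row (tile state)
  "Row index (0-based) of TILE in STATE."
  (floor (position tile state) *n*))

(defun column (tile state)
  "Column index (0-based) of TILE in STATE."
  (mod (position tile state) *n*))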

Admissible Heuristics

▪ A heuristic h(n) is admissible if for every node n,
  h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
▪ An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic
▪ Example: hSLD(n) (never overestimates the actual road distance)
▪ Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal

Consistent Heuristics

▪ A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
  h(n) ≤ c(n,a,n') + h(n')
▪ If h is consistent, we have
  f(n') = g(n') + h(n')
        = g(n) + c(n,a,n') + h(n')
        ≥ g(n) + h(n)
        = f(n)
▪ i.e., f(n) is non-decreasing along any path.
▪ Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal

Optimality of A*

▪ A* expands nodes in order of increasing f value
▪ Gradually adds "f-contours" of nodes
▪ Contour i has all nodes with f = fi, where fi < fi+1

Optimality of A* (proof)

▪ Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
▪ f(G2) = g(G2) since h(G2) = 0
▪ g(G2) > g(G) since G2 is suboptimal
▪ f(G) = g(G) since h(G) = 0
▪ f(G2) > f(G) from above

Optimality of A* (proof, continued)

▪ Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
▪ f(G2) > f(G) from above
▪ h(n) ≤ h*(n) since h is admissible
▪ g(n) + h(n) ≤ g(n) + h*(n)
▪ f(n) ≤ f(G)
▪ Hence f(G2) > f(n), and A* will never select G2 for expansion

Properties of A*

▪ Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
▪ Time? Exponential
▪ Space? Keeps all nodes in memory
▪ Optimal? Yes

Admissible heuristics

E.g., for the 8-puzzle:
▪ h1(n) = number of misplaced tiles
▪ h2(n) = total Manhattan distance
  (i.e., no. of squares from desired location of each tile)

(Figure: an example start state S and the goal state.)
▪ h1(S) = ? 8
▪ h2(S) = ? 3+1+2+2+2+3+3+2 = 18

Dominance

▪ If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1
▪ h2 is better for search
▪ Typical search costs (average number of nodes expanded):
  • d = 12:  IDS = 3,644,035 nodes;  A*(h1) = 227 nodes;  A*(h2) = 73 nodes
  • d = 24:  IDS = too many nodes;  A*(h1) = 39,135 nodes;  A*(h2) = 1,641 nodes

Relaxed problems

▪ A problem with fewer restrictions on the actions is called a relaxed problem
▪ The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
▪ If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution
▪ If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution

Local search algorithms

▪ In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
▪ State space = set of "complete" configurations
▪ Find a configuration satisfying constraints, e.g., n-queens
▪ In such cases, we can use local search algorithms
▪ keep a single "current" state, try to improve it

Example: n-queens

▪ Put n queens on an n × n board with no two queens on the same row, column, or diagonal

Hill-Climbing Search

▪ "Like climbing Everest in thick fog with amnesia"
▪ Problem: depending on initial state, can get stuck in local maxima

Hill-Climbing Search: 8-Queens Problem

▪ h = number of pairs of queens that are attacking each other, either directly or indirectly
▪ h = 17 for the state shown on the slide
▪ A local minimum with h = 1
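A minimal hill-climbing loop in Common Lisp; the successors and value arguments are illustrative assumptions, not from the course code:

;; Sketch only: steepest-ascent hill climbing.
;; SUCCESSORS returns the neighboring states; VALUE is the objective to maximize.
(defun hill-climbing (state successors value)
  (loop
    (let* ((neighbors (funcall successors state))
           (best (and neighbors
                      (reduce (lambda (a b)
                                (if (> (funcall value a) (funcall value b)) a b))
                              neighbors))))
      (if (or (null best)
              (<= (funcall value best) (funcall value state)))
          (return state)           ; no uphill move: local maximum (or plateau)
          (setq state best)))))    ; move to the best neighbor and continue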

Simulated Annealing Search

▪ Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency

Properties of Simulated Annealing Search

▪ One can prove: If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
▪ Widely used in VLSI layout, airline scheduling, etc.
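The core acceptance rule, sketched in Common Lisp (the function name and the delta/temperature framing are illustrative assumptions):

;; Sketch only: always accept an uphill move; accept a downhill move of size
;; DELTA with probability e^(delta/T), which shrinks as the temperature T drops.
(defun accept-move-p (delta temperature)
  (or (> delta 0)                                    ; "good" move: always accept
      (< (random 1.0) (exp (/ delta temperature))))) ; "bad" move: sometimes accept

;; Example: a worsening move (delta = -2) at T = 10 is accepted with
;; probability e^(-0.2) ≈ 0.82; at T = 0.5 only with probability e^(-4) ≈ 0.02.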

Local Beam Search

▪ Keep track of k states rather than just one
▪ Start with k randomly generated states
▪ At each iteration, all the successors of all k states are generated
▪ If any one is a goal state, stop; else select the k best successors from the complete list and repeat.

Genetic Algorithms

▪ A successor state is generated by combining two parent states
▪ Start with k randomly generated states (population)
▪ A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
▪ Evaluation function (fitness function). Higher values for better states.
▪ Produce the next generation of states by selection, crossover, and mutation

Genetic Algorithms

(Figure: an 8-queens example population with fitness values 24, 23, 20, and 11.)
▪ Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
▪ 24/(24+23+20+11) = 31%
▪ 23/(24+23+20+11) = 29% etc.

Summary

▪ Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored
▪ Informed Searches
  • Best-first search
  • Greedy best-first search
  • A* search
  • Heuristics
▪ Local search algorithms
  • Hill-climbing search
  • Simulated annealing search
  • Local beam search
▪ Genetic algorithms
▪ Chapter 4
