
Q. Write in brief about the AI techniques which are required for solving problems.
Ans: Definition of AI: Artificial intelligence is the study of how to make computers do things which, at the moment, people do better.
The word 'artificial' itself indicates that the thing is not natural, i.e. it is designed by a designer; the task of applying such a design to a computer, to make it an intelligent one, is artificial intelligence.
AI Techniques:
1) Knowledge captures generalizations. In other words, it is not necessary to represent each individual situation separately. Instead, situations that share important properties are grouped together. If knowledge does not have this property, an inordinate amount of memory and updating will be required; we would then call it data rather than knowledge.
2) It can be understood by the people who must provide it, although for many programs the bulk of the data can be acquired automatically (for example, by taking readings from a variety of instruments).
3) It can easily be modified to correct errors and to reflect changes in the world and in our world view.
4) It can be used in a great many situations even if it is not totally accurate or complete.
5) It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.
To solve a problem we have to do the following:
1) Define the problem precisely. This definition must include precise specifications of what the initial situation(s) will be as well as what final situations constitute acceptable solutions to the problem.
2) Analyze the problem. A few very important features can have an immense impact on the appropriateness of various possible techniques for solving the problem.
3) Isolate and represent the task knowledge that is necessary to solve the problem.
4) Choose the best problem-solving technique(s) and apply it (them) to the particular problem.
Q2. Explain what the AI problem characteristics are. Discuss in brief.
Ans: The following points show the AI problem characteristics:
1) Is the problem decomposable into a set of (nearly) independent smaller or easier sub-problems?
2) Can solution steps be ignored or at least undone if they prove unwise?
3) Is the problem's universe predictable?
4) Is a good solution to the problem obvious without comparison to all other possible solutions?
5) Is the desired solution a state of the world or a path to a state?
6) Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?
7) Can a computer that is simply given the problem return the solution, or will the solution of the problem require interaction between the computer and a person?
Q3. Illustrate the decomposable technique for problem solving.
Ans: Suppose we want to solve the problem of computing the expression
∫ (x^2 + 3x + sin^2 x · cos^2 x) dx
We can solve this problem by breaking it into three smaller problems, each of which we can then solve by using a small collection of specific rules. The steps of decomposition are: 1) Check whether the problem being worked on is immediately solvable. 2) If so, the answer is returned directly. 3) If not, check whether the problem can be decomposed into smaller problems; if yes, the decomposition method is called again on each of them.
The following tree shows an example of decomposition.
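As an illustrative aside (not from the original notes), here is a minimal Python sketch of this recursive decompose-and-solve idea; the table of immediately solvable integrals and the tuple-based problem encoding are assumptions made only for the example.

```python
# A toy recursive "decompose and solve" sketch for sum-shaped problems.
# A problem is either an atom we can solve directly, or a tuple
# ('+', subproblem1, subproblem2, ...) that decomposes into parts.

# Illustrative table of immediately solvable integrals (integrand -> antiderivative).
KNOWN_INTEGRALS = {
    "x^2": "x^3/3",
    "3x": "3x^2/2",
    "sin^2(x)cos^2(x)": "x/8 - sin(4x)/32",
}

def solve(problem):
    """Return a solution string, decomposing sums into independent subproblems."""
    if isinstance(problem, str):                  # 1) immediately solvable?
        if problem in KNOWN_INTEGRALS:
            return KNOWN_INTEGRALS[problem]       # 2) return the answer directly
        raise ValueError(f"no rule for {problem}")
    op, *parts = problem                          # 3) otherwise decompose and recurse
    assert op == "+"
    return " + ".join(solve(p) for p in parts)

if __name__ == "__main__":
    goal = ("+", "x^2", "3x", "sin^2(x)cos^2(x)")
    print(solve(goal))    # -> x^3/3 + 3x^2/2 + x/8 - sin(4x)/32
```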

Q4. Explain in brief the various applications of A.I.
Ans: There are various applications of AI, but a few are listed below: 1) Game Playing 2) Natural Language Processing 3) Vision 4) Robotics 5) Expert Systems
1) Game Playing: Game-playing programs were among the first operational AI programs. Through this application we can play games on the PC and make enhancements to these games. The basic idea behind this technique is that we exercise intelligence through playing PC games such as chess.
2) Natural Language Processing (NLP): Language is a medium for communicating with the world. The largest part of human communication occurs through speech. NLP includes understanding and generation, as well as other tasks such as translation.

Q5. Describe how branch and bound techniques could be used to find an optimal solution of an AI problem, and apply this technique to find the optimal solution of the following problem.

The problem ∫ (x^2 + 3x + sin^2 x · cos^2 x) dx decomposes into the subproblems
∫ x^2 dx,   ∫ 3x dx,   ∫ sin^2 x · cos^2 x dx,
and the last subproblem can be rewritten as ∫ (1 − cos^2 x) · cos^2 x dx.

Production rules for the water jug problem (x = gallons in the 4-gallon jug, y = gallons in the 3-gallon jug):
Rule  Condition                          After applying, jugs hold   Meaning / Description
1.    (x, y) if x < 4                    (4, y)                      Fill the 4-gallon jug.
2.    (x, y) if y < 3                    (x, 3)                      Fill the 3-gallon jug.
3.    (x, y) if x > 0                    (x - d, y)                  Pour some water out of the 4-gallon jug.
4.    (x, y) if y > 0                    (x, y - d)                  Pour some water out of the 3-gallon jug.
5.    (x, y) if x > 0                    (0, y)                      Empty the 4-gallon jug on the ground.
6.    (x, y) if y > 0                    (x, 0)                      Empty the 3-gallon jug on the ground.
7.    (x, y) if x + y >= 4 and y > 0     (4, y - (4 - x))            Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full.
8.    (x, y) if x + y >= 3 and x > 0     (x - (3 - y), 3)            Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full.
9.    (x, y) if x + y <= 4 and y > 0     (x + y, 0)                  Pour all the water from the 3-gallon jug into the 4-gallon jug.
10.   (x, y) if x + y <= 3 and x > 0     (0, x + y)                  Pour all the water from the 4-gallon jug into the 3-gallon jug.
11.   (0, 2)                             (2, 0)                      Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug.
12.   (2, y)                             (0, y)                      Empty the 2 gallons in the 4-gallon jug on the ground.

(Decomposition continued: ∫ x^2 dx = x^3/3 and ∫ 3x dx = 3x^2/2, while ∫ (1 − cos^2 x) · cos^2 x dx = ∫ cos^2 x dx − ∫ cos^4 x dx; each of these is handled with the identity cos^2 x = (1 + cos 2x)/2, e.g. ∫ cos^2 x dx = (1/2) ∫ 1 dx + (1/2) ∫ cos 2x dx.)
NLP works through a number of translation and analysis steps:
i) Morphological Analysis: Individual words are analyzed into their components, and non-word tokens such as punctuation are separated.
ii) Syntactic Analysis: This checks the grammar; for example, it will reject the English sentence 'Boy the go to the school'.
iii) Semantic Analysis: Here a mapping is made between the syntactic structures and the objects they refer to.
iv) Discourse Integration: This checks dependent sentences and resolves references that span two or more sentences.
4) Robotics: A robot is a collection of intelligent components that is seemingly animate and performs tasks as the situation demands. The architecture of a robot is given as follows.

Figure: robot architecture: Perception, Cognition and Action blocks acting on the Physical World.


Q6. Describe the four factors that constitute a production system.
Or: What do you mean by a production system? What are the elements of a production system?
Or: Are a set of rules, knowledge databases, a control strategy and a rule applier really important factors of a production system? Why? Explain.


The above block diagram works as per the given instructions to perform tasks the way a human being performs them.
5) Expert System: An expert system is a program that manipulates encoded knowledge to solve problems in a specialized domain that normally requires human expertise.
Expert systems can be used in the following areas:
Medical diagnosis
S/W development
Planning experiments in biochemistry
Forecasting
Identification of chemical compounds
Scheduling of customers
Diagnosis of complex electronic & electromechanical systems

Statement: You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3; x represents the number of gallons of water in the 4-gallon jug, and y represents the quantity of water in the 3-gallon jug.
The start state is (0, 0). The goal state is (2, n) for any value of n (since the problem does not specify how many gallons need to be in the 3-gallon jug).
The following notations are used in solving the water jug problem:
> (greater): the quantity of water is more; < (lesser): the quantity of water is less; (-) or d: removing some water from a jug; (+): adding water to a jug.
The production rules of the water jug problem are given in the table above. One solution, starting from (0, 0), is:
Gallons in the 4-gallon jug    Gallons in the 3-gallon jug    Rule applied
0                              0
0                              3                              2
3                              0                              9
3                              3                              2
4                              2                              7
0                              2                              5 or 12
2                              0                              9 or 11

Ans: 1) A set of rules, each consisting of a left side (a pattern that determines the applicability of the rule) and a right side that describes the operation to be performed if the rule is applied. 2) One or more knowledge databases that contain whatever information is appropriate for the particular task. Some parts of a database may be permanent, while other parts may pertain only to the solution of the current problem. The information in these databases may be structured in any appropriate way. 3) A control strategy that specifies the order in which the rules will be compared to the database, and a way of resolving the conflicts that arise when several rules match at once. 4) A rule applier.
Definition of Production System: The
production system encompasses a great
many systems, including our description of
both a chess player and a water jug problem
solver. It also encompasses a family of
general production system interpreters.



Formally, to build a system to solve a particular problem we need to do the following:
1) Define a state space that contains all the possible configurations of the relevant objects.
2) Specify one or more states within that space that describe possible situations from which the problem-solving process may start. These states are called the initial states.
3) Specify one or more states that would be acceptable as solutions to the problem. These states are called goal states.
4) Specify the set of rules that describe the actions available, as given in the production table.
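As an illustration only (not part of the original notes), here is a minimal Python sketch of these four steps for the water jug problem, using a breadth-first control strategy; the rule set is a simplified subset of the production table above (it omits the "pour some water out" rules).

```python
from collections import deque

# State = (x, y): gallons in the 4-gallon and the 3-gallon jug.
def successors(state):
    """Apply a simplified subset of the production rules to one state."""
    x, y = state
    nxt = {
        (4, y),                                   # fill the 4-gallon jug
        (x, 3),                                   # fill the 3-gallon jug
        (0, y),                                   # empty the 4-gallon jug
        (x, 0),                                   # empty the 3-gallon jug
    }
    pour = min(y, 4 - x)
    nxt.add((x + pour, y - pour))                 # pour the 3-gallon jug into the 4-gallon jug
    pour = min(x, 3 - y)
    nxt.add((x - pour, y + pour))                 # pour the 4-gallon jug into the 3-gallon jug
    nxt.discard(state)
    return nxt

def solve(start=(0, 0)):
    """Breadth-first control strategy from the initial state to any goal state (2, n)."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == 2:                         # goal test: 2 gallons in the 4-gallon jug
            path = []
            while state is not None:              # rebuild the path back to the start state
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

if __name__ == "__main__":
    print(solve())   # one shortest solution, e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
```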

Q7. What are the requirements of a good control strategy for an AI system? OR What are the contents of a good control strategy? Why should a good control strategy cause motion, and why should it be systematic?
Ans: The first requirement of a good control strategy is that it causes motion. Consider the water jug problem: suppose we implemented the simple control strategy of starting each time at the top of the list of rules and choosing the first applicable one. If we did this, we would never solve the problem; we would continuously fill the 4-gallon jug with water. A control strategy that does not cause motion will never lead to a solution.
The second requirement of a good control strategy is that it be systematic. Consider the water jug problem again: on each cycle, choose a rule at random from among the applicable ones. This strategy is better than the first because it causes motion, but the same state may be generated many times, because the control strategy is not systematic. To be systematic, we construct a tree with the initial state as its root and generate all offspring of the root by applying each of the applicable rules to the initial state.
Q8. Explain the importance of state space search and problem space search in AI.
Ans: Importance of State Space Search:
1) It allows for a formal definition of a problem as the need to convert some given situation into some desired situation using a set of permissible operations.
2) It permits us to define the process of solving a particular problem as a combination of known techniques (each represented as a rule defining a single step in the space) and search, the general technique of exploring the space to try to find some path from the current state to a goal state. Search is a very important process in the solution of hard problems for which no more direct techniques are available.
Problem Space Search in AI: To build a program that could play chess, we would first have to specify the starting position of the chess board, the rules that define the legal moves, and the board positions that represent a win for one side or the other.
We can define as our goal any board position in which the opponent does not have a legal move and his or her king is under attack. The legal moves provide the way of getting from the initial state to a goal state.
Each rule has two parts: a left side that serves as a pattern to be matched against the board, and a right side that describes the resulting position. Since there are roughly 10^120 possible board positions, two difficulties arise: 1) No person could ever supply a complete set of such rules; it would take too long and could certainly not be done without mistakes. 2) No program could easily handle all those rules; although a hashing scheme could be used to find the relevant rules for each move fairly quickly, just storing that many rules poses serious difficulties.
In order to minimize such problems, we should look for a way to write the rules describing the legal moves in as general a way as possible. To do this it is useful to introduce some convenient notation for describing patterns and substitutions.
Q9. What do you mean by BFS? Write an algorithm for BFS.
Ans: BFS stands for Breadth First Search. The search is performed level by level on the tree.
Algorithm (Breadth First Search):
1. Create a variable called NODE_LIST and set it to the initial state.
2. Until a goal state is found or NODE_LIST is empty do:
a) Remove the first element from NODE_LIST and call it E. If NODE_LIST was empty, quit.
b) For each way that each rule can match the state described in E do: i) Apply the rule to generate a new state. ii) If the new state is a goal state, quit and return this state. iii) Otherwise add the new state to the end of NODE_LIST.
Example,
Advantages of BFS:
1) BFS will not get trapped exploring a blind alley. This contrasts with DFS, which may follow a single unfruitful path for a very long time, perhaps forever, before the path actually terminates in a state that has no successors. 2) BFS is guaranteed to find a solution if one is present. 3) BFS checks each and every node, level by level, in the tree.
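A minimal Python sketch of the NODE_LIST algorithm above; the successors(state) function (which applies every matching rule) and the is_goal(state) test are assumed to be supplied by the caller.

```python
from collections import deque

def bfs(initial_state, successors, is_goal):
    """Breadth-first search following the NODE_LIST algorithm above.

    successors(state) -> iterable of new states (one per applicable rule);
    is_goal(state) -> bool.  Both are assumed to be supplied by the caller.
    """
    node_list = deque([initial_state])        # step 1: NODE_LIST holds the initial state
    visited = {initial_state}                 # avoid revisiting states (usual, though not in the text)
    while node_list:                          # step 2: until NODE_LIST is empty
        e = node_list.popleft()               # 2a: remove the first element and call it E
        for new_state in successors(e):       # 2b: apply each matching rule
            if is_goal(new_state):            # 2b-ii: goal found
                return new_state
            if new_state not in visited:      # 2b-iii: add the new state to the end
                visited.add(new_state)
                node_list.append(new_state)
    return None                               # no goal state found
```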
Q10. Why is memory not efficiently utilized in the BFS technique? Explain.
Ans: In BFS, searching is done breadth-wise, i.e. level by level, so it visits all the nodes of the tree present at a level. If the goal is not present there, it moves on to the next level of the tree. If the goal state is found, it stops and returns the goal node.
To search for a node through BFS we therefore have to store all the nodes present at each level. Hence it takes more memory to reach the goal node in the tree.
An example can be seen on the next page.
Q11. Write an algorithm for DFS and state its advantages.
Ans: Algorithm (Depth First Search):
1. If the initial state is a goal state, quit and return success. 2. Otherwise, do the following until success or failure is signaled: a) Generate a successor, E, of the initial state. If there are no more successors, signal failure. b) Call DFS with E as the initial state. c) If success is returned, signal success. Otherwise continue in this loop.
Example: the red edges indicate the path from A to Z, and the shaded node indicates the goal node. DFS performs the search depth-wise, i.e. vertically.
Advantages of DFS: 1) DFS requires less memory to store the search path from the tree. 2) It is less time-consuming if the goal node is present at an early stage of the tree. 3) DFS may find a solution without examining much of the search space at all.
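A minimal recursive Python sketch of this DFS algorithm, again assuming caller-supplied successors(state) and is_goal(state) functions (a visited set is added, which the text's version leaves implicit).

```python
def dfs(state, successors, is_goal, visited=None):
    """Depth-first search following the recursive algorithm above.

    Returns the goal state if one is reachable, otherwise None (failure).
    """
    if visited is None:
        visited = set()
    if is_goal(state):                 # step 1: the current state is the goal
        return state
    visited.add(state)
    for e in successors(state):        # step 2a: generate successors E one by one
        if e in visited:
            continue                   # skip states already explored
        result = dfs(e, successors, is_goal, visited)  # 2b: call DFS with E
        if result is not None:         # 2c: success is returned
            return result
    return None                        # no more successors: signal failure
```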
Q12. Why is the DFS search technique more suitable than BFS?
Ans: The Depth First Search (DFS) technique is more suitable than Breadth First Search (BFS) because:
1) In DFS fewer comparisons are done if the goal node is present along one of the first vertical paths, whereas in BFS we have to compare every node at each level of the tree.
2) The time required by DFS is less, whereas BFS requires more time.
3) Less memory is required to store the DFS nodes, whereas more memory is required in BFS.
4) In DFS searching is performed vertically, whereas in BFS searching is performed horizontally.
5) DFS may find a solution without examining much of the search space at all.
6) DFS can use simple backtracking: when a path fails, the search backs up and tries another branch, whereas BFS must keep every node of a level in memory before moving on.
Q13. Using an appropriate heuristic search technique, solve the following 8-puzzle problem.
Ans:
Initial State      Goal State
2 8 3              1 2 3
1 6 4              8 _ 4
7 _ 5              7 6 5
We might start by sliding tile 5 into the empty space. Having done that, we cannot change our mind and immediately slide tile 6 into the empty space, since the empty space will essentially have moved. But we can backtrack and undo the first move, sliding tile 5 back to where it was. Then we can move tile 6 downward, then move tile 8 in the same direction. Later on we move tiles 2 and 1 one by one, then shift tile 8 to the leftmost side, and we get the goal state as mentioned in the puzzle.
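As an illustrative aside (not part of the original answer), a common heuristic for guiding this 8-puzzle search is the number of misplaced tiles; a minimal sketch, assuming states are 3x3 tuples with 0 marking the blank:

```python
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))   # goal state from the question, 0 marks the blank

def misplaced_tiles(state, goal=GOAL):
    """Heuristic h(state): number of tiles (ignoring the blank) out of place."""
    return sum(1
               for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

initial = ((2, 8, 3),
           (1, 6, 4),
           (7, 0, 5))
print(misplaced_tiles(initial))   # 4 tiles are out of place in the initial state
```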
In the 8-puzzle an additional step must be performed to undo each incorrect step, whereas in theorem proving no action is required to undo a useless lemma.
The 8-puzzle, theorem proving and playing chess illustrate the following classes of problems:
Ignorable (e.g. theorem proving), in which solution steps can be ignored.
Recoverable (e.g. the 8-puzzle), in which solution steps can be undone.
Irrecoverable (e.g. chess), in which solution steps cannot be undone.
The recoverability of a problem plays an important role in determining the complexity of the control structure necessary for the problem's solution. Ignorable problems can be solved using a simple control structure that never backtracks; such a control structure is easy to implement.
Recoverable problems can be solved by a slightly more complicated control strategy that does sometimes make mistakes. Backtracking will be necessary to recover from such mistakes, so the control structure must be implemented using a push-down stack.
Q14. Explain the generate-and-test algorithm with an example.
Ans: Generate-and-Test Algorithm:
1) Generate a possible solution. For some problems this means generating a particular point in the problem space; for others it means generating a path from a start state.
2) Test to see if this is actually a solution by comparing the chosen point, or the end point of the chosen path, to the set of acceptable goal states. 3) If a solution has been found, quit. Otherwise, return to step 1.
Consider the following example. A box contains three balls of different colors: red, green and yellow. Our aim is to pick out the red ball. By applying step 1 we pick one ball out of the box. Step 2 tests whether this is actually a solution, i.e. whether it is the red ball. If not, step 3 tells us to repeat step 1: we pick out another ball, and if it is red then a solution has been found, so we return it and quit.
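A minimal Python sketch of this generate-and-test loop on the ball example; the box contents and the random drawing are illustrative assumptions.

```python
import random

BOX = ["red", "green", "yellow"]          # illustrative contents of the box

def generate():
    """Step 1: generate a possible solution (draw one ball at random)."""
    return random.choice(BOX)

def test(ball):
    """Step 2: test whether the generated candidate is a goal (a red ball)."""
    return ball == "red"

def generate_and_test():
    while True:
        candidate = generate()            # step 1
        if test(candidate):               # step 2
            return candidate              # step 3: solution found, quit
        # otherwise loop back to step 1

print(generate_and_test())                # eventually prints 'red'
```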
Q15. Explain the simple hill climbing algorithm.
Ans: Algorithm: 1) Evaluate the initial state. If it is also a goal state, then return it and quit; otherwise continue with the initial state as the current state. 2) Loop until a solution is found or until there are no new operators left to be applied to the current state:
(a) Select an operator that has not yet been applied to the current state and apply it to produce a new state. (b) Evaluate the new state: (i) If it is a goal state, then return it and quit. (ii) If it is not a goal state but it is better than the current state, then make it the current state. (iii) If it is not better than the current state, then continue in the loop.
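A minimal Python sketch of simple hill climbing, assuming caller-supplied operators (functions from state to state), an evaluate(state) scoring function in which higher is better, and an is_goal(state) test.

```python
def simple_hill_climbing(initial_state, operators, evaluate, is_goal):
    """Simple hill climbing: accept the first new state that is better than the current one."""
    current = initial_state
    if is_goal(current):                       # step 1
        return current
    while True:
        improved = False
        for op in operators:                   # 2a: try operators one at a time
            new_state = op(current)
            if is_goal(new_state):             # 2b-i
                return new_state
            if evaluate(new_state) > evaluate(current):   # 2b-ii: better, move immediately
                current = new_state
                improved = True
                break                          # restart the loop with the new current state
        if not improved:                       # no operator improved the state: stuck
            return current
```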
Q16. Explain the steepest-ascent hill climbing algorithm.
Algorithm:
1) Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state. 2) Loop until a solution is found or until a complete iteration produces no change to the current state: a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
b) For each operator that applies to the current state do: i) Apply the operator and generate a new state. ii) Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone. c) If SUCC is better than the current state, then set the current state to SUCC.
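A corresponding sketch for steepest-ascent hill climbing with the same assumed operators/evaluate/is_goal interface; here all successors are generated before the best one (SUCC) is compared with the current state.

```python
def steepest_ascent_hill_climbing(initial_state, operators, evaluate, is_goal):
    """Steepest-ascent hill climbing: move to the best successor, if it improves on the current state."""
    current = initial_state
    if is_goal(current):                           # step 1
        return current
    while True:
        succ = None                                # 2a: SUCC starts worse than any successor
        for op in operators:                       # 2b: apply every operator
            new_state = op(current)
            if is_goal(new_state):
                return new_state
            if succ is None or evaluate(new_state) > evaluate(succ):
                succ = new_state                   # keep the best successor seen so far
        if succ is not None and evaluate(succ) > evaluate(current):
            current = succ                         # 2c: SUCC is better, make it the current state
        else:
            return current                         # no improvement in a full iteration: stop
```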
Q17. Give the meaning of the following
terms: a) Local Maximum b) Plateau c)
Ridge
Ans: 1) Local Maximum: A local maximum is a state that is better than all its neighbors but is not better than some other states farther away. At a local maximum, all moves appear to make things worse. Local maxima are particularly frustrating because they often occur almost within sight of a solution; in this case they are called foothills.
2) Plateau: A plateau is a flat area of the search space in which a whole set of neighboring states have the same value. On a plateau it is not possible to determine the best direction in which to move by making local comparisons.
3) Ridge: A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas and that itself has a slope. But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse the ridge by any single move.
Q18. Write down the algorithm for simulated annealing.
Algorithm:
1) Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state. 2) Initialize BEST-SO-FAR to the current state. 3) Initialize T according to the annealing schedule. 4) Loop until a solution is found or until there are no new operators left to be applied to the current state:
(a) Select an operator that has not yet been applied to the current state and apply it to produce a new state. (b) Evaluate the new state and compute ΔE = (value of current state) − (value of new state):
i) If the new state is a goal state, then return it and quit. ii) If it is not a goal state but is better than the current state, then make it the current state and also set BEST-SO-FAR to this new state. iii) If it is not better than the current state, then make it the current state with probability p' as defined above. This step is usually implemented by invoking a random number generator to produce a number in the range [0, 1]; if that number is less than p', the move is accepted, otherwise nothing is done. (c) Revise T as necessary according to the annealing schedule. 5) Return BEST-SO-FAR as the answer.
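A minimal sketch of simulated annealing using the usual acceptance probability p = exp(−ΔE/T); the operators/evaluate interface and the geometric cooling schedule are assumptions, not taken from the text.

```python
import math
import random

def simulated_annealing(initial_state, operators, evaluate, is_goal,
                        t0=10.0, cooling=0.95, t_min=1e-3):
    """Simulated annealing; higher evaluate() is better, worse moves accepted with p = exp(-dE/T)."""
    current = initial_state
    if is_goal(current):                                   # step 1
        return current
    best_so_far = current                                  # step 2
    t = t0                                                 # step 3: annealing schedule
    while t > t_min:                                       # step 4
        op = random.choice(operators)                      # 4a: pick an operator (simplified)
        new_state = op(current)
        delta_e = evaluate(current) - evaluate(new_state)  # 4b: dE = value(current) - value(new)
        if is_goal(new_state):                             # 4b-i
            return new_state
        if delta_e < 0:                                    # 4b-ii: the new state is better
            current = new_state
            if evaluate(current) > evaluate(best_so_far):
                best_so_far = current
        elif random.random() < math.exp(-delta_e / t):     # 4b-iii: accept a worse move with probability p
            current = new_state
        t *= cooling                                       # 4c: revise T as per the annealing schedule
    return best_so_far                                     # step 5
```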
Q19. Explain, with an example, the Best-First-Search algorithm.
Algorithm: 1) Start with OPEN containing just the initial state. 2) Until a goal is found or there are no nodes left on OPEN, do:
a) Pick the best node on OPEN.
b) Generate its successors.
c) For each successor do: i) If it has not been generated before, evaluate it, add it to OPEN, and record its parent. ii) If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
Example,
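As an illustrative aside (not the worked example referred to above), here is a minimal sketch of best-first search that keeps OPEN as a priority queue ordered by a caller-supplied evaluation f(state), where lower is better; for brevity it omits the re-parenting of step c(ii).

```python
import heapq
import itertools

def best_first_search(initial_state, successors, is_goal, f):
    """Best-first search: always expand the node on OPEN with the lowest f value."""
    counter = itertools.count()                               # tie-breaker so states are never compared
    open_heap = [(f(initial_state), next(counter), initial_state)]   # step 1: OPEN holds the initial state
    parents = {initial_state: None}
    while open_heap:                                          # step 2
        _, _, node = heapq.heappop(open_heap)                 # 2a: pick the best node on OPEN
        if is_goal(node):
            path = []
            while node is not None:                           # rebuild the path via the parent links
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for succ in successors(node):                         # 2b: generate its successors
            if succ not in parents:                           # 2c-i: not generated before
                parents[succ] = node
                heapq.heappush(open_heap, (f(succ), next(counter), succ))
    return None
```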
Q20. Explain the A* algorithm.
Algorithm: 1) Start with OPEN containing only the initial node. Set that node's g value to 0, its h' value to whatever it is, and its f' value to h' + 0, or h'. Set CLOSED to the empty list. 2) Until a goal node is found, repeat the following procedure: if there are no nodes on OPEN, report failure. Otherwise, pick the node on OPEN with the lowest f' value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. See if BESTNODE is a goal node. If so, exit and report a solution (either BESTNODE, if all we want is the node, or the path that has been created between the initial state and BESTNODE, if we are interested in the path). Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet. (First we need to see if any of them have already been generated.) For each SUCCESSOR do the following:
a) Set SUCCESSOR to point back to BESTNODE. These backward links will make it possible to recover the path once a solution is found. b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR. c) See if SUCCESSOR is the same as any node on OPEN (i.e. it has already been generated but not yet processed). If so, call that node OLD. Since this node already exists in the graph, we can throw SUCCESSOR away and add OLD to the list of BESTNODE's successors. Now we must decide whether OLD's parent link should be reset to point to BESTNODE. It should be if the path we have just found to SUCCESSOR is cheaper than the current best path to OLD (since SUCCESSOR and OLD are really the same node). So see whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE, by comparing their g values. If OLD is cheaper (or just as cheap), then we need do nothing. If SUCCESSOR is cheaper, then reset OLD's parent link to point to BESTNODE, and record the new cheaper path in g(OLD) and f'(OLD). d) If SUCCESSOR was not on OPEN, see if it is on CLOSED.
was not on OPEN, see if it is on CLOSED.
If so, call the nodes on CLOSED OLD and
add OLD to the list of BESTNODEs
Successors. Check to see if the new path or
the old path is better just as in step 2(c), and
set the parent link and g and f' values
appropriately. If we have just found a better
path to OLD. We must propagate the
improvement to OLDs successors. This is a
bit tricky. OLD points to its successors.
Each successor in turn points to its
successors, and so forth, until each branch
terminates with a node that either is still on
OPEN or has no successors. So propagate
the new cost downward, do a depth-first
traversal of the starting at OLD. Changing
each nodes g value (and thus also f' value)
terminate each branch when you reach
either a node with no successors or node to
which an equivalent or better path has
already been found. This condition is easy
to check for, each nodes parent link points
back to its best known parent. As we
propagate down to a node, see its parent
point to the node we are coming from. If so,
continue the propagation. If not then its g
value already rejects the better path of
which it is part. So the propagation may
stop here. But it is possible the with the new
value of g weight propagated downward,
the path we are following may become
better then the path through the current
parent. So compare the two if the path
through the current parent is still better;
stop the propagation, if the path we are
propagating through is now better, reset the
parent is still better, stop the propagation, if
the path we are propagating through is now
better reset the parent and continue
propagation.
e) if SUCCESSOR was not already either
OPEN or CLOSED, then put it on OPEN,
and add it to the list of BESTNODEs
successors.
Compute
f'(SUCCESSOR)=g(SUCCESSOR)
+h'(SUCCESSOR).
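A compact Python sketch of the A* loop above; it uses a priority queue for OPEN and, instead of the downward cost propagation of step 2(d), simply re-inserts improved nodes, which is a common simplification. successors(node) is assumed to yield (next_node, step_cost) pairs and h(node) is the heuristic estimate.

```python
import heapq
import itertools

def a_star(start, successors, is_goal, h):
    """A* search; successors(n) yields (m, cost) pairs, h(n) estimates the cost to a goal."""
    counter = itertools.count()                     # tie-breaker for the heap
    g = {start: 0}                                  # best known cost from start to each node
    parents = {start: None}
    open_heap = [(h(start), next(counter), start)]  # f' = g + h' = 0 + h'(start)
    closed = set()
    while open_heap:
        _, _, best = heapq.heappop(open_heap)       # BESTNODE: lowest f' on OPEN
        if best in closed:
            continue                                # stale entry; a better path was already processed
        if is_goal(best):
            path = []
            while best is not None:                 # recover the path via the backward links
                path.append(best)
                best = parents[best]
            return list(reversed(path))
        closed.add(best)                            # move BESTNODE from OPEN to CLOSED
        for succ, cost in successors(best):
            new_g = g[best] + cost                  # g(SUCCESSOR) = g(BESTNODE) + step cost
            if succ not in g or new_g < g[succ]:    # new node, or a cheaper path to a known node
                g[succ] = new_g
                parents[succ] = best                # reset the parent link
                heapq.heappush(open_heap, (new_g + h(succ), next(counter), succ))
    return None                                     # OPEN exhausted: report failure
```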
Q21. What are the three basic components necessary to implement the A* algorithm?
Ans: The three basic components are: 1) OPEN 2) Heuristic 3) CLOSED.
1) OPEN: This is the first element of the A* algorithm; it holds the nodes of the tree on which the search has yet to be performed. In other words, the search starts from the nodes on the OPEN list.
2) Heuristic: After taking a node from the OPEN list, the algorithm performs the search on that node. As given in steps 2(a, b, c, d), it decides which node is the BESTNODE and which nodes are the SUCCESSORs of that particular node, and it identifies the OLD node, which plays a vital role in finding the path from the initial node to the goal node.
3) CLOSED: This element keeps a record of the searched elements; the nodes that have already been processed using the OPEN list and the heuristic are kept on the CLOSED list.
Finally, the f' value is calculated by:
f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR)
Q22. What are the two components of the heuristic function f'? Explain with a suitable example.
Ans: The heuristic function f' estimates the total cost of the path from the initial node, through the searched node, to the goal node. The two components of f' are:
f' = g + h'
where
f' = the heuristic evaluation function,
g = the cost of the path from the initial node to the searched (current) node,
h' = the estimated cost of the path from the searched node to the goal node. The dash (') indicates that the value is an estimate that varies from time to time.
Example: if we want to travel from Aurangabad to Jalna, then the value of f' can be computed by
f' = h' + g (initially g is zero).
Suppose we have traveled from Aurangabad to Chikalthana; then the following figure shows the total area:

Q23. Explain, with a suitable example, how h' can underestimate or overestimate the value of h. Or: What observations can be made after evaluating the A* algorithm?
Ans: 1) h' underestimates h (Figure: h' underestimates h): Assume that the cost of every arc is 1. Initially all nodes except A are on OPEN (although the figure shows the situation two steps later, after B and E have been expanded). For each node, f' is indicated as the sum of g and h'. Initially node B has the lowest f' value, 4, so it is expanded first. Suppose it has only one successor, E, which is 3 moves away from the goal. Now f'(E) is 5, the same as f'(C), and we may expand E next. Suppose it too has a single successor, F, and f'(F) is 6, which is greater than f'(C) = 5, so we will expand C next. Thus we see that by underestimating h'(B) we have wasted some effort, but we eventually discover the mistake and still find the correct path.
2) h' overestimates h: In the figure we expand B on the first step. On the second step we expand E, at the next step F, and finally we generate G, for a solution path of length 4. But suppose there is a direct path from D to a solution, giving a path of length 2; we will never find it. By overestimating h'(D), we make D look so bad that we may find some other, worse solution without ever expanding D. In general, if h' might overestimate h, we cannot be guaranteed of finding the cheapest-path solution unless we expand the entire graph until all paths are longer than the best solution found. Hence we see that here h' overestimates h.
Q24. Write an algorithm for the agenda-driven search technique. What are the two things mainly needed to implement this algorithm?
Algorithm: 1) Do until a goal state is reached or the agenda is empty: a) Choose the most promising task from the agenda. (i) Notice that this task can be represented in any desired form. (ii) It can be thought of as an explicit statement of what to do next or simply as an indication of the next node to be expanded. (iii) Execute the task by devoting to it the number of resources determined by its importance. The important resources to consider are: 1) time and 2) space.
Executing the task will probably generate additional tasks (successor nodes). For each of them, do the following:
b) See if it is already on the agenda. If so, and if this justification was not already present, add it to the task's list of justifications; otherwise ignore this current evidence. If the task was not on the agenda, insert it.
c) Compute the new task's rating, combining the evidence from all of its justifications. Not all justifications need have equal weight; it is often useful to associate with each justification a measure of how strong a reason it is, and these measures are then combined at this step to produce an overall rating for the task.
While writing or implementing the agenda algorithm we have to keep in mind the following two things:
1) Agenda 2) Time & Space
1) Agenda: The agenda is simply a list of tasks. From the agenda the algorithm chooses the most promising task (node). If the node is already present in the list, the search operation is performed on that node; if the agenda does not yet contain this most promising node, the node is added to the agenda. Then the new tasks it generates are rated and the above operations are repeated.
2) Time & Space: This is also an important consideration when implementing the agenda algorithm; it keeps track of the time and space required by the algorithm.
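A minimal, simplified sketch of an agenda-driven loop; the task representation, the justification weights and the rate() combination function are illustrative assumptions (stale agenda entries are handled lazily rather than re-rated in place).

```python
import heapq
import itertools

def agenda_search(initial_task, execute, is_goal, rate):
    """Agenda-driven search: repeatedly run the most promising task.

    execute(task) -> iterable of (new_task, justification_weight) pairs;
    rate(justifications) -> numeric rating, higher meaning more promising;
    is_goal(task) -> bool.  All three are assumed to be supplied by the caller.
    """
    justifications = {initial_task: [1.0]}             # each task keeps a list of justifications
    counter = itertools.count()                        # tie-breaker so tasks are never compared
    # heapq is a min-heap, so negated ratings make the highest-rated task pop first.
    agenda = [(-rate(justifications[initial_task]), next(counter), initial_task)]
    while agenda:                                      # 1) until the agenda is empty
        _, _, task = heapq.heappop(agenda)             # a) choose the most promising task
        if is_goal(task):
            return task
        for new_task, weight in execute(task):         # executing it generates additional tasks
            justifications.setdefault(new_task, []).append(weight)   # b) record the justification
            rating = rate(justifications[new_task])    # c) recombine the evidence into a rating
            heapq.heappush(agenda, (-rating, next(counter), new_task))
    return None
```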

Q26. What are the limitations of the problem reduction technique? Explain with an example.
Ans: The following are the limitations of the problem reduction technique.
1) Consider the following example: suppose that node J is expanded at the next step and that one of its successors is node E, producing the graph shown in fig. (b). This new path to E is longer than the previous path to E going through C. But since the path through C will only lead to a solution if there is also a solution to D, which we know there is not, the path through J is better.
2) Another limitation is that it fails to take into account any interaction between subgoals. Consider the following example: assume that both C and E ultimately lead to a solution; our algorithm will report a complete solution that includes both of them. The AND-OR graph states that for A to be solved, both C and D must be solved. But the algorithm then considers the solution of D as a completely separate process from the solution of C. Looking just at the alternatives from D, E is the best path. But it turns out that C is necessary anyway, so it would be better also to use it to satisfy D. Since our algorithm does not consider such interactions, it will find a non-optimal path.

Q25. Write an algorithm for AND-OR graphs (problem reduction), with an example.
Algorithm:
1) Initialize the graph to the starting node.
2) Loop until the starting node is labeled SOLVED or until its cost goes above FUTILITY:
a) Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded or labeled as solved.
b) Pick one of these unexpanded nodes and expand it. If there are no successors, assign FUTILITY as the value of this node. Otherwise, add its successors to the graph and for each of them compute f' (using only h' and ignoring g, for reasons discussed below). If f' of any node is 0, mark that node as SOLVED.
c) Change the f' estimate of the newly expanded node to reflect the new information provided by its successors. Propagate this change backward through the graph. If any node contains a successor arc whose descendants are all solved, label the node itself as SOLVED. At each node that is visited while going up the graph, decide which of its successor arcs is the most promising and mark it as part of the current best path. This may cause the current best path to change. This propagation of revised cost estimates back up the tree was not necessary in the best-first search algorithm, because there only unexpanded nodes were examined. But now expanded nodes must be re-examined so that the best current path can be selected. Thus it is important that their f' values be the best estimates available.
Example,
Fig: The operation of problem reduction.

Q27. Write an algorithm for the AO* searching technique and explain the working of the following terms in AO*:
1) GRAPH 2) INIT 3) FUTILITY 4) NODE 5) S 6) CURRENT 7) SUCCESSOR.
Ans: Algorithm: 1) Let GRAPH consist only of the node representing the initial state (call this node INIT). Compute h'(INIT).
2) Until INIT is labeled SOLVED or until INIT's h' value becomes greater than FUTILITY, repeat the following procedure:
a) Trace the labeled arcs from INIT and select for expansion one of the as yet unexpanded nodes that occur on this path. Call the selected node NODE. b) Generate the successors of NODE. If there are none, then assign FUTILITY as the h' value of NODE. This is equivalent to saying that NODE is not solvable. If there are successors, then for each one (called SUCCESSOR) that is not also an ancestor of NODE do the following: i) Add SUCCESSOR to GRAPH.
ii) If SUCCESSOR is a terminal node, label it SOLVED and assign it an h' value of 0.
iii) If SUCCESSOR is not a terminal node, compute its h' value.
c) Propagate the newly discovered information up the graph by doing the following: Let S be a set of nodes that have been labeled SOLVED or whose h' values have been changed and so need to have values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following procedure:
i) If possible, select from S a node none of whose descendants in GRAPH occur in S. If there is no such node, select any node from S. Call this node CURRENT, and remove it from S.
ii) Compute the cost of each of the arcs emerging from CURRENT. The cost of each arc is equal to the sum of the h' values of the nodes at the end of the arc plus whatever the cost of the arc itself is. Assign as CURRENT's new h' value the minimum of the costs just computed for the arcs emerging from it.
iii) Mark the best path out of CURRENT by marking the arcs that had the minimum cost as computed in the previous step.
iv) Mark CURRENT SOLVED if all of the nodes connected to it through the new labeled arcs have been labeled SOLVED.
v) If CURRENT has been labeled SOLVED or if the cost of CURRENT was just changed, then its new status must be propagated back up the graph. So add all the ancestors of CURRENT to S.
1) GRAPH: The GRAPH consists of all the nodes present in the graph.


It saves all the nodes lying in the graph and provides them one by one whenever required.
2) INIT: INIT is the node representing the initial state; the search starts from it, and its h' value is computed first.
3) FUTILITY: FUTILITY is a cost threshold; a node with no successors is assigned FUTILITY as its h' value, and the search stops once INIT's h' value rises above FUTILITY, i.e. the problem is judged not worth solving further along that path.
4) NODE: A NODE is an element or vertex of the GRAPH; the NODEs are the objects on which the search operation is applied.
5) S: S contains the set of nodes that have been labeled SOLVED or whose h' values have changed, and whose new values must be propagated back to their parents during the AND-OR processing.
6) CURRENT: Through CURRENT we compute the costs of the arcs emerging from the node being processed and mark the best path out of it.
7) SUCCESSOR: The SUCCESSORs are the child nodes of the current node, as in the following graph.

Q28. Explain the constraint satisfaction algorithm.
Algorithm:
1) Propagate available constraints. To do this, first set OPEN to the set of all objects that must have values assigned to them in a complete solution. Then do until an inconsistency is detected or until OPEN is empty:
a) Select an object OB from OPEN. Strengthen as much as possible the set of constraints that apply to OB.
b) If this set is different from the set that was assigned the last time OB was examined, or if this is the first time OB has been examined, then add to OPEN all objects that share any constraints with OB.
c) Remove OB from OPEN.
2) If the union of the constraints discovered above defines a solution, then quit and report the solution.
3) If the union of the constraints discovered above defines a contradiction, then return failure.
4) If neither of the above occurs, then it is necessary to make a guess at something in order to proceed. To do this, loop until a solution is found or all possible solutions have been eliminated:
a) Select an object whose value is not yet determined and select a way of strengthening the constraints on that object.
b) Recursively invoke constraint satisfaction with the current set of constraints augmented by the strengthening constraint just selected.
Q29. What is constraint satisfaction? Trace the constraint satisfaction procedure to solve the following cryptarithmetic problem:
Ans:   S E N D
     + M O R E
     ---------
     M O N E Y
Initial state: no two letters have the same value.
Constraint satisfaction is a search process that operates in a space of constraint sets. The initial state contains the constraints that are originally given in the problem description. A goal state is any state that has been constrained 'enough', where 'enough' must be defined for each problem.
Constraint satisfaction is a two-step process. First, constraints are discovered and propagated as far as possible throughout the system. Then, if there is still not a solution, search begins: a guess about something is made and added as a new constraint, and propagation can then occur with this new constraint.
Initially, the rules for propagating constraints generate the following additional constraints:

M = 1, since two single-digit numbers plus a carry cannot total more than 19.
S = 8 or 9, since S + M + C3 > 9 (to generate a carry) and M = 1, so S + 1 + C3 > 9, hence S + C3 > 8, and C3 is at most 1.
O = 0, since S + M(=1) + C3(<=1) must be at least 10 to generate a carry and it can be at most 11. But M is already 1, so O must be 0.
N = E or E + 1, depending on the value of C2. But N cannot have the same value as E, so N = E + 1 and C2 is 1.
In order for C2 to be 1, the sum of N + R + C1 must be greater than 9, so N + R must be greater than 8.
N + R cannot be greater than 18, even with a carry in, so E cannot be 9.
At this point let us assume that no more constraints can be generated. Then, to make progress from here, we must guess. Suppose E is assigned the value 2 (we choose to guess a value for E because E occurs three times and thus interacts highly with the other letters). Now the next cycle begins.
The constraint propagator now observes that:
N = 3, since N = E + 1.
R = 8 or 9, since R + N(=3) + C1(0 or 1) = 2 or 12. But since N is already 3, the sum of these non-negative numbers cannot be less than 3. Thus R + 3 + (0 or 1) = 12 and R = 8 or 9.
2 + D = Y or 2 + D = 10 + Y, from the sum in the rightmost column.
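For comparison with the hand trace above (and only as an aside), a brute-force Python sketch that checks every assignment of distinct digits to the letters of SEND + MORE = MONEY:

```python
from itertools import permutations

def word_value(word, digit_of):
    """Turn a word into the number it spells under the given letter -> digit mapping."""
    return int("".join(str(digit_of[ch]) for ch in word))

def solve_send_more_money():
    """Brute force: try every assignment of distinct digits to the eight letters."""
    letters = "SENDMORY"                               # the distinct letters of the puzzle
    for digits in permutations(range(10), len(letters)):
        digit_of = dict(zip(letters, digits))
        if digit_of["S"] == 0 or digit_of["M"] == 0:   # leading digits cannot be zero
            continue
        if (word_value("SEND", digit_of) + word_value("MORE", digit_of)
                == word_value("MONEY", digit_of)):
            return digit_of
    return None

print(solve_send_more_money())
# -> {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
```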


REVIEW OF ALGORITHMS:
1) BFS 2) DFS 3) Generate & Test 4) Hill Climbing 5) Steepest-Ascent Hill Climbing 6) Simulated Annealing 7) Best First Search 8) A* 9) Agenda 10) AND-OR 11) AO* 12) Constraint Satisfaction 13) Means-Ends Analysis.
Q. Comparison: a) BFS & DFS b) Steepest-Ascent Hill Climbing & Best First Search c) Generate and Test & Simple Hill Climbing d) Simulated Annealing & A*.
Q. What are the similarities between BFS & DFS?
Q. Explain the various criteria to build an AI system.
Ans: The answer to this question is the four points (1), (2), (3) & (4) given above; by giving an example of each step we can elaborate the answer.
Heuristic Search Techniques
DFS (summary): 1) Check whether the initial state is the goal; if yes, quit and return success. 2) Do: generate a successor E; if there are no more successors, signal failure. 3) If success is returned then stop, else continue in the loop.
Keywords: E, SUCCESSOR.


Q30. Write an algorithm for means-ends analysis.
Ans: Algorithm: 1) Compare CURRENT to GOAL. If there are no differences between them, return. 2) Otherwise, select the most important difference and reduce it by doing the following until success or failure is signaled: a) Select an as yet untried operator O that is applicable to the current difference. If there are no such operators, then signal failure. b) Attempt to apply O to CURRENT. Generate descriptions of two states: O-START, a state in which O's preconditions are satisfied, and O-RESULT, the state that would result if O were applied in O-START. c) If (FIRST-PART = MEA(CURRENT, O-START)) and (LAST-PART = MEA(O-RESULT, GOAL)) are successful, then signal success and return the result of concatenating FIRST-PART, O, and LAST-PART.
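A toy Python sketch of means-ends analysis over states represented as sets of facts; the operator format (name, preconditions, add list, delete list), the depth bound and the "reduce any missing goal fact" difference choice are simplifying assumptions, not the text's procedure.

```python
def mea(current, goal, operators, depth=5):
    """Means-ends analysis on states given as frozensets of facts.

    operators: list of (name, preconditions, add, delete) tuples (all sets of facts).
    Returns a list of operator names, or None on failure.  depth bounds the recursion.
    """
    differences = goal - current
    if not differences:                                   # step 1: no differences -> done
        return []
    if depth == 0:
        return None
    diff = next(iter(differences))                        # step 2: pick a difference to reduce
    for name, pre, add, delete in operators:              # 2a: try operators that reduce it
        if diff not in add:
            continue
        o_start = frozenset(pre)                          # 2b: state in which O's preconditions hold
        o_result = (o_start - frozenset(delete)) | frozenset(add)
        first = mea(current, o_start, operators, depth - 1)   # 2c: get from CURRENT to O-START
        if first is None:
            continue
        last = mea(o_result, goal, operators, depth - 1)      # ... and from O-RESULT to GOAL
        if last is None:
            continue
        return first + [name] + last                      # concatenate FIRST-PART, O, LAST-PART
    return None

# Illustrative use: push a box to location B, which first requires being at the box.
ops = [
    ("goto-box",  set(),      {"at-box"},   set()),
    ("push-to-B", {"at-box"}, {"box-at-B"}, set()),
]
print(mea(frozenset(), frozenset({"box-at-B"}), ops))     # -> ['goto-box', 'push-to-B']
```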

BFS (summary): 1) Create NODE_LIST containing the initial node. 2) Remove an element and call it E; if NODE_LIST is empty, quit. 3) Apply each rule to generate a new state; if it is the goal, quit, else add it to NODE_LIST.
Keywords: NODE_LIST, E, SUCCESSOR.

Simulated Annealing (summary): 1) Evaluate the initial state; if it is a goal, quit, else continue with the initial state as the current state. 2) Initialize BEST-SO-FAR to the current state. 3) Initialize T according to the annealing schedule. 4) Loop until a goal is found: select an operator and compute ΔE = (value of current state) − (value of new state); if the new state is not a goal but is better, set BEST-SO-FAR to this new state; if it is not better, accept it with probability p' in the range [0, 1] (if a random number is less than p' the move is accepted, otherwise nothing is done). 5) Revise T as per the annealing schedule and return BEST-SO-FAR.
Keywords: initial state, goal state, current state, BEST-SO-FAR, p', T, ΔE = value of current state − value of new state.

A* (summary): 1) Start with OPEN, considering the h', f' and g values; set CLOSED to the empty list. 2) Do: if there are no nodes, report failure; else pick the best node from OPEN, place it on CLOSED and call it BESTNODE. If it is the goal, quit; else generate the successors of BESTNODE. 3) Set a backward link from each SUCCESSOR to point back to BESTNODE and compute g = g(BESTNODE) + the cost of getting from BESTNODE. 4) If SUCCESSOR is already present on OPEN, call the existing node OLD; we can keep OLD and check the backward links to its parent node. If the path via SUCCESSOR is better than OLD's, reset the parent by checking the f', g and h values.


5) If SUCCESSOR is not on OPEN, check whether it is on CLOSED; if so, call that node OLD and propagate the improved f' value depth-first. If SUCCESSOR is present on neither list, put it on OPEN, add it to BESTNODE's successors and compute f' = g + h'.
Keywords: initial state, goal, current, OPEN, CLOSED, BESTNODE, OLD, f', g, h', SUCCESSORS.


Generate and Test (summary): 1) Generate a possible solution, considering the problem space. 2) Test whether this is the goal node by comparing the generated node with the goal node.
Keywords: no special keywords are used.

Steepest-Ascent Hill Climbing (summary): 1) Evaluate the initial state; if it is a goal, return and quit, else make it the current state. 2) Loop until a solution is found: apply each operator to generate new states; if a new state is the goal, quit; otherwise mark it as SUCC if it is better than the previous SUCC, else leave SUCC alone. 3) If SUCC is better than the current state, make SUCC the current state.
Keywords: initial state, goal state, SUCC, current state, successor.

Simple Hill Climbing (summary): 1) Evaluate the initial state; if it is a goal, return and quit, otherwise make it the current state. 2) Loop until a solution is found: produce a new state and evaluate it: i) if it is a goal, return and quit; ii) if it is better than the current state, make it the current state; iii) if it is not better than the previous state, repeat from (1).
Keywords: initial state, goal state, current state.
Best First Search (summary): 1) Start with OPEN. 2) Loop until the goal is found or OPEN is empty: pick the best node from OPEN and generate its successors (SUCCESSOR). 3) If a successor has not been evaluated before, add it to OPEN; if it has been generated before, change the parent and continue from step (2).
Keywords: initial state, goal state, OPEN, current state, successor.

question set AI4


Learning with Macro-Operators
Learning from example
Boltzmann Machines
The Minimax Search Procedure
Algorithm: MINIMAX(Position, Depth, Player)

Adding Alpha-Beta Cutoffs


Non-Linear Planning using constraint posting:
Bayesian Networks:
FUZZY LOGIC:
