DEPARTMENT OF CSE
TWO MARKS
CS T71 ARTIFICIAL INTELLIGENCE
UNIT I
Introduction: History of AI - Intelligent agents Structure of agents and its functions - Problem
spaces and search - Heuristic Search techniques Best-first search - Problem reduction - Constraint
satisfaction - Means Ends Analysis.
UNIT II
Knowledge Representation: Approaches and issues in knowledge representation - Knowledge Based Agent - Propositional Logic - Predicate Logic - Unification - Resolution - Weak slot-and-filler
structure - Strong slot-and-filler structure.
UNIT III
Reasoning under uncertainty: Logics of non-monotonic reasoning - Implementation- Basic
probability notation - Bayes rule Certainty factors and rule based systems-Bayesian networks
Dempster - Shafer Theory - Fuzzy Logic.
UNIT IV
Planning and Learning: Planning with state space search - conditional planning-continuous
planning - Multi-Agent planning. Forms of learning - inductive learning - Reinforcement Learning learning decision trees - Neural Net learning and Genetic learning
UNIT V
Advanced Topics: Game Playing: Minimax search procedure - Adding alpha-beta cutoffs.
Expert System: Representation - Expert System shells - Knowledge Acquisition.
Robotics: Hardware - Robotic Perception Planning - Application domains.
Swarm Intelligent Systems: Ant Colony System - Development, Application and Working of Ant
Colony System.
(2 marks)
UNIT I
1. Define AI as Systems that think like humans?
The exciting new effort to make computers think . . . machines with minds, in the full
and literal sense.
The automation of activities that we associate with human thinking, activities such as
decision-making, problem solving, learning.
2. Define AI as Systems that think rationally?
The study of mental faculties through the use of computational models.
The study of the computations that make it possible to perceive, reason, and act.
3. Define AI as Systems that act like humans?
The art of creating machines that perform functions that require intelligence when
performed by people.
The study of how to make computers do things at which, at the moment, people are
better.
4. Define AI as Systems that act rationally?
Computational intelligence is the study of design of intelligent agents.
Artificial intelligence is concerned with intelligent behavior in artifacts.
5. Explain the Turing test approach for acting humanly?
Turing defined intelligent behavior as the ability to achieve human-level performance in
all cognitive tasks, sufficient to fool an interrogator.
The test he proposed is that the computer should be interrogated by a human via a
teletype, and passes the test if the interrogator cannot tell if there is a computer or a human at the
other end.
6. What are the things the computer needs to act as human?
The computer would need to possess the following capabilities:
natural language processing to enable it to communicate successfully in English (or
some other human language).
knowledge representation to store information provided before or during the
interrogation.
automated reasoning to use the stored information to answer questions and to draw new
conclusions.
machine learning to adapt to new circumstances and to detect and extrapolate patterns.
7. To pass the total Turing Test, the computer needs what?
To pass the total Turing Test, the computer will also need:
computer vision to perceive objects, and
robotics to move them about.
8. Explain the cognitive modeling approach for thinking humanly?
If we say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are two ways to do this: through introspection (trying to catch our own thoughts as they
go by) or through psychological experiments.
9. Explain the laws of thought approach for Thinking rationally?
The Greek philosopher Aristotle was one of the first to attempt to codify "right
thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for
argument structures that always gave correct conclusions given correct premises. For example,
"Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were
supposed to govern the operation of the mind, and initiated the field of logic.
10. Explain the rational agent approach for acting rationally?
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An
agent is just something that perceives and acts. (This may be an unusual use of the word, but you
will get used to it.) In this approach, AI is viewed as the study and construction of rational
agents.
11.Give the advantages when we study AI as rational agent design?
o it is more general than the "laws of thought" approach, because correct inference
is only a useful mechanism for achieving rationality, and not a necessary one.
o it is more amenable to scientific development than approaches based on human
behavior or human thought, because the standard of rationality is clearly defined
and completely general.
12. What are the fields from which AI inherits ideas?
AI inherits ideas from
Psychology
Linguistics
Philosophy
Mathematics
Computer Science
Economics
Neuroscience
Control theory and cybernetics
13. What is the first successful knowledge-intensive system? Explain.
The first successful knowledge-intensive system is DENDRAL.
DENDRAL determines the molecular structure of complex chemical compounds.
14. What is the first successful rule-based expert system? Explain.
The first successful rule-based expert system is MYCIN.
MYCIN is used to diagnose blood infections.
15. Give the applications of AI?
The areas where AI can be applied are
Autonomous planning and scheduling
Game playing
Autonomous control
Diagnosis
Logistic planning
Robotics
16.Define an Intelligent agent?
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
Eg: vacuum cleaner
17. Give the sensors and actuators for human, robotics and software agent?
Robotic agent:
Sensors: infrared sensors, cameras
Actuators: motors
Software agent:
Sensors: keystrokes on the keyboard
Actuators: output displayed on the screen
Human agent:
Sensors: eyes, ears, and other organs
Actuators: other body parts
18. How does an agent interact with its environment through sensors and actuators?
The agent receives percepts from the environment through its sensors and performs
actions on the environment through its actuators.
19. Define an ideal rational agent?
For each possible percept sequence, an ideal rational agent should do whatever action
is expected to maximize its performance measure, on the basis of the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
20. What are the four things a rational agent depends on?
The rational agent depends on four things:
The performance measure that defines the degree of success. (P)
What the agent knows about the environment. (E)
The actions that the agent can perform. (A)
Everything that the agent has perceived so far; this complete perceptual history is called
the percept sequence. (S)
21. What is an agent function?
The agent function maps any given percept sequence to an action; it is implemented by an
agent program running on the agent architecture.
Agent = agent architecture + agent program
22. Explain static vs. dynamic environments?
If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal
with because the agent need not keep looking at the world while it is deciding on an action, nor
need it worry about the passage of time. If the environment does not change with the passage of
time but the agent's performance score does, then we say the environment is semidynamic.
27. Explain discrete vs. continuous environments?
If there are a limited number of distinct, clearly defined percepts and actions, we
say that the environment is discrete. Chess is discrete: there are a fixed number of possible
moves on each turn. Taxi driving is continuous: the speed and location of the taxi and the other
vehicles sweep through a range of continuous values.
28. What are the four types of agent programs?
The four types of agent programs are:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
29. What is a simple reflex agent?
It is the simplest kind of agent. These agents select actions on the basis of the current
percept, ignoring the rest of the percept history.
Eg: vacuum agent
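The vacuum example can be sketched in a few lines of Python. This is a minimal illustration for an assumed two-square world (locations "A" and "B", statuses "Clean"/"Dirty" are names invented for this sketch); the point is that the chosen action depends only on the current percept, never on percept history.

```python
# Simple reflex vacuum agent: the action is a function of the current
# percept (location, status) alone -- no memory of past percepts.
def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("A", "Clean"))  # Right
```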
30. What is a model-based agent?
The most effective way to handle partial observability is for the agent to keep track
of the part of the world it can't see now. That is, the agent should maintain some sort of internal
state that depends on the percept history and thereby reflects at least some of the unobserved
aspects of the current state.
31. What is a goal-based agent?
Knowing about the current state of the environment is not always enough to decide
what to do. For example, at a road junction, the taxi can turn left, right, or go straight on. The
right decision depends on where the taxi is trying to get to. In other words, as well as a current
state description, the agent needs some sort of goal information, which describes situations that
are desirable for example, being at the passenger's destination.
32. What is a utility-based agent?
Goals alone are not really enough to generate high-quality behavior. For example, there
are many action sequences that will get the taxi to its destination, thereby achieving the goal, but
some are quicker, safer, more reliable, or cheaper than others.
The customary terminology is to say that if one world state is preferred to another,
then it has higher utility for the agent.
Utility is therefore a function that maps a state onto a real number, which describes
the associated degree of happiness.
33. What is the advantage of a learning agent?
It allows the agent to operate in an initially unknown environment and to become more
competent than its initial knowledge alone might allow.
34. What are the four components of a learning agent?
The four components of learning agent are
Learning element
Performance element
Critic
Problem generator
35. Explain the components of a learning agent?
Learning element: responsible for making improvements to the agent.
Performance element: selects what action to take for a particular percept.
Problem generator: responsible for suggesting exploratory actions to the learning element,
so that the agent can gain new and informative experiences.
Critic: gives feedback on how the agent is doing, so that the learning element can improve
the performance element.
41. Define breadth-first search.
Breadth-first search is a simple strategy in which the root node is expanded first, then
all the successors of the root node are expanded next, then their successors, and so on. In
general, all the nodes at a given depth in the search tree are expanded before any nodes at the
next level are expanded.
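As a rough illustration (the example graph and function names are invented for this sketch), breadth-first search can be written with a FIFO queue:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand the shallowest unexpanded node first; returns a path or None."""
    frontier = deque([[start]])   # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in successors(node):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None

# Tiny invented graph: A's successors are B and C, etc.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search("A", "D", graph.get))  # ['A', 'B', 'D']
```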
42. What is uniform cost search?
Breadth-first search is optimal when all step costs are equal, because it always
expands the shallowest unexpanded node. By a simple extension, we can find an algorithm
that is optimal with any step cost function. Instead of expanding the shallowest node,
uniform-cost search expands the node n with the lowest path cost. Note that if all step costs
are equal, this is identical to breadth-first search.
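A minimal sketch of uniform-cost search using a priority queue ordered by path cost g(n). The graph and step costs are invented to show that the cheapest path, not the shallowest one, is returned:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]   # (path cost, node, path)
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for child, step in successors(node):
            new_cost = cost + step
            if new_cost < best.get(child, float("inf")):
                best[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

# Cheapest A->D route is A-C-D (cost 3), not the shallower-looking A-B-D (cost 5).
graph = {"A": [("B", 1), ("C", 2)], "B": [("D", 4)], "C": [("D", 1)], "D": []}
print(uniform_cost_search("A", "D", graph.get))  # (3, ['A', 'C', 'D'])
```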
43.Define depth first search.
Depth-first search always expands the deepest node in the current fringe of the search
tree. The search proceeds immediately to the deepest level of the search tree, where the nodes
have no successors. As those nodes are expanded, they are dropped from the fringe, so then
the search "backs up" to the next shallowest node that still has unexplored successors.
44.Define depth limited search.
The problem of unbounded trees can be alleviated by supplying depth-first search
with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no
successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem.
45.Define iterative deepening depth first search.
Iterative deepening search (or iterative deepening depth-first search) is a general strategy,
often used in combination with depth-first search, that finds the best depth limit. It does this
by gradually increasing the limit-first 0, then 1, then 2, and so on-until a goal is found. This
will occur when the depth limit reaches d, the depth of the shallowest goal node.
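The strategy can be sketched as depth-limited search called with limits 0, 1, 2, ... (the tree-shaped example graph is invented for this sketch):

```python
def depth_limited(node, goal, successors, limit):
    """Depth-first search that treats nodes at the depth limit as leaves."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        result = depth_limited(child, goal, successors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(start, goal, successors, max_depth=20):
    # Gradually increase the limit: 0, 1, 2, ... until a goal is found.
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(iterative_deepening("A", "D", graph.get))  # ['A', 'B', 'D']
```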
46.What is bidirectional search?
The idea behind bidirectional search is to run two simultaneous searches-one forward
from the initial state and the other backward from the goal, stopping when the two searches
meet in the middle.
47. Compare all uninformed searches.

Criterion      Breadth-first   Uniform-cost    Depth-first  Depth-limited  Iterative deepening  Bidirectional
Completeness   Yes             Yes             No           No             Yes                  Yes
Optimality     Yes             Yes             No           No             Yes                  Yes
Time           O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)       O(b^l)         O(b^d)               O(b^(d/2))
Space          O(b^(d+1))      O(b^⌈C*/ε⌉)     O(bm)        O(bl)          O(bd)                O(b^(d/2))

(b: branching factor; d: depth of the shallowest goal; m: maximum depth of the search tree; l: depth limit; C*: cost of the optimal solution; ε: minimum step cost.)
Unit II
1. What is a knowledge based agent?
The central component of a knowledge-based agent is its knowledge base, or KB.
Informally, a knowledge base is a set of representations of facts about the world. Each individual
representation is called a sentence. The sentences are expressed in a language called a
knowledge representation language.
2. What are the three levels of KB?
The knowledge level or epistemological level
The logical level
The implementation level
3. Describe Wumpus world?
The Wumpus world is a grid of squares surrounded by walls, where each
Square can contain agents and objects. The agent always starts in the lower left corner, a square
that we will label [1, 1]. The agent's task is to find the gold, return to [1, 1] and climb out of the
cave.
11. Why is first-order logic used as a knowledge representation language?
First-order logic seems to be able to capture a good deal of what we know about the
world, and has been studied for about a hundred years.
12. State Rules of inference for propositional logic?
The process by which the soundness of an inference is established through truth tables
can be extended to entire classes of inferences. There are certain patterns of inferences that occur
over and over again, and their soundness can be shown once and for all. Then the pattern can be
captured in what is called an inference rule.
13. What are models?
Any world in which a sentence is true under a particular interpretation is called a model
of that sentence under that interpretation.
14. Define Monotonicity?
The use of inference rules to draw conclusions from a knowledge base relies implicitly on
a general property of certain logics (including prepositional and first-order logic) called
monotonicity.
15. Define Horn Sentences?
There is also a useful class of sentences for which a polynomial-time inference procedure
exists. This is the class called Horn sentences. A Horn sentence has the form:
P1 ∧ P2 ∧ ... ∧ Pn ⇒ Q
where the Pi and Q are non-negated atoms.
16. Define Terms?
A term is a logical expression that refers to an object.
17. Define Atomic Sentences?
An atomic sentence is formed from a predicate symbol followed by a parenthesized list of
terms. An atomic sentence is true if the relation referred to by the predicate symbol holds
between the objects referred to by the arguments.
18. Define Complex Sentences?
Use logical connectives to construct more complex sentences, just as in propositional
calculus. The semantics of sentences formed using logical connectives is identical to that in the
propositional case.
19. Define Ground Term?
A term with no variables is called a ground term.
20. Define Situation Calculus?
Situation calculus is the name for a particular way of describing change in first-order
logic. It conceives of the world as consisting of a sequence of situations, each of which is a
"snapshot" of the state of the world. Situations are generated from previous situations by actions.
21. What are the inference rules involving quantifiers?
Universal Elimination
Existential Elimination
Existential Introduction
22. What is Universal Elimination?
Universal Elimination: For any sentence α, variable v, and ground term g:
∀v α ⊢ SUBST({v/g}, α)
For example, from ∀x Likes(x, IceCream), we can use the substitution {x/Ben} and infer
Likes(Ben, IceCream).
23. What is Existential Elimination?
Existential Elimination: For any sentence α, variable v, and constant symbol k that does
not appear elsewhere in the knowledge base:
∃v α ⊢ SUBST({v/k}, α)
For example, from ∃x Kill(x, Victim), we can infer Kill(Murderer, Victim), as long as
Murderer does not appear elsewhere in the knowledge base.
24. What is Existential Introduction?
Existential Introduction: For any sentence α, variable v that does not occur in α, and
ground term g that does occur in α:
α ⊢ ∃v SUBST({g/v}, α)
25. What is Unification?
Unification: The job of the unification routine, UNIFY, is to take two atomic sentences p and q
and return a substitution that would make p and q look the same. (If there is no such substitution,
then UNIFY should return fail.) Formally,
UNIFY(p, q) = θ where SUBST(θ, p) = SUBST(θ, q)
θ is called the unifier of the two sentences.
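A simplified illustration of UNIFY in Python. The term representation (nested tuples, with variables as strings prefixed by "?") is an assumption of this sketch, and the occurs-check is omitted for brevity:

```python
def unify(p, q, theta=None):
    """Return a substitution dict making p and q identical, or None (fail)."""
    if theta is None:
        theta = {}
    if p == q:
        return theta
    if isinstance(p, str) and p.startswith("?"):
        return unify_var(p, q, theta)
    if isinstance(q, str) and q.startswith("?"):
        return unify_var(q, p, theta)
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for pi, qi in zip(p, q):          # unify argument by argument
            theta = unify(pi, qi, theta)
            if theta is None:
                return None
        return theta
    return None  # mismatched constants or predicates: fail

def unify_var(var, term, theta):
    if var in theta:                      # variable already bound: follow it
        return unify(theta[var], term, theta)
    new_theta = dict(theta)
    new_theta[var] = term
    return new_theta

print(unify(("Likes", "?x", "IceCream"), ("Likes", "Ben", "IceCream")))
# {'?x': 'Ben'}
```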
26. What is forward chaining?
Start with the sentences in the knowledge base and generate new conclusions that in turn
can allow more inferences to be made. This is called forward chaining. Forward chaining is
usually used when a new fact is added to the database and we want to generate its consequences.
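A minimal propositional Horn-clause version of forward chaining (the rule format, a pair of premise-set and conclusion, is an assumption of this sketch): each rule fires once all its premises are known facts, and a newly derived fact may enable further rules.

```python
def forward_chain(rules, facts):
    """rules: list of (premises_set, conclusion). Returns all derivable facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # fire the rule
                changed = True          # new fact may enable other rules
    return facts

rules = [({"P1", "P2"}, "Q"), ({"Q"}, "R")]
print(sorted(forward_chain(rules, {"P1", "P2"})))  # ['P1', 'P2', 'Q', 'R']
```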
27. What is backward chaining?
Start with something we want to prove, find implication sentences that would allow us to
conclude it, and then attempt to establish their premises in turn. This is called backward
chaining, because it uses Modus Ponens backwards. Backward chaining is normally used when
there is a goal to be proved.
28. Define Renaming?
One sentence is a renaming of another if they are identical except for the names of the
variables. For example, Likes(x, IceCream) and Likes(y, IceCream) are renamings of each other
because they differ only in the choice of x or y, but Likes(x, x) and Likes(x, y) are not
renamings of each other.
29. What is Semantic Net?
A Semantic net is a representation of nodes and links. It describes how the representation
is done in Artificial Intelligence.
30. Give the four fundamental parts of Semantic Net?
The four fundamental parts of Semantic net are
Lexical part
Structural part
Procedural part
Semantic part
31. What is Describe and Match method?
The basic idea is that you can identify an object by describing it and then searching for a
matching description in a description library.
32. What do you mean by Frames?
Each node, together with the links that emanate from it, can be collected into a Frame.
A semantic net can be viewed either as a collection of nodes and links or as a collection of frames.
33. What do you mean by Chunk?
Each chunk is represented as a frame with slot and slot values.
34. What do you mean by Instances or Instance frames?
Frames which describe individual things are called Instances (or) Instance frames.
35. What do you mean by Classes (or) Class frames?
Frames which describe entire classes are called Classes (or) Class frames.
36. What do you mean by Access Procedures?
Access Procedures are required to construct and manipulate instances and classes of the
frame representation which include
Class constructor
Instance constructor
Slot Writer
Slot Reader
37. What is CPL?
CPL Class Precedence List
An ordered list of the class hierarchy, which has only one ako link between classes and
only one is-a link between an instance and its class.
47. What is Location?
Location specifies where an action occurs.
Ex: John went to the library.
Location: Library
48. What do you mean by Time?
Time specifies when an action occurs.
Ex: John went to the library before noon.
Time: Noon
49. What is Duration?
Duration specifies how long an action occurs.
Ex: John travels for 3 hours in a train.
Duration: 3 hours
50. What is forward chaining?
Forward chaining is the process of moving from the if pattern to the then pattern.
51. What is Backward Chaining?
Backward chaining is the process of moving from the then pattern to the if pattern.
52. What are Ontological Commitments?
Ontological commitments have to do with the nature of reality. For example,
Propositional logic assumes that there are facts that either hold or do not in the world. Each fact
can be in one of two states: true or false. First-order logic assumes more: namely, that the world
consists of objects with certain relations between them that do or do not hold.
53. What are Epistemological commitments?
Epistemological commitments have to do with the possible states of knowledge an agent can
have using various types of logic. In both propositional and first-order logic, a sentence
represents a fact, and the agent either believes the sentence to be true, believes it to be false, or is
unable to conclude either way. These logics therefore have three possible states of belief
regarding any sentence.
54. What is Higher-order logic?
Higher-order logic allows us to quantify over relations and functions as well as over
objects. For example, in higher-order logic we can say that two objects are equal if and only if
all properties applied to them are equivalent:
∀x,y (x = y) ⇔ (∀p p(x) ⇔ p(y))
Or we could say that two functions are equal if and only if they have the same value for all
arguments:
∀f,g (f = g) ⇔ (∀x f(x) = g(x))
UNIT III
1. State the conditions under which uncertainty can arise?
Ans: Uncertainty can arise because of incompleteness and incorrectness in the agent's
understanding of the properties of the environment.
2. What are the three reasons why FOL fails in medical diagnosis?
Ans: The three reasons why FOL fails in medical diagnosis are:
(i) Laziness: Too much work to list the complete set of antecedents and consequents
needed.
(ii) Theoretical ignorance: Medical science has no complete theory for the domain.
(iii) Practical ignorance: Even if we know all the rules, uncertainty arises because some
tests cannot be run on the patient's body.
3. What is the tool that is used to deal with degree of belief?
Ans: The tool that is used to deal with the degree of belief is the probability theory which assigns
a numeric degree of belief between 0 and 1.
4. What is utility theory useful for, and how is it related to decision theory?
Ans: Utility theory is useful to represent and reason with preferences.
Decision theory is related to utility theory as the combination of probability theory and
utility theory: Decision theory = probability theory + utility theory.
5. What is the fundamental idea of the decision theory?
Ans: The fundamental idea of the decision theory is that an agent is rational if and only if it
chooses the action that yields the highest expected utility .
6. How are probabilities over simple and complex propositions classified?
Ans: Probabilities over simple and complex propositions are classified as
(i) Unconditional or prior probabilities.
(ii) Conditional or posterior probabilities.
7. State the axioms of probability.
Ans: The axioms are:
(i) All probabilities are between 0 and 1: 0 <= P(A) <= 1.
(ii) Necessarily true propositions have probability 1 and necessarily false propositions
have probability 0: P(True) = 1, P(False) = 0.
(iii) The probability of a disjunction is given by
P(A ∨ B) = P(A) + P(B) - P(A ∧ B)
From these, P(¬A) = 1 - P(A) can be derived:
(iv) Let B = ¬A in axiom (iii): P(A ∨ ¬A) = P(A) + P(¬A) - P(A ∧ ¬A)
(v) P(True) = P(A) + P(¬A) - P(False) (by logical equivalence)
(vi) 1 = P(A) + P(¬A) (by axiom (ii))
(vii) P(¬A) = 1 - P(A) (by algebra)
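The axioms can be checked numerically on a small joint distribution over two propositions A and B (the four probabilities below are invented for illustration):

```python
# Invented joint distribution over A and B ("na" = not A, "nb" = not B).
P = {("a", "b"): 0.30, ("a", "nb"): 0.20, ("na", "b"): 0.10, ("na", "nb"): 0.40}

P_A = P[("a", "b")] + P[("a", "nb")]      # P(A) = 0.5
P_B = P[("a", "b")] + P[("na", "b")]      # P(B) = 0.4
P_A_and_B = P[("a", "b")]                 # P(A and B) = 0.3
P_A_or_B = P_A + P_B - P_A_and_B          # axiom (iii): inclusion-exclusion
P_not_A = 1 - P_A                         # derived rule (vii)
print(round(P_A_or_B, 2), round(P_not_A, 2))  # 0.6 0.5
```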
8. What is a Joint Probability Distribution?
Ans: An agent's probability assignments to all propositions in the domain (both simple and
complex) is called the Joint Probability Distribution.
9. What is the disadvantage of Bayes' rule?
Ans: It requires three terms, P(A|B), P(B) and P(A), to compute one conditional probability P(B|A).
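Bayes' rule, P(B|A) = P(A|B) P(B) / P(A), needs exactly those three terms. A small numeric illustration (the disease-test numbers are invented for this sketch):

```python
def bayes(p_a_given_b, p_b, p_a):
    """Bayes' rule: P(B|A) = P(A|B) * P(B) / P(A)."""
    return p_a_given_b * p_b / p_a

# Invented test: sensitivity P(pos|disease) = 0.9, prior P(disease) = 0.01,
# false-positive rate 0.05. P(pos) comes from the law of total probability.
p_pos = 0.9 * 0.01 + 0.05 * 0.99
print(round(bayes(0.9, 0.01, p_pos), 4))  # 0.1538
```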
18. What makes up the nodes of a Bayesian network and how are they connected?
Ans: A set of random variables makes up the nodes of the network. Variables may be discrete or
continuous, and a set of directed links or arrows connects pairs of nodes.
19.What is Conditional Probability Table(CPT)?
Ans: The table representation of the Bayesian network is called the Conditional probability
table.
20.What is Conditioning Case?
Ans: A Conditioning case is just a possible combination of values for the parent nodes.
21.What are the Semantics of Bayesian Network?
Ans: The Semantics can be represented as the
1.Global Semantics(Full joint probability distribution)
2.Local Semantics(Conditional independent statements)
22. State the way of representing the full joint distribution.
Ans: A generic entry in the joint distribution is the probability of a conjunction of particular
assignments to each variable, such as
P(X1 = x1 ∧ ... ∧ Xn = xn), abbreviated P(x1, ..., xn), which a Bayesian network
represents as the product ∏i P(xi | parents(Xi)).
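For a two-node Bayesian network A -> B, a generic joint entry factors as P(a, b) = P(a) P(b|a). This can be checked in a few lines (the CPT numbers below are invented):

```python
# Invented CPTs for a two-node Bayesian network A -> B.
P_A = {True: 0.2, False: 0.8}
P_B_given_A = {True:  {True: 0.9, False: 0.1},
               False: {True: 0.3, False: 0.7}}

def joint(a, b):
    # Generic joint entry: P(a, b) = P(a) * P(b | a)
    return P_A[a] * P_B_given_A[a][b]

# The four joint entries must sum to 1, as a full joint distribution must.
total = sum(joint(a, b) for a in (True, False) for b in (True, False))
print(round(joint(True, True), 2), round(total, 6))  # 0.18 1.0
```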
23. When is a Bayesian network said to be compact?
Ans: A Bayesian network is compact when the information is complete and non-redundant.
The compactness of a Bayesian network is an example of a very general property of locally
structured systems.
24. Explain node ordering.
Ans: The correct order in which to add nodes is to add the root causes first, then the variables
they influence, and so on, continuing the addition process until we reach the leaves, which have
no direct causal influence on the other variables.
25. What is Topological Semantics?
Ans: Topological semantics is specified by the graph structure, and from it we can derive the
numerical statements.
26. What is the specification for the Topological Semantics?
Ans: 1. A node is conditionally independent of its non-descendants, given its parents.
2. A node is conditionally independent of all other nodes in the network, given its parents,
children and children's parents; this set is called the Markov blanket.
27.What are the two ways to represent Canonical Distributions?
Ans:1.Deterministic nodes
2.Noisy-OR relation
28.What are Deterministic nodes?
Ans: It is used to represent
x=f(parents(x)) for some function.
29. Explain the Noisy-OR relationship.
Ans: It is the generalization of logical OR, and it is used to represent uncertain knowledge.
30. What are the four types of inferences?
Ans: 1. Diagnostic inferences
2. Causal inferences
3. Intercausal inferences
4. Mixed inferences
31.What are the three basic classes of algorithms for evaluating multiply connected
networks?
Ans: The three basic classes of algorithms for evaluating multiply connected networks are
clustering, conditioning methods and stochastic simulation methods.
32. What is done in clustering?
Ans: It transforms the network into a probabilistically equivalent polytree by merging offending
nodes.
33. What is a cutset?
Ans: A set of variables that can be instantiated to yield polytrees is called a cutset; i.e., this
method transforms the network into several simpler polytrees.
34. What is a utility function?
Ans: An agent's preferences between world states are captured by a utility function, which
assigns a single number to express the desirability of a state. Utilities are combined with the
outcome probabilities for actions to give an expected utility for each action.
35.What is the principle behind the maximum expected utility?
Ans: The principle of MEU(Maximum Expected Utility) is that a rational agent should choose an
action that maximizes the agent's expected utility.
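The MEU principle can be sketched in a few lines; the actions and their (probability, utility) outcome lists below are invented for illustration:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def meu_action(actions):
    """Choose the action with the maximum expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "highway":  [(0.8, 100), (0.2, -50)],  # EU = 0.8*100 + 0.2*(-50) = 70
    "backroad": [(1.0, 60)],               # EU = 60
}
print(meu_action(actions))  # highway
```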
36. Explain orderability.
Ans: Given two states, an agent should prefer one or the other, or else rate the two as equally
preferable; i.e., an agent should know what it wants: (A ≻ B) ∨ (B ≻ A) ∨ (A ∼ B).
37. What is meant by multiattribute utility theory-MAUT?
Ans: Problems in which outcomes are characterized by two or more attributes is known as
MAUT. The basic approach adopted in MAUT is to identify regularity in the preference
behavior, representation theorems to show that an agent with a certain kind of preference
structure has a utility function.
38. What are the two types of the dominance?
Ans: The two types of dominance are strict dominance and stochastic dominance.
39. What are the two roles of decision analysis?
Ans: The two roles of decision analysis are decision makers and decision analyst.
49. Differentiate between ATMS and JTMS.
Ans: ATMS:
A) An ATMS simply labels all the states that have been considered at the same time.
B) An ATMS keeps track, for each sentence, of which assumptions would cause that sentence
to be true.
JTMS:
A) A JTMS simply labels each sentence as being in or out.
B) The maintenance of justifications allows us to move quickly from one state to another by
making a few retractions and assertions, but only one state is represented at a time.
50.Define Markov Decision Process(MDP).
Ans:The specification of a sequential decision problem for a fully observable environment with
a Markovian Transition model and additive reward is called Markov Decision Process.
51. What are the 3 components that define an MDP?
Ans: The 3 components that define an MDP are:
Initial state S0
Transition model T(s, a, s′)
Reward function R(s)
52. What is a policy and how is it denoted?
Ans: A solution must specify what the agent should do for any state that the agent may reach.
This solution is referred to as a policy. It is denoted by π.
53. When are complex decisions made?
Ans: Complex decisions are made when the outcomes of an action are uncertain. Decisions are
taken in fully observable, partially observable and uncertain environments.
54. What do complex decisions deal with?
Ans: Complex decisions deal with sequential decision problems where the agent's utility depends
on a sequence of decisions.
55.Define transition model.
Ans:The specification of the outcome probabilities for each action in each possible state is called
a transition model.
56. What is meant by a Markovian transition?
Ans: When action a is done in state s, the probability of reaching state s′ is denoted by T(s, a, s′).
This is referred to as a Markovian transition because the probability of reaching s′ from s depends
only on s and not on the environment history.
57. What is an optimal policy and how is it denoted?
Ans: An optimal policy is a policy that yields the highest expected utility. It is denoted by π*.
58.Define proper policy.
Ans:A policy that is guaranteed to reach a terminal state is called a proper policy.
69. Define Pareto optimality.
Ans: An outcome is referred to as Pareto optimal if there is no other outcome that all players
would prefer.
An outcome is Pareto dominated by another if all players prefer the other outcome.
70.What are the 2 types of strategy ?
Ans: The 2 types of strategy are :
1. Pure strategy
2. Mixed strategy.
71.Define pure strategy.
Ans: A pure strategy is a deterministic policy specifying a particular action to take in each
situation.
72.Define mixed strategy.
Ans: A mixed strategy is a randomized policy that selects particular actions according to a
specific probability distribution over actions.
73. What are the components of a game in game theory?
Ans: The components of a game in game theory are:
1. Players or agents who will make decisions.
2. Actions that the players can choose.
3. A payoff matrix that gives the utility for each combination of actions by all the players.
74. What is a contraction?
Ans: A contraction is a function of one argument that, when applied to 2 different inputs,
produces 2 output values that are closer together, by some constant factor, than the original
values.
75. What are the properties of a contraction?
Ans: The properties of a contraction are:
A contraction has only one fixed point; if there were 2 fixed points, they would not
get closer together when the function was applied, so it would not be a contraction.
When the function is applied to any argument, the value must get closer to the fixed
point, and hence repeated application of a contraction always reaches the fixed point.
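This property can be demonstrated numerically. The function f(x) = 0.5x + 1 is an invented example of a contraction (factor 0.5), and its unique fixed point is x = 2:

```python
def iterate(f, x=0.0, n=60):
    # Repeatedly apply f; for a contraction this converges to the fixed point.
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: 0.5 * x + 1          # |f(a) - f(b)| = 0.5 * |a - b|
print(round(iterate(f), 6))        # 2.0
```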
Unit-IV
1. What is planning in AI?
Planning is done by a planning agent, or planner, to solve the current world problem.
It uses certain algorithms to implement solutions for the given problem; such a planner is called
an ideal planner.
2.How does a planner implement the solution?
Unify action and goal representation to allow selection
(use logical language for both)
Divide-and-conquer by subgoaling
Relax requirement for sequential construction of solutions
3. What is the difference between a planner and a problem solving agent?
Problem solving agent: it represents the task of the problem and solves it using search
techniques and other algorithms.
Planner: it overcomes the difficulties that arise in the problem solving agent.
4. What are the effects of non-planning?
An infinite branching factor in the case of many tasks.
Difficulty in choosing a heuristic function, since the agent must work in the same sequence/order.
5. What are the key ideas of the planning approach?
The 3 key ideas of the planning approach are:
Open up the representation of states, goals, and actions using first-order logic (FOL).
The planner is free to add actions to the plan whenever and wherever they are needed, rather
than building an incremental sequence.
Most parts of the world are independent of the other parts.
6. State the 3 components of operators in the representation of planning.
Action description
Precondition
Effect
7. What is STRIPS?
STRIPS stands for the STanford Research Institute Problem Solver. Its features are:
Tidily arranged action descriptions
A restricted language (function-free literals)
Efficient algorithms
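A STRIPS-style operator with its three components (action description, precondition, and effect as add/delete lists of function-free literals) can be sketched as follows; the class and the Go action are illustrative assumptions, not part of the syllabus.

```python
# A minimal sketch of a STRIPS-style operator over string literals.

class Action:
    def __init__(self, name, precond, add, delete):
        self.name = name              # action description
        self.precond = set(precond)   # literals that must hold beforehand
        self.add = set(add)           # literals made true by the action
        self.delete = set(delete)     # literals made false by the action

    def applicable(self, state):
        """The action may be taken only if its precondition holds."""
        return self.precond <= state

    def apply(self, state):
        """The effect: remove the delete list, then add the add list."""
        return (state - self.delete) | self.add

# Assumed example: moving between two rooms.
go = Action("Go(A,B)", precond={"At(A)"}, add={"At(B)"}, delete={"At(A)"})
```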
8. What is a situation space planner?
The planner takes the situation and searches for it in the KB. If the situation is already
present, its plan is reused; otherwise a new plan is made for the situation and executed.
9. What are the drawbacks of the situation space planner?
High branching factor during searching
Huge search space
10. What are the planning algorithms for searching a world space?
There are two algorithms:
Progression: an algorithm that searches for the goal state by searching through the states
generated by actions that can be performed in the given state, starting from the initial state.
Regression: an algorithm that searches backward from the goal state by finding actions
whose effects satisfy one or more of the posted goals, and posting the chosen action's
preconditions as goals (goal regression).
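The progression algorithm can be sketched as a breadth-first forward search from the initial state; the action tuples and literals below are illustrative assumptions.

```python
# A minimal sketch of progression planning: search forward through the
# states generated by applicable STRIPS-style actions, given as
# (name, preconditions, add list, delete list) tuples.

from collections import deque

def progression_plan(initial, goal, actions):
    """Return a list of action names reaching a state containing `goal`."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                  # all goal literals satisfied
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:               # action applicable in this state
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                            # no plan exists

actions = [
    ("Go(A,B)", {"At(A)"}, {"At(B)"}, {"At(A)"}),
    ("Pick(Key,B)", {"At(B)"}, {"Have(Key)"}, set()),
]
```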
11. What is a partial plan?
A partial plan is an incomplete plan, which may be produced during the initial phase of planning.
There are 2 main operations allowed in planning:
Refinement operator
Modification operator
(Partial) Plan consists of
29. Explain the terms positive example, negative example and training set.
An example is described by the values of the attributes and the value of the goal
predicate. We call the value of the goal predicate the classification of the example. If the goal
predicate is true for some example, we call it a positive example; otherwise we call it a negative
example. Consider a set of examples X1, ..., X12 for the restaurant domain. The positive examples
are ones where the goal WillWait is true (X1, X3, ...) and the negative examples are ones where it
is false (X2, X5, ...). The complete set of examples is called the training set.
30. Explain the methodology used for assessing the performance of a learning algorithm.
1. Collect a large set of examples.
2. Divide it into two disjoint sets: the training set and the test set.
3. Use the learning algorithm with the training set as examples to generate a hypothesis H.
4. Measure the percentage of examples in the test set that are correctly classified by H.
5. Repeat steps 1 to 4 for different sizes of training sets and different randomly selected
training sets of each size.
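The steps above can be sketched as a simple holdout evaluation; the majority-class learner below is a stand-in assumption for any real learning algorithm.

```python
# A minimal sketch of holdout evaluation: split the examples, learn a
# hypothesis H from the training set, measure accuracy on the test set.

import random

def majority_learner(train):
    """'Learn' by always predicting the most common training label."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

def holdout_accuracy(examples, learner, train_fraction=0.7, seed=0):
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)                      # random split
    cut = int(len(shuffled) * train_fraction)
    train, test = shuffled[:cut], shuffled[cut:]
    h = learner(train)                         # step 3: hypothesis H
    correct = sum(h(x) == y for x, y in test)
    return correct / len(test)                 # step 4: test accuracy
```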
31. What problems arise when attribute values are missing from examples?
In many domains, not all the attribute values will be known for every example. The values may
not have been recorded, or they may be too expensive to obtain. This gives rise to two problems.
First, given a complete decision tree, how should one classify an object that is missing one of the
test attributes? Second, how should one modify the information gain formula when some
examples have unknown values for the attribute?
37. What are multivalued attributes?
When an attribute has a large number of possible values, the information gain measure gives an
inappropriate indication of the attribute's usefulness. Consider the extreme case where every
example has a different value for the attribute, for instance if we were to use an attribute
RestaurantName in the restaurant domain. In such a case, each subset of examples is a singleton
and therefore has a unique classification, so the information gain measure would have its highest
value for this attribute. However, the attribute may still be irrelevant or useless; the gain
ratio measure is used to compensate for this.
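The singleton-subset effect described above can be seen in a small information-gain computation; the data and attribute names are assumed examples.

```python
# A minimal sketch showing why a many-valued attribute like RestaurantName
# gets a deceptively high information gain: each singleton subset is pure,
# so the gain equals the full entropy of the example set.

from math import log2

def entropy(labels):
    n = len(labels)
    probs = [labels.count(v) / n for v in set(labels)]
    return -sum(p * log2(p) for p in probs)

def information_gain(examples, attr):
    """examples: list of (attribute_dict, label) pairs."""
    labels = [y for _, y in examples]
    remainder = 0.0
    for v in set(x[attr] for x, _ in examples):
        subset = [y for x, y in examples if x[attr] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

# Four examples: every "Name" is unique, "Price" takes two values.
data = [({"Name": f"r{i}", "Price": "$" if i < 2 else "$$"}, i % 2 == 0)
        for i in range(4)]
```

Here the unique "Name" attribute attains the maximum possible gain even though it is useless for prediction, which is exactly the problem the gain ratio corrects.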
38. What is a continuous valued attribute?
Attributes such as Height and Weight have a large or infinite set of possible values. They are
therefore not well-suited for decision-tree learning in raw form. An obvious way to deal with this
problem is to discretize the attribute. For example, the Price attribute for restaurants was
discretized into $, $$, and $$$ values. Normally, such discrete ranges would be defined by hand.
A better approach is to preprocess the raw attribute values during the tree-growing process in
order to find out which ranges give the most useful information for classification purposes.
39. What are version space methods?
Although version space methods are probably not practical in most real-world learning problems,
mainly because of noise, they provide a good deal of insight into the logical structure of
hypothesis space.
40. What is PAC learning?
Any hypothesis that is seriously wrong will almost certainly be "found out" with high
probability after a small number of examples, because it will make an incorrect prediction. Thus,
any hypothesis that is consistent with a sufficiently large set of training examples is unlikely to
be seriously wrong; that is, it must be Probably Approximately Correct (PAC). PAC learning is the
subfield of computational learning theory that is devoted to this idea.
41. What is the difference between the learning element and the performance element?
Learning agents can be divided conceptually into a performance element, which is
responsible for selecting actions, and a learning element, which is responsible for modifying the
performance element.
42. Give a function for decision list learning.
function DECISION-LIST-LEARNING(examples) returns a decision list, or failure
if examples is empty then return the trivial decision list No
t ← a test that matches a nonempty subset examples_t of examples,
such that the members of examples_t are all positive or all negative
if there is no such t then return failure
if the examples in examples_t are positive then o ← Yes
else o ← No
return a decision list with initial test t and outcome o, and remaining elements
given by DECISION-LIST-LEARNING(examples − examples_t)
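A runnable Python sketch of the decision-list learner; representing candidate tests as named predicate functions is an assumption of this sketch, not part of the original pseudocode.

```python
# A minimal sketch of DECISION-LIST-LEARNING: repeatedly pick a test whose
# matched subset is uniformly positive or negative, emit (test, outcome),
# and recurse on the remaining examples.

def decision_list_learning(examples, tests):
    """examples: list of (attribute_dict, bool); tests: list of (name, fn)."""
    if not examples:
        return []                          # trivial decision list: No
    for name, t in tests:
        matched = [(x, y) for x, y in examples if t(x)]
        labels = {y for _, y in matched}
        if matched and len(labels) == 1:   # nonempty and uniformly labelled
            outcome = labels.pop()
            rest = [(x, y) for x, y in examples if not t(x)]
            tail = decision_list_learning(rest, tests)
            if tail is None:
                continue                   # this test led nowhere; try another
            return [(name, outcome)] + tail
    return None                            # failure: no suitable test

def classify(dlist, x, tests):
    """Walk the decision list; default outcome is No (False)."""
    table = dict(tests)
    for name, outcome in dlist:
        if table[name](x):
            return outcome
    return False
```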
UNIT V
1. Define expert system.
An expert system is a computer program that simulates the thought process of a human expert to
solve complex decision problems in a specific domain.
An expert system is an interactive computer-based decision tool that uses both facts and
heuristics to solve difficult decision problems based on knowledge acquired from an expert.
2. What are applications of expert systems?
Interpreting and identifying
Predicting
Diagnosing
Designing
Planning
Monitoring
Debugging and testing
Instructing and training
Controlling
3. What are the distinguishing characteristics of programming languages needed for expert
systems work?
Efficient mix of integer and real variables
Good memory-management procedures
Extensive data-manipulation routines
Incremental compilation
Tagged memory architecture
Optimization of the systems environment
Efficient search procedures
4. How are expert systems organized?
1. Knowledge base consists of problem-solving rules, procedures, and intrinsic data relevant to
the problem domain.
2. Working memory refers to task-specific data for the problem under consideration.
3. Inference engine is a generic control mechanism that applies the axiomatic knowledge in the
knowledge base to the task-specific data to arrive at some solution or conclusion.
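The three parts can be sketched with a simple forward-chaining inference engine; the rules and facts below are assumed examples, not from any particular expert system.

```python
# A minimal sketch of the three parts of an expert system: a knowledge base
# of rules, a working memory of task-specific facts, and an inference engine
# that applies the rules to the facts (here by simple forward chaining).

knowledge_base = [                        # 1. problem-solving rules
    ({"fever", "cough"}, "flu"),
    ({"flu"}, "rest_recommended"),
]
working_memory = {"fever", "cough"}       # 2. task-specific data

def inference_engine(rules, facts):       # 3. generic control mechanism
    """Fire every rule whose conditions hold, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```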
5. What are the Needs for Expert Systems?
1. Human expertise is very scarce.
2. Humans get tired from physical or mental workload.
3. Humans forget crucial details of a problem.
4. Humans are inconsistent in their day-to-day decisions.
Backward chaining is the reverse of forward chaining. It is used to backtrack from a goal to the
paths that lead to the goal. Backward chaining is very good when all outcomes are known and
the number of possible outcomes is not large. In this case, a goal is specified and the expert
system tries to determine what conditions are needed to arrive at the specified goal. Backward
chaining is thus also called goal-driven.
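Goal-driven (backward) chaining can be sketched as follows; the rules and facts are assumed examples.

```python
# A minimal sketch of backward chaining: start from the goal and recursively
# check whether its conditions can be established from known facts via rules.

rules = [
    ({"fever", "cough"}, "flu"),
    ({"flu"}, "rest_recommended"),
]
facts = {"fever", "cough"}

def backward_chain(goal, rules, facts, seen=None):
    """Return True if `goal` is a known fact or derivable from the rules."""
    seen = set() if seen is None else seen
    if goal in facts:
        return True
    if goal in seen:                  # avoid infinite regress on cyclic rules
        return False
    seen.add(goal)
    return any(
        all(backward_chain(c, rules, facts, seen) for c in conditions)
        for conditions, conclusion in rules
        if conclusion == goal
    )
```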
12. Write short notes on Data Uncertainties
Expert systems are capable of working with inexact data. An expert system allows the user to
assign probabilities, certainty factors, or confidence levels to any or all input data. This feature
closely represents how most problems are handled in the real world. An expert system can take
all relevant factors into account and make a recommendation based on the best possible solution
rather than the only exact solution.
13.What is robotics?
"A reprogrammable, multifunctional manipulator designed to move material, parts, tools,
or specialized devices through various programmed motions for the performance of a variety of
tasks".
14. How does a robot perceive? Or what is perception?
Perception is the process by which robots map sensor measurements into internal
representations of the environment. Perception is difficult because in general the sensors are
noisy, and the environment is partially observable, unpredictable, and often dynamic.
15.