
MANAKULA VINAYAGAR INSTITUTE OF TECHNOLOGY

KALITHEERTHAL KUPPAM, PUDUCHERRY - 605 107

DEPARTMENT OF CSE
TWO MARKS
CS T71 ARTIFICIAL INTELLIGENCE
UNIT I
Introduction: History of AI - Intelligent agents - Structure of agents and its functions - Problem
spaces and search - Heuristic search techniques - Best-first search - Problem reduction - Constraint
satisfaction - Means-Ends Analysis.
UNIT II
Knowledge Representation: Approaches and issues in knowledge representation - Knowledge-based
agent - Propositional logic - Predicate logic - Unification - Resolution - Weak slot-and-filler
structures - Strong slot-and-filler structures.
UNIT III
Reasoning under uncertainty: Logics of non-monotonic reasoning - Implementation - Basic
probability notation - Bayes' rule - Certainty factors and rule-based systems - Bayesian networks -
Dempster-Shafer theory - Fuzzy logic.
UNIT IV
Planning and Learning: Planning with state space search - conditional planning - continuous
planning - multi-agent planning. Forms of learning - inductive learning - reinforcement learning -
learning decision trees - neural net learning and genetic learning.
UNIT V
Advanced Topics: Game Playing: Minimax search procedure - Adding alpha-beta cutoffs.
Expert System: Representation - Expert System shells - Knowledge Acquisition.
Robotics: Hardware - Robotic Perception Planning - Application domains.
Swarm Intelligent Systems: Ant Colony System - development, application and working of the Ant
Colony System.

CS T71 ARTIFICIAL INTELLIGENCE

(2 marks)
UNIT I
1. Define AI as Systems that think like humans?
The exciting new effort to make computers think . . . machines with minds, in the full
and literal sense.
The automation of activities that we associate with human thinking, activities such as
decision-making, problem solving, learning.
2. Define AI as Systems that think rationally?
The study of mental faculties through the use of computational models.
The study of the computations that make it possible to perceive, reason, and act.
3. Define AI as Systems that act like humans?
The art of creating machines that perform functions that require intelligence when
performed by people.
The study of how to make computers do things at which, at the moment, people are
better.
4. Define AI as Systems that act rationally?
Computational intelligence is the study of design of intelligent agents.
Artificial intelligence is concerned with intelligent behavior in artifacts.
5. Explain the Turing test approach for acting humanly?
Turing defined intelligent behavior as the ability to achieve human-level performance in
all cognitive tasks, sufficient to fool an interrogator.
The test he proposed is that the computer should be interrogated by a human via a
teletype, and passes the test if the interrogator cannot tell if there is a computer or a human at the
other end.
6. What are the things the computer needs to act as human?
The computer would need to possess the following capabilities:
natural language processing to enable it to communicate successfully in English (or
some other human language).
knowledge representation to store information provided before or during the
interrogation.
automated reasoning to use the stored information to answer questions and to draw new
conclusions.
machine learning to adapt to new circumstances and to detect and extrapolate patterns.
7. What does the computer need to pass the total Turing Test?
To pass the total Turing Test, the computer will also need computer vision to perceive
objects, and robotics to move them about.

8. Explain the cognitive modelling approach for Thinking humanly?

If we say that a given program thinks like a human, we must have some way of
determining how humans think. We need to get inside the actual workings of human minds.
There are two ways to do this: through introspection (trying to catch our own thoughts as they
go by) or through psychological experiments.
9. Explain the laws of thought approach for Thinking rationally?
The Greek philosopher Aristotle was one of the first to attempt to codify "right
thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for
argument structures that always gave correct conclusions given correct premises. For example,
"Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were
supposed to govern the operation of the mind, and initiated the field of logic.
10. Explain the rational agent approach for acting rationally?
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An
agent is just something that perceives and acts. (This may be an unusual use of the word, but you
will get used to it.) In this approach, AI is viewed as the study and construction of rational
agents.
11.Give the advantages when we study AI as rational agent design?
o it is more general than the "laws of thought" approach, because correct inference
is only a useful mechanism for achieving rationality, and not a necessary one.
o it is more amenable to scientific development than approaches based on human
behavior or human thought, because the standard of rationality is clearly defined
and completely general.
12.What are all the fields from which AI can be inherited?
AI can be inherited from
Psychology
Linguistics
Philosophy
Mathematics
CS
Economics
Neuroscience
Control theory and cybernetics
13. What is the first successful knowledge-intensive system? Explain.
The first successful knowledge-intensive system is DENDRAL.
DENDRAL determines the molecular structure of complex chemical compounds.
14. What is the first successful rule-based expert system? Explain.
The first successful rule-based expert system is MYCIN.
MYCIN is used to diagnose infectious blood diseases.
15. Give the applications of AI?
The areas where AI can be applied are
Autonomous planning and scheduling
Game playing
Autonomous control

Diagnosis
Logistic planning
Robotics
16. Define an intelligent agent?
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
Eg: vacuum cleaner
17. Give the sensors and actuators for human, robotic and software agents?
Robotic agent:
Sensors: infrared sensors, cameras
Actuators: motors
Software agent:
Sensors: keys pressed on the keyboard
Actuators: output displayed on the screen
Human agent:
Sensors: eyes, ears, and other organs
Actuators: other body parts
18. How does an agent interact with its environment through sensors and actuators?

(Diagram: the agent receives percepts from the environment through its sensors and acts on the
environment through its actuators.)
19. Define an ideal rational agent?
For each possible percept sequence, an ideal rational agent should do whatever action
is expected to maximize its performance measure, on the basis of the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
20. What are the four things a rational agent depends on?
The rational agent depends on four things:
The performance measure that defines the degree of success (P).
What the agent knows about the environment (E).
The actions that the agent can perform (A).
Everything that the agent has perceived so far; we will call this complete perceptual history
the percept sequence (S).
21. What is an agent function?
The agent function maps any given percept sequence to an action. It is implemented by
an agent program running on the agent architecture:
Agent = architecture + program
22. What are the types of environment?

The types of environment are:

1. Accessible vs. inaccessible
2. Deterministic vs. nondeterministic
3. Episodic vs. nonepisodic
4. Static vs. dynamic
5. Discrete vs. continuous
6. Single agent vs. multi-agent
23. Explain accessible vs. inaccessible environments?
If an agent's sensory apparatus gives it access to the complete state of the environment,
then we say that the environment is accessible to that agent. An environment is effectively
accessible if the sensors detect all aspects that are relevant to the choice of action. An accessible
environment is convenient because the agent need not maintain any internal state to keep track of
the world.
24. Explain deterministic vs. nondeterministic environments?
If the next state of the environment is completely determined by the current state and
the actions selected by the agents, then we say the environment is deterministic. In principle,
an agent need not worry about uncertainty in an accessible, deterministic environment. If
the environment is inaccessible, however, then it may appear to be nondeterministic.
25. Explain episodic vs. nonepisodic environments?

In an episodic environment, the agent's experience is divided into "episodes." Each


episode consists of the agent perceiving and then acting. The quality of its action depends just on
the episode itself, because subsequent episodes do not depend on what actions occur in previous
episodes. Episodic environments are much simpler because the agent does not need to think
ahead.
26. Explain static vs. dynamic environments?

If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal
with because the agent need not keep looking at the world while it is deciding on an action, nor
need it worry about the passage of time. If the environment does not change with the passage of
time but the agent's performance score does, then we say the environment is semidynamic.
27. Explain discrete vs. continuous environments?

If there are a limited number of distinct, clearly defined percepts and actions, we
say that the environment is discrete. Chess is discrete - there are a fixed number of possible
moves on each turn. Taxi driving is continuous - the speed and location of the taxi and the other
vehicles sweep through a range of continuous values.
28. What are the four types of agent program?
The four types of agent program are:
Simple reflex agents
Model-based agents
Goal-based agents
Utility-based agents
29. What is a simple reflex agent?
It is the simplest kind of agent. These agents select actions on the basis of the current
percept, ignoring the rest of the percept history.
Eg: vacuum agent
30. What is a model-based agent?
The most effective way to handle partial observability is for the agent to keep track
of the part of the world it can't see now. That is, the agent should maintain some sort of internal
state that depends on the percept history and thereby reflects at least some of the unobserved
aspects of the current state.
31. What is a goal-based agent?
Knowing about the current state of the environment is not always enough to decide
what to do. For example, at a road junction, the taxi can turn left, right, or go straight on. The
right decision depends on where the taxi is trying to get to. In other words, as well as a current
state description, the agent needs some sort of goal information, which describes situations that
are desirable for example, being at the passenger's destination.
32. What is a utility-based agent?
Goals alone are not really enough to generate high-quality behavior. For example, there
are many action sequences that will get the taxi to its destination, thereby achieving the goal, but
some are quicker, safer, more reliable, or cheaper than others.
The customary terminology is to say that if one world state is preferred to another,
then it has higher utility for the agent.
Utility is therefore a function that maps a state onto a real number, which describes
the associated degree of happiness.
33. What is the advantage of a learning agent?
It allows the agent to operate in an initially unknown environment and to become more
competent than its initial knowledge alone might allow.
34. What are the four components of a learning agent?
The four components of a learning agent are
Learning element
Performance element
Critic
Problem generator
35. Explain the components of a learning agent?
Learning element - responsible for making improvements to the agent.
Performance element - selects external actions; it decides what action to take for a given percept.
Critic - gives feedback on how the agent is doing with respect to a fixed performance standard.
Problem generator - responsible for suggesting exploratory actions that lead to new and
informative experiences for the learning element.

36. Write the algorithm for a simple problem-solving agent?

The algorithm is,


function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  inputs: percept, a percept
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then do
      goal ← FORMULATE-GOAL(state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action
37. Define search?
In general, an agent with several immediate options of unknown value can decide
what to do by first examining different possible sequences of actions that lead to states of
known value, and then choosing the best sequence. This process of looking for such a
sequence is called search.
38.What are the four components in a well defined problem?
The four components are,
Initial state
Successor function
Goal test
Path cost
39.What is uninformed search?
It means that they have no additional information about states beyond that provided in
the problem definition. All they can do is generate successors and distinguish a goal state
from a nongoal state.
40.Give the types of uninformed search?
Uninformed search types are,
Breadth first search
Uniform cost search
Depth first search
Depth limited search
Iterative deepening depth first search
Bidirectional search

41.Define breadth first search.

Breadth-first search is a simple strategy in which the root node is expanded first, then
all the successors of the root node are expanded next, then their successors, and so on. In
general, all the nodes are expanded at a given depth in the search tree before any nodes at the
next level are expanded.
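The level-by-level expansion above can be sketched in Python (not part of the original notes; the adjacency-dict graph and the node names are invented for illustration):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand nodes level by level; return the first path found to the goal."""
    frontier = deque([[start]])          # FIFO queue of paths, shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(breadth_first_search(graph, 'A', 'E'))  # ['A', 'B', 'D', 'E']
```

The FIFO queue is what guarantees that every node at depth d is expanded before any node at depth d+1.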
42. What is uniform cost search?
Breadth-first search is optimal when all step costs are equal, because it always
expands the shallowest unexpanded node. By a simple extension, we can find an algorithm
that is optimal with any step cost function. Instead of expanding the shallowest node,
uniform-cost search expands the node n with the lowest path cost. Note that if all step costs
are equal, this is identical to breadth-first search.
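Replacing the FIFO queue with a priority queue ordered by path cost g(n) gives uniform-cost search. A minimal sketch (the weighted graph is a made-up example, not from the notes):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]     # min-heap of (path cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for successor, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(successor, float('inf')):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

# Hypothetical weighted graph: the direct edge A->C costs more than A->B->C
graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)]}
print(uniform_cost_search(graph, 'A', 'C'))  # (2, ['A', 'B', 'C'])
```

Note that the cheaper two-step path is preferred over the direct but costlier edge, which is exactly what distinguishes uniform-cost search from breadth-first search.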
43.Define depth first search.
Depth-first search always expands the deepest node in the current fringe of the search
tree. The search proceeds immediately to the deepest level of the search tree, where the nodes
have no successors. As those nodes are expanded, they are dropped from the fringe, so then
the search "backs up" to the next shallowest node that still has unexplored successors.
44.Define depth limited search.
The problem of unbounded trees can be alleviated by supplying depth-first search
with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no
successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem.
45.Define iterative deepening depth first search.
Iterative deepening search (or iterative deepening depth-first search) is a general strategy,
often used in combination with depth-first search, that finds the best depth limit. It does this
by gradually increasing the limit - first 0, then 1, then 2, and so on - until a goal is found. This
will occur when the depth limit reaches d, the depth of the shallowest goal node.
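The gradually increasing limit can be sketched as follows (an illustrative toy example, not from the notes):

```python
def depth_limited_search(graph, node, goal, limit, path):
    """Depth-first search that treats nodes at the depth limit as leaves."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for successor in graph.get(node, []):
        if successor not in path:        # avoid trivial loops along the path
            result = depth_limited_search(graph, successor, goal,
                                          limit - 1, path + [successor])
            if result is not None:
                return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited search with limit 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': ['E']}
print(iterative_deepening_search(graph, 'A', 'E'))  # ['A', 'B', 'D', 'E']
```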
46.What is bidirectional search?
The idea behind bidirectional search is to run two simultaneous searches - one forward
from the initial state and the other backward from the goal, stopping when the two searches
meet in the middle.
47. Compare all uninformed searches.

Criterion      Breadth-first   Uniform-cost    Depth-first   Depth-limited   Iterative deepening   Bidirectional
Completeness   Yes             Yes             No            No              Yes                   Yes
Optimality     Yes             Yes             No            No              Yes                   Yes
Time           O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)        O(b^l)          O(b^d)                O(b^(d/2))
Space          O(b^(d+1))      O(b^⌈C*/ε⌉)     O(b^m)        O(b^l)          O(b^d)                O(b^(d/2))

48. Define informed search.
An informed search strategy - one that uses problem-specific knowledge beyond the definition of
the problem itself - can find solutions more efficiently than an uninformed strategy.
49. What is best first search?
Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation function, f(n).
50. What is a heuristic function?
A key component of informed searches is the heuristic function, denoted h(n):
h(n) = estimated cost of the cheapest path from node n to a goal node.
51.What is greedy best first search?
Greedy best-first search tries to expand the node that is closest to the goal, on the
grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just
the heuristic function: f(n) = h(n).
52. What is A* search?
It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal:
f(n) = g(n) + h(n)
Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of
the cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n
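The evaluation f(n) = g(n) + h(n) can be sketched by ordering a priority queue on f (a rough illustration; the graph and the heuristic table are invented, and h here happens to be admissible):

```python
import heapq

def a_star_search(graph, h, start, goal):
    """Order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # min-heap of (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for successor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(successor, float('inf')):
                best_g[successor] = new_g
                heapq.heappush(frontier,
                               (new_g + h[successor], new_g, successor,
                                path + [successor]))
    return None

# Hypothetical graph with an admissible heuristic (h never overestimates)
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)]}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(a_star_search(graph, h, 'A', 'D'))  # (3, ['A', 'B', 'C', 'D'])
```

With h = 0 everywhere this degenerates to uniform-cost search; the heuristic only changes the order in which nodes leave the frontier.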
53. What is recursive best first search?
It is a simple recursive algorithm that attempts to mimic the operation of standard
best-first search.
It keeps track of the f-value of the best alternative path available from any ancestor of
the current node. If the current node exceeds this limit, the recursion unwinds back to the
alternative path. As the recursion unwinds, RBFS replaces the f-value of each node along the
path with the best f-value of its children. In this way, RBFS remembers the f-value of the
best leaf in the forgotten subtree and can therefore decide whether it's worth reexpanding the
subtree at some later time.

Unit II
1. What is a knowledge based agent?
The central component of a knowledge-based agent is its knowledge base, or KB.
Informally, a knowledge base is a set of representations of facts about the world. Each individual
representation is called a sentence. The sentences are expressed in a language
called a knowledge representation language.
2. What are the three levels of KB?
The knowledge level or epistemological level
The logical level
The implementation level
3. Describe Wumpus world?
The Wumpus world is a grid of squares surrounded by walls, where each
square can contain agents and objects. The agent always starts in the lower left corner, a square
that we will label [1, 1]. The agent's task is to find the gold, return to [1, 1] and climb out of the
cave.

4. What is Knowledge Representation?


Knowledge representation is the task of expressing knowledge in computer-tractable
form, such that it can be used to help agents perform well. A knowledge representation language
is defined by two aspects:
The syntax
The semantics
5. What is Sound or Truth-Preserving?
An inference procedure that generates only entailed sentences is called sound or truth-preserving.
6. Define Inference?
The terms "reasoning" and "inference" are generally used to cover any process by which
conclusions are reached. In this chapter, we are mainly concerned with sound reasoning, which
we will call logical inference or deduction. Logical inference is a process that implements the
entailment relation between sentences.
7. Define Validity?
A sentence is valid or necessarily true if and only if it is true under all possible
interpretations in all possible worlds, that is, regardless of what it is supposed to mean and
regardless of the state of affairs in the universe being described.
8. Define Satisfiability?
A sentence is satisfiable if and only if there is some interpretation in some world for
which it is true. The sentence "there is a wumpus at [1, 2]" is satisfiable because there might well
be a wumpus in that square, even though there does not happen to be one. A
sentence that is not satisfiable is unsatisfiable. Self-contradictory sentences are unsatisfiable, if
the contradictoriness does not depend on the meanings of the symbols.
9. Define Logic?
Logic consists of the following:
1. A formal system for describing states of affairs, consisting of
(a) The syntax of the language, which describes how to make sentences, and
(b) The semantics of the language, which states the systematic constraints on how
sentences relate to states of affairs.
2. The proof theory - a set of rules for deducing the entailments of a set of
sentences.
10. Define Propositional Logic?
In propositional logic, symbols represent whole propositions (facts); for example, D
might have the interpretation "the wumpus is dead." which may or may not be a true proposition.
Proposition symbols can be combined using Boolean connectives to generate sentences with
more complex meanings.
11. Define First-Order Logic?
First-order logic commits to the representation of worlds in terms of objects and
predicates on objects, as well as using connectives and quantifiers, which allow sentences to be

written about everything in the universe at once. First-order logic seems to be able to capture a
good deal of what we know about the world, and has been studied for about a hundred years.
12. State Rules of inference for propositional logic?
The process by which the soundness of an inference is established through truth tables
can be extended to entire classes of inferences. There are certain patterns of inferences that occur
over and over again, and their soundness can be shown once and for all. Then the pattern can be
captured in what is called an inference rule.
13. What are Models?
Any world in which a sentence is true under a particular interpretation is called a model
of that sentence under that interpretation.
14. Define Monotonicity?
The use of inference rules to draw conclusions from a knowledge base relies implicitly on
a general property of certain logics (including prepositional and first-order logic) called
monotonicity.
15. Define Horn Sentences?
There is also a useful class of sentences for which a polynomial-time inference procedure
exists. This is the class called Horn sentences. A Horn sentence has the form:
P1 ∧ P2 ∧ ... ∧ Pn ⇒ Q, where the Pi and Q are non-negated atoms.
16. Define Terms?
A term is a logical expression that refers to an object.
17. Define Atomic Sentences?
An atomic sentence is formed from a predicate symbol followed by a parenthesized list of
terms. An atomic sentence is true if the relation referred to by the predicate symbol holds
between the objects referred to by the arguments.
18. Define Complex Sentences?
Use logical connectives to construct more complex sentences, just as in propositional
calculus. The semantics of sentences formed using logical connectives is identical to that in the
propositional case.
19. Define Ground Term?
A term with no variables is called a ground term.
20. Define Situation Calculus?
Situation calculus is the name for a particular way of describing change in first-order
logic. It conceives of the world as consisting of a sequence of situations, each of which is a
"snapshot" of the state of the world. Situations are generated from previous situations by actions.

21. Give the three Inference Rules?


The three new inference rules are as follows:
Universal Elimination

Existential Elimination
Existential Introduction
22. What is Universal Elimination?
Universal Elimination: For any sentence α, variable v, and ground term g, from ∀v α we
can infer SUBST({v/g}, α).
For example, from ∀x Likes(x, Ice Cream), we can use the substitution {x/Ben} and infer
Likes(Ben, Ice Cream).
23. What is Existential Elimination?
Existential Elimination: For any sentence α, variable v, and constant symbol k that does
not appear elsewhere in the knowledge base, from ∃v α we can infer SUBST({v/k}, α).
For example, from ∃x Kill(x, Victim), we can infer Kill(Murderer, Victim), as long as
Murderer does not appear elsewhere in the knowledge base.
24. What is Existential Introduction?
Existential Introduction: For any sentence α, variable v that does not occur in α, and
ground term g that does occur in α, from α we can infer ∃v SUBST({g/v}, α).
25. What is Unification?
Unification: The job of the unification routine, UNIFY, is to take two atomic sentences p and q
and return a substitution that would make p and q look the same. (If there is no such substitution,
then UNIFY should return fail.) Formally,
UNIFY(p, q) = θ, where SUBST(θ, p) = SUBST(θ, q).
θ is called the unifier of the two sentences.
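A minimal Python sketch of UNIFY (not from the notes; it represents atomic sentences as tuples, treats lowercase strings as variables, and omits the occur check for brevity):

```python
def is_variable(x):
    """Convention for this sketch: lowercase strings are variables."""
    return isinstance(x, str) and x[0].islower()

def unify(x, y, theta):
    """Return a substitution dict making x and y identical, or None on failure."""
    if theta is None:
        return None
    elif x == y:
        return theta
    elif is_variable(x):
        return unify_var(x, y, theta)
    elif is_variable(y):
        return unify_var(y, x, theta)
    elif isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):       # unify argument lists element by element
            theta = unify(xi, yi, theta)
        return theta
    else:
        return None

def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    elif is_variable(x) and x in theta:
        return unify(var, theta[x], theta)
    else:
        return {**theta, var: x}       # extend the substitution

# Unifying Knows(John, x) with Knows(John, Jane)
print(unify(('Knows', 'John', 'x'), ('Knows', 'John', 'Jane'), {}))  # {'x': 'Jane'}
```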
26. What is forward chaining?
Start with the sentences in the knowledge base and generate new conclusions that in turn
can allow more inferences to be made. This is called forward chaining. Forward chaining is
usually used when a new fact is added to the database and we want to generate its consequences.
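For propositional Horn rules, forward chaining can be sketched as repeatedly firing any rule whose premises are all known (the rules and symbols are made-up examples):

```python
def forward_chain(rules, facts):
    """Apply Horn rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)      # fire the rule: add its conclusion
                changed = True
    return facts

# Hypothetical rules: A and B imply C; C implies D
rules = [({'A', 'B'}, 'C'), ({'C'}, 'D')]
print(forward_chain(rules, {'A', 'B'}))  # {'A', 'B', 'C', 'D'}
```

Backward chaining would instead start from a goal such as D and recursively try to establish the premises of rules that conclude it.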
27. What is backward chaining?
Start with something we want to prove, find implication sentences that would allow us to
conclude it, and then attempt to establish their premises in turn. This is called backward
chaining, because it uses Modus Ponens backwards. Backward chaining is normally used when
there is a goal to be proved.
28. Define Renaming?
One sentence is a renaming of another if they are identical except for the names of the
variables. For example,

Likes(x, Ice Cream) and Likes(y, Ice Cream) are renamings of each other because they differ
only in the choice of x or y, but Likes(x, x) and Likes(x, y) are not renamings of each other.
29. What is Semantic Net?
A Semantic net is a representation of nodes and links. It describes how the representation
is done in Artificial Intelligence.
30. Give the four fundamental parts of Semantic Net?
The four fundamental parts of Semantic net are
Lexical part
Structural part
Procedural part
Semantic part
31. What is Describe and Match method?
The basic idea is that you can identify an object by describing it and then searching for a
matching description in a description library.
32. What do you mean by Frames?
Each node, together with the links that emanate from it, can be collected into a unit called a
frame. A semantic net can be viewed either as a collection of nodes and links or as a collection of frames.
33. What do you mean by Chunk?
Each chunk is represented as a frame with slot and slot values.
34. What do you mean by Instances or Instance frames?
Frames which describe an individual thing are called Instances (or) Instance frames.
35. What do you mean by Classes (or) Class frames?
Frames which describe entire classes are called Classes (or) Class frames.
36. What do you mean by Access Procedures?
Access Procedures are required to construct and manipulate instances and classes of the
frame representation which include
Class constructor
Instance constructor
Slot Writer
Slot Reader
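The access procedures above can be illustrated with a hypothetical sketch (not from the notes): a frame as a named slot dictionary with a parent link, where the slot reader climbs the ako/is-a chain for inherited values:

```python
class Frame:
    """Minimal frame: a named collection of slots, with an optional parent
    (ako for class-to-class, is-a for instance-to-class) for inheritance."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.slots = {}

    def slot_write(self, slot, value):
        self.slots[slot] = value

    def slot_read(self, slot):
        """Look up the slot locally, then up the class hierarchy."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.slot_read(slot)
        return None

# Hypothetical class frame and instance frame
bird = Frame('Bird')                    # class frame
bird.slot_write('can_fly', True)
tweety = Frame('Tweety', parent=bird)   # instance frame, is-a Bird
print(tweety.slot_read('can_fly'))      # True (inherited from the class frame)
```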
37. What is CPL?
CPL - Class Precedence List.
An ordered list of the class hierarchy, which has only one ako link between classes and
only one is-a link between an instance and its class.

38. What is Thematic Role?


It specifies how the object participates in the action.

39. What is an Agent?


The agent causes the action in the world.
Ex: Raj hits the ball
Agent - Raj
40. What is a Co-agent?
The word with introduces a partner to the principal agent.
Ex: John played tennis with Ram
Co-agent - Ram
41. What do you mean by Beneficiary?
The Beneficiary is the person for whom an action is performed.
Ex: John bought the food for Ram
Beneficiary - Ram
42. What is a Thematic Object?
An object undergoing a change, or an object on which an action is performed.
Ex: John hit the ball
Thematic object - Ball
43. What is an Instrument?
A tool used by the agent to perform an action.
Ex: John hit the ball with a bat
Instrument - Bat
44. What do you mean by source and destination?
The initial and final position of an agent.
Ex: John went to Delhi from Chennai
Source - Chennai
Destination - Delhi
45. What do you mean by Old and New Surroundings?
The place of the object changes from old surroundings to new surroundings.
Ex: John picked the flower from the garden and put it in the flower vase
Old surroundings - garden
New surroundings - flower vase
46. What do you mean by conveyance?
A way in which an agent or object travels.
Ex: John always goes by train
Conveyance - train

47. What is Location?


The Location is where an action occurs.
Ex: John studied in the library
Location - Library
48. What do you mean by Time?
Time specifies when an action occurs.
Ex: John went to the library before noon
Time - Noon
49. What is Duration?
Duration specifies how long an action occurs.
Ex: John travels for 3 hours in the train
Duration - 3 hours
50. What is forward chaining?
Forward chaining is the process of moving from the if patterns to the then patterns.
51. What is Backward Chaining?
Backward chaining is the process of moving from the then patterns to the if patterns.
52. What are Ontological Commitments?
Ontological commitments have to do with the nature of reality. For example,
Propositional logic assumes that there are facts that either hold or do not in the world. Each fact
can be in one of two states: true or false. First-order logic assumes more: namely, that the world
consists of objects with certain relations between them that do or do not hold.
53. What are Epistemological commitments?
Epistemological commitments have to do with the possible states of knowledge an agent can
have using various types of logic. In both propositional and first-order logic, a sentence
represents a fact and the agent either believe the sentence to be true, believes it to be false, or is
unable to conclude either way. These logics therefore have three possible states of belief
regarding any sentence.
54. What is Higher-order logic?
Higher-order logic allows us to quantify over relations and functions as well as over
objects. For example, in higher-order logic we, can say that two objects are equal if and only if
all properties applied to them are equivalent:
∀x, y (x = y) ⇔ (∀p p(x) ⇔ p(y))
Or we could say that two functions are equal if and only if they have the same value for all
arguments:
∀f, g (f = g) ⇔ (∀x f(x) = g(x))

UNIT III
1. State the conditions under which uncertainty can arise?

Ans: Uncertainty can arise because of incompleteness and incorrectness in the agent's
understanding of the properties of the environment.
2.What are the three reasons why FOL fails in medical diagnosis?
Ans: The three reasons why FOL fails in medical diagnosis are:
(i) Laziness: It is too much work to list the complete set of antecedents and consequents
needed.
(ii) Theoretical ignorance: Medical science has no complete theory for the domain.
(iii) Practical ignorance: Even if we know all the rules, uncertainty arises because some
tests cannot be run on the patient's body.
3. What is the tool that is used to deal with degree of belief?
Ans: The tool that is used to deal with the degree of belief is the probability theory which assigns
a numeric degree of belief between 0 and 1.
4. For what the utility theory is useful and in what way it is related to decision theory?
Ans: Utility theory is useful to represent and reason with preferences.
Decision theory is related to utility theory as the combination of probability theory and
utility theory: decision theory = probability theory + utility theory.
5. What is the fundamental idea of the decision theory?
Ans: The fundamental idea of the decision theory is that an agent is rational if and only if it
chooses the action that yields the highest expected utility .
6. How are probabilities over simple and complex propositions classified?
Ans: Probabilities over simple and complex propositions are classified as
(i) Unconditional or prior probabilities.
(ii) Conditional or posterior probabilities.
7. State the axioms of probability.
Ans: The axioms are:
(i) All probabilities are between 0 and 1: 0 <= P(A) <= 1.
(ii) Necessarily true propositions have probability 1 and necessarily false propositions
have probability 0: P(True) = 1, P(False) = 0.
(iii) The probability of a disjunction is given by
P(A v B) = P(A) + P(B) - P(A ^ B)
From these axioms the complement rule can be derived:
(iv) Let B = ¬A in axiom (iii):
P(A v ¬A) = P(A) + P(¬A) - P(A ^ ¬A)
(v) P(True) = P(A) + P(¬A) - P(False) (by logical equivalence)
(vi) 1 = P(A) + P(¬A) (by axiom (ii))
(vii) P(¬A) = 1 - P(A) (by algebra)
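As a quick sanity check, the disjunction axiom and the derived complement rule can be verified numerically; the probability values below are assumed purely for illustration.

```python
# A toy numeric check of the probability axioms; the values are assumed
# purely for illustration.
P_A = 0.3          # P(A)
P_B = 0.5          # P(B)
P_A_and_B = 0.2    # P(A ^ B), chosen consistently with P_A and P_B

# Axiom (iii): probability of a disjunction.
P_A_or_B = P_A + P_B - P_A_and_B
assert 0.0 <= P_A_or_B <= 1.0           # axiom (i) holds for the result

# Derived complement rule: P(~A) = 1 - P(A).
P_not_A = 1.0 - P_A
assert abs(P_A + P_not_A - 1.0) < 1e-9  # P(A) + P(~A) = P(True) = 1
```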
8. What is a joint probability distribution?
Ans: An agent's probability assignments to all propositions in the domain (both simple and
complex) is called the joint probability distribution.
9. What is the disadvantage of Bayes' rule?
Ans: It requires three terms to compute one conditional probability P(B|A):
* One conditional probability -> P(A|B)
* Two unconditional probabilities -> P(B) and P(A)
10. What is the advantage of Bayes' rule?
Ans: If the three values are known, then the unknown fourth value P(B|A) is computed easily.
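A minimal sketch of this computation, with values for the three known terms assumed purely for illustration:

```python
# Computing P(B|A) from the three known terms of Bayes' rule.
# The three input values below are assumptions for illustration only.
P_A_given_B = 0.8   # P(A|B): likelihood
P_B = 0.1           # P(B): prior
P_A = 0.2           # P(A): evidence

# Bayes' rule: P(B|A) = P(A|B) * P(B) / P(A)
P_B_given_A = P_A_given_B * P_B / P_A
print(P_B_given_A)   # ≈ 0.4
```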
11. What is a belief network?
Ans: A belief network is a graph in which the following holds:
1. A set of random variables makes up the nodes of the network.
2. A set of directed links or arrows connects pairs of nodes; in a link X -> Y, X has a direct
influence on Y.
3. Each node has a conditional probability table that quantifies the effects that the parents
have on the node. The parents of a node are all nodes that have arrows pointing to it.
4. The graph has no directed cycles, i.e. it is a DAG.
12. What is the task of any probabilistic inference system?
Ans: The task of any probabilistic inference system is to compute the posterior probability
distribution for a set of query variables, given exact values of some evidence variables.
13. State the uses of belief networks.
Ans: 1. Making decisions based on probabilities in the network and on the agent's utilities.
2. Deciding which additional evidence variables should be observed in order to gain useful
information.
3. Performing sensitivity analysis to understand which aspects of the model have the
greatest impact.
4. Explaining the results of probabilistic inference to the user.
14. What are the two ways in which one can understand the semantics of belief networks?
Ans: The two ways are:
1. Seeing the network as a representation of the joint probability distribution,
which is used to know how to construct networks.
2. Seeing it as an encoding of a collection of conditional independence statements,
which is used in designing inference procedures.
15. What is probabilistic reasoning? Explain.
Ans: Probabilistic reasoning explains how to build network models to reason under uncertainty
according to the laws of probability theory.
16. What are the disadvantages of the full joint probability distribution?
Ans: The disadvantage is that as the interaction between the domain and the agent increases,
the number of variables also increases and the table becomes unmanageably large; to overcome
this we use a new data structure called the Bayesian network.
17. Explain Bayesian networks.
Ans: A Bayesian network is used to represent the dependencies among variables and to give a
concise specification of any full joint probability distribution.

18.What makes the nodes of the Bayesian network and how are they connected?

Ans: A set of random variables makes up the nodes of the network. Variables may be discrete or
continuous, and a set of directed links or arrows connects pairs of nodes.
19. What is a Conditional Probability Table (CPT)?
Ans: Each node in a Bayesian network has a table giving the probability of its values for each
conditioning case of its parents; this is called the conditional probability table.
20.What is Conditioning Case?
Ans: A Conditioning case is just a possible combination of values for the parent nodes.
21. What are the semantics of a Bayesian network?
Ans: The semantics can be given in two ways:
1. Global semantics (the full joint probability distribution).
2. Local semantics (conditional independence statements).
22. State the way of representing the full joint distribution.
Ans: A generic entry in the joint distribution is the probability of a conjunction of particular
assignments to each variable, such as
P(X1 = x1 ^ ... ^ Xn = xn), abbreviated as P(x1, ..., xn).
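A minimal sketch of such a generic entry for a two-node network, where the joint entry is the product of each variable's conditional probability given its parents; the variable names and CPT values are assumed purely for illustration.

```python
# A minimal two-node Bayesian network (Cloudy -> Rain; names and numbers
# are illustrative assumptions). Each CPT gives P(variable = True).
P_Cloudy = 0.5
P_Rain_given = {True: 0.8, False: 0.2}   # P(Rain=True | Cloudy)

def joint(cloudy, rain):
    """P(Cloudy=cloudy ^ Rain=rain) = P(cloudy) * P(rain | cloudy)."""
    pc = P_Cloudy if cloudy else 1 - P_Cloudy
    pr = P_Rain_given[cloudy] if rain else 1 - P_Rain_given[cloudy]
    return pc * pr

# The four joint entries must sum to 1, as for any joint distribution.
total = sum(joint(c, r) for c in (True, False) for r in (True, False))
assert abs(total - 1.0) < 1e-9
```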
23. When is a Bayesian network said to be compact?
Ans: A Bayesian network is compact when the information is complete and non-redundant.
The compactness of Bayesian networks is an example of a very general property of locally
structured systems.
24. Explain node ordering.
Ans: The correct order in which to add nodes is to add the root causes first, then the
variables they influence, and so on, continuing the addition process until we reach the leaves,
which have no direct causal influence on the other variables.
25. What is topological semantics?
Ans: Topological semantics specifies the conditional independence relationships encoded by the
graph structure, and from these we can derive the numerical statements.
26. What is the specification for topological semantics?
Ans: 1. A node is conditionally independent of its non-descendants, given its parents.
2. A node is conditionally independent of all other nodes in the network, given its parents,
children and children's parents; this set is called the Markov blanket.
27.What are the two ways to represent Canonical Distributions?
Ans:1.Deterministic nodes
2.Noisy-OR relation
28. What are deterministic nodes?
Ans: A deterministic node has its value specified exactly by the values of its parents:
x = f(parents(X)) for some function f.
29.Explain Noisy-OR Relationship.

Ans: It is the generalization of the logical OR, and it is used to represent uncertain knowledge.
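A minimal noisy-OR sketch: each present cause independently fails to produce the effect with its own noise probability, so the effect's probability is one minus the product of those noise terms. The cause names and noise values below are assumed purely for illustration.

```python
# Noisy-OR: P(effect | causes) = 1 - product of q_i over the present causes,
# where q_i is cause i's probability of failing to produce the effect.
def noisy_or(noise, present):
    """noise: dict cause -> q_i; present: set of causes that are true."""
    p_fail = 1.0
    for cause in present:
        p_fail *= noise[cause]
    return 1.0 - p_fail

# Illustrative noise probabilities (names and numbers are assumptions).
noise = {"cold": 0.6, "flu": 0.2, "malaria": 0.1}
print(noisy_or(noise, {"cold", "flu"}))   # 1 - 0.6*0.2 ≈ 0.88
```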
30.What are the four types of inferences?
Ans: 1. Diagnostic inferences
2. Causal inferences
3. Intercausal inferences
4. Mixed inferences
31.What are the three basic classes of algorithms for evaluating multiply connected
networks?
Ans: The three basic classes of algorithms for evaluating multiply connected networks are
clustering methods, conditioning methods and stochastic simulation methods.
32.What is done in clustering?
Ans: It transforms the network into a probabilistically equivalent polytree by merging offending
nodes.
33.What is cutset?
Ans: A set of variables that can be instantiated to yield a polytree is called a cutset; i.e.,
this method transforms the network into several simpler polytrees.
34.What is a utility function?
Ans: An agent's preferences between world states are captured by a utility function, which
assigns a single number to express the desirability of a state. Utilities are combined with the
outcome probabilities for actions to give an expected utility for each action.
35.What is the principle behind the maximum expected utility?
Ans: The principle of MEU(Maximum Expected Utility) is that a rational agent should choose an
action that maximizes the agent's expected utility.
36. Explain orderability.
Ans: Given two states, the agent should prefer one to the other or else rate the two as equally
preferable; i.e., the agent should know what it wants: (A > B) v (B > A) v (A ~ B).
37. What is meant by multiattribute utility theory (MAUT)?
Ans: Problems in which outcomes are characterized by two or more attributes are handled by
MAUT. The basic approach adopted in MAUT is to identify regularities in the preference
behavior, and to use representation theorems to show that an agent with a certain kind of
preference structure has a utility function of a particular form.
38. What are the two types of the dominance?
Ans: The two types of dominance are strict dominance and stochastic dominance.
39. What are the two roles of decision analysis?
Ans: The two roles of decision analysis are decision makers and decision analyst.

40. What are the axioms of utility theory?
Ans: They are orderability, transitivity, continuity, substitutability, monotonicity and
decomposability.
41.What is non monotonic reasoning ?
Ans: Nonmonotonic reasoning is reasoning in which axioms and/or rules of inference are extended
to make it possible to reason with incomplete information. These systems preserve the property
that at any given moment, a statement is either believed to be true, believed to be false, or not
believed to be either.
42. What is meant by nonmonotonic logic?
Ans: One system that provides a basis for default reasoning is nonmonotonic logic, in which the
language of first-order logic is augmented with a modal operator M, which is read as
"is consistent".
43. Give an example for nonmonotonic logic.
∀x,y: Related(x,y) ^ M GetAlong(x,y) -> WillDefend(x,y)
It is read as: for all x and y, if x and y are related, and if the fact that x gets along with y
is consistent with everything else that is believed, then conclude that x will defend y.
44.What are the 2 kinds of nonmonotonic reasoning ?
Ans: The 2 kinds of nonmonotonic reasoning are:
a) Abduction
b) Inheritance
45.What is meant by ATMS ?
Ans: ATMS - Assumption-based truth maintenance system. An ATMS simply labels all the
states that have been considered at the same time. An ATMS keeps track, for each sentence, of
which assumptions would cause the sentence to be true.
46.What is meant by JTMS ?
Ans: JTMS - Justification based truth maintenance system. JTMS simply labels each sentence of
being in and out. The maintenance of justifications allows us to move quickly from one state to
another by making a few retractions and assertions, but only one state is represented at a time.
47.Define belief revision.
Ans: Many of the inferences drawn by a knowledge representation system will have only default
status.Some of these inferred facts will turn out to be wrong and will have to be retracted in the
face of new information. This process is called belief revision.
48.List the limitations of CWA.
Ans: The limitations of CWA are:
1. It operates on individual predicates without considering the interactions among predicates
that are defined in the knowledge base.
2. It assumes that all predicates have all their instances listed. Although in many database
applications this is true, in many knowledge based systems it is not.

49. Give the difference between ATMS and JTMS.
Ans: ATMS:
A) An ATMS simply labels all the states that have been considered at the same time.
B) An ATMS keeps track, for each sentence, of which assumptions would cause the sentence
to be true.
JTMS :
A) JTMS simply labels each sentence of being in and out.
B) The maintenance of justifications allows us to move quickly from one state to another by
making a few retractions and assertions, but only one state is represented at a time.
50.Define Markov Decision Process(MDP).
Ans:The specification of a sequential decision problem for a fully observable environment with
a Markovian Transition model and additive reward is called Markov Decision Process.
51. What are the 3 components that define an MDP?
Ans: The 3 components that define an MDP are:
Initial state S
Transition model T(s, a, s')
Reward function R(s)
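These three components can be exercised with a small value-iteration sketch, one standard way to solve an MDP; the states, transition model, rewards and discount below are toy assumptions, not taken from the text.

```python
# Minimal value iteration over a toy MDP (all numbers are assumptions).
T = {  # T[s][a] -> list of (probability, next_state), i.e. T(s, a, s')
    "s0": {"go": [(0.9, "s1"), (0.1, "s0")], "stay": [(1.0, "s0")]},
    "s1": {"go": [(1.0, "s1")], "stay": [(1.0, "s1")]},
}
R = {"s0": -0.1, "s1": 1.0}   # reward function R(s)
gamma = 0.9                   # discount factor

U = {s: 0.0 for s in T}
for _ in range(100):  # repeated Bellman updates until near-convergence
    U = {s: R[s] + gamma * max(sum(p * U[s2] for p, s2 in T[s][a])
                               for a in T[s])
         for s in T}

# The policy picks, in each state, the action with highest expected utility.
policy = {s: max(T[s], key=lambda a: sum(p * U[s2] for p, s2 in T[s][a]))
          for s in T}
print(policy["s0"])   # "go", since it leads toward the rewarding state s1
```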
52. What is a policy and how is it denoted?
Ans: A solution must specify what the agent should do for any state that the agent may reach.
This solution is referred to as a policy. It is denoted by π.
53. When are complex decisions made?
Ans: Complex decisions are made when the outcomes of an action are uncertain. Decisions are
taken in fully observable, partially observable and uncertain environments.
54. What do complex decisions deal with?
Ans: Complex decisions deal with sequential decision problems, where the agent's utility depends
on a sequence of decisions.
55.Define transition model.
Ans:The specification of the outcome probabilities for each action in each possible state is called
a transition model.
56. What is meant by a Markovian transition?
Ans: When action a is done in state s, the probability of reaching state s' is denoted by
T(s, a, s'). This is referred to as a Markovian transition, because the probability of reaching
s' from s depends only on s and not on the history of earlier states.
57. What is an optimal policy and how is it denoted?
Ans: An optimal policy is a policy that yields the highest expected utility. It is denoted by π*.
58.Define proper policy.
Ans:A policy that is guaranteed to reach a terminal state is called a proper policy.

59.What are the types of horizons for decision making ?

Ans: The 2 types of horizons for decision making are :


Finite Horizon
Infinite horizon
60. What is meant by a finite horizon?
Ans: A finite horizon means that there is a fixed time after which the game is over. With a finite
horizon, the optimal action in a given state could change over time. Therefore, the optimal policy
for a finite horizon is nonstationary.
61. Differentiate between finite horizon and infinite horizon.
Ans: FINITE HORIZON:
With a finite horizon, the optimal action in a given state could change over time.
The optimal policy for a finite horizon is nonstationary.
INFINITE HORIZON:
For an infinite horizon there is no fixed deadline.
Since the optimal policy depends only on the current state, it is stationary.
62. What is mechanism design (inverse game theory)?
Ans: Designing a game whose solutions consist of each agent having its own rational strategy is
called mechanism design (inverse game theory). It can be used to construct intelligent
multi-agent systems that solve complex problems in a distributed way.
63. What does a mechanism consist of?
Ans: A mechanism consists of
1. A language for describing the set of allowable strategies that the agents may adopt.
2. An outcome rule G that determines the payoffs to the agents, given the strategy profile
of allowable strategies.
64. Define dominant strategy.
Ans: A dominant strategy is a strategy that dominates all others. A strategy s for player p
strongly dominates strategy s' if the outcome of s is better for p than the outcome of s',
whatever the other players do.
65. Define weak dominance.
Ans: Strategy s weakly dominates s' if s is better than s' on at least one strategy profile and no
worse on any other.
66. What is a Nash equilibrium?
Ans: The mathematician John Nash proved that every game has an equilibrium of the type defined
above; it is called a Nash equilibrium.
67. What is the maximin technique?
Ans: In order to find the equilibrium of a zero-sum game, von Neumann developed a method for
finding the optimal mixed strategy. This is called the maximin technique.
68. What is a dominant strategy equilibrium?
Ans: In a two-player game, when both players have a dominant strategy, the combination of
those strategies is called a dominant strategy equilibrium.
69. What is meant by Pareto optimal and Pareto dominated?

Ans: An outcome is referred to as Pareto optimal if there is no other outcome that all players
would prefer.
An outcome is Pareto dominated by another if all players prefer the other outcome.
70.What are the 2 types of strategy ?
Ans: The 2 types of strategy are :
1. Pure strategy
2. Mixed strategy.
71.Define pure strategy.
Ans: A pure strategy is a deterministic policy specifying a particular action to take in each
situation.
72.Define mixed strategy.
Ans: A mixed strategy is a randomized policy that selects particular actions according to a
specific probability distribution over actions.
73.What are the components of a game in game theory ?
Ans: The components of a game in game theory are:
1. Players or agents who will make decisions.
2. Actions that the players can choose.
3. A payoff matrix that gives the utility to each player for each combination of actions by all
the players.
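A small sketch of such a payoff matrix, together with a check for a (weakly) dominant row strategy; the prisoner's-dilemma payoffs and action names are assumed purely for illustration.

```python
# A 2x2 payoff matrix: entries are (row-player, column-player) utilities.
# Prisoner's-dilemma payoffs are assumed for illustration.
payoff = {
    ("testify", "testify"): (-5, -5),
    ("testify", "refuse"):  (0, -10),
    ("refuse",  "testify"): (-10, 0),
    ("refuse",  "refuse"):  (-1, -1),
}
actions = ["testify", "refuse"]

def dominant_for_row(a):
    """True if row action a does at least as well as every other row action
    against every column action (weak dominance)."""
    return all(payoff[(a, c)][0] >= payoff[(b, c)][0]
               for b in actions for c in actions)

print([a for a in actions if dominant_for_row(a)])   # ['testify']
```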
74. What is a contraction?
Ans: A contraction is a function of one argument that, when applied to two different inputs,
produces two output values that are closer together, by some constant factor, than the original
values.
75. What are the properties of a contraction?
Ans: The properties of a contraction are:
A contraction has only one fixed point; if there were two fixed points, they would not
get closer together under the function, so it would not be a contraction.
When the function is applied to any argument, the value must get closer to the fixed
point, and hence repeated application of a contraction always reaches the fixed point.
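A minimal sketch of this behavior, using an assumed contraction f(x) = x/2 + 1, which halves distances and has fixed point 2:

```python
# Repeatedly applying a contraction converges to its unique fixed point.
# f(x) = x/2 + 1 contracts distances by a factor of 1/2; its fixed point
# is x = 2 (since 2/2 + 1 = 2).
f = lambda x: x / 2 + 1

x = 10.0
for _ in range(60):   # each step halves the distance to the fixed point
    x = f(x)
print(x)   # converges to 2.0
```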

UNIT IV
1. What is planning in AI?
Planning is done by a planning agent, or planner, to solve the current world problem.
The planner uses certain algorithms to implement solutions for the given problem; such an
algorithm is called an ideal planner algorithm.
2.How does a planner implement the solution?
Unify action and goal representation to allow selection
(use logical language for both)
Divide-and-conquer by subgoaling
Relax requirement for sequential construction of solutions
3. What is the difference between a planner and a problem-solving agent?
Problem-solving agent: it represents the task as a problem and solves it using search
techniques and other algorithms.
Planner: it overcomes the difficulties that arise in the problem-solving agent.
4. What are the effects of non-planning?
An infinite branching factor in the case of many tasks
A heuristic function must be chosen, and the agent works in the same fixed sequence/order
5. What are the key ideas of the planning approach?
The 3 key ideas of the planning approach are:
Open up the representation of states, goals and actions using FOL
The planner is free to add actions to the plan whenever and wherever they are needed, rather
than building an incremental sequence
Most parts of the world are independent of the other parts
6.State the 3 components of operators in representation of planning?
Action description
Pre-condition
Effect
7. What is STRIPS?
STRIPS stands for STanford Research Institute Problem Solver. Its features:
Tidily arranged action descriptions
Restricted language (function-free literals)
Efficient algorithms
8. What is a situation space planner?
The planner takes the situation and searches for it in the KB to locate it. The plan is reused
if it is already present; otherwise a new plan is made for the situation and executed.
9. What are the drawbacks of the progression planner?
High branching factor during searching
Huge search space
10. What are the planning algorithms for searching a world space?
There are two algorithms:
o Progression: An algorithm that searches for the goal state by searching through the states
generated by actions that can be performed in the given state, starting from the initial state.
o Regression: An algorithm that searches backward from the goal state by finding actions
whose effects satisfy one or more of the posted goals, and posting the chosen action's
preconditions as goals ( goal regression).
11. What is a partial plan?
A partial plan is an incomplete plan, which may be produced during the initial phase of
planning.
There are 2 main operations allowed in planning:
Refinement operator
Modification operator
A (partial) plan consists of:
A set of operator applications Si
Partial (temporal) order constraints Si < Sj
Causal links Si --c--> Sj (read: Si achieves condition c for Sj)
12. What is a fully instantiated plan?
A fully instantiated plan is formally defined as a data structure consisting of the following 4
components:
A set of plan steps
A set of step ordering constraints
A set of variable binding constraints
A set of causal links
13. What is a complete plan?
A plan is complete iff every precondition is achieved.
A precondition c of a step Sj is achieved (by Si) if:
Si < Sj,
c ∈ effect(Si), and
there is no Sk with Si < Sk < Sj and ¬c ∈ effect(Sk) (otherwise Sk is called a clobberer or
threat).
14. What are the properties of POP?
POP performs a non-deterministic search for a plan and backtracks over choice points on failure:
choice of Sadd to achieve Sneed
choice of promotion or demotion for a clobberer
POP is sound and complete, and there are extensions for disjunction, universal quantification,
negation and conditionals. It is efficient with good heuristics derived from the problem
description, but very sensitive to subgoal ordering; it is good for problems with loosely
related subgoals.
15. What is conditional planning?
In conditional planning, the agent:
Plans to obtain information (observation actions)
Builds a subplan for each contingency
Example: [Check(Tire1); If(Intact(Tire1); [Inflate(Tire1)]; [CallHelp])]
Disadvantage: expensive, because it plans for many unlikely cases.
It is similar to POP: if an open condition can be established by an observation action,
add the action to the plan, and
complete the plan for each possible observation outcome.
16. What is monitoring/replanning?
In monitoring/replanning, the agent:
Assumes normal states/outcomes
Checks progress during execution, and replans if necessary
Disadvantage: unanticipated outcomes may lead to failure.

17. Distinguish between the learning element and performance element
Learning element is responsible for making improvements.
The learning element takes some knowledge about the learning element and some feedback on
how the agent is doing, and determines how the performance element should be modified to
(hopefully) do better in the future.
Performance element is responsible for selecting external actions.
The performance element is what we have previously considered to be the entire agent: it takes
in percepts and decides on actions.
18. What is the role of critic in the general model of learning.
The critic is designed to tell the learning element how well the agent is doing. The critic
employs a fixed standard of performance. This is necessary because the percepts themselves
provide no indication of the agent's success.
19. Describe the problem generator.
It is responsible for suggesting actions that will lead to new and informative experiences. The
point is that if the performance element had its way, it would keep doing the actions that are best,
given what it knows. But if the agent is willing to explore a little, and do some perhaps
suboptimal actions in the short run, it might discover much better actions for the long run.
20. What is meant by Speedup Learning.
The learning element is also responsible for improving the efficiency of the performance
element. For example, when asked to make a trip to a new destination, the taxi might take a
while to consult its map and plan the best route. But the next time a similar trip is requested, the
planning process should be much faster. This is called speedup learning
21. What are the issues in the design of learning agents.
The design of the learning element is affected by four major issues:
Which components of the performance element are to be improved.
What representation is used for those components.
What feedback is available.
What prior information is available.
22. List the Components of the performance element
A direct mapping from conditions on the current state to actions.
A means to infer relevant properties of the world from the percept sequence.
Information about the way the world evolves.
Information about the results of possible actions the agent can take.
Utility information indicating the desirability of world states.
Action-value information indicating the desirability of particular actions in particular
states.
Goals that describe classes of states whose achievement maximizes the agent's utility.
23. What are the types of learning
Supervised learning
Unsupervised learning
Reinforcement learning
24. What are the approaches for learning logical sentences.

The two approaches to learning logical sentences:


Decision tree methods, which use a restricted representation of logical sentences
specifically designed for learning,
Version-space approach, which is more general but often rather inefficient.
25. What are decision trees.
A decision tree takes as input an object or situation described by a set of properties, and outputs
a yes/no "decision." Decision trees therefore represent Boolean functions.
Each internal node in the tree corresponds to a test of the value of one of the properties,
and the branches from the node are labelled with the possible values of the test. Each leaf node in
the tree specifies the Boolean value to be returned if that leaf is reached.
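A minimal sketch of such a tree as nested dictionaries, with internal nodes testing an attribute and leaves holding the Boolean decision; the attribute names and tests loosely follow the restaurant domain and are assumed for illustration.

```python
# A decision tree as nested dicts: each internal node names the attribute
# it tests and maps each attribute value to a subtree or a Boolean leaf.
# (Attribute names and structure are illustrative assumptions.)
tree = {"attr": "Patrons",
        "branches": {"None": False,
                     "Some": True,
                     "Full": {"attr": "Hungry",
                              "branches": {"Yes": True, "No": False}}}}

def decide(node, example):
    """Walk the tree, following the branch for each tested attribute,
    until a Boolean leaf is reached."""
    while isinstance(node, dict):
        node = node["branches"][example[node["attr"]]]
    return node

print(decide(tree, {"Patrons": "Full", "Hungry": "Yes"}))   # True
```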
26. Give an example for decision tree logical sentence.
The path for a restaurant full of patrons, with an estimated wait of 10-30 minutes when the agent
is not hungry is expressed by the logical sentence
∀r Patrons(r, Full) ∧ WaitEstimate(r, 10-30) ∧ Hungry(r, N) ⇒ WillWait(r)
27. What are the parity function and the majority function?
The parity function returns 1 if and only if an even number of its inputs are 1; representing
it requires an exponentially large decision tree.
The majority function returns 1 if more than half of its inputs are 1.

28. Draw an example decision tree

29. Explain the terms positive example, negative example and training set.
An example is described by the values of the attributes and the value of the goal
predicate. We call the value of the goal predicate the classification of the example. If the goal
predicate is true for some example, we call it a positive example; otherwise we call it a negative
example. Consider a set of examples X1, ..., X12 for the restaurant domain. The positive examples
are the ones where the goal WillWait is true (X1, X3, ...) and the negative examples are the ones
where it is false (X2, X5, ...). The complete set of examples is called the training set.
30. Explain the methodology used for assessing the performance of a learning algorithm.
1. Collect a large set of examples.
2. Divide it into two disjoint sets: the training set and the test set.
3. Use the learning algorithm with the training set as examples to generate a hypothesis H.
4. Measure the percentage of examples in the test set that are correctly classified by H.
5. Repeat steps 1 to 4 for different sizes of training sets and different randomly selected
training sets of each size.
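The first four steps of this methodology can be sketched as follows, using a trivial threshold learner on synthetic data as a stand-in for a real learning algorithm; all names, data and the learner itself are assumptions for illustration.

```python
import random
random.seed(0)

# Step 1: collect a large set of examples (input x, label x > 0.5).
examples = [(x, x > 0.5) for x in (random.random() for _ in range(200))]
random.shuffle(examples)

# Step 2: divide them into two disjoint sets: training and test.
train, test = examples[:150], examples[150:]

# Step 3: generate a hypothesis H from the training set. Here H is a
# simple threshold at the midpoint between the two class means.
pos = [x for x, y in train if y]
neg = [x for x, y in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
H = lambda x: x > threshold

# Step 4: measure the fraction of test examples correctly classified by H.
accuracy = sum(H(x) == y for x, y in test) / len(test)
print(accuracy)
```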
31. Draw an example of training set.

32. What is overfitting?
Whenever there is a large set of possible hypotheses, one has to be careful not to use the
resulting freedom to find meaningless "regularity" in the data. This problem is called overfitting.
It is a very general phenomenon, and occurs even when the target function is not at all random. It
afflicts every kind of learning algorithm, not just decision trees.
33. What is decision tree pruning?
Pruning works by preventing recursive splitting on attributes that are not clearly relevant, even
when the data at that node in the tree is not uniformly classified.
34. What is pruning?
The probability that an attribute is really irrelevant can be calculated with the help of
standard statistical tests; eliminating tests on such irrelevant attributes is called pruning.
35. What is Cross Validation.
Cross-validation is another technique that eliminates the dangers of overfitting. The basic idea
of cross-validation is to try to estimate how well the current hypothesis will predict unseen data.
This is done by setting aside some fraction of the known data, and using it to test the prediction
performance of a hypothesis induced from the rest of the known data.
36. What is Missing Data

In many domains, not all the attribute values will be known for every example. The values may
not have been recorded, or they may be too expensive to obtain. This gives rise to two problems.
First, given a complete decision tree, how should one classify an object that is missing one of the
test attributes? Second, how should one modify the information gain formula when some
examples have unknown values for the attribute?
37. What are multivalued attributes?
When an attribute has a large number of possible values, the information gain measure gives an
inappropriate indication of the attribute's usefulness. Consider the extreme case where every
example has a different value for the attribute; for instance, suppose we were to use an attribute
RestaurantName in the restaurant domain. In such a case, each subset of examples is a singleton
and therefore has a unique classification, so the information gain measure would have its highest
value for this attribute, even though the attribute may be irrelevant or useless; the gain ratio
measure is used to compensate for this.
38.What is continuous valued attribute
Attributes such as Height and Weight have a large or infinite set of possible values. They are
therefore not well-suited for decision-tree learning in raw form. An obvious way to deal with this
problem is to discretize the attribute. For example, the Price attribute for restaurants was
discretized into $, $$, and $$$ values. Normally, such discrete ranges would be defined by hand.
A better approach is to preprocess the raw attribute values during the tree-growing process in
order to find out which ranges give the most useful information for classification purposes.
39. What are version-space methods?
Although version-space methods are probably not practical in most real-world learning problems,
mainly because of noise, they provide a good deal of insight into the logical structure of
hypothesis space.
40. What is PAC learning?
Any hypothesis that is seriously wrong will almost certainly be "found out" with high
probability after a small number of examples, because it will make an incorrect prediction. Thus,
any hypothesis that is consistent with a sufficiently large set of training examples is unlikely to
be seriously wrong; that is, it must be Probably Approximately Correct. PAC learning is the
subfield of computational learning theory that is devoted to this idea.
41. What is the difference between the learning element and the performance element?
Learning agents can be divided conceptually into a performance element, which is
responsible for selecting actions, and a learning element, which is responsible for modifying
the performance element.
42. Give a function for decision list learning.

function DECISION-LIST-LEARNING(examples) returns a decision list, or failure
  if examples is empty then return the trivial decision list No
  t <- a test that matches a nonempty subset examples_t of examples,
      such that the members of examples_t are all positive or all negative
  if there is no such t then return failure
  if the examples in examples_t are positive then o <- Yes
  else o <- No
  return a decision list with initial test t and outcome o,
      and remaining elements given by DECISION-LIST-LEARNING(examples - examples_t)

43.What is decision list?
A decision list is a logical expression of a restricted form. It consists of a series of tests, each
of which is a conjunction of literals. If a test succeeds when applied to an example
description, the decision list specifies the value to be returned. If the test fails, processing
continues with the next test in the list.

UNIT V
1. Define expert system.
An expert system is a computer program that simulates the thought process of a human expert to
solve complex decision problems in a specific domain.
An expert system is an interactive computer-based decision tool that uses both facts and
heuristics to solve difficult decision problems based on knowledge acquired from an expert.
2. What are applications of expert systems?
Interpreting and identifying
Predicting
Diagnosing
Designing
Planning
Monitoring
Debugging and testing
Instructing and training
Controlling
3. What are the distinguishing characteristics of programming languages needed for expert
systems work?
Efficient mix of integer and real variables
Good memory-management procedures
Extensive data-manipulation routines
Incremental compilation
Tagged memory architecture
Optimization of the systems environment
Efficient search procedures
4. How are expert systems organized?
1. Knowledge base consists of problem-solving rules, procedures, and intrinsic data relevant to
the problem domain.
2. Working memory refers to task-specific data for the problem under consideration.
3. Inference engine is a generic control mechanism that applies the axiomatic knowledge in the
knowledge base to the task-specific data to arrive at some solution or conclusion.
5. What are the Needs for Expert Systems?
1. Human expertise is very scarce.
2. Humans get tired from physical or mental workload.
3. Humans forget crucial details of a problem.
4. Humans are inconsistent in their day-to-day decisions.

5. Humans have limited working memory.


6. Humans are unable to comprehend large amounts of data quickly.
7. Humans are unable to retain large amounts of data in memory.
8. Humans are slow in recalling information stored in memory.
9. Humans are subject to deliberate or inadvertent bias in their actions.
10. Humans can deliberately avoid decision responsibilities.
11. Humans lie, hide, and die.
6. Why are conventional programs less effective in implementing human-like decision processes?
Conventional programs:
1. Are algorithmic in nature and depend only on raw machine power
2. Depend on facts that may be difficult to obtain
3. Do not make use of the effective heuristic approaches used by human experts
4. Are not easily adaptable to changing problem environments
5. Seek explicit and factual solutions that may not be possible
7. What are the Benefits of Expert Systems?
1. Increase the probability, frequency, and consistency of making good decisions
2. Help distribute human expertise
3. Facilitate real-time, low-cost expert-level decisions by the nonexpert
4. Enhance the utilization of most of the available data
5. Permit objectivity by weighing evidence without bias and without regard for the user's
personal and emotional reactions
6. Permit dynamism through modularity of structure
7. Free up the mind and time of the human expert to enable him or her to concentrate on more
creative activities
8. Encourage investigations into the subtle areas of a problem
8. Write short notes on HEURISTIC REASONING?
Human experts use a type of problem-solving technique called heuristic reasoning. Commonly
called rules of thumb or expert heuristics, it allows the expert to arrive at a good solution quickly
and efficiently. Expert systems base their reasoning process on symbolic manipulation and
heuristic inference procedures that closely match the human thinking process. Conventional
programs can only recognize numeric or alphabetic strings and manipulate them only in a
preprogrammed manner.
9. Write short notes on Search Control Methods
All expert systems are searching intensive. Many techniques have been employed to make these
intensive searches more efficient. Branch and bound, pruning, depth-first search, and
breadth-first search are some of the search techniques that have been explored. Because of the
intensity of the search process, it is important that good search control strategies be used in the
expert system's inference process.
10. Write short notes on Forward Chaining
This method involves checking the condition part of a rule to determine whether it is true or
false. If the condition is true, then the action part of the rule is also true. This procedure
continues until a solution is found or a dead end is reached. Forward chaining is commonly
referred to as data-driven reasoning.
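The data-driven procedure described above can be sketched as follows (an illustrative sketch, not part of the syllabus; the rule and fact names are made up):

```python
# Minimal forward-chaining sketch: rules are (conditions, conclusion)
# pairs; known facts grow until no further rule can fire.
def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # If the condition part is true, the action part becomes a new fact.
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"has_fur", "barks"}, "dog"),
         ({"dog"}, "mammal")]
print(forward_chain(rules, {"has_fur", "barks"}))
# derives both "dog" and "mammal" from the initial data
```

The loop repeats until a pass adds no new facts, which is the "solution found or dead end reached" condition in the definition.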
11. Write short notes on Backward Chaining

Backward chaining is the reverse of forward chaining. It is used to backtrack from a goal to the
paths that lead to the goal. Backward chaining is very good when all outcomes are known and
the number of possible outcomes is not large. In this case, a goal is specified and the expert
system tries to determine what conditions are needed to arrive at the specified goal. Backward
chaining is thus also called goal-driven.
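The goal-driven procedure can be sketched in the same style (again an illustrative sketch with made-up rule names, not a prescribed implementation):

```python
# Minimal backward-chaining sketch: prove a goal by recursively
# proving the conditions of any rule that concludes it.
def backward_chain(rules, facts, goal, seen=None):
    seen = seen or set()
    if goal in facts:            # goal is a known fact
        return True
    if goal in seen:             # avoid looping on cyclic rules
        return False
    seen = seen | {goal}
    for conditions, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(rules, facts, c, seen) for c in conditions):
            return True
    return False

rules = [({"has_fur", "barks"}, "dog"),
         ({"dog"}, "mammal")]
print(backward_chain(rules, {"has_fur", "barks"}, "mammal"))  # True
```

Note the direction: instead of deriving everything from the data, the system starts from the specified goal and works back to the conditions needed to establish it.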
12. Write short notes on Data Uncertainties
Expert systems are capable of working with inexact data. An expert system allows the user to
assign probabilities, certainty factors, or confidence levels to any or all input data. This feature
closely represents how most problems are handled in the real world. An expert system can take
all relevant factors into account and make a recommendation based on the best possible solution
rather than the only exact solution.
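As a small worked example of combining such inexact data, the classic MYCIN-style rule for merging two positive certainty factors supporting the same conclusion can be sketched as (a sketch for illustration; the particular values are made up):

```python
# Combine two positive certainty factors (each in [0, 1]) that
# independently support the same conclusion, MYCIN-style.
def combine_cf(cf1, cf2):
    return cf1 + cf2 * (1 - cf1)

# Two rules give the conclusion CF 0.6 and 0.5 respectively:
print(combine_cf(0.6, 0.5))  # 0.8
```

The combined factor is higher than either input but never exceeds 1, matching the intuition that independent pieces of evidence reinforce a conclusion.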
13.What is robotics?
"A reprogrammable, multifunctional manipulator designed to move material, parts, tools,
or specialized devices through various programmed motions for the performance of a variety of
tasks
14. How do robots perceive? Or what is perception?
Perception is the process by which robots map sensor measurements into internal
representations of the environment. Perception is difficult because in general the sensors are
noisy, and the environment is partially observable, unpredictable, and often dynamic.
15. How do robots plan to move?
In robotics, decisions ultimately involve motion of effectors. The point-to-point
motion problem is to deliver the robot or its end-effector to a designated target location. A
greater challenge is the compliant motion problem, in which a robot moves while being in
physical contact with an obstacle. An example of compliant motion is a robot manipulator that
screws in a light bulb, or a robot that pushes a box across a tabletop.
16. What is Swarm intelligence?
Swarm intelligence research originates from work on simulating the emergence of
collective intelligent behaviors of real ants. Ants are able to find good solutions to shortest-path
problems between the nest and a food source by laying down, on their way back from the
food source, a trail of an attracting substance, a pheromone. Based on this pheromone-level
communication, the shortest path is considered to be the one with the greatest density of
pheromone, and the ants will tend to follow the path with more pheromone.
17.What is ant system and ant colony system?
Inspired by the food-seeking behavior of real ants, the ant system [1,2] is a cooperative,
population-based search algorithm. As each ant constructs a route from nest to food by
stochastically following the quantities of pheromone, the intensity of the laid pheromone
biases the path-choosing decisions of subsequent ants.
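This pheromone feedback loop can be sketched on a toy two-path problem (an illustrative sketch only, not the full ant system; the path lengths and parameter values are made up):

```python
import random

# Toy ant-system sketch: ants choose a path in proportion to its
# pheromone level; shorter paths receive more pheromone per trip,
# and pheromone evaporates at rate rho each iteration.
def ant_system(lengths, n_ants=100, rho=0.5, trips=20, seed=0):
    rng = random.Random(seed)
    pher = [1.0] * len(lengths)          # initial pheromone on each path
    for _ in range(trips):
        deposits = [0.0] * len(lengths)
        for _ in range(n_ants):
            # Roulette-wheel choice biased by pheromone level.
            r, i = rng.random() * sum(pher), 0
            while r > pher[i]:
                r -= pher[i]
                i += 1
            deposits[i] += 1.0 / lengths[i]   # shorter path => more pheromone
        pher = [(1 - rho) * p + d for p, d in zip(pher, deposits)]
    return pher

pher = ant_system([1.0, 2.0])
print(pher[0] > pher[1])   # the shorter path accumulates the most pheromone
```

The positive feedback is visible: the shorter path earns more pheromone per trip, attracts more ants, and so earns more pheromone still, which is exactly the mechanism described above.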

18. What are the essential characteristics needed for a robot?
A robot has these essential characteristics:
Sensing: First of all, a robot has to be able to sense its surroundings, in ways not unlike
the way you sense yours. Giving the robot sensors: light sensors (eyes), touch and
pressure sensors (hands), chemical sensors (nose), hearing and sonar sensors (ears), and
taste sensors (tongue) gives it awareness of its environment.
Movement: A robot needs to be able to move around its environment, whether rolling on
wheels, walking on legs, or propelling itself by thrusters. To count as a robot, either the
whole robot moves, like the Sojourner, or just parts of the robot move, like the Canadarm.
Energy: A robot needs to be able to power itself. A robot might be solar powered,
electrically powered, or battery powered; the way a robot gets its energy depends on
what the robot needs to do.
Intelligence: A robot needs some kind of "smarts." This is where programming enters the
picture. A programmer is the person who gives the robot its "smarts," and the robot must
have some way to receive the program so that it knows what it is to do.
19. What is a sensor?
Sensors are the perceptual interface between robots and their environments. Passive
sensors, such as cameras, are true observers of the environment: they capture signals that are
generated by other sources in the environment. Active sensors, such as sonar, send energy into
the environment and rely on that energy being reflected back to the sensor.
20. What is an effector?
Effectors are the means by which robots move and change the shape of their bodies. To
understand the design of effectors, it helps to talk about motion and shape in the abstract,
using the concept of a degree of freedom (DOF).
21. What is an expert system shell?
A rule-based expert system maintains a separation between its knowledge base and the
part of the system that executes rules, often referred to as the expert system shell. The shell
is indifferent to the rules it executes.
22. What is metadata in a knowledge base?
An expert system is typically composed of two major components, the knowledge base
and the expert system shell. The knowledge base is a collection of rules encoded as metadata
in a file.
23. Explain the shell portion in software modules
The shell portion includes software modules whose purpose is to:
1. Process requests for service from system users and application-layer modules;
2. Support the creation and modification of business rules by subject matter experts;
3. Translate business rules, created by subject matter experts, into machine-readable forms;
4. Execute business rules; and
5. Provide low-level support to expert system components (e.g., retrieve metadata from and
save metadata to the knowledge base, build Abstract Syntax Trees during translation of
business rules, etc.).