Introduction
Gheorghe Tecuci tecuci@gmu.edu http://lalab.gmu.edu/
Overview
Intelligent agents
Recommended reading
[Photos: Herbert Simon at GMU (1991); John McCarthy visiting GMU's demo booth at AAAI-2002.]
Failure of perceptrons to learn such a simple function as the exclusive OR. Work on neural networks stopped. Realization of the difficulty of the learning process and of the limitations of the explored methods. Symbolic concept learning (Winston's influential thesis, 1972).
Change of problem-solving paradigm: from search-based problem solving to knowledge-based problem solving. The first expert systems: Dendral infers molecular structure from the information provided by a mass spectrometer; Mycin diagnoses blood infections; Prospector recommended exploratory drilling at a geological site that proved to contain a large molybdenum deposit.
The first successful commercial expert systems. Many AI companies. Exploration of different learning strategies (Explanation-based learning, Case-based Reasoning, Genetic algorithms, Neural networks, etc.)
Overview
Intelligent agents
Recommended reading
An intelligent agent is a system that perceives its environment and acts upon that environment to realize a set of goals or tasks for which it was designed.
[Diagram: the intelligent agent receives input from the user/environment through its sensors and acts on it through its effectors.]
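A minimal Common Lisp sketch of this perceive-reason-act cycle (the toy environment, the goal encoding, and all function names below are illustrative assumptions, not material from the lecture):

```lisp
;;; Sketch of the perceive-reason-act cycle of an agent.
;;; The environment (a tank whose water level the agent regulates)
;;; and the function names are illustrative assumptions.

(defun perceive (environment)
  "Read the current water level through the agent's sensors."
  (getf environment :level))

(defun decide (level goal)
  "Reason: choose the action that moves the environment toward GOAL."
  (cond ((< level goal) :fill)
        ((> level goal) :drain)
        (t :idle)))

(defun act (environment action)
  "Apply the chosen action through the agent's effectors."
  (case action
    (:fill  (incf (getf environment :level)))
    (:drain (decf (getf environment :level))))
  environment)

(defun run-agent (environment goal steps)
  "Repeat the perceive-reason-act cycle STEPS times."
  (dotimes (i steps environment)
    (setf environment
          (act environment (decide (perceive environment) goal)))))

;; Example: (run-agent (list :level 2) 5 4) => (:LEVEL 5)
```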
The intelligent agents we are concerned with are in between. They are clearly not as capable as humans, but they are significantly more capable than a thermostat.
An intelligent agent interacts with a human or with other agents via some kind of agent-communication language. It need not blindly obey commands: it may modify requests, ask clarification questions, or even refuse to satisfy certain requests. It can accept high-level requests indicating what the user wants and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take and in what sequence.
If an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object.
[Diagram: the model represents the application domain. Concepts CUP, BOOK, and TABLE have instances CUP1, BOOK1, and TABLE1 (INSTANCE-OF); CUP1 is ON BOOK1, which is ON TABLE1.]
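A minimal Common Lisp sketch of how such a domain model could be encoded (the triple representation and the function names are illustrative assumptions, not the agent's actual knowledge representation):

```lisp
;;; Sketch of the domain model from the diagram: facts as triples,
;;; and the "on top of" law as a recursive check.
;;; The representation is an illustrative assumption.

(defparameter *facts*
  '((cup1   instance-of cup)
    (book1  instance-of book)
    (table1 instance-of table)
    (cup1   on book1)
    (book1  on table1)))

(defun directly-on-p (x y)
  "True if the fact (X ON Y) is explicitly stored."
  (member (list x 'on y) *facts* :test #'equal))

(defun on-top-of-p (x y)
  "General law: X is on top of Y if it is directly on Y,
or directly on some Z that is itself on top of Y."
  (or (directly-on-p x y)
      (some (lambda (fact)
              (and (eq (first fact) x)
                   (eq (second fact) 'on)
                   (on-top-of-p (third fact) y)))
            *facts*)))

;; Example: (on-top-of-p 'cup1 'table1) => true, inferred from the law.
```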
[Diagram: the intelligent agent, connected to the user/environment through input/sensors and output/effectors, contains data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc.]
The agent should have the ability to explain its behavior: what decisions it is making and why.
Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent. The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities and not to replace human activity.
Ability to communicate
An agent should be able to communicate with its users or with other agents. The communication language should be as natural to the human users as possible; ideally, it should be unrestricted natural language. The problem of natural language understanding and generation is very difficult because of the ambiguity of words and sentences, and because of the paraphrases, ellipses, and references used in human communication.
Illustration: ambiguity of natural language. Words and sentences have multiple meanings.
Diamond: a mineral consisting of nearly pure carbon in crystalline form, usually colorless, the hardest natural substance known; a gem or other piece cut from this mineral; a lozenge-shaped plane figure; in baseball, the infield or the whole playing field.
Visiting relatives can be boring.
  To visit relatives can be boring.
  The relatives that visit us can be boring.
She told the man that she hated to run alone.
  She told the man: "I hate to run alone!"
  She told the man whom she hated: "Run alone!"
Ellipsis: use of sentences that appear ill-formed because they are incomplete. Typically the missing parts have to be extracted from the previous sentences.
Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: The beam?
John: 130
[Diagram: a game tree for look-ahead in a board game such as checkers, with numbered positions and the leaf positions labeled win or lose.]
Checkers is far simpler than chess, which, in turn, is generally far simpler than business competitions or military games.
Use of heuristics
Intelligent agents generally attack problems for which no algorithm is known or feasible, problems that require heuristic methods. A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions. In fact they do not guarantee any solution at all. A useful heuristic is one that offers solutions which are good enough most of the time.
2. Estimate the values of the leaf nodes by using a static evaluation function.
Heuristic function for board position evaluation: w1*f1 + w2*f2 + w3*f3 + ..., where the wi are real-valued weights and the fi are board features (e.g., center control, total mobility, relative exchange advantage).
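A minimal Common Lisp sketch of such a weighted evaluation function, and of backing up the estimated leaf values through the game tree (the weights, the feature encoding, and the node representation are illustrative assumptions, not an actual checkers program):

```lisp
;;; Sketch of heuristic board evaluation and minimax value backup.
;;; Weights, features, and node structure are illustrative assumptions.

(defparameter *weights* '(1.0 0.5 2.0))   ; illustrative w1, w2, w3

(defun static-eval (features)
  "Weighted sum w1*f1 + w2*f2 + w3*f3 + ... over board FEATURES
such as center control, total mobility, relative exchange advantage."
  (reduce #'+ (mapcar #'* *weights* features)))

(defun minimax-value (node)
  "Back up static evaluations of the leaves toward the root.
A leaf is a list of feature values; an inner node is (:max ...) or (:min ...)."
  (cond ((numberp (first node)) (static-eval node))
        ((eq (first node) :max)
         (reduce #'max (mapcar #'minimax-value (rest node))))
        ((eq (first node) :min)
         (reduce #'min (mapcar #'minimax-value (rest node))))))

;; Example:
;; (minimax-value '(:max (:min (3 2 1) (1 0 4)) (:min (2 2 2)))) => 7.0
```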
The reasoning of a physician in an intensive care unit. Planning a military course of action.
If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem.
Example: The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.
Ability to learn
An agent is improving its competence if it learns to solve a broader class of problems, and to make fewer mistakes in problem solving. An agent is improving its efficiency if it learns to solve more efficiently (for instance, by using less time or space resources) the problems from its area of competence.
Concept examples
[Diagram: pictures of positive (+) examples of the concept to be learned.]
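A minimal Common Lisp sketch of learning a concept description from such positive examples, in the spirit of the Find-S algorithm (the attributes and the example data are illustrative assumptions, not the actual slide material):

```lisp
;;; Sketch of generalizing a concept description from positive examples.
;;; Attributes and example data are illustrative assumptions.

(defun generalize (hypothesis example)
  "Keep the attribute values shared by HYPOTHESIS and EXAMPLE;
replace the differing ones with the wildcard ?."
  (mapcar (lambda (h e) (if (equal h e) h '?)) hypothesis example))

(defun learn-concept (positive-examples)
  "Start from the first positive example and generalize just enough
to cover each further positive example."
  (reduce #'generalize (rest positive-examples)
          :initial-value (first positive-examples)))

;; Example: two positive examples of the concept "cup",
;; described as (material color has-handle):
;; (learn-concept '((ceramic white has-handle)
;;                  (ceramic blue  has-handle)))
;; => (CERAMIC ? HAS-HANDLE)   ; the color is generalized away
```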
[Diagram: the intelligent agent architecture: input/sensors and output/effectors connect it to the user/environment, and its knowledge base consists of an ontology and of rules/cases/methods.]
Critiquing: Expressing judgments about something according to certain standards. Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of army operations.
Diagnosis: Inferring system malfunctions from observables. Examples: Determining the disease of a patient from the observed symptoms. Locating faults in electrical circuits. Finding defective components in the cooling system of nuclear reactors.
Design: Configuring objects under constraints. Example: Designing integrated circuit layouts.
Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.
Any useful task: Information fusion. Information assurance. Travel planning. Email management.
[Diagram: the knowledge engineer builds the knowledge base; the inference engine applies the knowledge base to produce results.]
A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems and then encodes the acquired expertise into the agent's knowledge base. The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.
Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem-solving knowledge. This takes time and effort. Experts express their knowledge informally, using natural language, visual representations, and common sense, often omitting essential details that are considered obvious. This form of knowledge is very different from the form in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete).
This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful, and inefficient (and is known as the "knowledge acquisition bottleneck" of the AI systems development process).
[Diagram: the Disciple-RKF/COG agent: an interface, problem solving and learning modules, and a knowledge base of ontology + rules.]
Software systems developed and used by persons who are not computer experts.
Software systems developed by computer experts and used by persons who are not computer experts.
Software systems developed and used by computer experts.
Humans have limitations that agents may alleviate (e.g., memory for details that is not affected by stress, fatigue, or time constraints).
Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.
The evolution of information technology makes intelligent agents essential components of our future systems and organizations.
Our future computers and most of the other systems and tools will gradually become intelligent agents.
We have to be able to deal with intelligent agents either as users, or as developers, or as both.
Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods.
Intelligent agents are necessary to cope with the increasing challenges of the information society.
Overview
Intelligent agents
Recommended reading
If a combatant eliminates or influences the enemy's strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster. (Giles and Galvin, USAWC 1996).
Based on the concepts of critical capabilities, critical requirements, and critical vulnerabilities, which have recently been adopted into the joint military doctrine of the USA (Strange, 1996).
Applied to current war scenarios (e.g., War on terror 2003, Iraq 2003) with state and non-state actors (e.g., Al Qaeda).
Identification of COG candidates
Identify potential primary sources of moral or physical strength, power, and resistance from: government, military, people, economy, alliances, etc.
[Diagram: identification of COG candidates (e.g., the people, the military) and critical capabilities such as: receive communication from the highest-level leadership, communicate desires to the highest-level leadership, support the goal, communicate, be influential, be a driving force, have support, be irreplaceable.]
Republican Guard Protection Unit: loyalty not based on conviction and can be influenced by US-led coalition.
Iraqi Military: loyalty can be influenced by US-led coalition; can be destroyed by US-led coalition.
Complex of Iraqi Bunkers: location known to US-led coalition; design known to US-led coalition; can be destroyed by US-led coalition.
System of Saddam Doubles: loyalty of Saddam Doubles to Saddam can be influenced by US-led coalition.
Demonstration
Teaching Disciple how to determine whether a strategic leader has the critical capability to be protected.
Disciple Demo
[Diagram: the Disciple-RKF/COG agent (interface, ontology + rules) at the intersection of several AI areas: search, knowledge acquisition, human-agent interaction, machine learning, natural language generation, knowledge engineering, intelligent tutoring.]
Reading assignments
Russell S. and Norvig P., Artificial Intelligence: A Modern Approach, Second Edition, Prentice Hall, pp. 1-58 (survey, recommended).
Tecuci G., Boicu M., Marcu D., Stanescu B., Boicu C., Comello J., Training and Using Disciple Agents: A Case Study in the Military Center of Gravity Analysis Domain, AI Magazine, 24, 4, pp. 51-68, AAAI Press, Menlo Park, California, 2002, http://lalab.gmu.edu/publications/default.htm (recommended).
Tecuci G., Introduction to Common Lisp, http://lalab.gmu.edu/cs580fa03/cs580-fa03.htm (required).
Wilensky R., Common LISPcraft, Norton & Company, Chapters 1-12 (recommended). The lecture notes are based primarily on this book.
Graham P., ANSI Common Lisp, Chapters 1-3 and 5-9 (recommended). Also available online at http://www.paulgraham.com/onlisptext.html