
1. Introduction
Gheorghe Tecuci tecuci@gmu.edu http://lalab.gmu.edu/

Learning Agents Laboratory Department of Computer Science George Mason University


2003, G.Tecuci, Learning Agents Laboratory

Overview

Artificial Intelligence and its evolution

Intelligent agents

Intelligent agent demo

Recommended reading


What is Artificial Intelligence


Central goals of Artificial Intelligence


Understand the principles that make intelligence possible (in humans, animals, and artificial agents).

Develop intelligent machines or agents (no matter whether they operate as humans do or not).

Formalize knowledge and mechanize reasoning in all areas of human endeavor.

Make working with computers as easy as working with people.

Develop human-machine systems that exploit the complementarity of human and automated reasoning.

History of Artificial Intelligence


Stone age (1943-1956)
Early work on neural networks and logic.
The Logic Theorist (Allen Newell and Herbert Simon).
Birth of AI: the Dartmouth workshop, summer 1956.
John McCarthy's name for the field: artificial intelligence.

Photos: Herbert Simon at GMU (1991); John McCarthy visiting GMU's demo booth at AAAI-2002.

History of Artificial Intelligence


Early enthusiasm, great expectations (1952-1969)
McCarthy (1958): defined Lisp, invented time-sharing, proposed the Advice Taker.
Learning without knowledge: neural modeling, evolutionary learning, Samuel's learning checkers player.
Robinson's resolution method.
Minsky: the microworlds (e.g. the blocks world).
Many small demonstrations of intelligent behavior.
Simon's over-optimistic predictions.

History of Artificial Intelligence


Dark ages (1966-1973)
AI did not scale up: combinatorial explosion. The fact that a program can find a solution in principle does not mean that the program contains any of the mechanisms needed to find it in practice.
Failure of the natural language translation approach based on simple grammars and word dictionaries. The famous retranslation English -> Russian -> English of "the spirit is willing but the flesh is weak" into "the vodka is good but the meat is rotten". Funding for natural language processing stopped.

History of Artificial Intelligence

Failure of perceptrons to learn even a function as simple as the exclusive OR. Work on neural networks stopped.
Realization of the difficulty of the learning process and of the limitations of the explored methods.
Symbolic concept learning (Winston's influential thesis, 1972).

History of Artificial Intelligence


Renaissance (1969-1979)

Change of problem solving paradigm: from search-based problem solving to knowledge-based problem solving.
The first expert systems:
Dendral: infers molecular structure from the information provided by a mass spectrometer.
Mycin: diagnoses blood infections.
Prospector: recommended exploratory drilling at a geological site that proved to contain a large molybdenum deposit.

History of Artificial Intelligence


Industrial age (1980-present)

The first successful commercial expert systems. Many AI companies. Exploration of different learning strategies (Explanation-based learning, Case-based Reasoning, Genetic algorithms, Neural networks, etc.)


History of Artificial Intelligence


The return of neural networks (1986-present)
The reinvention of the back-propagation learning algorithm for neural networks, first found in 1969 by Bryson and Ho.
Many successful applications of neural networks.
Some disillusionment with respect to the difficulty of building expert systems (the knowledge acquisition bottleneck).

History of Artificial Intelligence


Maturity (1987-present) Change in the content and methodology of AI research: build on existing theories rather than propose new ones; base claims on theorems and experiments rather than on intuition; show relevance to real-world applications rather than toy examples.


History of Artificial Intelligence


Intelligent agents (1995-present)
The realization that the previously isolated subfields of AI (speech recognition, planning, robotics, computer vision, machine learning, knowledge representation, etc.) need to be reorganized when their results are to be tied together into a single agent design.
A process of reintegration of different sub-areas of AI to build a whole agent: the agent perspective of AI; agent architectures (e.g. SOAR, Disciple); multi-agent systems; agents for different types of applications; web agents.

State of the Art in Artificial Intelligence


Deep Blue defeated Kasparov, the chess world champion.
PEGASUS, a speech understanding system, is able to handle transactions such as finding the cheapest airfare.
MARVEL: a real-time expert system that monitors the stream of data from the Voyager spacecraft and signals any anomalies.
A robotic system drives a car at 55 mph on the public highway.
A diagnostic expert system is correcting the diagnosis of a reputable expert.
Intelligent agents for a variety of domains are proliferating at a very high rate.
Subject matter experts teach a learning agent their reasoning in military center of gravity determination.

Overview

Artificial Intelligence and its evolution

Intelligent agents

Intelligent agent demo

Recommended reading


What is an intelligent agent


An intelligent agent is a system that: perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or other complex environment); reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and

acts upon that environment to realize a set of goals or tasks for which it was designed.

[Figure: the intelligent agent receives input from the user/environment through its sensors and acts on the user/environment through its output/effectors.]

What is an intelligent agent (cont.)


Humans, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex sophisticated control structures, are at the highest end of being an agent. At the low end of being an agent is a thermostat. It continuously senses the room temperature, starting or stopping the heating system each time the current temperature is out of a pre-defined range.

The intelligent agents we are concerned with are in between. They are clearly not as capable as humans, but they are significantly more capable than a thermostat.
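To make the low end of this spectrum concrete, here is a minimal sketch (in Python, purely illustrative) of a thermostat viewed as a sense-act agent; the sensor/heater interfaces and the 19-21 degree range are assumptions, not part of the slide.

```python
# Minimal sketch of a thermostat-like agent: one perceive-act cycle that reads
# the room temperature and starts or stops the heating to keep it in range.
# The sensor/heater objects and the temperature bounds are illustrative only.

class Thermostat:
    def __init__(self, sensor, heater, low=19.0, high=21.0):
        self.sensor = sensor    # assumed to provide read() -> float
        self.heater = heater    # assumed to provide start() and stop()
        self.low, self.high = low, high

    def step(self):
        temperature = self.sensor.read()   # perceive the environment
        if temperature < self.low:
            self.heater.start()            # too cold: start the heating system
        elif temperature > self.high:
            self.heater.stop()             # warm enough: stop the heating system
        # otherwise the temperature is within the pre-defined range; do nothing
```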


What is an intelligent agent (cont.)

An intelligent agent interacts with a human or with other agents via some kind of agent-communication language. It does not necessarily obey commands blindly: it may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests. It can accept high-level requests indicating what the user wants and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take, and in what sequence.

What an intelligent agent can do

An intelligent agent can:

collaborate with its user to improve the accomplishment of his or her tasks;

carry out tasks on the user's behalf, and in so doing employ some knowledge of the user's goals or desires;

monitor events or procedures for the user;

advise the user on how to perform a task;

train or teach the user;

help different users collaborate.

Characteristic features of intelligent agents


Knowledge representation and reasoning
Transparency and explanations
Ability to communicate
Use of huge amounts of knowledge
Exploration of huge search spaces
Use of heuristics
Reasoning with incomplete or conflicting data
Ability to learn and adapt

Knowledge representation and reasoning


An intelligent agent contains an internal representation of its external application domain, where relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions. This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model, and transferring the conclusions back into the application domain.
[Figure: the model of the domain (an ontology plus rules) represents the application domain. In the ontology, OBJECT has the subclasses CUP, BOOK and TABLE, with the instances CUP1, BOOK1 and TABLE1; CUP1 is ON BOOK1, which is ON TABLE1. The rule "if an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object" is represented as: for all x, y, z in OBJECT, (ON x y) & (ON y z) => (ON x z).]
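To make this mapping concrete, here is a minimal sketch (Python, illustrative only; not how such an agent is actually implemented) that stores the ON facts of the domain model and applies the transitivity rule by forward chaining:

```python
# The ON facts of the domain model and the rule (ON x y) & (ON y z) => (ON x z),
# applied repeatedly (forward chaining) until no new facts can be derived.

facts = {("ON", "CUP1", "BOOK1"), ("ON", "BOOK1", "TABLE1")}

def apply_transitivity(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(facts):
            for (_, y2, z) in list(facts):
                if y == y2 and ("ON", x, z) not in facts:
                    facts.add(("ON", x, z))   # conclusion added as a new fact
                    changed = True
    return facts

print(apply_transitivity(facts))
# derives ("ON", "CUP1", "TABLE1") in addition to the two given facts
```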


Separation of knowledge from control


Problem Solving Engine: implements a general method of interpreting the input problem based on the knowledge from the knowledge base.

Knowledge Base (Ontology, Rules/Cases/Methods): data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.

[Figure: the intelligent agent, connected to the user/environment through its input/sensors and output/effectors, consists of a Problem Solving Engine coupled with a Knowledge Base.]

Transparency and explanations


The knowledge possessed by the agent and its reasoning processes should be understandable to humans.

The agent should have the ability to give explanations of its behavior, what decisions it is making and why.
Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent. The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities and not to replace human activity.


Ability to communicate
An agent should be able to communicate with its users or other agents. The communication language should be as natural to the human users as possible. Ideally, it should be free natural language. The problem of natural language understanding and generation is very difficult due to the ambiguity of words and sentences, the paraphrases, ellipses and references which are used in human communication.


Illustration: Ambiguity of natural language

Words and sentences have multiple meanings:

Diamond: a mineral consisting of nearly pure carbon in crystalline form, usually colorless, the hardest natural substance known; a gem or other piece cut from this mineral; a lozenge-shaped plane figure; in baseball, the infield or the whole playing field.

"Visiting relatives can be boring."
To visit relatives can be boring.
The relatives that visit us can be boring.

"She told the man that she hated to run alone."
She told the man: "I hate to run alone!"
She told the man whom she hated: "Run alone!"

Other difficulties with natural language processing


Paraphrase: The same meaning may be expressed by many sentences. Ann gave Bob a cat. Bob was given a cat by Ann. What Ann gave Bob was a cat. Ann gave a cat to Bob. A cat was given to Bob by Ann. Bob received a cat from Ann.

Ellipsis: Use of sentences that appear ill-formed because they are incomplete. Typically the parts that are missing have to be extracted from the previous sentences.
Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: The beam?
John: 130

Reference: Entities may be referred to without giving their names.


Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: Who is her commander?
John: Captain Nelson.

Use of huge amounts of knowledge


In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base).
Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.
In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.

Use of huge amounts of knowledge (example)


User: The toolbox is locked.

Agent (internal reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. To get in requires a key. He knows it and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.

Agent: The key is in the drawer.

Exploration of huge search spaces


An intelligent agent usually needs to search huge spaces in order to find solutions to problems.

Example 1: A search agent on the internet


Example 2: A checkers playing agent
[Figure: a checkers board with its 32 playable squares numbered 1 to 32.]

Exploration of huge search spaces: illustration

Determining the best move with minimax:


[Figure: a game tree whose levels alternate between my moves (I) and the opponent's moves; the leaf positions are labeled win, lose, or draw, and these values are backed up the tree to select the best move.]
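A minimal sketch of the minimax computation illustrated above; the explicit game tree below is a made-up example, not the one in the figure (leaf values: +1 win, 0 draw, -1 lose, from my point of view).

```python
# Minimax over an explicit game tree: a node is either a terminal value
# (+1 win, 0 draw, -1 lose for me) or a list of child nodes.

def minimax(node, my_move):
    if not isinstance(node, list):                  # terminal node
        return node
    values = [minimax(child, not my_move) for child in node]
    return max(values) if my_move else min(values)  # I maximize, the opponent minimizes

# I move first and the opponent replies:
tree = [[+1, -1], [0, +1], [-1, -1]]
print(minimax(tree, my_move=True))                  # -> 0: the best I can guarantee is a draw
```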


Exploration of huge search spaces: illustration


The tree of possibilities is far too large to be fully generated and searched backward from the terminal nodes, for an optimal move.

Size of the search space


A complete game tree for checkers has been estimated as having 10^40 nonterminal nodes. If one assumes that these nodes could be generated at a rate of 3 billion per second, the generation of the whole tree would still require around 10^21 centuries!
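The estimate can be checked with a few lines of arithmetic, using the figures quoted above (10^40 nodes, 3 billion nodes generated per second):

```python
# Rough check of the estimate above.
nodes = 10 ** 40
rate = 3e9                                      # nodes generated per second
seconds = nodes / rate
centuries = seconds / (100 * 365.25 * 24 * 3600)
print(f"about {centuries:.0e} centuries")       # roughly 1e21 centuries
```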

Checkers is far simpler than chess which, in turn, is generally far simpler than business competitions or military games.

Use of heuristics
Intelligent agents generally attack problems for which no algorithm is known or feasible, problems that require heuristic methods. A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions. In fact they do not guarantee any solution at all. A useful heuristic is one that offers solutions which are good enough most of the time.

Use of heuristics: illustration


1. Generate a partial game tree, starting from the node corresponding to the current board situation.
2. Estimate the values of the leaf nodes by using a static evaluation function.
3. Back-propagate the estimated values up the tree.

Heuristic function for board position evaluation: w1*f1 + w2*f2 + w3*f3 + ..., where the wi are real-valued weights and the fi are board features (e.g. center control, total mobility, relative exchange advantage).
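A minimal sketch of such a weighted evaluation function; the board encoding, features and weights below are illustrative placeholders, not Samuel's actual ones.

```python
# Static evaluation as a weighted sum of board features: w1*f1 + w2*f2 + ...
# The board encoding, feature extractors and weights are illustrative assumptions.

def evaluate(board, features, weights):
    return sum(w * f(board) for f, w in zip(features, weights))

features = [
    lambda b: b.count("me") - b.count("opp"),   # relative piece (exchange) advantage
    lambda b: b[:len(b) // 2].count("me"),      # crude proxy for center control
]
weights = [1.0, 0.5]

board = ["me", "me", "opp", "me", "opp"]        # toy board representation
print(evaluate(board, features, weights))       # -> 2.0
```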

Reasoning with incomplete data


The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required. Example:

The reasoning of a physician in an intensive care unit. Planning a military course of action.
If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem.


Reasoning with conflicting data


The ability to take into account data items that are more or less in contradiction with one another (conflicting data or data corrupted by errors).

Example: The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.

Ability to learn

The ability to improve its competence and efficiency.

An agent is improving its competence if it learns to solve a broader class of problems, and to make fewer mistakes in problem solving. An agent is improving its efficiency if it learns to solve more efficiently (for instance, by using less time or space resources) the problems from its area of competence.


Illustration: concept learning


Learn the concept of ill cell by comparing examples of ill cells with examples of healthy cells, and by creating a generalized description of the similarities between the ill cells (a sketch follows the examples below):

Learned concept: ((1 ?) (? dark))

Concept examples: ((1 light) (2 dark)), ((1 dark) (2 dark)), ((1 light) (2 light)), ((1 dark) (2 light)), ((1 dark) (1 dark)); the examples covered by the learned concept are the ill (+) cells, the others are healthy.
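A minimal sketch of this kind of generalization: attribute values shared by all ill examples are kept, differing values are replaced by "?", and the resulting template can then be matched against new cells (as on the next slide). The flat (count, shade, count, shade) encoding and the choice of positive examples follow the figure above and are illustrative.

```python
# Learn a generalized description of the ill cells: keep shared attribute
# values, replace differing ones with "?"; then use it to classify new cells.

def generalize(examples):
    template = list(examples[0])
    for example in examples[1:]:
        template = [a if a == b else "?" for a, b in zip(template, example)]
    return template

def matches(template, example):
    return all(t == "?" or t == e for t, e in zip(template, example))

# Each cell is flattened to (count1, shade1, count2, shade2), as in the figure.
ill_cells = [(1, "light", 2, "dark"), (1, "dark", 2, "dark"), (1, "dark", 1, "dark")]
concept = generalize(ill_cells)
print(concept)                                     # -> [1, '?', '?', 'dark']
print(matches(concept, (1, "light", 1, "light")))  # -> False: classified as healthy
```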

Ability to learn: classification


The learned concept is used to diagnose other cells.

Ill cell concept: ((1 ?) (? dark))

((1 light) (1 light)): Is this cell ill? No
((1 dark) (1 light)): Is this cell ill? Yes

This is an example of reasoning with incomplete information.

Extended agent architecture


The learning engine implements methods for extending and refining the knowledge in the knowledge base.

[Figure: the extended agent architecture; the intelligent agent now contains a Learning Engine alongside the Problem Solving Engine, both operating over the Knowledge Base (Ontology, Rules/Cases/Methods), between the input/sensors and the output/effectors that connect it to the user/environment.]

Sample tasks for intelligent agents


Planning: Finding a set of actions that achieve a certain goal. Example: Determine the actions that need to be performed in order to repair a bridge.

Critiquing: Expressing judgments about something according to certain standards. Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of army operations.

Interpretation: Inferring situation description from sensory data.


Example: Interpreting gauge readings in a chemical process plant to infer the status of the process.

Sample tasks for intelligent agents (cont.)


Prediction: Inferring likely consequences of given situations. Examples: Predicting the damage to crops from some type of insect. Estimating global oil demand from the current geopolitical world situation.

Diagnosis: Inferring system malfunctions from observables. Examples: Determining the disease of a patient from the observed symptoms. Locating faults in electrical circuits. Finding defective components in the cooling system of nuclear reactors.

Design: Configuring objects under constraints. Example: Designing integrated circuits layouts.

Sample tasks for intelligent agents (cont.)


Monitoring: Comparing observations to expected outcomes. Examples: Monitoring instrument readings in a nuclear reactor to detect accident conditions. Assisting patients in an intensive care unit by analyzing data from the monitoring equipment.

Debugging: Prescribing remedies for malfunctions. Examples: Suggesting how to tune a computer system to reduce a particular type of performance problem. Choosing a repair procedure to fix a known malfunction in a locomotive.

Repair: Executing plans to administer prescribed remedies.

Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.

Sample tasks for intelligent agents (cont.)


Instruction: Diagnosing, debugging, and repairing student behavior. Examples: Teaching students a foreign language. Teaching students to troubleshoot electrical circuits. Teaching medical students in the area of antimicrobial therapy selection.

Control: Governing overall system behavior. Example: Managing the manufacturing and distribution of computer systems.

Any useful task: Information fusion. Information assurance. Travel planning. Email management.

How are agents built


[Figure: the domain expert and the knowledge engineer interact through dialog; the knowledge engineer programs the intelligent agent (inference engine plus knowledge base), and the agent's results are reviewed by the expert.]

A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems and then encodes the acquired expertise into the agent's knowledge base. The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.

Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem solving knowledge. This takes time and effort. Experts express their knowledge informally, using natural language, visual representations and common sense, often omitting essential details that are considered obvious. This form of knowledge is very different from the one in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete).

This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful and inefficient, and is known as the "knowledge acquisition bottleneck" of the AI systems development process.

Artificial Intelligence research: Disciple


Disciple is a theory, methodology and agent shell for the rapid development of end-to-end knowledge bases and agents, by subject matter experts, with limited assistance from knowledge engineers.

The expert teaches Disciple in a way that resembles how the expert would teach a person. Disciple learns from the expert, building, verifying and improving its knowledge base.

[Figure: the Disciple RKF/COG shell: an Interface, a Problem Solving module and a Learning module, operating over a knowledge base of Ontology + Rules.]

Vision on the evolution of computer systems


Mainframe computers: software systems developed and used by computer experts.
Personal computers: software systems developed by computer experts and used by persons that are not computer experts.
Learning agents: software systems developed and used by persons that are not computer experts.

Why are intelligent agents important

Humans have limitations that agents may alleviate (e.g. a memory for details that is not affected by stress, fatigue or time constraints).

Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.


Why are intelligent agents important (cont.)

The evolution of information technology makes intelligent agents essential components of our future systems and organizations.

Our future computers and most of the other systems and tools will gradually become intelligent agents.

We have to be able to deal with intelligent agents either as users, or as developers, or as both.


Intelligent agents: Conclusion

Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods.

Intelligent agents are helpful, enabling us to do our tasks better.

Intelligent agents are necessary to cope with the increasing challenges of the information society.


Overview

Artificial Intelligence and its evolution

Intelligent agents

Intelligent agent demo

Recommended reading


An intelligent agent for Center of Gravity analysis


The center of gravity of an entity (state, alliance, coalition, or group) is the foundation of capability, the hub of all power and movement, upon which everything depends, the point against which all the energies should be directed. Carl Von Clausewitz, On War, 1832.

If a combatant eliminates or influences the enemy's strategic center of gravity, then the enemy will lose control of its power and resources and will eventually fall to defeat. If the combatant fails to adequately protect his own strategic center of gravity, he invites disaster. (Giles and Galvin, USAWC 1996)

Approach to Center of Gravity (COG) analysis

Based on the concepts of critical capabilities, critical requirements and critical vulnerabilities, which have recently been adopted into the joint military doctrine of the USA (Strange, 1996).

Applied to current war scenarios (e.g. War on terror 2003, Iraq 2003) with state and non-state actors (e.g. Al Qaeda).

Identification of COG candidates:
Identify potential primary sources of moral or physical strength, power and resistance from: Government, Military, People, Economy, Alliances.
Testing of COG candidates:
Test each identified COG candidate to determine whether it has all the necessary critical capabilities:
Which are the critical capabilities?
Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate.
If yes, do these capabilities have any vulnerability?
(A simple sketch of this testing loop follows.)
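A minimal sketch of the testing loop described above; the candidate and capability structures are illustrative assumptions, not Disciple's actual representation.

```python
# Keep only the COG candidates that have every necessary critical capability
# with its critical requirements satisfied; collect any vulnerabilities found.
# The candidate/capability data structures below are illustrative assumptions.

def test_candidates(candidates, necessary_capabilities):
    viable = []
    for candidate in candidates:
        capabilities = candidate["capabilities"]  # name -> {"satisfied": bool, "vulnerabilities": [...]}
        if all(name in capabilities and capabilities[name]["satisfied"]
               for name in necessary_capabilities):
            vulnerabilities = [v for c in capabilities.values()
                               for v in c["vulnerabilities"]]
            viable.append((candidate["name"], vulnerabilities))
    return viable
```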

Critical capabilities needed to be a COG


[Figure: critical capabilities needed by each COG candidate, grouped by source of power.]

Leader: be protected; stay informed; communicate; be influential; be a driving force; have support; be irreplaceable.

People: receive communication from the highest level leadership; communicate desires to the highest level leadership; support the goal.

Military: be deployable; exert power; be indispensable.

Other sources (economy, alliances, etc.): industrial capacity; financial capacity; external support; will of multi-member force; ideology; support the highest level leadership; have a positive impact; be influential.

Leader who is a COG


Critical capability / Corresponding critical requirement:

be protected: have means to be protected from all threats
stay informed: have means to receive essential intelligence
communicate: have means to communicate with the government, the military and the people
be influential: have means to influence the government, the military and the people
be a driving force: have reasons and determination for pursuing the goal
have support: have means to secure continuous support from the government, the military and the people
be irreplaceable: be the only leader to maintain the goal

Illustration: Saddam Hussein (Iraq 2003)


Critical capability: be protected
Corresponding critical requirement: have means to be protected from all threats

Means and their vulnerabilities:
Republican Guard Protection Unit: loyalty not based on conviction and can be influenced by US-led coalition.
Iraqi Military: loyalty can be influenced by US-led coalition; can be destroyed by US-led coalition.
Complex of Iraqi Bunkers: location known to US-led coalition; design known to US-led coalition; can be destroyed by US-led coalition.
System of Saddam Doubles: loyalty of Saddam Doubles to Saddam can be influenced by US-led coalition.

Demonstration
Teaching Disciple how to determine whether a strategic leader has the critical capability to be protected.

Disciple Demo


Using Disciple to learn Artificial Intelligence


Understanding and using the Disciple learning agent shell will lead to a better understanding of the following Artificial Intelligence areas:
Automatic Reasoning; Knowledge Representation; Search; Knowledge Acquisition; Machine Learning; Knowledge Engineering; Human-Agent Interaction; Natural Language Generation; Intelligent Tutoring; Applications of Artificial Intelligence.

[Figure: these areas shown around the Disciple RKF/COG architecture (Interface, Problem Solving, Learning, Ontology + Rules).]

Reading assignments

Russell S. and Norvig P., Artificial Intelligence: A Modern Approach, Second Edition, Prentice Hall, pp. 1-58 (survey, recommended).

Tecuci G., Boicu M., Marcu D., Stanescu B., Boicu C., Comello J., "Training and Using Disciple Agents: A Case Study in the Military Center of Gravity Analysis Domain," AI Magazine, 24, 4, 2002, pp. 51-68, AAAI Press, Menlo Park, California, http://lalab.gmu.edu/publications/default.htm (recommended).

Reading assignments (cont.)


We will be able to reserve only a 3-hour lecture for Common Lisp, during which we will mainly discuss some Lisp programs. Therefore you have to start learning the language by yourself, reading the lecture notes on Common Lisp, doing the enclosed exercises, and surveying any LISP book.

Tecuci G., Introduction to Common Lisp, http://lalab.gmu.edu/cs580fa03/cs580-fa03.htm (required).

Wilensky R., Common LISPcraft, Norton & Company, Chapters 1-12 (recommended). The lecture notes are based primarily on this book.

Graham P., ANSI Common Lisp, Chapters 1-3, 5-9 (recommended). Also available online at http://www.paulgraham.com/onlisptext.html
