
What is Artificial Intelligence

Artificial Intelligence is the science and engineering concerned with the theory and practice of developing systems that exhibit the characteristics we associate with intelligence in human behavior: perception, natural language processing, reasoning, planning and problem solving, learning and adaptation, etc.

2004, G.Tecuci, Learning Agents Center


Central goals of Artificial Intelligence

Understanding the principles that make intelligence possible
(in humans, animals, and artificial agents)

Developing intelligent machines or agents
(no matter whether they operate as humans or not)

Formalizing knowledge and mechanizing reasoning
in all areas of human endeavor

Making working with computers
as easy as working with people

Developing human-machine systems that exploit the
complementarity of human and automated reasoning



What is an intelligent agent
An intelligent agent is a system that:
perceives its environment (which may be the physical
world, a user via a graphical user interface, a collection of
other agents, the Internet, or other complex environment);
reasons to interpret perceptions, draw inferences, solve
problems, and determine actions; and
acts upon that environment to realize a set of goals or
tasks for which it was designed.

(Figure: The intelligent agent receives input from the user/environment through its sensors and acts upon the user/environment through its effectors, producing output.)
What is an intelligent agent (cont.)

Humans, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent.

At the low end of being an agent is a thermostat. It continuously senses the room temperature, starting or stopping the heating system each time the current temperature is out of a pre-defined range.

The intelligent agents we are concerned with are in between. They are clearly not as capable as humans, but they are significantly more capable than a thermostat.
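The thermostat at the low end of the agent spectrum can be sketched in a few lines. This is an illustrative sketch, not from the lecture; the function name and temperature bounds are invented for the example.

```python
# A minimal thermostat "agent": it senses the temperature and acts
# (start/stop heating) to keep it in a pre-defined range.
# No reasoning, no knowledge base -- one fixed reaction per percept.

def thermostat_step(temperature, low=18.0, high=22.0):
    """Return the action for one sense-act cycle."""
    if temperature < low:
        return "start_heating"
    if temperature > high:
        return "stop_heating"
    return "do_nothing"

print(thermostat_step(15.0))  # start_heating
print(thermostat_step(25.0))  # stop_heating
print(thermostat_step(20.0))  # do_nothing
```

The contrast with the intelligent agents discussed next is that the thermostat's mapping from percept to action is fixed; it cannot interpret, infer, or pursue goals.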


What is an intelligent agent (cont.)

An intelligent agent interacts with a human or some other agents via some kind of agent-communication language and may not blindly obey commands, but may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests.

It can accept high-level requests indicating what the user wants and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take, and in what sequence.


What an intelligent agent can do

An intelligent agent can:
collaborate with its user to improve the accomplishment of his or her tasks;
carry out tasks on the user's behalf, and in so doing employ some knowledge of the user's goals or desires;
monitor events or procedures for the user;
advise the user on how to perform a task;
train or teach the user;
help different users collaborate.


Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external
application domain, where relevant elements of the application
domain (objects, relations, classes, laws, actions) are represented
as symbolic expressions.
Application Domain and Model of the Domain

(Figure: The application domain is mapped onto a domain model consisting of an ontology — OBJECT with subclasses CUP, BOOK, and TABLE (SUBCLASS-OF) and instances CUP1, BOOK1, and TABLE1 (INSTANCE-OF), related by ON — and rules.)

Informal rule: If an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object.

Formal rule: ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)

Example: (cup1 on book1) & (book1 on table1) → (cup1 on table1)

This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model, and transferring the conclusions back into the application domain.
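The reasoning step in the domain model can be sketched by forward-chaining the ON-transitivity rule over the symbolic facts. This is an illustrative sketch, not from the lecture; the fact representation (pairs of instance names) is an assumption.

```python
# Forward chaining of the rule (ON x y) & (ON y z) -> (ON x z):
# starting from the facts (ON cup1 book1) and (ON book1 table1),
# the agent derives the new conclusion (ON cup1 table1).

def transitive_closure(facts):
    """Apply the ON-transitivity rule until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, y1) in list(derived):
            for (y2, z) in list(derived):
                if y1 == y2 and (x, z) not in derived:
                    derived.add((x, z))   # conclusion of the rule
                    changed = True
    return derived

facts = {("cup1", "book1"), ("book1", "table1")}
print(("cup1", "table1") in transitive_closure(facts))  # True
```

The derived pair is the symbolic counterpart of the conclusion "the cup is on the table," which the agent then transfers back into the application domain.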
Basic agent architecture

(Figure: The intelligent agent consists of a Problem Solving Engine and a Knowledge Base. The user/environment provides input through the agent's sensors and receives output through its effectors.)

The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base.

The knowledge base contains data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc. — for example, an ontology (OBJECT with subclasses CUP, BOOK, TABLE and instances CUP1 ON BOOK1 ON TABLE1) and rules/cases such as ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z).
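The two-component architecture can be sketched as a class with a knowledge base and a problem solving method. This is a deliberately minimal sketch, assuming a query is just a fact tuple to be checked against the knowledge base; the class and method names are invented for illustration.

```python
# Basic architecture sketch: a problem solving engine (solve) that
# interprets an input problem against the knowledge base (kb).

class IntelligentAgent:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base          # symbolic facts the agent knows

    def solve(self, query):
        """Problem solving engine: answer a query from the knowledge base."""
        return query in self.kb

agent = IntelligentAgent({("ON", "cup1", "book1"), ("ON", "book1", "table1")})
print(agent.solve(("ON", "cup1", "book1")))   # True
print(agent.solve(("ON", "cup1", "table1")))  # False (no rule engine in this sketch)
```

A realistic engine would also apply the rules in the knowledge base (as in the transitivity example earlier) rather than only look up stored facts.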
Transparency and explanations

The knowledge possessed by the agent and its reasoning processes should be understandable to humans.

The agent should have the ability to give explanations of its behavior: what decisions it is making and why.

Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent.

The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities and not to replace human activity.


Ability to communicate

An agent should be able to communicate with its users or other agents.

The communication language should be as natural to the human users as possible. Ideally, it should be free natural language.

The problem of natural language understanding and generation is very difficult due to the ambiguity of words and sentences, and to the paraphrases, ellipses, and references used in human communication.


Use of huge amounts of knowledge

In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base).

Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.

In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.


Use of huge amounts of knowledge (example)
User:
The toolbox is locked.
Agent:
Why is he telling me this?
I already know that the box is locked.
I know he needs to get in.
Perhaps he is telling me because he believes I can help.
To get in requires a key.
He knows it and he knows I know it.
The key is in the drawer.
If he knew this, he would not tell me that the toolbox is locked.
So he must not realize it.
To make him know it, I can tell him.
I am supposed to help him.
The key is in the drawer.
Exploration of huge search spaces

An intelligent agent usually needs to search huge spaces in order to find solutions to problems.

Example: A search agent on the Internet.


Use of heuristics

Intelligent agents generally attack problems for which no algorithm is known or feasible, problems that require heuristic methods.

A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces.

Heuristics do not guarantee optimal solutions. In fact, they do not guarantee any solution at all.

A useful heuristic is one that offers solutions which are good enough most of the time.
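One classic way a heuristic limits search can be sketched with greedy best-first search: the agent always expands the node whose heuristic value looks closest to the goal, ignoring everything else. This is an illustrative sketch, not from the lecture; the toy graph and heuristic values are invented.

```python
# Greedy best-first search: a heuristic h drastically limits exploration
# by always expanding the frontier node with the lowest estimated
# distance to the goal. It is fast, but guarantees neither optimality
# nor, in general, that a solution is found at all.
import heapq

def greedy_best_first(graph, h, start, goal):
    """Return a path found by always expanding the lowest-h node first."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}   # invented estimates of distance to D
print(greedy_best_first(graph, h, "A", "D"))  # ['A', 'C', 'D']
```

Here the heuristic steers the search through C and never expands B at all — the essence of "drastically limiting the search for solutions."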


Reasoning with incomplete or conflicting data
The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required.

Examples:
The reasoning of a physician in an intensive care unit.
Planning a military course of action.

The ability to take into account data items that are more or less in contradiction with one another (conflicting data or data corrupted by errors).

Example:
The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.
Ability to learn

The ability to improve its competence and performance.

An agent is improving its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving.

An agent is improving its performance if it learns to solve the problems from its area of competence more efficiently (for instance, by using less time or space resources).


Extended agent architecture

The learning engine implements methods for extending and refining the knowledge in the knowledge base.

(Figure: The extended agent adds a Learning Engine alongside the Problem Solving Engine; both operate on the Knowledge Base, which contains an ontology and rules/cases/methods. The user/environment interacts through sensors (input) and effectors (output).)


Sample tasks for intelligent agents

Planning: Finding a set of actions that achieve a certain goal.
Example: Determine the actions that need to be performed in order to repair a bridge.

Critiquing: Expressing judgments about something according to certain standards.
Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of army operations.

Interpretation: Inferring situation descriptions from sensory data.
Example: Interpreting gauge readings in a chemical process plant to infer the status of the process.


Sample tasks for intelligent agents (cont.)

Prediction: Inferring likely consequences of given situations.
Examples:
Predicting the damage to crops from some type of insect.
Estimating global oil demand from the current geopolitical world situation.

Diagnosis: Inferring system malfunctions from observables.
Examples:
Determining the disease of a patient from the observed symptoms.
Locating faults in electrical circuits.
Finding defective components in the cooling system of nuclear reactors.

Design: Configuring objects under constraints.
Example: Designing integrated circuit layouts.
Sample tasks for intelligent agents (cont.)
Monitoring: Comparing observations to expected outcomes.
Examples:
Monitoring instrument readings in a nuclear reactor to detect accident conditions.
Assisting patients in an intensive care unit by analyzing data from the monitoring equipment.

Debugging: Prescribing remedies for malfunctions.
Examples:
Suggesting how to tune a computer system to reduce a particular type of performance problem.
Choosing a repair procedure to fix a known malfunction in a locomotive.

Repair: Executing plans to administer prescribed remedies.
Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.
Sample tasks for intelligent agents (cont.)

Instruction: Diagnosing, debugging, and repairing student behavior.
Examples:
Teaching students a foreign language.
Teaching students to troubleshoot electrical circuits.
Teaching medical students in the area of antimicrobial therapy selection.

Control: Governing overall system behavior.
Example:
Managing the manufacturing and distribution of computer systems.

Any useful task:
Information fusion.
Information assurance.
Travel planning.
Email management.
Help in choosing a Ph.D. dissertation advisor.
Why are intelligent agents important

Humans have limitations that agents may alleviate (e.g., memory for details that isn't affected by stress, fatigue, or time constraints).

Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.


Why are intelligent agents important (cont)

The evolution of information technology makes intelligent agents essential components of our future systems and organizations.

Our future computers and most of the other systems and tools will gradually become intelligent agents.

We have to be able to deal with intelligent agents either as users, or as developers, or as both.


Intelligent agents: Conclusion

Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods.

Intelligent agents are helpful, enabling us to do our tasks better.

Intelligent agents are necessary to cope with the increasing complexity of the information society.


Problem: Choosing a Ph.D. thesis Advisor

Choosing a Ph.D. thesis advisor is a crucial decision for a successful thesis and for one's future career.

An informed decision requires a lot of knowledge about the potential advisors.

In this course we will develop an agent that interacts with a student to help select the best Ph.D. advisor for that student.

See the project notes: 1. Problem


How are agents built: Manual knowledge acquisition

(Figure: The subject matter expert dialogs with a knowledge engineer, who programs the agent's knowledge base; the agent's problem solving engine produces results that the expert reviews.)

A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems, and then encodes the acquired expertise into the agent's knowledge base.

The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.
Why it is hard

The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem-solving knowledge. This takes time and effort.

Experts express their knowledge informally, using natural language, visual representations, and common sense, often omitting essential details that are considered obvious. This form of knowledge is very different from the form in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete).

This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful, and inefficient (and is known as "the knowledge acquisition bottleneck" of the AI systems development process).
Mixed-initiative knowledge acquisition

(Figure: The subject matter expert dialogs directly with the intelligent learning agent, whose problem solving engine and learning engine operate on the knowledge base; the expert reviews the agent's results.)

The expert teaches the agent how to perform various tasks, in a way that resembles how an expert would teach a human apprentice when solving problems in cooperation.

This process is based on mixed-initiative reasoning that integrates the complementary knowledge and reasoning styles of the subject matter expert and the agent, and on a division of responsibility for those elements of knowledge engineering for which they have the most aptitude, such that together they form a complete team for knowledge base development.
Mixed-initiative knowledge acquisition (cont.)

This is the most promising approach to overcoming the knowledge acquisition bottleneck.

DARPA's Rapid Knowledge Formation Program (2000-2004):
Emphasized the development of knowledge bases directly by the subject matter experts.
Central objective: Enable distributed teams of experts to enter and modify knowledge directly and easily, without the need for prior knowledge engineering experience. The emphasis was on content and the means of rapidly acquiring this content from individuals who possess it, with the goal of gaining a scientific understanding of how ordinary people can work with formal representations of knowledge.
Program's primary requirement: Development of functionality enabling experts to understand the contents of a knowledge base, enter new theories, augment and edit existing knowledge, test the adequacy of the knowledge base under development, receive explanations of theories contained in the knowledge base, and detect and repair errors in content.
Autonomous knowledge acquisition

(Figure: In the autonomous learning agent, the learning engine builds the knowledge base from a data base; the problem solving engine then produces the results.)

The learning engine builds the knowledge base from a data base of facts or examples.

In general, the learned knowledge consists of concepts, classification rules, or decision trees. The problem solving engine is a simple one-step inference engine that classifies a new instance as being an example of a learned concept or not.

Defining the data base of examples is a significant challenge.

Current practical applications are limited to classification tasks.
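The learn-a-concept-then-classify pattern described above can be sketched with the classic Find-S algorithm, which generalizes a conjunctive concept from positive examples and then classifies new instances in one inference step. This is an illustrative sketch; the attribute vectors below are invented for the example.

```python
# Find-S: learn the most specific conjunctive hypothesis covering all
# positive examples, generalizing an attribute to "?" (any value)
# whenever two positives disagree on it.

def find_s(positive_examples):
    """Most specific conjunctive hypothesis covering all positives."""
    hypothesis = list(positive_examples[0])
    for example in positive_examples[1:]:
        for i, value in enumerate(example):
            if hypothesis[i] != value:
                hypothesis[i] = "?"       # generalize this attribute
    return hypothesis

def classify(hypothesis, instance):
    """One-step inference: is the instance an example of the concept?"""
    return all(h in ("?", v) for h, v in zip(hypothesis, instance))

positives = [("red", "round", "small"), ("red", "round", "large")]
h = find_s(positives)                          # ['red', 'round', '?']
print(classify(h, ("red", "round", "tiny")))   # True
print(classify(h, ("blue", "round", "small"))) # False
```

The learned hypothesis plays the role of the knowledge base, and classify is the simple one-step inference engine mentioned above.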
Autonomous knowledge acquisition (cont.)

(Figure: A language understanding engine reads text and produces data; the learning engine builds the knowledge base from this data; the problem solving engine produces the results.)

The knowledge base is built by the learning engine from data provided by a text understanding system able to understand textbooks.

In general, the data consists of facts acquired from the books.

This is not yet a practical approach, even for simpler agents.


Demonstration

Teaching Disciple how to determine whether a strategic leader has the critical capability to be protected.

Disciple Demo


Vision on the future of software development

(Figure: The evolution from mainframe computers — software systems developed and used by computer experts — through personal computers — software systems developed by computer experts and used by persons that are not computer experts — to learning agents such as DISCIPLE (interface, problem solving, learning, ontology + rules) — software systems developed and used by persons that are not computer experts.)