
UNIT 5

EXPERT SYSTEMS
Introduction
Expert systems are computer programs that are derived from a branch of computer science research called Artificial Intelligence (AI). AI's scientific goal is to understand intelligence by building computer programs that exhibit intelligent behavior. It is concerned with the concepts and methods of symbolic inference, or reasoning, by a computer, and with how the knowledge used to make those inferences will be represented inside the machine.

Of course, the term intelligence covers many cognitive skills, including the ability to solve problems, learn, and understand language; AI addresses all of those. But most progress to date in AI has been made in the area of problem solving: concepts and methods for building programs that reason about problems rather than calculate a solution.
AI programs that achieve expert-level competence in solving problems in task areas by bringing to bear a body of knowledge about specific tasks are called knowledge-based or expert systems. Often, the term expert systems is reserved for programs whose knowledge base contains the knowledge used by human experts, in contrast to knowledge gathered from textbooks or non-experts. More often than not, the two terms, expert systems (ES) and knowledge-based systems (KBS), are used synonymously. Taken together, they represent the most widespread type of AI application. The area of human intellectual endeavor to be captured in an expert system is called the task domain. Task refers to some goal-oriented, problem-solving activity. Domain refers to the area within which the task is being performed. Typical tasks are diagnosis, planning, scheduling, configuration and design. An example of a task domain is aircraft crew scheduling.
Building an expert system is known as knowledge engineering and its practitioners are called knowledge engineers. The knowledge engineer must make sure that the computer has all the knowledge needed to solve a problem. The knowledge engineer must choose one or more forms in which to represent the required knowledge as symbol patterns in the memory of the computer; that is, he or she must choose a knowledge representation. The knowledge engineer must also ensure that the computer can use the knowledge efficiently by selecting from a handful of reasoning methods. The practice of knowledge engineering is described later. We first describe the components of expert systems.

THE APPLICATIONS OF EXPERT SYSTEMS

The spectrum of applications of expert systems technology to industrial and commercial problems is so wide as to defy easy characterization. The applications find their way into most areas of knowledge work. They are as varied as helping salespersons sell modular factory-built homes and helping NASA plan the maintenance of a space shuttle in preparation for its next flight.
Applications tend to cluster into seven major classes:
Diagnosis and Troubleshooting of Devices and Systems of All Kinds
Planning and Scheduling
Configuration of Manufactured Objects from Subassemblies
Financial Decision Making
Knowledge Publishing
Process Monitoring and Control
Design and Manufacturing

ARCHITECTURE OF EXPERT SYSTEM


Expert systems typically contain the following four components:

Knowledge-Acquisition Interface
User Interface
Knowledge Base
Inference Engine

This architecture differs considerably from that of traditional computer programs, resulting in several characteristics of expert systems.

Expert System Components


Knowledge-Acquisition Interface
The knowledge-acquisition interface controls how the expert and knowledge
engineer interact with the program to incorporate knowledge into the knowledge
base. It includes features to assist experts in expressing their knowledge in a form
suitable for reasoning by the computer. This process of expressing knowledge in the
knowledge base is called knowledge acquisition. Knowledge acquisition turns out
to be quite difficult in many cases--so difficult that some authors refer to the
knowledge acquisition bottleneck to indicate that it is this aspect of expert system
development which often requires the most time and effort.

Debugging faulty knowledge bases is facilitated by traces (lists of rules in the order they were fired), probes (commands to find and edit specific rules, facts, and so on), and bookkeeping functions and indexes (which keep track of various features of the knowledge base such as variables and rules). Some rule-based expert system shells for personal computers monitor data entry, checking the syntactic validity of rules. Expert systems are typically validated by testing their predictions for several cases against those of human experts. Case facilities, which permit a file of such cases to be stored and automatically evaluated after the program is revised, can greatly speed the validation process. Many features that are useful for the user interface, such as on-screen help and explanations, are also of benefit to the developer of expert systems and are also part of knowledge-acquisition interfaces.
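As a rough sketch of such a case facility (in Python; the function and field names here are hypothetical, not taken from any particular shell), expert-labelled cases can be stored in a file and re-run against the system after every revision of the knowledge base:

import json

def load_cases(path):
    # Each stored case is a dict: {"facts": {...}, "expert_conclusion": ...}.
    with open(path) as f:
        return json.load(f)

def validate(expert_system, cases):
    # Run every stored case through the expert system and collect the cases
    # whose conclusion no longer matches the human expert's.
    failures = []
    for case in cases:
        conclusion = expert_system(case["facts"])
        if conclusion != case["expert_conclusion"]:
            failures.append((case, conclusion))
    return failures

# Demonstration with an in-line case; in practice the cases would come
# from load_cases("cases.json").
def diagnose(facts):
    # Stand-in for the expert system under development.
    return "replace_fuse" if facts.get("no_power") else "unknown"

cases = [{"facts": {"no_power": True}, "expert_conclusion": "replace_fuse"}]
print(validate(diagnose, cases))   # [] means every stored case still passes

Re-running the whole case file after each change to the knowledge base makes regressions visible immediately.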
Expert systems in the literature demonstrate a wide range of modes of
knowledge acquisition (Buchanan, 1985). Expert system shells on microcomputers
typically require the user to either enter rules explicitly or enter several examples
of cases with appropriate conclusions, from which the program will infer a rule.
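The second mode, inferring a rule from example cases, can be illustrated with a deliberately naive Python sketch (this shows only the idea, not the induction algorithm any particular shell uses): keep the attribute values shared by every positive case and check that the resulting condition excludes the negative cases.

def induce_rule(positive_cases, negative_cases):
    # Keep the attribute/value pairs shared by every positive case.
    shared = dict(positive_cases[0])
    for case in positive_cases[1:]:
        shared = {a: v for a, v in shared.items() if case.get(a) == v}
    # Reject the rule if its condition also covers a negative case.
    covers_negative = any(all(case.get(a) == v for a, v in shared.items())
                          for case in negative_cases)
    return None if covers_negative else shared

positives = [{"color": "red", "shape": "round", "size": "big"},
             {"color": "red", "shape": "round", "size": "small"}]
negatives = [{"color": "blue", "shape": "round", "size": "big"}]

# Induced condition: IF color = red AND shape = round THEN conclude.
print(induce_rule(positives, negatives))   # {'color': 'red', 'shape': 'round'}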
User Interface
The user interface is the part of the program that interacts with the user. It prompts
the user for information required to solve a problem, displays conclusions, and
explains its reasoning.
Features of the user interface often include:
Doesn't ask "dumb" questions
Explains its reasoning on request
Provides documentation and references
Defines technical terms
Permits sensitivity analyses, simulations, and what-if analyses
Detailed report of recommendations
Justifies recommendations
Online help
Graphical displays of information
Trace or step through reasoning
The user interface can be judged by how well it reproduces the kind of interaction
one might expect between a human expert and someone consulting that expert.
Knowledge Base
The knowledge base consists of specific knowledge about some substantive domain. A knowledge base differs from a database in that the knowledge base includes both explicit knowledge and implicit knowledge. Much of the knowledge in the knowledge base is not stated explicitly, but is inferred by the inference engine from explicit statements in the knowledge base. This makes data storage in a knowledge base more compact than in a database and gives the knowledge base the power to represent, in effect, all the knowledge implied by its explicit statements.
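The explicit/implicit distinction can be made concrete with a minimal Python sketch (the facts and rule below are invented for illustration): only two facts and one rule are stored explicitly, yet the knowledge base also contains, implicitly, the conclusion the inference engine can derive from them.

# Explicitly stored knowledge: two facts and one rule.
facts = {("is_bird", "tweety"), ("is_healthy", "tweety")}

def rule_can_fly(facts):
    # IF x is a bird AND x is healthy THEN x can fly.
    return {("can_fly", x) for (p, x) in facts
            if p == "is_bird" and ("is_healthy", x) in facts}

# Implicit knowledge: never stored, but derivable on demand.
print(rule_can_fly(facts))   # {('can_fly', 'tweety')}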
There are several important ways in which knowledge is represented in a
knowledge base. For more information, see knowledge representation strategies.

Knowledge bases can contain many different types of knowledge, and the process of acquiring knowledge for the knowledge base (often called knowledge acquisition) can be quite different depending on the type of knowledge sought.
Types of Knowledge
There are many different kinds of knowledge considered in expert systems. Many
of these form dimensions of contrasting knowledge:

explicit knowledge
implicit knowledge
domain knowledge
common sense or world knowledge
heuristics
algorithms
procedural knowledge
declarative or semantic knowledge
public knowledge
private knowledge
shallow knowledge
deep knowledge
metaknowledge

Inference Engine
The inference engine uses general rules of inference to reason from the knowledge
base and draw conclusions which are not explicitly stated but can be inferred from
the knowledge base. Inference engines are capable of symbolic reasoning, not just
mathematical reasoning. Hence, they expand the scope of fruitful applications of
computer programs. The specific forms of inference permitted by different inference engines vary, depending on several factors, including the knowledge representation strategies employed by the expert system.
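As a concrete, deliberately simplified illustration of how an inference engine derives conclusions that are not explicitly stated, the Python sketch below performs naive forward chaining over if-then rules (the rule and fact names are invented for illustration):

# Each rule is (set of condition facts, fact to conclude).
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis"}, "order_lumbar_puncture"),
]

def forward_chain(facts, rules):
    # Fire any rule whose conditions are all satisfied, add its conclusion,
    # and repeat until no new facts can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "stiff_neck"}, rules)))
# ['fever', 'order_lumbar_puncture', 'stiff_neck', 'suspect_meningitis']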
ROLES OF EXPERT SYSTEMS
Roles in Expert System Development
Three fundamental roles in building expert systems are:
1. Expert - Successful expert systems depend on the experience and knowledge that human experts bring to them during development. Large systems generally require multiple experts.
2. Knowledge engineer - The knowledge engineer has a dual task. This person
should be able to elicit knowledge from the expert, gradually gaining an
understanding of an area of expertise. Intelligence, tact, empathy, and proficiency
in specific techniques of knowledge acquisition are all required of a knowledge
engineer. Knowledge-acquisition techniques include conducting interviews with
varying degrees of structure, protocol analysis, observation of experts at work, and
analysis of cases.
In addition, the knowledge engineer must select a tool appropriate for the project and use it, through its knowledge acquisition facility, to represent the knowledge.
3. User - A system developed by an end user with a simple shell is built rather quickly and inexpensively. Larger systems are built in an organized development effort. A prototype-oriented, iterative development strategy is commonly used; expert systems lend themselves particularly well to prototyping.
KNOWLEDGE ACQUISITION

Knowledge acquisition is concerned with the development of knowledge bases based on the expertise of a human expert. This requires expressing knowledge in a formalism suitable for automatic interpretation. Within this field, research at UNSW focuses on incremental knowledge acquisition techniques, which allow a human expert to provide explanations of their decisions that are automatically integrated into sophisticated knowledge bases.
Introduction
Knowledge acquisition is the process of extracting, structuring and organizing
knowledge from one source, usually human experts, so it can be used in software
such as an ES. This is often the major obstacle in building an ES.
There are three main topic areas central to knowledge acquisition that require
consideration in all ES projects. First, the domain must be evaluated to determine if
the type of knowledge in the domain is suitable for an ES. Second, the source of
expertise must be identified and evaluated to ensure that the specific level of
knowledge required by the project is provided. Third, if the major source of
expertise is a person, the specific knowledge acquisition techniques and
participants need to be identified.
Theoretical Considerations
An ES attempts to replicate in software the reasoning/pattern-recognition abilities
of human experts who are distinctive because of their particular knowledge and
specialized intelligence. ES should be heuristic and readily distinguishable from
algorithmic programs and databases. Further, ES should be based on expert
knowledge, not just competent or skillful behavior.
Domains
Several domain features are frequently listed for consideration in determining
whether an ES is appropriate for a particular problem domain. Several of these
caveats relate directly to knowledge acquisition. First, bona fide experts, people
with generally acknowledged expertise in the domain, must exist. Second, there
must be general consensus among experts about the accuracy of solutions in a
domain. Third, experts in the domain must be able to communicate the details of

their problem solving methods. Fourth, the domain should be narrow and well
defined and solutions within the domain must not require common sense.
Experts
Although an ES knowledge base can be developed from a range of sources such as
textbooks, manuals and simulation models, the knowledge at the core of a well
developed ES comes from human experts. Although multiple experts can be used,
the ideal ES should be based on the knowledge of a single expert. In light of the
pivotal role of the expert, caveats for choosing a domain expert are not surprising.
First, the expert should agree with the goals of the project. Second, the expert
should be cooperative and easy to work with. Third, good verbal communication
skills are needed. Fourth, the expert must be willing and able to make the required
time commitment (there must also be adequate administrative/managerial support
for this too).
Knowledge Acquisition Technique
At the heart of the process is the interview. The heuristic model of the domain is
usually extracted through a series of intense, systematic interviews, usually
extending over a period of many months. Note that this assumes the expert and the
knowledge engineer are not the same person. It is generally best that the expert and
the knowledge engineer not be the same person, since the deeper the expert's knowledge, the less able he or she is to describe the logic behind it. Furthermore, in their efforts to describe their procedures, experts tend to rationalize their knowledge, and this can be misleading.
General suggestions about the knowledge acquisition process are summarized in
rough chronological order below:
1. Observe the person solving real problems.
2. Through discussions, identify the kinds of data, knowledge and procedures
required to solve different types of problems.
3. Build scenarios with the expert that can be associated with different problem
types.
4. Have the expert solve a series of problems verbally and ask the rationale
behind each step.

5. Develop rules based on the interviews and solve the problems with them.
6. Have the expert review the rules and the general problem solving procedure.
7. Compare the responses of outside experts to a set of scenarios obtained from
the project's expert and the ES.
Note that most of these procedures require a close working relationship between
the knowledge engineer and the expert.
Practical Considerations
The preceding section provided an idealized version of how ES projects might be
conducted. In most instances, the above suggestions are considered and modified
to suit the particular project. The remainder of this section will describe a range of
knowledge acquisition techniques that have been successfully used in the
development of ES.
Operational Goals
After an evaluation of the problem domain shows that an ES solution is appropriate
and feasible, then realistic goals for the project can be formulated. An ES's
operational goals should define exactly what level of expertise its final product
should be able to deliver, who the expected user is and how the product is to be
delivered. If participants do not have a shared concept of the project's operational
goals, knowledge acquisition is hampered.
Pre-training
Pre-training the knowledge engineer about the domain can be important. In the
past, knowledge engineers have often been unfamiliar with the domain. As a result,
the development process was greatly hindered. If a knowledge engineer has limited
knowledge of the problem domain, then pre-training in the domain is very
important and can significantly boost the early development of the ES.
Knowledge Document
Once development begins on the knowledge base, the process should be well
documented. In addition to a tutorial document, a knowledge document that succinctly states the project's current knowledge base should be kept. Conventions
should be established for the document such as keeping the rules in quasi-English

format, using standard domain jargon, giving descriptive names to the rules and
including supplementary, explanatory clauses with each rule. The rules should be
grouped into natural subdivisions and the entire document should be kept current.
Scenarios
An early goal of knowledge acquisition should be the development of a series of
well developed scenarios that fully describe the kinds of procedures that the expert
goes through in arriving at different solutions. If reasonably complete case studies
do not exist, then one goal of pre-training should be to become so familiar with the
domain that the interviewer can compose realistic scenarios. Anecdotal stories that
can be developed into scenarios are especially useful because they are often
examples of unusual interactions at the edges of the domain. Familiarity with
several realistic scenarios can be essential to understanding the expert in early
interviews and the key to structuring later interviews. Finally, they are ultimately
necessary for validation of the system.
Interviews
Experts are usually busy people and interviews held in the expert's work
environment are likely to be interrupted. To maximize access to the expert and
minimize interruptions it can be helpful to hold meetings away from the expert's
workplace. Another possibility is to hold meetings after work hours and on
weekends. At least initially, audiotape recordings ought to be made of the interviews, because notes taken during an interview are often incomplete or suggest inconsistencies that can be clarified later by listening to the tape. The
knowledge engineer should also be alert to fatigue and limit interviews
accordingly.
META RULES (KNOWLEDGE)
In designing an expert system, it is necessary to select the conflict resolution method that will be used, and quite possibly it will be necessary to use different methods to resolve different types of conflicts.
For example, in some situations it may make most sense to use the method that involves firing the most recently added rules.
This method makes most sense in situations in which the timeliness of data is important. It might be, for example, that as research in a particular field of medicine develops, new rules are added to the system that contradict some of the older rules.
It might make most sense for the system to assume that these newer rules are
more accurate than the older rules.
It might also be the case, however, that the new rules have been added by an
expert whose opinion is less trusted than that of the expert who added the
earlier rules.
In this case, it clearly makes more sense to allow the earlier rules priority.
This kind of knowledge is called meta knowledge: knowledge about knowledge. The rules that define how conflict resolution will be used, and how other aspects of the system itself will run, are called meta rules.
The knowledge engineer who builds the expert system is responsible for building appropriate meta knowledge into the system (such as "expert A is to be trusted more than expert B" or "any rule that involves drug X is not to be trusted as much as rules that do not involve X").
Meta rules are treated by the expert system as if they were ordinary rules but
are given greater priority than the normal rules that make up the expert
system.
In this way, the meta rules are able to override the normal rules, if necessary,
and are certainly able to control the conflict resolution process.
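A minimal Python sketch of this idea (the rule set, trust values, and priority scheme are invented for illustration): each rule carries metadata such as its author and when it was added, and a meta rule, applied before any ordinary rule fires, decides which of several conflicting rules wins.

from dataclasses import dataclass

@dataclass
class Rule:
    conclusion: str
    author: str
    added: int      # insertion order; higher means more recently added

# Two rules that conflict on the same question.
conflict_set = [
    Rule(conclusion="prescribe_drug_x", author="expert_A", added=1),
    Rule(conclusion="prescribe_drug_y", author="expert_B", added=2),
]

# Meta knowledge: expert_A is to be trusted more than expert_B.
TRUST = {"expert_A": 2, "expert_B": 1}

def meta_rule_resolve(conflict_set):
    # Meta rule: prefer the more trusted author; break ties by recency.
    return max(conflict_set, key=lambda r: (TRUST[r.author], r.added))

print(meta_rule_resolve(conflict_set).conclusion)   # prescribe_drug_x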

META HEURISTICS
Local Search and Metaheuristics
Local search methods work by starting from some initial configuration (usually random) and making small changes to the configuration until a state is reached from which no better state can be achieved.
Hill climbing is a good example of a local search technique.
Local search techniques, used in this way, suffer from the same problems as hill climbing and, in particular, are prone to finding local maxima that are not the best solution possible.
The methods used by local search techniques are known as metaheuristics. Examples of metaheuristics include simulated annealing, tabu search, genetic algorithms, ant colony optimization, and neural networks.
This kind of search method is also known as local optimization because it is attempting to optimize a set of values but will often find local maxima rather than a global maximum.
A local search technique applied to the problem of allocating teachers to classrooms would start from a random position and make small changes until a configuration was reached where no inappropriate allocations were made.
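The behaviour described above can be illustrated with a short Python sketch of hill climbing (the objective function and neighbourhood are invented for illustration): the search stops at the first state from which no neighbour is better, which may be only a local maximum rather than the global one.

import math

def hill_climb(start, objective, neighbours):
    # Greedy local search: move to the best neighbour while it improves the
    # objective; stop when no neighbour is better (a local maximum).
    current = start
    while True:
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current
        current = best

# Toy objective: a small bump near x = 1.7 and a higher peak near x = 8.
objective = lambda x: math.sin(x) + 0.1 * x
neighbours = lambda x: [x - 0.1, x + 0.1]

# Starting near the small bump, the search gets stuck on it instead of
# reaching the higher peak further to the right.
print(round(hill_climb(1.0, objective, neighbours), 2))   # 1.7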

TYPICAL EXPERT SYSTEMS


MYCIN

MYCIN was an early expert system that used artificial intelligence to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight. The name derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The MYCIN system was also used for the diagnosis of blood clotting diseases.
MYCIN was developed over five or six years in the early 1970s at Stanford
University. It was written in Lisp as the doctoral dissertation of Edward Shortliffe under the direction of Bruce G. Buchanan, Stanley N. Cohen and others.
It arose in the laboratory that had created the earlier Dendral expert system.
MYCIN was never actually used in practice but research indicated that it proposed
an acceptable therapy in about 69% of cases, which was better than the
performance of infectious disease experts who were judged using the same criteria.
Method
MYCIN operated using a fairly simple inference engine, and a knowledge base of
~600 rules. It would query the physician running the program via a long series of
simple yes/no or textual questions. At the end, it provided a list of possible culprit
bacteria ranked from high to low based on the probability of each diagnosis, its
confidence in each diagnosis' probability, the reasoning behind each diagnosis (that
is, MYCIN would also list the questions and rules which led it to rank a diagnosis a
particular way), and its recommended course of drug treatment.
Despite MYCIN's success, it sparked debate about the use of its ad hoc, but
principled, uncertainty framework known as "certainty factors". The developers
performed studies showing that MYCIN's performance was minimally affected by
perturbations in the uncertainty metrics associated with individual rules, suggesting
that the power in the system was related more to its knowledge representation and
reasoning scheme than to the details of its numerical uncertainty model. Some
observers felt that it should have been possible to use classical Bayesian statistics.

MYCIN's developers argued that this would require either unrealistic assumptions
of probabilistic independence, or require the experts to provide estimates for an
unfeasibly large number of conditional probabilities. Subsequent studies later
showed that the certainty factor model could indeed be interpreted in a
probabilistic sense, and highlighted problems with the implied assumptions of such
a model. However the modular structure of the system would prove very
successful, leading to the development of graphical models such as Bayesian
networks.
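A hedged sketch of the kind of arithmetic involved (this is the standard textbook combination rule for two positive certainty factors, not MYCIN's actual code): evidence from separate rules supporting the same hypothesis is combined so that the overall certainty grows toward, but never exceeds, 1.

def combine_cf(cf1, cf2):
    # Combine two positive certainty factors for the same hypothesis
    # using the textbook MYCIN-style rule: cf1 + cf2 * (1 - cf1).
    return cf1 + cf2 * (1 - cf1)

# Two rules lend 0.7 and 0.4 certainty to the same diagnosis.
print(round(combine_cf(0.7, 0.4), 2))                    # 0.82
print(round(combine_cf(combine_cf(0.7, 0.4), 0.5), 2))   # further evidence: 0.91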
Results
Research conducted at the Stanford Medical School found MYCIN to propose an
acceptable therapy in about 69% of cases, which was better than the performance
of infectious disease experts who were judged using the same criteria. This study is
often cited as showing the potential for disagreement about therapeutic decisions, even among experts, when there is no "gold standard" for correct treatment.
Practical use
MYCIN was never actually used in practice. This wasn't because of any weakness
in its performance. As mentioned, in tests it outperformed members of the Stanford
medical school faculty. Some observers raised ethical and legal issues related to the use of computers in medicine: if a program gives the wrong diagnosis or recommends the wrong therapy, who should be held responsible? However, the
greatest problem, and the reason that MYCIN was not used in routine practice, was
the state of technologies for system integration, especially at the time it was
developed. MYCIN was a stand-alone system that required a user to enter all relevant information about a patient by typing in responses to questions MYCIN posed. The program ran on a large time-shared system, available over the early Internet (ARPANet), before personal computers were developed. In the modern era, such a system would be integrated with medical record systems, would extract answers to questions from patient databases, and would be much less dependent on physician entry of information. In the 1970s, a session with MYCIN could easily consume 30 minutes or more, an unrealistic time commitment for a busy clinician.

MYCIN's greatest influence was accordingly its demonstration of the power of its
representation and reasoning approach. Rule-based systems in many non-medical
domains were developed in the years that followed MYCIN's introduction of the
approach. In the 1980s, expert system "shells" were introduced (including one
based on MYCIN, known as E-MYCIN (followed by KEE)) and supported the
development of expert systems in a wide variety of application areas.

A difficulty that rose to prominence during the development of MYCIN and subsequent complex expert systems has been the extraction of the necessary knowledge for the inference engine to use from the human expert in the relevant fields into the rule base (the so-called "knowledge acquisition bottleneck").
DART
Introduction
We describe an application of artificial intelligence techniques to computer system fault diagnosis. In particular, we have implemented an automated consultant that advises IBM field service personnel on the diagnosis of faults occurring in computer installations.

The consultant identifies specific system components (both hardware and software) likely to be responsible for an observed fault and offers a brief explanation of the major factors and evidence supporting these indictments. The consultant, called DART, was constructed using EMYCIN and is part of a larger research effort investigating automated diagnosis of machine faults.
Project Motivation and Scope of Effort
A typical, large-scale computer installation is composed of numerous subsystems including CPUs, primary and secondary storage, peripherals, and supervisory software. Each of these subsystems, in turn, consists of a richly connected set of both hardware and software components such as disk drives, controllers, CPUs, memory modules, and access methods. Generally, each individual component has an associated set of diagnostic aids designed to test its own specific integrity. However, very few maintenance tools and established diagnostic strategies are aimed at identifying faults at the system or subsystem level.
As a result, identification of single or multiple faults from systemic manifestations
remains a difficult task. The non-specialist field service engineer is trained to use
the existing component-specific tools and, as a result, is often unable to attack the failure at the systemic level. Expert assistance is then required, increasing both the time and cost required to determine and repair the fault. The design of DART reflects the expert's ability to take a systemic viewpoint on problems and to use that viewpoint to indict specific components, thus making more effective use of
the existing maintenance capabilities. For our initial design, we chose to
concentrate on problems occurring within the teleprocessing (TP) subsystems for
the IBM 370-class computers. This subsystem includes various network
controllers, terminals, remote-job entry facilities, modems, and several software
access methods. In addition to these well-defined components there are numerous

available test points the program can use during diagnosis. We have focussed our
effort on handling two of the most frequent TP problems, (1) when a user is unable
to log on to the system from a remote terminal, and (2) when the system operator is
unable to initialize the TP network itself.
In a new system configuration, these two problems constitute a significant
percentage of service calls received. Interviews with field-service experts made it
apparent that much of their problem-solving expertise is derived from their
knowledge of several well-defined communications protocols. Often composed of
simple request-acknowledge sequences, these protocols represent the transactions
between components that are required to perform various TP tasks.
Although based on information found in reference manuals it is significant that
these protocols are not explicitly detailed anywhere in the standard maintenance
documentation. Knowledge of the basic contents of these protocols and their
common sequence forms the basis of a diagnostic strategy: use the available
tracing facilities to capture the actual communications occurring in the network,
and analyze this data to determine which link in the protocol chain has broken.
This procedure is sufficient to identify specific faulty components in the network.
The DART Consultation
During a DART consultation session, the field engineer focuses on a particular computer system that is experiencing a problem. Many installations are composed of numerous CPUs that partially share peripherals; thus, the term "system" is taken to mean a single CPU complex and its attached peripherals. Within each such system, the user describes one or more problems by indicating a failure symptom, currently using a list of keywords.
Using this description, the consultant makes an initial guess about which of the
major subsystems might be involved in the fault. The user is then given the opportunity to select which of these implicated subsystems are to be pursued and in which order. Each subsystem serves as a focal point for tests and findings associated with that segment of the diagnostic activity.
These subsystems currently correspond to various input/output facilities (e.g., DISK, TAPE, TP) or the CPU complex itself. For each selected subsystem, the user is asked to identify one or more logical pathways which might be involved in the situation. Each of these logical pathways corresponds to a line of communication between a peripheral and an application program.
On the basis of this information and details of the basic composition of the
network, the appropriate communications protocol can be selected. The user is also asked to indicate which diagnostic tools (e.g., traces, dumps, logic probes) are available for examining each logical pathway.
Once the logical pathway and protocol have been determined, descriptions are gathered of the often multiple physical pathways that actually implement the logical pathway; it is on this level that diagnostic test results are presented and actual component indictments occur. For DART to be useful at this level, the field engineer must be familiar with the diagnostic equipment and software testing and tracing facilities which can be requested, and, of course, must also have access to information about the specific system hardware and software configuration of the installation. Finally, at the end of the consultation session, DART summarizes its findings and recommends additional tests and procedures to follow.
XCON
The R1 (internally called XCON, for eXpert CONfigurer) program was a
production-rule-based system written in OPS5 by John P. McDermott of CMU in
1978 to assist in the ordering of DEC's VAX computer systems by automatically selecting the computer system components based on the customer's requirements.


The development of XCON followed two previous unsuccessful efforts to write an
expert system for this task, in FORTRAN and BASIC.
In developing the system McDermott made use of experts from both DEC's
PDP/11 and VAX computer systems groups. These experts sometimes even
disagreed amongst themselves as to an optimal configuration. The resultant
"sorting it out" had an additional benefit in terms of the quality of VAX systems
delivered.
XCON first went into use in 1980 in DEC's plant in Salem, New Hampshire. It
eventually had about 2500 rules. By 1986, it had processed 80,000 orders, and
achieved 95-98% accuracy. It was estimated to be saving DEC $25M a year by
reducing the need to give customers free components when technicians made
errors, by speeding the assembly process, and by increasing customer satisfaction.
Before XCON, when ordering a VAX from DEC, every cable, connection, and bit
of software had to be ordered separately. (Computers and peripherals were not sold
complete in boxes as they are today). The sales people were not always very
technically expert, so customers would find that they had hardware without the
correct cables, printers without the correct drivers, a processor without the correct
language chip, and so on. This meant delays and caused a lot of customer
dissatisfaction and resultant legal action. XCON interacted with the sales person,
asking critical questions before printing out a coherent and workable system
specification/order slip.
XCON's success led DEC to rewrite XCON as XSEL, a version of XCON
intended for use by DEC's salesforce to aid a customer in properly configuring
their VAX (so they wouldn't, say, choose a computer too large to fit through their
doorway or choose too few cabinets for the components to fit in). Location
problems and configuration were handled by yet another expert system, XSITE.
McDermott's 1980 paper on R1 won the AAAI Classic Paper Award in 1999.
Legendarily, the name of R1 comes from McDermott, who supposedly said as he
was writing it, "Three years ago I couldn't spell knowledge engineer, now I are
one."
THE EXPERT SYSTEM SHELLS

The parts of the expert system that do not contain domain-specific or case-specific
information are contained within the expert system shell. This shell is a general
toolkit that can be used to build a number of different expert systems, depending
on which knowledge base is added to the shell. An example of such a shell is
CLIPS (C Language Integrated Production System). Other examples in common
use include OPS5, ART, JESS, and Eclipse.
CLIPS (C Language Integrated Production System)
CLIPS is a freely available expert system shell that has been implemented in C.
It provides a language for expressing rules and mainly uses forward chaining to derive conclusions from a set of facts and rules.
The notation used by CLIPS is very similar to that used by LISP.
The following is an example of a rule specified using CLIPS:
(defrule birthday
(firstname ?r1 John)
(surname ?r1 Smith)
(haircolor ?r1 Red)
=>
(assert (is-boss ?r1)))
?r1 is used to represent a variable, which in this case is a person.
Assert is used to add facts to the database, and in this case the rule is used to draw a conclusion from three facts about the person:
If the person has the first name John, has the surname Smith, and has red hair, then he is the boss.
This can be tried in the following way:
(assert (firstname x John))
(assert (surname x Smith))
(assert (haircolor x Red))
(run)
At this point, the command (facts) can be entered to see the facts that are contained in the database:
CLIPS> (facts)
f-0 (firstname x John)
f-1 (surname x Smith)
f-2 (haircolor x Red)
f-3 (is-boss x)
So CLIPS has taken the three facts that were entered into the system and used the rule to draw a conclusion, which is that x is the boss. Although this is a simple example, CLIPS, like other expert system shells, can be used to build extremely sophisticated and powerful tools. For example, MYCIN is a well-known medical expert system that was developed at Stanford University in the 1970s.
MYCIN was designed to assist doctors to prescribe antimicrobial drugs for blood infections. In this way, experts in antimicrobial drugs are able to provide their expertise to other doctors who are not so expert in that field. By asking the doctor a series of questions, MYCIN is able to recommend a course of treatment for the patient. Importantly, MYCIN is also able to explain to the doctor which rules fired and therefore is able to explain why it produced the diagnosis and recommended treatment that it did.
MYCIN has proved successful: for example, it has been proven to be able to provide more accurate diagnoses of meningitis in patients than most doctors. MYCIN was developed using LISP, and its rules are expressed as LISP expressions.

The following is an example of the kind of rule used by MYCIN, translated into
English:
IF the infection is primary-bacteria
AND the site of the culture is one of the sterile sites
AND the suspected portal of entry is the gastrointestinal tract
THEN there is suggestive evidence (0.7) that infection is bacteroid
The following is a very simple example of a CLIPS session where rules are defined to operate an elevator:
CLIPS> (defrule rule1
(elevator ?floor_now)
(button ?floor_now)
=>
(assert (open_door)))
CLIPS> (defrule rule2
(elevator ?floor_now)
(button ?other_floor)
=>
(assert (goto ?other_floor)))
CLIPS> (assert (elevator floor1))
==> f-0 (elevator floor1)
<Fact-0>
CLIPS> (assert (button floor3))
==> f-1 (button floor3)

<Fact-1>
CLIPS> (run)
==>f-2 (goto floor3)
The lines following each CLIPS> prompt are inputs entered by the knowledge engineer; the remaining lines are output from CLIPS.
Note that ?floor_now is an example of a variable within CLIPS, which means that any object can match it for the rule to trigger and fire.
In our example, the first rule simply says: If the elevator is on a floor, and the button is pressed on the same floor, then open the door.
The second rule says: If the elevator is on one floor, and the button is pressed on a
different floor, then go to that floor.
After the rules, two facts are inserted into the database. The first fact says that the
elevator is on floor 1, and the second fact says that the button has been pressed on
floor 3.
When the (run) command is issued to the system, it inserts a new fact into the
database, which is a command to the elevator to go to floor 3.
