HUMAN-CENTERED COMPUTING
Editors: Robert R. Hoffman, Jeffrey M. Bradshaw, and Kenneth M. Ford, Institute for Human and Machine Cognition, rhoffman@ihmc.us
Beyond Asimov: The Three Laws of Responsible Robotics
Robin R. Murphy, Texas A&M University
David D. Woods, Ohio State University

Since their codification in 1947 in the collection of short stories I, Robot, Isaac Asimov's three laws of robotics have been a staple of science fiction. Most of the stories assumed that the robot had complex perception and reasoning skills equivalent to a child and that robots were subservient to humans. Although the laws were simple and few, the stories attempted to demonstrate just how difficult they were to apply in various real-world situations. In most situations, although the robots usually behaved logically, they often failed to do the right thing, typically because the particular context of application required subtle adjustments of judgment on the part of the robot (for example, determining which law took priority in a given situation, or what constituted helpful or harmful behavior).
The three laws have been so successfully inculcated into the public consciousness through entertainment that they now appear to shape society's expectations about how robots should act around humans. For instance, the media frequently refer to human-robot interaction in terms of the three laws. They've been the subject of serious blogs, events, and even scientific publications. The Singularity Institute organized an event and Web site, "Three Laws Unsafe," to try to counter public expectations of robots in the wake of the movie I, Robot. Both the philosophy [1] and AI [2] communities have discussed ethical considerations of robots in society using the three laws as a reference, with a recent discussion in IEEE Intelligent Systems [3]. Even medical doctors have considered robotic surgery in the context of the three laws [4].
With few notable exceptions [5, 6], there has been relatively little discussion of whether robots, now or in the near future, will have sufficient perceptual and reasoning capabilities to actually follow the laws. And there appears to be even less serious discussion as to whether the laws are actually viable as a framework for human-robot interaction, outside of cultural expectations.
Following the definitions in Moral Machines: Teaching Robots Right from Wrong [7], Asimov's laws are based on functional morality, which assumes that robots have sufficient agency and cognition to make moral decisions. Unlike many of his successors, Asimov is less concerned with the details of robot design than with exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy. He uses the situations as a foil to explore issues such as

• the ambiguity and cultural dependence of language and behavior (for example, whether what appears to be cruel in the short run can actually become a kindness in the longer term);
• social utility (for instance, how different individuals' roles, capabilities, or backgrounds are valuable in different ways with respect to each other and to society); and
• the limits of technology (for example, the impossibility of assuring timely, correct actions in all situations and the omnipresence of trade-offs).
In short, in a variety of ways the stories test the lack of resilience in human-robot interactions. The assumption of functional morality, while effective for entertaining storytelling, neglects operational morality. Operational morality links robot actions and inactions to the decisions, assumptions, analyses, and investments of those who invent and make robotic systems and of those who commission, deploy, and handle robots in operational contexts. No matter how far the autonomy of robots ultimately advances, the important challenges of these accountability and liability linkages will remain [8].
This essay reviews the three laws and briefly summarizes some of the practical shortcomings (and even dangers) of each law for framing human-robot relationships, including reminders about what robots can't do. We then propose an alternative, parallel set of laws based on what humans and robots can realistically accomplish in the foreseeable future as joint cognitive systems, and on their mutual accountability for their actions, from the perspectives of human-centered design and human-robot interaction.
Applying Asimov's Laws to Today's Robots
When we try to apply Asimov's laws to today's robots, we immediately run into problems. Just as for Asimov in his short stories, these problems arise from the complexities of situations where we would use robots, the limits of physical systems acting with limited resources in uncertain, changing situations, and the interplay between the different social roles as different agents pursue multiple goals.
First Law
Asimov's first law of robotics states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This law is already an anachronism given the military's weaponization of robots, and discussions are now shifting to the question of whether weaponized robots can be humane [9, 10]. Such weaponization is no longer limited to situations in which humans remain in the loop for control. The South Korean government has published videos on YouTube of robotic border-security guards. Scenarios have been proposed where it would be permissible for a military robot to fire upon anything moving (presumably targeting humans) without direct human permission [11].
Even if current events hadn't made the law irrelevant, it's moot because robots cannot infallibly recognize humans, perceive their intent, or reliably interpret contextualized scenes. A quick review of the computer vision literature shows that scientists continue to struggle with many fundamental perceptual processes. Current commercial security packages for recognizing the face of a person standing in a fixed position continue to fall short of expectations in practice. Many robots that recognize humans use indirect cues such as heat and motion, which only work in constrained contexts. These problems confirm Norbert Wiener's warnings about such failure possibilities [8]. Just as he envisioned many years ago, today's robots are literal-minded agents; that is, they can't tell if their world model is the world they're really in.
All this aside, the biggest problem with the first law is that it views safety only in terms of the robot; that is, the robot is the responsible safety agent in all matters of human-robot interaction. While some speculate on what it would mean for a robot to be able to discharge this responsibility, there are serious practical, theoretical, social-cognitive, and legal limitations [8, 12]. For example, from a legal perspective the robot is a product, so it's not the responsible agent. Rather, the robot's owner or manufacturer is liable for its actions. Unless robots are granted a person-equivalent status, somewhat like corporations are now legally recognized as individual entities, it's difficult to imagine standard product liability law not applying to them. When a failure occurs, violating Asimov's first law, the human stakeholders affected by that failure will engage in the processes of causal attribution. Afterwards, they'll see the robot as a device and will look for the person or group who set up or instructed the device erroneously or who failed to supervise (that is, stop) the robot before harm occurred. It's still commonplace after accidents for manufacturers and organizations to claim the result was due only to human error, even when the system in question was operating autonomously [8, 13].
Accountability is bound up with the way we maintain our social relationships. Human decision-making always occurs in a context of expectations that one might be called to account for his or her decisions. Expectations for what's considered an adequate explanation, and the consequences for people when their explanation is judged inadequate, are critical parts of accountability systems: a reciprocating cycle of being prepared to provide an accounting for one's actions and being called by others to provide an account. To be considered moral agents, robots would have to be capable of participating personally in this reciprocating cycle of accountability, an issue that, of course, concerns more than any single agent's capabilities in isolation.
Second Law
Asimov's second law of robotics states, "A robot must obey orders given to it by human beings, except where such orders would conflict with the first law." Although the law itself takes no stand on how humans would give orders, Asimov's robots relied on their understanding of verbal directives. Unfortunately, robust natural language understanding still lies just beyond the frontiers of today's AI [14]. It's true that, after decades of research, computers can now construct words from phonemes with some consistency, as improvements in voice dictation and call centers attest. Language-understanding capabilities also work well for specific types of well-structured tasks. However, the goal of meaningful machine participation in open-ended conversational contexts remains elusive. Additionally, we must account for the fact that not all directives are given verbally. Humans use gestures and add affect through body posture, facial expressions, and motions for clarification and emphasis. Indeed, high-performance, experienced teams use highly pointed and coded forms of verbal and nonverbal communication in fluid, interdependent, and idiosyncratic ways.
What's more interesting about the second law from a human-robot interaction standpoint is that, at its core, it almost captures the more important idea that intelligent robots should notice and take stock of humans (and that the people robots encounter or interact with can notice pertinent aspects of the robots' behavior) [15]. For example, is it acceptable for a robot to merely not hit a person in a hospital hall, or should it conform to social convention and acknowledge the person in some way ("excuse me" or a nod of a camera pan-tilt)? Or if a robot operating in public places included two-way communication devices, could a bystander recognize that the robot provided a means to report a crime or a fire?
Third Law
The third law states, "A robot must protect its own existence as long as such protection does not conflict with the first or second law." Because today's robots are expensive, you'd think designers would be naturally motivated to incorporate some form of the third law into their products. For example, even the inexpensive iRobot Roomba detects stairs that could cause a fatal fall. Surprisingly, however, many expensive commercial robots lack the means to fully protect their owners' investment.

An extreme example of this is in the design of robots for military applications or bomb squads. Such robots are designed to be teleoperated by a person who bears full responsibility for all safety matters. Human-factors studies show that remote operators are immediately at a disadvantage, working through a mediated interface with a time delay. Worse yet, remote operators are required to operate the robot through poor human-computer interfaces and in contexts where the operator can be fatigued, overloaded, or under high stress. As a result, when an abnormal event occurs, they may be distracted or not fully engaged and thus might not respond adequately in time. The result for a robot is akin to expecting an astronaut on a planet's surface to request and wait for permission from mission control to perform even simple reflexes such as ducking.
What is puzzling about today's limited attempts to conform to the third law is that there are well-established technological solutions for basic robot survival activities that work for autonomous and human-controlled robots. For instance, since the 1960s we've had technology to assure guarded motion, where the human drives the robot but onboard software will not allow the robot to make potentially dangerous moves (for example, collide with obstacles or exceed speed limits or boundaries) without explicit orders (an implicit invocation of the second law). By the late 1980s, guarded motion was encapsulated into tactical reactive behaviors, essentially giving robots reflexes and tactical authority. Perhaps the most important reason that guarded motion and reflexive behaviors haven't been more widely deployed is that they require additional sensors, which would add to the cost. This increase in cost may not appear to be justified to customers, who tend to be wildly overconfident that trouble and complexities outside the bounds of expected behavior rarely arise.
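To make the guarded-motion idea concrete, here is a minimal sketch in Python. The function name, the sensor interface, and the limit values are illustrative assumptions for this essay, not part of any particular robot's API.

# Illustrative sketch of guarded motion (hypothetical names and limits):
# operator commands pass through an onboard filter that vetoes moves
# violating a speed limit or an obstacle-clearance envelope.

def guard_velocity(cmd_speed, cmd_heading, range_ahead_m,
                   max_speed=1.0, min_clearance_m=0.5):
    """Return a (speed, heading) pair that is safe to execute.

    cmd_speed and cmd_heading come from the human operator; range_ahead_m is
    the distance reported by an onboard obstacle sensor along that heading.
    """
    # Clamp to the platform speed limit regardless of what was commanded.
    speed = min(abs(cmd_speed), max_speed)

    # Refuse motion that would close inside the clearance envelope; the
    # operator must explicitly re-authorize movement once the guard has
    # stopped the robot (an echo of the second law).
    if range_ahead_m <= min_clearance_m:
        speed = 0.0

    return speed, cmd_heading


print(guard_velocity(2.0, 0.0, range_ahead_m=0.3))  # (0.0, 0.0): guard stops the robot
print(guard_velocity(2.0, 0.0, range_ahead_m=5.0))  # (1.0, 0.0): clamped to max_speed

The point of the sketch is that the safety check sits between the command and the actuators, so it works the same way whether the command comes from a human operator or from an autonomy layer.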
The Alternative Three Laws of Responsible Robotics
To address the difficulties of applying Asimov's three laws to the current generation of robots while respecting the laws' general intent, we suggest the three laws of responsible robotics.
Alternative First Law
Our alternative to Asimov's first law is "A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics." Since robots are indeed subject to safety regulations and liability laws, the requirement of meeting legal standards for safety would seem self-evident. For instance, the medical-device community has done extensive research to validate robot sensing of scalpel pressures and tissue contact parameters, and it invests in failure mode and effect analyses (consistent with FDA medical-device standards).
In contrast, mobile roboticists have a somewhat infamous history of disregarding regulations. For example, robot cars operating on public roads, such as those used in the DARPA Urban Grand Challenge, are considered by US federal and state transportation regulations to be experimental vehicles. Deploying such vehicles requires voluminous and tedious permission applications. Regrettably, the 1995 CMU "No Hands Across America" team neglected to get all appropriate permissions while driving autonomously from Pittsburgh to Los Angeles, and was stopped in Kansas. The US Federal Aviation Administration makes a clear distinction between flying unmanned aerial vehicles as a hobby and flying them for R&D or commercial practices, effectively slowing or stopping many R&D efforts. In response to these difficulties, a culture preferring forgiveness to permission has grown up in some research groups. Such attitudes indicate a poor safety culture at universities that could, in turn, propagate to government or industry. On the positive side, the robot competitions sponsored by the Association for Unmanned Vehicle Systems International are noteworthy in their insistence on having safe areas of operation, clear emergency plans, and safety officers present.
Meeting the minimal legal requirements is not enough; the alternative first law demands the highest professional ethics in robot deployment. A failure or accident involving a robot can effectively end an entire branch of robotics research, even if the operators aren't legally culpable. Responsible communities should proactively consider safety in the broadest sense, and funding agencies should find ways to increase the priority and scope of research funding specifically aimed at relevant legal concerns.
The highest professional ethics should also be applied in product development and testing. Autonomous robots have known vulnerabilities to problems stemming from interrupted wireless communications. Signal reception is impossible to predict, yet robust "return to home if signal lost" and "stop movement if GPS lost" functionality hasn't yet become an expected component of built-in robot behavior. This means robots are operating counter to reasonable and prudent assumptions. Worse yet, when they're operating experimentally, robots often encounter unanticipated factors that affect their control. Simply saying an unfortunate event was unpredictable doesn't relieve the designers of responsibility. Even if a specific disturbance is unpredictable in detail, the fact that there will be disturbances is virtually guaranteed, and designing for resilience in the face of these is fundamental.
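As an illustration of the kind of built-in failsafe behavior described above, the following Python sketch selects a degraded mode from link and positioning health. The mode names and the decision order are assumptions made for this sketch, not a standard.

# Illustrative failsafe selection (hypothetical): choose a degraded behavior
# from the robot's communication-link and positioning health.
from enum import Enum, auto


class Failsafe(Enum):
    NOMINAL = auto()         # continue the mission
    RETURN_TO_HOME = auto()  # comms lost but navigation still trusted
    STOP_IN_PLACE = auto()   # position estimate lost; do not move blindly


def select_failsafe(comms_ok, gps_ok):
    if not gps_ok:
        # Without a trusted position estimate, returning home is itself risky.
        return Failsafe.STOP_IN_PLACE
    if not comms_ok:
        return Failsafe.RETURN_TO_HOME
    return Failsafe.NOMINAL


assert select_failsafe(comms_ok=True, gps_ok=True) is Failsafe.NOMINAL
assert select_failsafe(comms_ok=False, gps_ok=True) is Failsafe.RETURN_TO_HOME
assert select_failsafe(comms_ok=False, gps_ok=False) is Failsafe.STOP_IN_PLACE

Even this trivial policy encodes a design judgment: losing position data is treated as more severe than losing the link, because moving without a trusted estimate can itself create the hazard the failsafe is meant to avoid.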
As a matter of professional common sense, robot design should start with safety first, then add the interesting software and hardware. Robots should carry "black boxes" or recorders to show what they were doing when a disturbance occurred, not only for the sake of an accident investigation but also to trace the robot's behavior in context to aid diagnosis and debugging. There should be a formal safety plan and checklists for contingencies. These do not have to be extensive and time consuming to be effective. A litmus test for developers might be "If a group of experts from the IEEE were to write about your robot after an accident, what would they say about system safety and your professionalism?"
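A "black box" of the sort recommended above can be as simple as an append-only, timestamped event log. The sketch below is a hypothetical minimal version in Python, not a certified recorder design; the file name and event fields are placeholders.

# Hypothetical minimal "black box": append-only, timestamped records of
# commands, mode changes, and sensor snapshots for later diagnosis.
import json
import time


class EventRecorder:
    def __init__(self, path="blackbox.jsonl"):
        self.path = path

    def record(self, event_type, **details):
        entry = {"t": time.time(), "type": event_type, **details}
        with open(self.path, "a") as f:  # append-only, one JSON object per line
            f.write(json.dumps(entry) + "\n")


recorder = EventRecorder()
recorder.record("command", source="operator", cmd="forward", speed=0.8)
recorder.record("mode_change", new_mode="guarded_stop", reason="obstacle 0.3 m ahead")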
Fundamentally, the alternative first law places responsibility for safety and efficacy on humans within the larger social and environmental context in which robots are developed, deployed, and operated.
Alternative Second Law
As an alternative to Asimov's second law, we propose the following: "A robot must respond to humans as appropriate for their roles." The capability to respond appropriately (responsiveness) may be more important to human-robot interaction than the capability of autonomy. Not all robots will be fully autonomous over all conditions. For example, a robot might be constrained to follow waypoints but will be expected to generate appropriate responses to people it encounters along the way. Responsiveness depends on the social environment: the kinds of people, and their expectations, that a robot might encounter in its work envelope. Rather than assume the relationship is hierarchical, with the human as the superior and the robot as the subordinate so that all communication is a type of order, the alternative second law states that robots must be built so that the interaction fits the relationships and roles of each member in a given environment. The relationship determines the degree to which a robot is obligated to respond. It might ignore a hacker completely. Orders exceeding the authority of the speaker might be disposed of politely ("please have your superior confirm your request") or with a warning ("interference with a law enforcement robot may be a violation"). Note that defining "appropriate response" may address concerns about robots being abused [16].
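To illustrate what role-conditioned response might look like in software, here is a hypothetical Python sketch. The roles, authority levels, and wording are placeholders for whatever a given deployment actually defines.

# Hypothetical sketch of the alternative second law: the obligation to respond,
# and the manner of the response, depend on the requester's role and authority
# rather than on blind obedience to every order.

ROLE_AUTHORITY = {
    "operator": 3,
    "law_enforcement": 2,
    "bystander": 1,
    "unverified": 0,   # e.g., a hacker with no standing
}


def respond_to_order(role, required_authority):
    """Choose a response consistent with the requester's role."""
    authority = ROLE_AUTHORITY.get(role, 0)
    if authority == 0:
        return None  # ignore completely
    if authority < required_authority:
        # Decline politely rather than silently refusing or blindly obeying.
        return "please have your superior confirm your request"
    return "acknowledged; executing order"


print(respond_to_order("operator", required_authority=2))    # acknowledged; executing order
print(respond_to_order("bystander", required_authority=2))   # asks for confirmation from a superior
print(respond_to_order("unverified", required_authority=1))  # None: ignored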
The relationship also determines the mode of the response. How the robot signals or expresses itself should be consistent with that relationship. Casual relationships might rely on natural language, whereas trained teams performing specific tasks could coordinate activities through other signals such as body position and gestures.
The requirement for responsiveness captures a new form of autonomy (not as isolated action but the more difficult behavior of engaging appropriately with others). However, developing robots' capability for responsiveness requires a significant research effort, particularly in how robots can perceive and identify the different members, roles, and cues of a social environment.
Alternative Third Law
Our third law is "A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws." This law specifies that a human-robot system should be able to transition smoothly from whatever degree of autonomy or roles the robots and humans were inhabiting to a new control relationship, given the nature of the disruption, impasse, or opportunity encountered or anticipated. When developers focus narrowly on the goal of isolated autonomy and fall prey to overconfidence by underestimating the potential for surprises to occur, they tend to minimize the importance of transfer of control. But bumpy transfers of control have been noted as a basic difficulty in human interaction with automation that can contribute to failures [17].
The alternative third law addresses situated autonomy and smooth transfer of control, both of which interact with the prescriptions of the other laws. To be consistent with the second law requires that humans in a given role might not always have complete control of the robot (for example, when conditions require very short reaction times, a pilot may not be allowed to override some commands generated by algorithms that attempt to provide envelope protection for the aircraft). This in turn implies that an aspect of the design of roles is the identification of classes of situations that demand transfer of control, so that the exchange processes can be specified as part of those roles. This is the case when the human takes control from the robot for a specialized aspect of the mission, in anticipation of conditions that will challenge the limits of the robot's capabilities, or in an emergency. Decades of human factors research on human out-of-the-loop control problems, handling of anomalies, cascades of disturbances, situation awareness, and autopilot/pilot transfer of control can inform such designs.
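The sketch below illustrates one way role-dependent override authority of the kind just described might be encoded. The role names, the reaction-time threshold, and the envelope-protection flag are illustrative assumptions, not a prescription.

# Hypothetical sketch of role-conditioned transfer of control: whether a human
# may take control right now depends on the role and on the situation (for
# example, whether envelope protection is active and how much time remains).
from dataclasses import dataclass


@dataclass(frozen=True)
class Situation:
    time_to_act_s: float             # time available before action is needed
    envelope_protection_active: bool


HUMAN_REACTION_FLOOR_S = 0.5         # assumed minimum human response time


def may_take_control(role, situation):
    if role == "pilot":
        # A pilot normally may override, but not when the required reaction
        # time is shorter than a human can deliver and protection is active.
        if (situation.time_to_act_s < HUMAN_REACTION_FLOOR_S
                and situation.envelope_protection_active):
            return False
        return True
    # Other roles get no override authority in this sketch.
    return False


print(may_take_control("pilot", Situation(2.0, True)))       # True: time to hand over smoothly
print(may_take_control("pilot", Situation(0.1, True)))       # False: automation retains authority
print(may_take_control("bystander", Situation(2.0, False)))  # False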
To be consistent with the first law requires designers to explicitly address what the appropriate situated autonomy is (for example, identifying when the robot is better informed or more capable than the human owing to latency, sensing, and so on) and to provide mechanisms that permit smooth transfer of control. To disregard the large body of literature on resilience and on failure due to bumpy transfer of control would violate the designers' ethical obligation.
The alternative second and third laws encourage some forms of increased autonomy related to responsiveness and the ability to engage in various forms of smooth transfer of control. To be able to engage in these activities with people in various roles, the robot will need more situated intelligence. The result is an irony that has been noted before: increased capability for autonomy and authority leads to the need to participate in more sophisticated forms of coordinated activity [8].
Discussion
Our critique reveals that robots need two key capabilities: responsiveness and smooth transfer of control. Our proposed alternative laws remind robotics researchers and developers of their legal and professional responsibilities. They suggest how people can conduct human-robot interaction research safely, and they identify critical research questions.
Table 1 places Asimov's three laws side by side with our three alternative laws. Asimov's laws assume functional morality, that robots are capable of making (or are permitted to make) their own decisions, and they ignore the legal and professional responsibility of those who design and deploy them (operational morality). More importantly for human-robot interaction, Asimov's laws ignore the complexity and dynamics of relationships and responsibilities between robots and people and how those relationships are expressed. In contrast, the alternative three laws emphasize responsibility and resilience, starting with enlightened, safety-oriented designs (alternative first law), then adding responsiveness (alternative second law) and smooth transfer of control (alternative third law).

Table 1. Asimov's laws of robotics versus the alternative laws of responsible robotics.

1. Asimov: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   Alternative: A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.

2. Asimov: A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.
   Alternative: A robot must respond to humans as appropriate for their roles.

3. Asimov: A robot must protect its own existence as long as such protection does not conflict with the first or second law.
   Alternative: A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws.
The alternative laws are designed to be more feasible to implement than Asimov's laws given current technology, although they also raise critical questions for research. For example, the alternative first law isn't concerned with technology per se but with the need for robot developers to be aware of human systems design principles and to take responsibility proactively for the consequences of errors and failures in human-robot systems. Standard tools from the aerospace, medical, and chemical manufacturing safety cultures, including training, formal processes, checklists, black boxes, and safety officers, can be adopted. Network and physical security should be incorporated into robots, even during development.
The alternative second and third laws require new research directions for robotics to leverage and build on existing results in social cognition, cognitive engineering, and resilience engineering. The laws suggest that the ability for robots to express relationships and obligations through social roles will be essential to all human-robot interaction. For example, work on entertainment robots and social robots provides insights about how robots can express emotions or affect appropriate to the people they encounter. The extensive literature from cognitive engineering on transfer of control and general human out-of-the-loop control problems can be redirected at robotic systems. The techniques of resilience engineering are beginning to identify new control architectures for distributed, multi-echelon systems that include robots.
The fundamental difference between Asimov's laws, which focus on robots' functional morality and full moral agency, and the alternative laws, which focus on system responsibility and resilience, illustrates why the robotics community should resist public pressure to frame current human-robot interaction in terms of Asimov's laws. Asimov's laws distract from capturing the diversity of robotic missions and initiative. Understanding these diversities and complexities is critical for designing the right interaction scheme for a given domain.
Ironically, Asimov's laws really are robot-centric, because most of the initiative for safety and efficacy lies in the robot as an autonomous agent. The alternative laws are human-centered because they take a systems approach. They emphasize that responsibility for the consequences of robots' successes and failures lies in the human groups that have a stake in the robots' activities, and capable robotic agents still exist in a web of dynamic social and cognitive relationships. Ironically, meeting the requirements of the alternative laws leads to the need for robots to be more capable agents, that is, more responsive to others and better at interaction with others.
We propose the alternative laws as a way to stimulate debate about robots' accountability when their actions can harm people or human interests. We also hope that these laws can serve to direct R&D to enhance human-robot systems. Finally, while perhaps not as entertaining as Asimov's laws, we hope the alternative laws of responsible robotics can better communicate to the general public the complex mix of opportunities and challenges of robots in today's world.
Acknowledgments
We thank Jeff Bradshaw, Cindy Bethel, Jenny Burke, Victoria Groom, and Leila Takayama for their helpful feedback and Sung Huh for additional references. The second author's contribution was based on participation in the Advanced Decision Architectures Collaborative Technology Alliance, sponsored by the US Army Research Laboratory under cooperative agreement DAAD19-01-2-0009.
References
1. S.L. Anderson, "Asimov's Three Laws of Robotics and Machine Metaethics," AI and Society, vol. 22, no. 4, 2008, pp. 477-493.
2. A. Sloman, "Why Asimov's Three Laws of Robotics Are Unethical," 27 July 2006; www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html.
3. C. Allen, W. Wallach, and I. Smit, "Why Machine Ethics?" IEEE Intelligent Systems, vol. 21, no. 4, 2006, pp. 12-17.
4. M. Moran, "Three Laws of Robotics and Surgery," J. Endourology, vol. 22, no. 8, 2008, pp. 1557-1560.
5. R. Clarke, "Asimov's Laws of Robotics: Implications for Information Technology, Part 1," Computer, vol. 26, no. 12, 1993, pp. 53-61.
6. R. Clarke, "Asimov's Laws of Robotics: Implications for Information Technology, Part 2," Computer, vol. 27, no. 1, 1994, pp. 57-66.
7. W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford Univ. Press, 2009.
8. D. Woods and E. Hollnagel, Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, Taylor and Francis, 2006.
9. R.C. Arkin and L. Moshkina, "Lethality and Autonomous Robots: An Ethical Stance," Proc. IEEE Int'l Symp. Technology and Society (ISTAS 07), IEEE Press, 2007, pp. 1-3.
10. N. Sharkey, "The Ethical Frontiers of Robotics," Science, vol. 322, no. 5909, 2008, pp. 1800-1801.
11. M.F. Rose et al., Technology Development for Army Unmanned Ground Vehicles, Nat'l Academy Press, 2002.
12. D. Woods, "Conflicts between Learning and Accountability in Patient Safety," DePaul Law Rev., vol. 54, 2005, pp. 485-502.
13. S.W.A. Dekker, Just Culture: Balancing Safety and Accountability, Ashgate, 2008.
14. J. Allen et al., "Towards Conversational Human-Computer Interaction," AI Magazine, vol. 22, no. 4, 2001, pp. 27-38.
15. J.M. Bradshaw et al., "Dimensions of Adjustable Autonomy and Mixed-Initiative Interaction," Agents and Computational Autonomy: Potential, Risks, and Solutions, M. Nickles, M. Rovatsos, and G. Weiss, eds., LNCS 2969, Springer, 2004, pp. 17-39.
16. B. Whitby, "Sometimes It's Hard to Be a Robot: A Call for Action on the Ethics of Abusing Artificial Agents," Interacting with Computers, vol. 20, no. 3, 2008, pp. 326-333.
17. D.D. Woods and N. Sarter, "Learning from Automation Surprises and 'Going Sour' Accidents," Cognitive Engineering in the Aviation Domain, N.B. Sarter and R. Amalberti, eds., Nat'l Aeronautics and Space Administration, 1998.
Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University. Contact her at murphy@cse.tamu.edu.

David D. Woods is a professor in the Human Systems Integration section of the Department of Integrated Systems Engineering at Ohio State University. Contact him at woods.2@osu.edu.