Jeffrey White
OIST
jeffrey.white@oist.jp
From his preliminary analysis in 1965, Hubert Dreyfus projected a future much
different from the one with which his contemporaries were practically
concerned, tempering their optimism about realizing something like human intelligence through
conventional methods. At that time, he advised that there was nothing "directly"
to be done toward machines with human-like intelligence, and that practical
research should aim at a symbiosis between human beings and computers with
computers doing what they do best, processing discrete symbols in formally
structured problem domains. Fast-forward five decades, and his emphasis on the
difference between two essential modes of processing, the unconscious yet
purposeful mode fundamental to situated human cognition, and the "minded"
sense of conscious processing characterizing symbolic reasoning that seems to
lend itself to explicit programming, continues into the famous Dreyfus-McDowell
debate. The present memorial reviews Dreyfus' early projections, asking if the
fears that punctuate current popular commentary on AI are warranted, and in
light of these if he would deliver similar practical advice to researchers today.
Acknowledgements: The author wishes to thank Karamjit Gill for this journal and
for his support, as well as for arranging anonymous review resulting in insightful
directions for significant improvements. This work is dedicated to Hubert
Dreyfus.
1. Introduction
This paper recollects insights central to Dreyfus’ (1965) assessment of the state
of artificial intelligence research and follows them through his more recent
(2007) and (2012) commentaries on the state of the art today. This retrospection
is then turned forward, asking whether we should work now from the same
understanding with which he began and which he reiterated over the five
decades since.1
1. For a more general review of Dreyfus’ work in philosophy of artificial intelligence, with some
attention for example to the issue of embodiment with which the present paper concludes, see Brey,
2001.
2. “It’s like claiming that the first monkey that climbed a tree was making progress towards flight to the
moon.” (cf. Dreyfus, 2007, page 1142)
intelligence” such that “adding a bit of complexity to what has already been done
with animats counts as progress toward humanoid intelligence” (2007, page
1142). In short, guilty of the first-step fallacy: after early success and then
unanticipated failure, they simply ran out of tree, confirming Dreyfus’ underlying
pattern.3
From the beginning of his career in philosophy of AI until the end of his life, the
core of Dreyfus’ early analysis remained effective. For example, in Dreyfus
(2012), he found his underlying pattern being pursued by Silicon Valley insiders
enthusiastically trading in the idea that rapid developments in AI will eventually
deliver human immortality, in stark contrast writing that there is “no sense” in
claiming progress to such an end because there remains “no reason to think that
we are making progress towards AI or, indeed, that AI is even possible” (2012,
page 97).
The next section reviews Dreyfus’ 1965 assay with a focus on what he took to be
the greatest hurdle to human-like AI in the notion of “fringe consciousness” and
its import to the broader critique that Dreyfus consistently brought to bear
thereafter. The third section traces this critique through the dynamical turn to
today and returns to Chalmers’ (2010) argument for the “singularity”. The fourth
section turns this review briefly forward, asking what direction his vision can
provide us, today.
Dreyfus’ 1965 assay develops themes that recur throughout his engagements
with contemporaries over the course of his career. It begins with a survey of the
then-current field, effectively performing a deep-learning exercise on the early
movements in AI research to isolate an “overall pattern” of activity: unwarranted
insider enthusiasm arising from early successes in four general areas of
research, each involving modes of information processing that he found
unique to human cognition and in principle inaccessible to machines. In the end,
his analysis recommends research directions in the short term, and speculates
on the machine that may be capable of human-like cognition in the long-term.
3. Dreyfus seemed most disappointed that “neither Dennett nor anyone connected with the project has
published an account of the failure and asked what mistaken assumptions underlay their absurd
optimism”, with Dennett pinning the stall-out on a “lack of graduate students” (2007, page 1141).
4. Of the four, he holds pattern recognition to be most fundamental, noting that “resolution of the
difficulties which have arrested development” in other fields “all presuppose success” in this one
(1965, page 14).
artificial agent must confront” (page 18), understood in terms of: insight
(distinguishing the essential from the non-essential), fringe consciousness (being
sensitive to salient situational information without explicit iteration), and
context-dependent ambiguity reduction (due to embedded agency within the
context of often implicit goals).5
5. Later, he bundles these three forms of information processing into what he calls “perspicuous
grouping”: the ability to perceive connections without explicitly counting them out, and to
recognize individual examples as “typical” of a class without explicit comparison with every other
given example (1965, pages 37-46).
6. As the chess player is “zeroing in” on opportune movements, he is effectively “unconsciously
counting out” the various alternatives (1965, page 53).
7. Structuring a problem “does not involve merely the given parts and their transformations” but “works
in conjunction with the material that is structurally relevant but is selected from past experience”
(Dreyfus, 1965, page 29). This insight bears in an interesting way on prospective agency through
backpropagation of perceived error as realized in contemporary dynamic systems models,
noteworthy in terms of the neurorobotics research mentioned in the fourth section of this paper.
8. He identifies a similar loop arising in the context of gameplay, as the significance of any given piece
is determined partly by the piece itself and partly by its position relative to other pieces, with all of
this significant only in the context of possible future moves as constrained by the structured goal
conditions of the game. For a heuristic chess program, for example, in order to escape such
circularity, the interdependence of such definitions requires that some be fixed, at which point its
limits “become a matter of principle rather than simply of practice” (1965, page 73).
The crux of this argument rests in the nature of fringe consciousness, that it is
situated, and that natural language is context dependent not due to the nature of
language, or to the laws of logic, but due to the nature of the language user and
logic programmer, him/herself. As Joseph Rouse has written:
Even simple words, such as “pen”, signify different things in different situations.
Understanding the significance of the usage of such simple terms requires that
one appreciate the context of their employment, and there is no way for an AI to
take up a place within such a context, as if it were the utterer, that allows it to
surmise the intended significance unless it is similarly situated.9 Situated in the context of
shared goals, insightful human beings avoid the circularity inherent in
interdependently defined terms and actions through “the short-cut processing
made possible by the fringes of consciousness and ambiguity tolerance.” (1965,
page 82) On the other hand, “digital machines have symbol-manipulating powers
superior to those of humans” and with this in mind, Dreyfus (1965) in the end
advises that “they should, so far as possible, take over the digital aspects of
information processing” (page 82) as complements to human cognition, with
research directed accordingly.
9. For an interesting use of this example, see Williams, Haugeland and Dreyfus (1974).
In order to delimit those areas in which research might focus while avoiding the
mistake of applying symbol-pushing methods to insoluble contexts, Dreyfus
(1965) presents four classes of problem space and characterizes them in terms
of the sorts of information processing that suit their solutions: associationistic,
simple formal, complex formal and non-formal.10 He forecasts the greatest
potential for AI research in the associationistic and simple formal domains, being
of the sort easily outsourced to computers as adjuncts to human cognition. Non-
formal and complex-formal domains present fringe-effect difficulties of different
sorts, with complex-formal type problems responsible for “most of the
misunderstandings and difficulties in the field” on Dreyfus’ early analysis, as they
invite but resist “exhaustive enumeration”, e.g. chess and go (page 79). These
difficulties arise when researchers apply methods successful in simple or
dimensionally reduced formal problem spaces to more complex ones, due to the
“infinity of facts” component of fringe consciousness reviewed above, indicative
of the inherent limitation in Cartesian AI set out above, and made most apparent
in non-formal contexts. Non-formal problems include “all our everyday activities
in indeterminate situations” including the use of natural language (page 77).
And, as treating complex-formal as associationistic problems is a failing strategy,
treating non-formal as complex-formal contexts is also “doomed to failure” (page
81) because fringe consciousness depends on being situated in a common, goal-
directed context that cannot be formally represented regardless of
computational power over externally determined discrete dimensions.
In terms of natural language for example, Dreyfus (1965) observes that “the only
way to make a computer which could understand and translate a natural
language is to program it to learn about the world.” (page 35) In order to learn a
language and to use it appropriately, the learner must share “at least some of the
goals and interests of the teacher, so that the activity at hand helps to determine
the meanings of the words used.” (page 37) This is exactly the sort of thing that a
digital computer is least fit to do, and why Dreyfus was not surprised that
10. Problems accessible to “associationistic” processes include term-replacement (“mechanical
dictionary”), simple memory (associating symbols with patterns), conditioned response (as a means
to producing these associations) and trial and error (maze navigation) type problems, all of which
are accessible to traditional symbol-pushing AI in the form of decision trees or list searches. The
second type is also fully accessible to traditional AI: “simple formal” problems are fully
governed by explicit rules defining contexts simple enough for meanings to be made completely
explicit independent of context, including games whose moves can be explicitly counted out,
theorem proofs involving searches for optimal applications of rules under explicit constraints, and
pattern recognition given that salient features are defined beforehand. The third type, “complex formal”
problems, are only partially accessible to traditional AI, including “uncomputable” games whose
decision trees cannot be brute-force counted out, such as chess and go, which require an
appreciation of global context in order to settle on which features are essential and which not before
deciding on further moves, and pattern recognition wherein regularities must first be discovered
before similarities are recognized. Finally, Dreyfus holds that the fourth type of problem is wholly
outside of machine comprehension on principle, consisting in “non-formal” problems with
solutions requiring context sensitivity and an ability to distinguish essential from nonessential
aspects in patterns which are distorted, the translation of natural language into metaphorical
contexts, and the ability to structure otherwise seemingly unstructured problems through what
Dreyfus calls “perceptive guess” (see the section headed “Areas of Intelligent Activity Classified
with Respect to the Possibility of Artificial Intelligence in Each”, beginning on page 75).
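The distinction drawn in this footnote, between game trees that can be exhaustively counted out and those that cannot, invites a concrete illustration. The following minimal Python sketch (the game and implementation are my own illustration, not Dreyfus’) explicitly counts out every line of play in tic-tac-toe by minimax search. This is feasible precisely because tic-tac-toe is a simple formal problem; the same exhaustive procedure applied to chess or go, Dreyfus’ complex-formal examples, is hopeless in practice.

```python
# Exhaustive minimax over the complete game tree of tic-tac-toe: the
# "explicit counting out" of moves that Dreyfus grants to digital machines
# in simple formal domains. Boards are 9-character strings of "X", "O", ".".
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` for X (+1 win, 0 draw, -1 loss), `player` to move."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0  # board full, no winner: a draw
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            values.append(minimax(nxt, "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

print(minimax("." * 9, "X"))  # -> 0: perfect play on both sides is a draw
```

The full enumeration evaluates the empty board to 0, establishing by brute counting, rather than by insight, that perfect play ends in a draw.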
researchers were so reliably coming up short. As for the resources dumped into
AI research to that point, Dreyfus (1965) answers that they were not “wasted …
if, instead of trying to hide our difficulties, we try to understand what they show”
(page 84) including that “the associationist assumption that all thinking can be
analyzed into discrete, determinate operations” (page 85) of the sort easily
programmed into a digital computer is wrong (cf. Kenaw, 2008, for an argument
to this effect in the context of rationalism). With such lessons learned, Dreyfus
didn’t see failures in AI as failures in the long run, rather focusing on the upshot
that they “suggest fascinating new areas for basic research” going forward,
“notably the development and programming of machines capable of global and
indeterminate forms of information processing” (page 85).
At that time, computers could not perform such processing, and until they could,
AI would remain impossible, which is why Dreyfus advised researchers to “think
in the short run of cooperation between men and digital computers, and only in
the long run of non-digital automata” able to exhibit “the three forms of
information processing essential in dealing with our informal world” (1965, page
85) As to how such machines may be developed in the future, Dreyfus (1965)
observes that only Claude Shannon had by that time recognized that “such
problems as pattern recognition, language translation, and so on, may require a
different type of computer than any we have today” with Shannon’s question
ultimately being “Can we design … a computer whose natural operation is in
terms of patterns, concepts, and vague similarities, rather than sequential
operations on ten-digit numbers?” (Shannon as quoted in Dreyfus, 1965, page 75).
This question clearly prefigures contemporary neural networks, the source of
current successes in AI research, and invites review of Dreyfus’ assessment of
such technologies in the next section.
As we push against the unknown, the margins of what we do know recede, with
early enthusiasm dimming as theory becomes impractical.11 This much is
trivially true, though not always obvious. In his 1965 analysis, Dreyfus makes
this limitation explicit, asking if we are “pushing out on a continuum like that of
velocity, such that progress becomes more and more difficult as we approach the
speed of light, or are we facing a discontinuity, like the tree-climbing man who
reaches for the moon?” (page 18) This confronts researchers with a dilemma,
and contrary to industry momentum, Dreyfus showed the courage to take the
right horn.
As we have seen, as early as 1965 Dreyfus saw that “human and mechanical
information processing proceed in entirely different ways” (1965, page 63) and
accordingly he anticipated discontinuity, faulting his contemporaries’
“associationist” presumptions about human information processing at that point
11. Alexander VonSchoenborn taught this lesson through the image of an ever-expanding sphere of light
around an increasingly bright candle. The greater the light, the larger the unlit fringe.
and for a long time afterwards: that it, or at least its defining features, may be
reduced to symbol manipulation. Instead, he pointed to the:
A relatively short time later, he was writing about work focused on skirting this
issue in terms of the “frame problem” at once finding many of his
contemporaries falling into the same “overall pattern” marring AI research since
the beginning. This section briefly updates our understanding of Dreyfus’
position in light of interceding advances, while the next section looks ahead
along the lines that the preceding plot lays out, at the possible advent of
embodied and potentially embedded humanoid robots and the fears that seem to
punctuate human cognizance of this potential.
12. By “resistor analogue”, Dreyfus is characterizing a machine that works as do human beings to infer
best explanations or to abduce optimal solutions, such as we see in contemporary machine learning,
Again, for Dreyfus the fundamental problem isn’t how many dimensions a
computer might employ to form a web of associations, but rather what it might
take for that machine to determine the relevance of these relations within a
given situation:
Why put up the shades? This is not a simple question after all, and on Dreyfus’
analysis what is lacking in a computer attempting to answer such a question is a
situation within a context of shared goals that is a given for human beings but
that confounds explicit programming. Without the insight that this situatedness
affords, there is no way to settle on how to frame a given problem space. As we
saw in the last section, even a limited situation invites an infinite number of
potentially relevant facts, any of which may be critical for determining an
appropriate next move or the significance of a given term, resulting in a loop of
interdependency that a machine may be unable to resolve, let alone resolve in
time to deliver appropriate action. Looking back on the frame problem thus,
Dreyfus wrote that:
These failures along with the advent of bottom-up learning and neural networks
brought researchers to focus on “behavior based robotics” and the idea that
externally derived discrete “representations” should be abandoned for systems
that use the world as their guiding models, instead. This shift from symbol-
pushing to situated robots signified a recognition of the discontinuity that
Dreyfus had foreseen three decades prior. However, Dreyfus still found these
systems limited to “fixed isolated features in the environment” rather than
“context” as a whole, and observed that their limitations seemed to “beg the
question of changing relevance” rather than to solve what had become
understood as “the frame problem” once and for all (2012, page 93).
This brings us back to Dreyfus’ overall pattern as introduced in the first section
of this paper in terms of Brooks and Dennett’s COG program, and to the first
13. Whereby AI becomes more intelligent than human beings, human beings are compelled
increasingly to integrate themselves with AI in order to keep up with technological progress, and
AI, once bent to the solution of humanity’s most pressing problems, achieves a capacity to deliver
humanity from them; cf. Kurzweil, 2005. For a technical argument as to why such an “explosion” of
“superintelligence” in self-modifying AI is unlikely, see Benthall (2017).
Since 2012, there have been notable advances beyond Deep Blue’s chess play. For
example, Google’s AlphaGo program bested the world’s finest human go players.
Dreyfus (1965) classified go within the category of “complex-formal” problems
for which he thought that computers may be tailored given success in
implementing just those facets of reasoning that he had identified as necessary
to the task – for one, overcoming the need for exhaustive enumeration of
possible moves given current positions. And AlphaGo has done just that through
the combination of heuristic search and deep learning in neural networks, a
technology that simulates human neural processes and that Dreyfus had
anticipated in terms of a “minimal path through a network”.
Most recently, in late October 2017, Google’s DeepMind team (three founders of
which, including a co-author of the current AlphaGo Zero announcement paper
in Nature, are signatories of the Future of Life Institute Open Letter on research
priorities to be discussed in the next section) announced an improvement
over their AlphaGo platform, AlphaGo Zero, which has been able to beat its
predecessor 100 games to 0 with zero (hence the name) human input, learning
solely through self-reinforcement without supervision (cf. Silver et al., 2017).
This is of course also an advance to be taken seriously, but Dreyfus’ concern that
such systems remain embedded in relatively limited contexts remains a sound
one. AlphaGo Zero exploits “fixed isolated features” within a very limited, strictly
formal structure and though complex relative to other board games, go remains
only that, far from the informal, context-shifting interactions characteristic of the
human condition.
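The point that self-reinforcement learning operates entirely within a fixed formal structure can be made concrete with a toy sketch. The following is emphatically not DeepMind’s method (which combines deep networks with Monte Carlo tree search) but a deliberately minimal analogue of my own devising: tabular self-play reinforcement learning on a tiny Nim variant. The learner receives no human input, yet every state, move, and reward it can ever encounter is fixed in advance by the game’s formal rules; nothing outside them exists for it.

```python
import random

# Tabular self-play reinforcement learning on a toy formal game: Nim with
# 5 stones, each player removes 1 or 2 per turn, and whoever takes the
# last stone wins. Like AlphaGo Zero, the learner improves with no human
# input; unlike an informal context, its whole world is fixed by the rules.
random.seed(0)
Q = {}  # (stones_remaining, action) -> estimated value for the player to move

def legal(stones):
    return [a for a in (1, 2) if a <= stones]

def choose(stones, eps):
    """Epsilon-greedy move selection over the learned values."""
    moves = legal(stones)
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=5000, eps=0.2, alpha=0.1):
    for _ in range(episodes):
        stones, history = 5, []
        while stones > 0:
            a = choose(stones, eps)
            history.append((stones, a))
            stones -= a
        # Whoever moved last took the last stone and wins (+1); the reward
        # alternates sign walking back through the two players' moves.
        reward = 1.0
        for state, action in reversed(history):
            key = (state, action)
            Q[key] = Q.get(key, 0.0) + alpha * (reward - Q.get(key, 0.0))
            reward = -reward

train()
print(choose(5, eps=0.0))  # the learned opening move from 5 stones
```

After training, the endgame values are unambiguous: taking both of the last two stones always wins, leaving one always loses. The learned opening play tends toward taking 2, leaving the opponent the losing count of 3; the fixed seed makes the run reproducible, though as a stochastic sketch the intermediate values vary with the seed.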
The trouble which Dreyfus (1965) zeroed-in on more than five decades ago, and
with which he wrestled for the rest of his life, ultimately comes down to
convincing enthusiasts of AI to redirect their efforts in the short term, until
understanding of the human condition meets with technologies up to expressing
this understanding in an artificial medium. Those determined otherwise he likened
to “alchemists” who, having refined “quicksilver” from dirt, worked for centuries
with the same methods hoping for gold. For fifty-two further years after this
earliest assay, Dreyfus consistently found his contemporaries vainly working to
Clearly, the stakes could not be higher. Recognizing the potential dangers of
machine-mediated global nuclear war, rather than allowing research to devolve
into an arms race (cf. Armstrong et al., 2016), Musk and many others including
Steve Wozniak, Wendell Wallach, Stephen Hawking, Ben Goertzel, Eliezer
Yudkowsky, Nick Bostrom, Geoffrey Hinton, Sam Harris and James Moore
supported a 2015 statement from the Future of Life Institute recommending a
set of research priorities “aimed at ensuring that AI remains robust and
beneficial” (Russell et al., 2015, page 105, see also Future of Life Institute, 2015).
Most recently, this past year the Future of Life Institute put forward “An Open
Letter To The United Nations Convention On Certain Conventional Weapons”
endorsed by over a hundred founders and CEOs of companies developing AI and
robotics imploring the High Contracting Parties to the Convention to “protect us
all” from the dangers of AI research being “repurposed to develop autonomous
weapons” (cf. Future of Life Institute, 2017).14 That we are on the way to such an
eventuality is further reinforced in the imagery employed by neuroscience
14. It is interesting to contrast these efforts with a related assessment of the “real danger of artificial
intelligence”: that increasing automation by way of semi-autonomous machines promises to
exacerbate rather than abate an historically unprecedented degree of wealth inequality (cf. Lee,
2017), which has proven historically to precede mass violence, social upheaval, and, in the 20th
century, world war, and in which “autonomous weapons” will surely now play a part.
student-cum-popular-author Sam Harris, an early signatory to the Future of Life
Institute’s “Open Letter” recommending research priorities for beneficial AI, as
he warns that “the rate of progress doesn’t matter, because any progress is
enough to get us into the end zone” and that “The train is already out of the station,
and there's no brake to pull.”15 (Harris, 2016, at 5:11 and 5:25, respectively)
Bringing Dreyfus’ early analysis to bear on the issue reveals two things. First of
all, there is the apparent replay of his “overall pattern” in the explicitly linear
thinking as exposed in Harris’ imagery, for example. One might have thought
similarly in 1965, that steady progress along established tracks would lead to
runaway artificial intelligence, but this has hardly been the case. So, to
paraphrase Dreyfus’ rebuttal of Chalmers’ enthusiasm for an imminent
“singularity”, there is no reason to project from current successes in board
games to the likelihood of a fully autonomous “demon” AI beyond human control,
runaway train analogies being beside the point.
15. Meanwhile, the simple-mindedness of such reasoning has not gone unrecognized by proponents of
current research vectors. For example, late in 2015 the Information Technology and Innovation
Foundation nominated the “loose coalition of scientists and luminaries who stirred fear and hysteria
in 2015 by raising alarms that artificial intelligence (AI) could spell doom for humanity” for its
annual Luddite Award, an honor that was finally awarded them in January of the next year (cf.
Atkinson, 2015, and ITIF, 2016).
Where Pereira and colleagues work from the level of evolved group and abduced
principles, and Sun and colleagues focus on bridging “macro” with “micro” levels
of social organization, as for ab initio development of an embodied
autonomous AI, research is ongoing at, for example, the Okinawa Institute of
Science and Technology, where Jun Tani’s neurorobotics group has to date
demonstrated rudimentary creative composition in relatively informal human-
mediated environments (cf. Tani, 2016; also White and Tani, 2017). Following
Dreyfus, this research may be promising if for no other reason than that it is
discontinuous with prior approaches, explicitly recognizing that situated
cognition is the necessary substrate for artificial intelligence (as he understood
it) rather than aiming to operate through brute-force computational power in
purely formal contexts instead. And on this note, his instruction is poignant. By
taking situations as primitive to cognition, grounding context-dependent goal
conditions therein rather than on “fixed isolated features” of pre-determined
problem spaces, we face “the question of changing relevance” once and for all
and open opportunities for insight into how we human beings are able to “deal
with the ill-structured data of daily life” at the same time.
16. Noting in 2001 that the individual’s cognizance of its situation within a social collective is
“emergent”, Sun remarked on research in “bridging” the two (an image also central to da Costa and
Pereira, 2015) that “It helps to unearth the social unconscious embedded into social institutions and
lodged inside individual minds”, with the “crucial step in establishing the micro–macro link …
between individual agents and society” becoming “how we should understand and characterize the
structures of the social unconscious and its ‘emergence’, computationally or otherwise” (Sun,
2001, page 2).
of which we are conscious. With the help of AI, we may simulate peaceful
alternative futures and plot the transitions between them, thereby enabling our
collective self-determination past extinction-level human failures including
machine-mediated global nuclear war (cf. White, 2016). On this view, there is
nothing to fear of a future rich with properly articulated AI of the sort to which
Dreyfus pointed, and much to fear without.
5. Conclusion
In summary, there is no reason to expect that the same approach which delivers
automated killing game-machines should result in anything like human
intelligence, what Dreyfus understood to be artificial intelligence, and every
reason to expect that it cannot. For example, though it may learn on its own how
to play the game, AlphaGo Zero optimizes interim results within a fixed formal
context and nothing else. Though it may in principle be exposed to other limited-
context, purely formal task environments and be expected to perform similarly
(in an update on the use of tic-tac-toe to illustrate the futility of global nuclear
war, by turning war into tic-tac-toe or, for AlphaGo Zero, into chess, if this be the
goal of researchers), it lacks the insight to delineate the essential from the
inessential dimensions punctuating even relatively simple informal contexts;
e.g. it will not be able to tell us why Professor Dreyfus had opened his blinds when he
did, just as it will not be able to advise either why it is best to repurpose
humankind’s greatest intellectual achievements to destroy a good part of the
natural world in order to retain one political economy over another, or why it is
best not to invest in the option from the beginning. The reasons for this
limitation are the same as those to which Dreyfus had been calling our attention
since at least 1965: such machines remain unable to structure a given
problem space around an embodied situation, the condition of which is the
fundamental concern of human beings. Absent this concern, machines remain
unable to advise on which actions may be best, now.
What would it take to deliver such a capacity? In the second section of this paper,
we mentioned that Dreyfus asks why it is that at a given point, once we become
aware of objects at the fringe of consciousness, through insight into the global
situation, conscious modes of cognition take effect. We begin to reason about
these objects. Dreyfus notes that for a human being “conscious counting begins
when he has to refine this global process in order to deal with details” and with
this asks “what kind of program could convert this unconscious counting into the
kind of fringe-influenced awareness of the centers of interest, which is the way
zeroing-in presents itself in our experience?” (1965, page 23) Drawing on the
preceding discussion, we may now answer: the kind of program that shares in
the needs and aims of others like ourselves so that it is sensitive to changes in
context that bring common goals closer or push them further away, ideally one that
shares our situation as well as our visceral concern for the way that this situation
turns out.
weapons system, or perhaps the weapons system engineer for that matter, but it
was important to Dreyfus. Writing on Heidegger and the role of technology in
terms of revolutionary times under the threat of global imperialism and
mechanized warfare, Dreyfus answers Heidegger’s plea that “only a god can save
us, now” with a more tangible solution, that we reinvent instead the “cultural
paradigm” in terms of which discoveries are made and worlds created:
Again, we should emphasize that these cues are not of the sort amenable to
explicit enumeration and programmable search (cf. page 31) but are rather
objects of human insight that arise at the fringes of consciousness, here in the
form of “new meaningful directions”.
So in the end, how might Hubert Dreyfus advise current researchers in AI? We
should employ technologies to enhance access to fringe consciousness, so that
we can mine culture and tradition for meaningful directions forward. We should
do what we do best and allow machines to do what they do best. We may also
benefit from Dreyfus’ observation that the salvation of what he called
“Heideggerian AI” may come from the deeper appreciation of Heideggerian
philosophy, specifically that of authenticity as applied not only to model
architectures, but to the researchers themselves who must first understand their
authentic condition before articulating similar in artifacts. Until we are ready to
reconfigure our industry into a form that encourages challenges of paradigm and
what may be called “authentic inquiry” of the sort that Dreyfus also exemplified
(a ‘return to the situation, itself’, so to speak), it seems that we may well see his
overall pattern repeated again in the near future and at a grand scale, beginning
with the failure of Moore’s law as the capacity to zero-in on what is important for
ongoing progress comes at increasingly higher human costs.17
Finally, we conclude not with where Dreyfus left off, but rather with where our
story began, on the cusp of discontinuity and running out of tree. This time,
however, we are benefitted by his example, and can feel encouraged to recast the
future through authentic inquiry going forward.
Works Consulted:
Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a
model of artificial intelligence development. AI & Society, 31, 2, 201-206.
17. This is the thesis driving Bloom et al. (2017), for example.
Bloom, N., Jones, C. I., Reenen, J. V., & Webb, M. (2017, September). Are Ideas
Getting Harder to Find? National Bureau of Economic Research. Retrieved
November 25, 2017, from http://www.nber.org/papers/w23782
da Costa Cardoso, F., & Pereira, L.M. (2015) The emergence of artificial
autonomy: a view from the foothills of a challenging. In White, J., & Searle, R.
(eds.) Rethinking machine ethics in the age of ubiquitous technology. Hauppage:
IGI Global.
Dreyfus, H. L., & Rand Corp Santa Monica Calif. (1965). Alchemy and Artificial
Intelligence. Ft. Belvoir: Defense Technical Information Center. Retrieved
November 25, 2017, from
https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf
Dreyfus, H. L. (2012). A History of First Step Fallacies. Minds and Machines, 22, 2,
87-99.
Future of Life Institute. (2015). AI Open Letter. Retrieved November 25, 2017,
from https://futureoflife.org/ai-open-letter/
Future of Life Institute. (2017). An Open Letter to the United Nations Convention
on Certain Conventional Weapons. Retrieved November 26, 2017, from
https://futureoflife.org/autonomous-weapons-open-letter-2017
Harris, S. (2016). Can we build AI without losing control of it? TED Summit, June
2016. Retrieved November 25, 2017, from
https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control
_over_it/transcript
ITIF. (2016). Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award.
Retrieved November 25, 2017, from
https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-
itif’s-annual-luddite-award
Kurzweil, R. 2005. The Singularity Is Near: When Humans Transcend Biology. New
York, NY: Viking.
Lee, K. (2017, June 24). The Real Threat of Artificial Intelligence. Retrieved
November 25, 2017, from
https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-
economic-inequality.html
Musk, E. (2017, September 4). May be initiated not by the country leaders, but
one of the AI's, if it decides that a preemptive strike is most probable path to
victory. Retrieved November 25, 2017, from
https://twitter.com/elonmusk/status/904639405440323585
Putin, V. (2017, September 1). 'Whoever leads in AI will rule the world': Putin to
Russian children on Knowledge Day. Retrieved November 25, 2017, from
https://www.rt.com/news/401731-ai-rule-world-putin/
Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and
Beneficial Artificial Intelligence. AI Magazine, 36, 4, 105-114.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G.,
Schrittwieser, J., ... Sutskever, I. (2016). Mastering the game of Go with deep
neural networks and tree search. Nature, 529, 7587, 484-489.
Sun, R. (2001). Individual action and collective function: From sociology to multi-
agent learning. Cognitive Systems Research, 2, 1, 1-3.
Sun, R., Wilson, N., & Lynch, M. (2016). Emotion: A Unified Mechanistic
Interpretation from a Cognitive Architecture. Cognitive Computation, 8, 1, 1-14.
Williams, B., Haugeland, J., & Dreyfus, H. (1974, June 27). An Exchange on
Artificial Intelligence. Retrieved November 25, 2017, from
http://www.nybooks.com/articles/1974/06/27/an-exchange-on-artificial-
intelligence