
Dreyfus on the “Fringe”: information processing, intelligent activity, and
the future of thinking machines

Jeffrey White
OIST
jeffrey.white@oist.jp

From his preliminary analysis in 1965, Hubert Dreyfus projected a future much
different from those with which his contemporaries were practically concerned,
tempering their optimism about realizing something like human intelligence
through conventional methods. At that time, he advised that there was nothing "directly"
to be done toward machines with human-like intelligence, and that practical
research should aim at a symbiosis between human beings and computers with
computers doing what they do best, processing discrete symbols in formally
structured problem domains. Fast-forward five decades, and his emphasis on the
difference between two essential modes of processing, the unconscious yet
purposeful mode fundamental to situated human cognition, and the "minded"
sense of conscious processing characterizing symbolic reasoning that seems to
lend itself to explicit programming, continues into the famous Dreyfus-McDowell
debate. The present memorial reviews Dreyfus' early projections, asking if the
fears that punctuate current popular commentary on AI are warranted, and in
light of these if he would deliver similar practical advice to researchers today.

Keywords: Hubert Dreyfus, fringe consciousness, future of AI, artificial intelligence

Acknowledgements: The author wishes to thank Karamjit Gill for this journal and
for his support, as well as for arranging anonymous review resulting in insightful
directions for significant improvements. This work is dedicated to Hubert
Dreyfus.

1. Introduction

This paper recollects insights central to Dreyfus’ (1965) assessment of the state
of artificial intelligence research and follows them through his more recent
(2007) and (2012) commentaries on the state of the art today. This retrospection
is then turned forward, asking if we should work today from the same
understanding with which he began and which he reiterated over the five
decades since.1

1. For a more general review of Dreyfus’ work in philosophy of artificial intelligence, with some attention for example to the issue of embodiment with which the present paper concludes, see Brey, 2001.

In Dreyfus’ earliest (1965) analysis, contracted by the RAND Corporation,
published with some controversy under the title “Alchemy and Artificial Intelligence”,
which became the heart of What Computers Can’t Do (1972) and What
Computers Still Can’t Do (1992), he was surprised that no one had yet “taken
stock” of the status of AI research by that point (1965, page 18). Instead, he
found researchers proceeding according to overly optimistic expectations of
existing methods, resulting in an “overall pattern” of early successes, later
failures and halted progress met primarily with dogmatic pursuit along the same
lines of inquiry regardless. He accounted for this tendency thus: “When the
situation looks grim … enthusiasts can always fall back on their own optimism”
as they “substitute long-range for [near-term] operational claims” (1965, page
16). Today, are we guilty of the same?

At root, Dreyfus found researchers operating according to a specious definition
of “progress” as “displacement toward the ultimate goal” (1965, page 17), as if it were a
physics problem in linear acceleration, expressed nearly fifty years later in the
following presumption:

… ever since our first work on computer intelligence we
have been inching along a continuum at the end of which is
AI so that any improvement in our programs no matter
how trivial counts as progress. (Dreyfus, 2012, page 99)

This attitude became a central point in his ongoing criticism of AI research; in
“Alchemy and Artificial Intelligence” he writes that “According to this definition the
first man to climb a tree could claim tangible progress toward flight to the
moon.” (1965, page 17) This image - to which he often returned - he also
credited to his brother, Stuart, who cast the same around a monkey rather than a
man2, but with its lesson the same: The moon may appear closer from the top of a
tree, but tree-climbing is not going to get us there no matter how good we get at
it.

Variations on this theme recur in Dreyfus’ appraisals of contemporaries over the
course of his long career. In other contexts, he found researchers operating on a
similarly flawed understanding of progress guilty of the “first step fallacy”, a
term that he credited to Yehoshua Bar-Hillel (cf. 2007, page 1142). For example,
after noting the “modesty” of Rodney Brooks’ early success with relatively
simple, biologically inspired, behavior-based robotics, he judged of Brooks and
Daniel Dennett’s ill-fated COG program, employing the same methods in the
development of a humanoid robot, that they were “repeating the extravagant
optimism characteristic of AI researchers in the sixties” as they “decided to leap
ahead and build a humanoid robot.” (2007, page 1141) In a permutation of
Stuart’s treed monkey, Dreyfus figured that their mistake was to operate on “the
implicit assumption that human intelligence is on a continuum with insect
intelligence” such that “adding a bit of complexity to what has already been done
with animats counts as progress toward humanoid intelligence” (2007, page
1142). In short, guilty of the first-step fallacy, after early success then
unanticipated failure they simply ran out of tree, confirming Dreyfus’ underlying
pattern.3

2. “It’s like claiming that the first monkey that climbed a tree was making progress towards flight to the moon.” (cf. Dreyfus, 2007, page 1142)

From the beginning of his career in philosophy of AI until the end of his life, the
core of Dreyfus’ early analysis remained effective. For example, in Dreyfus
(2012), he found his underlying pattern repeated among Silicon Valley insiders
enthusiastically trading in the idea that rapid developments in AI will eventually
deliver human immortality, writing in stark contrast that there is “no sense” in
claiming progress to such an end because there remains “no reason to think that
we are making progress towards AI or, indeed, that AI is even possible” (2012,
page 97).

The next section reviews Dreyfus’ 1965 assay with a focus on what he took to be
the greatest hurdle to human-like AI in the notion of “fringe consciousness” and
its import to the broader critique that Dreyfus consistently brought to bear
thereafter. The third section traces this critique through the dynamical turn to
today and takes up Chalmers’ (2010) argument for the “singularity”. The fourth
section turns this review briefly forward, asking what direction his vision can
provide us, today.

2. Origin story

Dreyfus’ 1965 assay develops themes that recur throughout his engagements
with contemporaries over the course of his career. It begins with a survey of the
then-current field, effectively performing a deep-learning exercise on the early
movements in AI research to isolate an “overall pattern” of activity involving
unwarranted insider enthusiasm arising from early successes in four general
areas of research each involving modes of information processing that he found
unique to human cognition and in principle inaccessible to machines. In the end,
his analysis recommends research directions in the short term, and speculates
on the machine that may be capable of human-like cognition in the long term.

In four research areas - game playing, language translation, problem solving and
pattern recognition - Dreyfus (1965) documents early “dramatic” (and
dramatized) successes involving “the easy performance of simple tasks, or low-
quality work on complex tasks” followed by “diminishing returns,
disenchantment, and, in some cases, pessimism.”4 (page 16) Dreyfus holds that
each of the four areas involves a “specific form of human information processing
which enables human subjects in that area to avoid the difficulties which an
artificial agent must confront” (page 18), understood in terms of: insight
(distinguishing the essential from the non-essential), fringe consciousness (being
sensitive to salient situational information without explicit iteration), and
context-dependent ambiguity reduction (due to embedded agency within the
context of often implicit goals).5

3. Dreyfus seemed most disappointed that “neither Dennett nor anyone connected with the project has published an account of the failure and asked what mistaken assumptions underlay their absurd optimism”, with Dennett pinning the stall-out on a “lack of graduate students” (2007, page 1141).

4. Of the four, he holds pattern recognition to be most fundamental, noting that “resolution of the difficulties which have arrested development” in other fields “all presuppose success” in this one (1965, page 14).

In principle, Dreyfus understood machines to be unable to express these
uniquely human modes of cognition, writing that “Machines are perfect
Cartesians” (1965, page 66), dealing only in clear and distinct ideas. This
limitation moreover poses “a boundary to possible progress” toward intelligent
machines due to the “indispensable” nature of the “uniquely human forms of
information processing” which “alone provide access to information inaccessible
to a mechanical system.” (page 65) With this in mind, Dreyfus (1965) pressed the
question of how machines may be able to “deal with the ill-structured data of
daily life?” (page 66) - a question to which we will return in section 4. First, let’s
look more closely at how these indispensable modes of human information
processing presented themselves to Dreyfus’ (1965) analysis of common
problem domains.

Throughout his career, Dreyfus distinguished between two types of information
processing: the explicit, discrete symbol manipulation characteristic of
computational machines, and the implicit, context-dependent and ambiguity
tolerant “global” awareness characteristic of embodied and embedded human
beings. It is in the context of this distinction that Dreyfus (1965) locates the
“overall pattern” into which so many researchers seem unwittingly to fall. Early
successes involved formal systems manipulating distinct ideas in limited
domains; failures involved the “exponential growth” of these discrete dimensions and
the need for their explicit monitoring, which rapidly becomes impossible to
compute (cf. page 24). Humans on his assay “avoid” these problems by “avoiding
the discrete information processing techniques from which these difficulties
arise” (page 48) through what he – following William James – called “fringe
consciousness” (cf. page 21). In the context of game play research for example,
Dreyfus observed that chess involves these two distinct modes of reasoning,
implicit “zeroing in on an area formerly on the fringes of consciousness, which
other areas still on the fringes of consciousness make interesting; and counting
out explicit alternatives.”6 (original emphasis, pages 23-24) Researchers get into
trouble when they take the latter to be sufficient for the former; research stalls
and enthusiasm wanes in demonstration of Dreyfus’ “overall pattern”.
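What “counting out explicit alternatives” amounts to in a game program can be made concrete. The following is a minimal sketch - not any historical chess program, but an invented toy tree in Python - of minimax search with alpha-beta “tree-pruning”, in which every alternative considered must first be enumerated as an explicit, discrete evaluation:

```python
# Minimax with alpha-beta pruning over a hand-built game tree.
# The tree and its leaf scores are invented for illustration.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: an explicit evaluation
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:                   # iterate alternatives explicitly
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if beta <= alpha:                # prune: branches that cannot
            break                        # matter are never counted out
    return best

# Each nested list is a choice point; numbers are terminal evaluations.
tree = [[3, [5, 1]], [[6, 2], 9], [2, [4, [7, 8]]]]
print(alphabeta(tree, maximizing=True))  # -> 6
```

Note that even the pruning rule operates by the same explicit bookkeeping over enumerated nodes; nothing in the procedure stands on the fringe. This is the mode of processing that Dreyfus contrasts with implicit zeroing-in.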

Fringe consciousness contrasts with heuristic search as implemented in game-
playing programs, with chess on this early assay being one such space in which
heuristics failed to isolate good moves through simple “tree-pruning”, and in the
context of which fringe consciousness facilitates the identification of opportune
strategies before moves are ever explicitly counted out. It is in the context of
such “zeroing-in” on opportune moves that Dreyfus originally credits James with
the notion “fringes of consciousness”, with examples including “the ticking of a
clock which we notice only if it stops” and “the vague awareness of faces in a
crowd as we search for a friend.” (1965, page 21) Another way in which the
pervasive sense of “fringe” or “marginal” consciousness presents is in the sense
that the façade of a familiar house implies the depth of the home behind it.
Dreyfus contended that human beings see the opportune moves on a chess board
similarly, as the implied situation behind the thin façade that is the momentary
placement of pieces, with this sort of reasoning “implicit” in direct contrast with
heuristic search and decision-tree pruning through which different courses of
action are explicitly iterated. And, it was because brute-force computation was
not able to sufficiently map explicit alternatives that formal chess programs
failed to render expert-level chess at that time.

5. Later, he bundles these three forms of information processing into what he calls “perspicuous grouping”: the ability to perceive connections without explicitly counting them out, and to recognize individual examples as “typical” of a class without explicit comparison of every other given example (1965, pages 37-46).

6. As the chess player is “zeroing in” on opportune movements, he is effectively “unconsciously counting out” the various alternatives (1965, page 53).

As with game-play, a similar limitation presents in the case of problem solving. In
this context, Dreyfus (1965) allows that any problem can in principle be solved
by a digital computer “provided the data and the rules of transformation are
explicit”, but this requires that the problem be structured for the machine
beforehand (page 70), in order to avoid an infinite counting out of possibly
salient dimensions and an “infinite regress” of higher and higher levels of
operations required to judge preceding results (page 71). Here again on the
fringes of consciousness, Dreyfus suggests that “only the uniquely human form of
information processing which uses the indeterminate sense of dissatisfaction”
(page 71) avoids this regress.

Completely determined problems with solutions following from explicitly given
rules are open to direct solution, but these sorts of problems are rare in human
experience. For Dreyfus, “the most important aspect of problem solving
behavior” involves “a grasp of the essential structure of the problem” or “insight”
into the “deeper structure” that “enables one to organize the steps necessary for
a solution”7 (attributed to Wertheimer, page 27). Even relatively well-structured
problems like “playing the horses” – betting that one horse will complete a race
before others - involve a potentially infinite number of pertinent dimensions.
Though we may restrict calculations to obviously important dimensions like a
horse’s age, its jockey’s weight, and so on, our success is not guaranteed, as there
remain less obvious dimensions, like “whether the horse is allergic to goldenrod
or whether the jockey has just had a fight with the owner” which might be the
deciding factors in any given race for example (1965, page 68). If a machine were
to explicitly count out each of these potentially influential dimensions, then “it
could never complete the calculations necessary to predict the outcome” (page
68). At the same time, the human ability to “retain this infinity of facts on the
fringes of consciousness”, being “a uniquely human form of information
processing not amenable to mechanical search techniques”, allows human beings
access to the open-ended information characteristic of everyday experience
(page 68) without requiring that this inexhaustible list be made explicit.

7. Structuring a problem “does not involve merely the given parts and their transformations” but “works in conjunction with the material that is structurally relevant but is selected from past experience.” (Dreyfus, 1965, page 29) This insight bears on prospective agency through backpropagation of perceived error as realized in contemporary dynamic systems models in an interesting way, noteworthy in terms of the neurorobotics research mentioned in the fourth section of this paper.

In summary, game-playing and problem-solving research reveal that fringe-
consciousness of contextual information potentially relevant in the solution to a
problem, and insight into the essential aspects of that problem, are necessary
because they structure the problem’s solution. Dreyfus writes that “all searching,
unless directed by a preliminary structuring of the problem, is merely a blind
muddling through” (page 30), and notes that it is “precisely” this “function of
intelligence which resists further progress in the problem-solving field” (1965,
page 29). At the same time, in the area of language translation, Dreyfus (1965)
finds research also failing due to a related “non-programmable form of
information processing” - “ambiguity tolerance” (page 30).

Without fringe consciousness and insight into intended meaning, language
programs must explicitly and exhaustively define terms relative to other terms,
but even this requires that the language user be able to tolerate ambiguity in the
use of terms even as their definitions are refined. Dreyfus (1965) points to the
logical circularity inherent in attempting to ground any single term relative to
others, as the summed relations determine the context in conjunction and not
alone - “every word must be made determinate before any word can be made
determinate” (page 72). And, he observes that neither word order nor context of
usage is sufficient alongside a “mechanical dictionary” in order to determine
“which of several possible meanings” that a given word usage may imply (page
31). Finally, he describes the human capacity for natural language as a product of
the “fringe effect”. On Dreyfus’ account, the “user of a natural language is not
aware of many of the cues to which he responds in determining the intended
syntax and meaning”, at the same time emphasizing that these cues are not of the
sort amenable to explicit enumeration and thus programmable search (cf. page
31). Rather, objects of reference are objects of human insight held at the fringes
of consciousness until context resolves latent ambiguity and optimal
interpretations present themselves.8 Without insight into fringe consciousness
and tolerance for ambiguity until intentions are resolved, computation continues
perhaps ad infinitum as explicit meanings are fixed, and language - as well as any
other form of action - becomes practically impossible. Meanwhile, the ambiguity-
tolerant human being avoids such endless loops as he/she is able to, in the
context of game-play for example:

… define “danger” as precisely as necessary to make
whatever decision the situation requires, without at the
same time being obliged to try to eliminate all possible
ambiguity. The human player never need make the whole
context explicit in working out any particular move. (page 74)

8. He identifies a similar loop arising in the context of gameplay, as the significance of any given piece is partly determined by that piece itself and partly by its position relative to other pieces, with all of this only significant in the context of possible future moves as constrained by the structured goal conditions of the game. For a heuristic chess program, for example, in order to escape such circularity, the interdependence of such definitions requires that some be fixed, at which point its limits “become a matter of principle rather than simply of practice” (1965, page 73).

Without a capacity for ambiguity tolerance, Dreyfus follows Bar-Hillel in
estimating that high-quality machine translation of natural language is
unattainable “not only in the near future but altogether” (Dreyfus, 1965, page
32). And, he points to Wittgenstein’s “language game” to emphasize that natural
language, like children’s games, is not completely rule-bound but is
moderated by participant judgment on the fly, and that it is in the context of
implicit “extra-linguistic goals” that strict significations for any given term or
action arise. Language is not used in a strictly formal way, but as a tool to bring
about certain situations to the satisfaction of the members of the language
community who reside in them. “Fringe consciousness” of these shared goals is
sufficient to “reduce ambiguity” without removing it completely, and further
allows the insightful participants in their interactions “to ignore as irrelevant
certain possible parsings of sentences and meanings of words which would be
included in the output of a machine.” (page 34)

The crux of this argument rests in the nature of fringe consciousness, that it is
situated, and that natural language is context dependent not due to the nature of
language, or to the laws of logic, but due to the nature of the language user and
logic programmer, him/herself. As Joseph Rouse has written:

Philosophers tend to see formal relations as a framework
around which the bodily and socially interactive aspects of
discursive practice are built. We should instead understand
these powerful expressive resources as abstracted from
and presupposing immersion in a natural language as an
integral part of the world we inhabit. (in Shear, 2013, page
262).

Even simple words, such as “pen”, signify different things in different situations.
Understanding the significance of the usage of such simple terms requires that
one appreciate the context of its employment, and there is no way for an AI to
take up a place within such a context, as the utterer does, allowing it to surmise
the intended significance, unless it is similarly situated.9 Situated in the context of
shared goals, insightful human beings avoid the circularity inherent in
interdependently defined terms and actions through “the short-cut processing
made possible by the fringes of consciousness and ambiguity tolerance.” (1965,
page 82) On the other hand, “digital machines have symbol-manipulating powers
superior to those of humans” and with this in mind, Dreyfus (1965) in the end
advises that “they should, so far as possible, take over the digital aspects of
information processing” (page 82) as complements to human cognition, with
research directed accordingly.

9. For an interesting use of this example, see Williams, Haugeland and Dreyfus (1974).

In order to delimit those areas in which research might focus while avoiding the
mistake of applying symbol-pushing methods to insoluble contexts, Dreyfus
(1965) presents four classes of problem space and characterizes them in terms
of the sorts of information processing that suit their solutions: associationistic,
simple formal, complex formal and non-formal.10 He forecasts the greatest
potential for AI research in the associationistic and simple formal domains, being
of the sort easily outsourced to computers as adjuncts to human cognition. Non-
formal and complex-formal domains present fringe-effect difficulties of different
sorts, with complex-formal type problems responsible for “most of the
misunderstandings and difficulties in the field” on Dreyfus’ early analysis, as they
invite but resist “exhaustive enumeration”, e.g. chess and go (page 79). These
difficulties arise when researchers apply methods successful in simple or
dimensionally reduced formal problem spaces to more complex ones, due to the
“infinity of facts” component of fringe consciousness reviewed above, indicative
of the inherent limitation in Cartesian AI set out above, and made most apparent
in non-formal contexts. Non-formal problems include “all our everyday activities
in indeterminate situations” including the use of natural language (page 77).
And, as treating complex-formal as associationistic problems is a failing strategy,
treating non-formal as complex-formal contexts is also “doomed to failure” (page
81) because fringe consciousness depends on being situated in a common, goal-
directed context that cannot be formally represented regardless of
computational power over externally determined discrete dimensions.

10. Problems accessible to “associationistic” processes include term-replacement (“mechanical dictionary”), simple memory (associating symbols with patterns), conditioned response (as a means to producing these associations) and trial-and-error (maze navigation) type problems, all of which are accessible to traditional symbol-pushing AI in the form of decision trees or list searches. The second type is also fully accessible to traditional AI. “Simple formal” type problems are fully governed by explicit rules defining contexts simple enough for meanings to be made completely explicit independent of context, including games whose moves can be explicitly counted out, theorem proofs involving searches for optimal applications of rules under explicit constraints, and pattern recognition given salient features defined beforehand. The third type, “complex formal” problems, are only partially accessible to traditional AI, including “uncomputable” games whose decision trees cannot be brute-force counted out, including chess and go, which require an appreciation of global context in order to settle on which features are essential and which are not before deciding on further moves, and pattern recognition wherein regularities must first be discovered before similarities are recognized. Finally, Dreyfus holds that the fourth type of problem is wholly outside of machine comprehension on principle, consisting in “non-formal” problems with solutions requiring context sensitivity and an ability to distinguish essential from nonessential aspects in patterns which are distorted, the translation of natural language into metaphorical contexts, and the ability to structure otherwise seemingly unstructured problems through what Dreyfus calls “perceptive guess” (see the section headed “Areas of Intelligent Activity Classified with Respect to the Possibility of Artificial Intelligence in Each” beginning on page 75).

In terms of natural language for example, Dreyfus (1965) observes that “the only
way to make a computer which could understand and translate a natural
language is to program it to learn about the world.” (page 35) In order to learn a
language and to use it appropriately, the learner must share “at least some of the
goals and interests of the teacher, so that the activity at hand helps to determine
the meanings of the words used.” (page 37) This is exactly the sort of thing that a
digital computer is least fit to do, and why Dreyfus was not surprised that
researchers were so reliably coming up short. As for the resources dumped into
AI research to that point, Dreyfus (1965) answers that they were not “wasted …
if, instead of trying to hide our difficulties, we try to understand what they show”
(page 84) including that “the associationist assumption that all thinking can be
analyzed into discrete, determinate operations” (page 85) of the sort easily
programmed into a digital computer is wrong (c.f. Kenaw, 2008, for an argument
to this effect in the context of rationalism). With such lessons learned, Dreyfus
didn’t see failures in AI as failures in the long run, rather focusing on the upshot
that they “suggest fascinating new areas for basic research” going forward,
“notably the development and programming of machines capable of global and
indeterminate forms of information processing” (page 85).

At that time, computers could not perform such processing, and until they could
AI would remain impossible, which is why Dreyfus advised researchers to “think
in the short run of cooperation between men and digital computers, and only in
the long run of non-digital automata” able to exhibit “the three forms of
information processing essential in dealing with our informal world” (1965, page
85) As to how such machines may be developed in the future, Dreyfus (1965)
observes that only Claude Shannon had by that time recognized that “such
problems as pattern recognition, language translation, and so on, may require a
different type of computer than any we have today” with Shannon’s question
ultimately being “Can we design … a computer whose natural operation is in
terms of patterns, concepts, and vague similarities, rather than sequential
operations on ten-digit numbers?” (Shannon as quoted in Dreyfus, 1965, page 75)
This question clearly prefigures contemporary neural networks, the source of
current successes in AI research, and invites review of Dreyfus’ assessment of
such technologies in the next section.

3. The dynamical turn

As we push against the unknown, the margins of what we do know recede, with
early enthusiasm dimming as theory becomes impractical.11 This much is
trivially true, though not always obvious. In his 1965 analysis, Dreyfus makes
this limitation explicit, asking if we are “pushing out on a continuum like that of
velocity, such that progress becomes more and more difficult as we approach the
speed of light, or are we facing a discontinuity, like the tree-climbing man who
reaches for the moon?” (page 18) This confronts researchers with a dilemma,
and contrary to industry momentum, Dreyfus showed the courage to take the
right horn.

11. Alexander VonSchoenborn taught this lesson through the image of an ever-expanding sphere of light around an increasingly bright candle. The greater the light, the larger the unlit fringe.

As we have seen, as early as 1965 Dreyfus saw that “human and mechanical
information processing proceed in entirely different ways” (1965, page 63) and
accordingly he anticipated discontinuity, faulting his contemporaries’
“associationist” presumptions about human information processing at that point
and for a long time afterwards - that it, or at least its defining features, may be
reduced to symbol manipulation. Instead, he pointed to the:

… possibility that the brain might process information in
an entirely different way than a computer – that
information might, for example, be processed globally the
way a resistor analogue solves the problem of the minimal
path through a network.12 (1965, page 47)

12. By “resistor analogue”, Dreyfus is characterizing a machine that works as do human beings to infer best explanations or to abduce optimal solutions, such as we see in contemporary machine learning.

At that time, he suggested that a radical departure from available technology -
“wet” as opposed to the “dry digital computer” - may be necessary in order to
produce an analogue of a human brain able to “simulate intelligent behavior”
(1965, page 59). But, he also allowed more generally that the same processes
may be formalized in mathematical terms manipulable by a digital computer wet
or dry (cf. page 61). He wrote that, for any type of information, “a digital
computer can in principle be programmed to simulate a device which can
process that type of information.” (page 62) Still, such a simulation would be
limited in ways reviewed in the last section:

At best, research in artificial intelligence can write
programs which allow the digital machine to approximate,
by means of discrete operations, the results which human
beings achieve by avoiding rather than resolving the
difficulties inherent in discrete techniques. (page 63)
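Dreyfus’ allowance here - that a digital machine can approximate by discrete operations what an analogue settles globally - is easy to picture against his “resistor analogue” above. A minimal sketch, assuming only standard Python and an invented network: node potentials are relaxed synchronously until the system settles, a discrete stand-in for the parallel equilibration a physical resistor network performs, with minimal path costs “emerging” everywhere at once:

```python
# Discrete approximation of "global" processing: relax every node's
# potential in parallel until nothing changes, so that minimal path
# costs emerge from equilibrium rather than from an explicit
# enumeration of candidate paths. Graph and weights are invented.
INF = float("inf")

edges = {  # undirected weighted network: node -> [(neighbor, cost)]
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}

def settle(source):
    """Synchronously relax node potentials until the network settles."""
    potential = {n: (0 if n == source else INF) for n in edges}
    changed = True
    while changed:
        changed = False
        update = {}
        for node, neighbors in edges.items():
            best = min([potential[node]] +
                       [potential[m] + w for m, w in neighbors])
            update[node] = best
            if best < potential[node]:
                changed = True
        potential = update
    return potential

print(settle("A"))  # -> {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

As Dreyfus notes in the passage above, the digital version still only approximates, by iterated discrete operations, what the analogue achieves by settling all at once.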

A relatively short time later, he was writing about work focused on skirting this
issue in terms of the “frame problem”, at once finding many of his
contemporaries falling into the same “overall pattern” that has marred AI research since
the beginning. This section briefly updates our understanding of Dreyfus’
position in light of interceding advances, while the next section looks ahead
along the lines that the preceding plot lays out, at the possible advent of
embodied and potentially embedded humanoid robots and the fears that seem to
punctuate human cognizance of this potential.

As we have seen, “good old fashioned artificial intelligence” based on discrete
symbol processing represents what Dreyfus categorized as the “associationist”
thesis about human cognition, that it may be reduced to such, a view of which he
remained critical over the decades following his earliest assay. For example,
Dreyfus (2012) takes Marvin Minsky to task for expecting that “all that was
needed to achieve AI was representing a few million facts” assessed for
relevance within predetermined contexts or “frames” (page 91). Much of this
effort went into the development of “expert systems”, an application for which
Dreyfus originally showed some optimism (as discussed in the preceding section,
as an extension of human cognition). In the end however, such systems failed to
demonstrate expertise. At best, they exhibited only “competence” with Dreyfus

12
By “resistor analogue”, Dreyfus is characterizing a machine that works as do human beings to infer
best explanations or to abduce optimal solutions such as we see in contemporary machine learning,
11

observing that “although competent people do follow rules … experts don’t … so


eliciting algorithms of expertise was out of the question.” (original emphasis,
2012, page 92)

Again, for Dreyfus the fundamental problem isn’t how many dimensions a
computer might employ to form a web of associations, but rather what it might
take for that machine to determine the relevance of these relations within a
given situation:

For example, if I put up the shades in my office, which
other facts about my office will change. The intensity of the
illumination for sure, the temperature perhaps, the
shadows probably, but not the number of books on the
shelves, etc. (2012, page 91)

Why put up the shades? This is not a simple question after all, and on Dreyfus’
analysis what is lacking in a computer attempting to answer such a question is a
situation within a context of shared goals that is a given for human beings but
that defies explicit programming. Without the insight that this situatedness
affords, there is no way to settle on how to frame a given problem space. As we
saw in the last section, even a limited situation invites an infinite number of
potentially relevant facts, any of which may be critical for determining an
appropriate next move or the significance of a given term, resulting in a loop of
interdependency which a machine is potentially unable to resolve, let alone to
deliver appropriate and timely action. Looking back on the frame problem thus,
Dreyfus wrote that:

It seemed to me obvious that any AI program using frames
to organize millions of meaningless facts so as to retrieve
the currently relevant ones was going to be caught in a
regress of frames for recognizing relevant frames for
recognizing relevant facts, and that, therefore, the frame
problem wasn’t just a problem but was a sign that
something was seriously wrong with the whole approach.
(2012, page 92)

These failures along with the advent of bottom-up learning and neural networks
brought researchers to focus on “behavior based robotics” and the idea that
externally derived discrete “representations” should be abandoned for systems
that use the world as their guiding models, instead. This shift from symbol-
pushing to situated robots signified a recognition of the discontinuity that
Dreyfus had foreseen three decades prior. However, Dreyfus still found these
systems limited to “fixed isolated features in the environment” rather than
“context” as a whole, and observed that their limitations seemed to “beg the
question of changing relevance” rather than to solve what had become
understood as “the frame problem” once and for all (2012, page 93).

This brings us back to Dreyfus’ overall pattern as introduced in the first section
of this paper in terms of Brooks and Dennett’s COG program, and to the first
serious push to situate AI in terms of the “familiar background” characterized in
human “commonsense knowledge” as carried forth by Douglas Lenat (Dreyfus,
2012, discussion beginning from page 94). In a replay of his criticism of Minsky,
even after having “formalized tens of millions of background facts” Dreyfus
(2012) finds this research stalled, and recalling his earlier image of velocity
towards the speed of light, chasing “a receding goal” (pages 94-95). Once again,
Lenat’s work in situating an AI in “commonsense reality” exemplified the overall
pattern marring AI research that Dreyfus identified in his 1965 assay and
discusses in terms of “the first step fallacy” in 2012, that being working
toward “a goal that there is no reason to believe can ever be achieved or even
approached” by the methods employed (2012, page 95).

Dreyfus (2012) moreover isolates “an explicit statement of the first-step
fallacy” (Dreyfus, 2012, page 96) in David Chalmers’ (2010) argument for the
“singularity”, that smarter-than-human AI is forthcoming once adequate software
(in the form of algorithms) is in place, and that this AI will in turn engineer ever-
more intelligent AI, leading to an “explosion” in problem-solving capabilities and
“super intelligence” (as reported in Dreyfus, 2012, page 95, see also Bostrom,
2012, 2014) thereby delivering the singularity and all that it represents.13 Most
surprising is that Chalmers explicitly recognizes Dreyfus’ (1965) “overall
pattern” of over-enthusiasm met with disappointment that characterizes the
history of AI research, but at the same time exempts his own view from it,
returning us once again to the dreamer in the tree-top, longing for the moon and
figuring on climbing to get there. In an overt expression of the unwarranted
optimism characteristic of this pattern, Chalmers (2010) simultaneously
recognizes that “AI has been a series of failures” and that “there is no current AI”
on which to improve (as reported in Dreyfus, 2012, page 97) while maintaining
that “The argument for a singularity is one that we should take seriously.”
(Chalmers, 2010, page 10). Dreyfus notes simply that Chalmers’ argument is
grounded in the unwarranted assumption predicating his “overall pattern”
through the “first step fallacy”, “that the goal [the singularity] can be achieved
and that the current approach is a step in the right direction on a continuum that
will lead to that goal.” (Dreyfus, 2012, page 96) Thus, rather than see his
contemporaries take his message to heart, and redirect efforts to achievable
goals given existing methods, Dreyfus (2012) finds his original critique not only
effective five decades after its initial exposition, but explicitly recognized if only
to be disregarded.

13. Whereby AI becomes more intelligent than human beings, human beings are compelled to increasingly integrate themselves with AI in order to keep up with technological progress, and AI, once bent to the solution of humanity’s most pressing problems, achieves a capacity to deliver humanity from them, cf. Kurzweil, 2005. For a technical argument as to why such an “explosion” of “superintelligence” in self-modifying AI is unlikely, see Benthall (2017).

At this point, Dreyfus (2012) reviews the apparent success in AI represented by
IBM’s Deep Blue and Watson programs as contra-evidence to the failures of AI
research to achieve human-like intelligence. He concedes that current examples
of AI perform well in increasingly complex yet purely formal contexts, but
continue to fail in even simple informal contexts (recall Dreyfus’ shades example)
and do not amount to human or even human-like intelligence, writing about AI
success in chess that:

This is a surprising result, and needs to be taken seriously.
But it should not be thought of as showing that computers
could be programmed to behave in a way that is humanly
intelligent. That, for example, computers could be
programmed to exhibit common sense. If computers ever
do pass this hurdle by performing billions of meaningless
operations a second, we will have to think hard about what
we mean by being humanly intelligent. (Dreyfus, 2012,
page 98)

Since 2012, there have been notable advances on Deep Blue’s chess game. For
example, Google’s AlphaGo program bested the world’s finest human go players.
Dreyfus (1965) classified go within the category of “complex-formal” problems
for which he thought that computers may be tailored given success in
implementing just those facets of reasoning that he had identified as necessary
to the task – for one, overcoming the need for exhaustive enumeration of
possible moves given current positions. And, AlphaGo has done just that through
the combination of heuristic search and deep learning in neural networks, a
technology that simulates human neural processes and that Dreyfus had
anticipated in terms of a “minimal path through a network”.

Most recently, in late October 2017, Google’s DeepMind team (three founders of
which, including a co-author of the current AlphaGo Zero announcement paper
in Nature, are signatories of the Future of Life Institute Open Letter on research
priorities to be discussed in the next section) has announced an improvement
over their AlphaGo platform, AlphaGo Zero, which has been able to beat its
predecessor 100 games to 0 with zero (hence the name) human input, learning
solely through self-play reinforcement without supervision (cf. Silver et al., 2017).
This is of course also an advance to be taken seriously, but Dreyfus’ concern that
such systems remain embedded in relatively limited contexts remains a sound
one. AlphaGo Zero exploits “fixed isolated features” within a very limited, strictly
formal structure, and though complex relative to other board games, go remains
only that, far from the informal, context-shifting interactions characteristic of the
human condition.
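The self-play recipe itself is easy to state, and its confinement to a fixed formal structure easy to see. A toy sketch, assuming nothing about DeepMind’s actual implementation (which couples deep networks to Monte Carlo tree search): tabular reinforcement learning that masters a Nim-like subtraction game purely by playing against itself. The game, rewards, and parameters are invented for illustration:

```python
import random

ACTIONS = (1, 2, 3)           # legal moves: remove 1, 2, or 3 stones
START, EPISODES = 21, 50_000  # pile size and number of training games
ALPHA, EPS = 0.1, 0.1         # learning rate and exploration rate

Q = {}  # Q[(pile, action)]: estimated value for the player about to move

def q(pile, action):
    return Q.get((pile, action), 0.0)

def choose(pile, greedy=False):
    moves = [a for a in ACTIONS if a <= pile]
    if not greedy and random.random() < EPS:
        return random.choice(moves)              # explore
    return max(moves, key=lambda a: q(pile, a))  # exploit

for _ in range(EPISODES):
    pile, history = START, []
    while pile > 0:              # the program is its own opponent
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    # Whoever removed the last stone wins: +1 for that mover, -1 for
    # the other, propagated backwards with alternating sign.
    outcome = 1.0
    for pile, action in reversed(history):
        Q[(pile, action)] = q(pile, action) + ALPHA * (outcome - q(pile, action))
        outcome = -outcome

# The greedy policy tends toward the known optimum: always leave the
# opponent a multiple of four stones.
print({p: choose(p, greedy=True) for p in (5, 9, 21)})  # expect 1 at each
```

Everything the learner “knows” is exhausted by a table over a closed, pre-given state space - precisely the “fixed isolated features” within a strictly formal structure noted above.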

4. The future of AI

The trouble which Dreyfus (1965) zeroed in on more than five decades ago, and
with which he wrestled for the rest of his life, ultimately comes down to
convincing enthusiasts of AI to redirect their efforts in the short term, until
understanding of the human condition meets with technologies up to expressing
this understanding in an artificial medium. Those determined otherwise he likened
to “alchemists” who, having refined “quicksilver” from dirt, worked for centuries
with the same methods hoping for gold. For fifty-two further years after this
earliest assay, Dreyfus consistently found his contemporaries vainly working to
transmute one type of substance to something essentially different, only to
muddle on in blind optimism without the insight to restructure the problem
while ignoring his message. “Similarly, the person who is hypnotized by the
moon and is inching up those last branches toward the top of the tree would
consider it reactionary of someone to shake the tree and yell ‘Come down!’”
(Dreyfus, 1965, page 86) Regardless, his mission continued, and at this point we
may ask if we are in a similar position, today.

Recently, popular commentary on the progress of AI research has increasingly
come in the form of a warning of a different sort. Rather than being met with
undue optimism, the current atmosphere of anticipation around near-term AI is
fearful. For example, late in 2014, tech icon Elon Musk characterized AI as the
summoning of a demon, bound to escape human control (cf. Musk, 2014). And
just this past year, Musk painted AI as humanity’s greatest existential threat,
suggesting that State-level races to autonomous killing machines may invite a
Third World War. Musk tweeted that nuclear annihilation “May be initiated not
by the country leaders, but one of the AI's, if it decides that a preemptive strike is
the most probable path to victory” (Musk, 2017, September 4, edited for spelling
and grammar) in response to a statement delivered to an audience of over one
million on Knowledge Day in Russia, September 1st, 2017 by President of the
Russian Federation, Vladimir Putin. Putin said that:

Artificial intelligence is the future, not only for Russia, but
for all humankind. It comes with colossal opportunities,
but also threats that are difficult to predict. Whoever
becomes the leader in this sphere will become the ruler of
the world. (Putin, 2017, as translated in Russia Today)

Clearly, the stakes could not be higher. Recognizing the potential dangers of
machine-mediated global nuclear war, rather than allowing research to devolve
into an arms race (cf. Armstrong et al., 2016), Musk and many others including
Steve Wozniak, Wendell Wallach, Stephen Hawking, Ben Goertzel, Eliezer
Yudkowsky, Nick Bostrom, Geoffrey Hinton, Sam Harris and James Moore
supported a 2015 testament from the Future of Life Institute recommending a
set of research priorities “aimed at ensuring that AI remains robust and
beneficial” (Russell et al., 2015, page 105, see also Future of Life Institute, 2015).
Most recently, this past year the Future of Life Institute put forward “An Open
Letter To The United Nations Convention On Certain Conventional Weapons”
endorsed by over a hundred founders and CEOs of companies developing AI and
robotics imploring the High Contracting Parties to the Convention to “protect us
all” from the dangers of AI research being “repurposed to develop autonomous
weapons” (cf. Future of Life Institute, 2017).14 That we are on the way to such an
eventuality is further reinforced in the imagery employed by neuroscience

student cum popular author and early signatory to the Future of Life Institute’s
“Open Letter” recommending research priorities for beneficial AI, Sam Harris, as
he warns that “the rate of progress doesn’t matter, because any progress is
enough to get us into the end zone” and “The train is already out of the station,
and there’s no brake to pull.”15 (Harris, 2016, at 5:11 and 5:25 respectively)

14. It is interesting to contrast these efforts with a related assessment of the “real danger of artificial intelligence”: that increasing automation by way of semi-autonomous machines promises to exacerbate rather than abate an historically unprecedented degree of wealth inequality (cf. Lee, 2017), which has proven historically to precede mass violence, social upheaval, and in the 20th century World War, and in which “autonomous weapons” will surely now play a part.

15. Meanwhile, the simple-mindedness of such reasoning has not gone unrecognized by proponents of current research vectors. For example, late in 2015 the Information Technology and Innovation Foundation nominated the “loose coalition of scientists and luminaries who stirred fear and hysteria in 2015 by raising alarms that artificial intelligence (AI) could spell doom for humanity” for its annual Luddite Award, an honor that was finally awarded them in January of the next year (cf. Atkinson, 2015, and ITIF, 2016).

Bringing Dreyfus’ early analysis to bear on the issue reveals two things. First of
all, there is the apparent replay of his “overall pattern” in the explicitly linear
thinking exposed in Harris’ imagery, for example. One might have thought
similarly in 1965, that steady progress along established tracks would lead to
runaway artificial intelligence, but this has hardly been the case. So, to
paraphrase Dreyfus’ rebuttal of Chalmers’ enthusiasm for an imminent
“singularity”, there is no reason to project from current successes in board
games to the likelihood of a fully autonomous “demon” AI beyond human control,
runaway train analogies being beside the point.

Secondarily, reading from Dreyfus’ analysis of current issues reveals a much
more imminent tragedy: that we do not learn from our mistakes and trust in
Dreyfus’ judgment, redirecting attention toward “fascinating new areas for basic
research” (Dreyfus, 1965, page 85, recalling section 2 of this paper) before being
swept up by a runaway hype-train one way or the other. If we do not pursue
fundamental alternatives, we may indeed end up with machine intelligences
operating within a limited sphere of games repurposed to war (so far as this also
is understood to be a game) and with expected results, cue Musk. That said
however, it is not the machines to be feared in such a case, but rather the human
beings who stipulate their “winning” parameters. Rather than aim for the
heavens, these human beings aim to climb down from the trees with big sticks
loaded with bad intentions. This is certainly something to be feared, but it has very
little to do with self-modifying super-artificial intelligence and everything to do with
malfunctioning, sub-optimal natural intelligence, instead.

There is no sense in engineering artificial complements to human intelligence
when this natural intelligence is bent only on mutually assured self-destruction
by way of the medium. On the other hand, in so far as alternative futures
facilitated by beneficial AI are concerned, there has been much headway, though
with much less popular attention to it. Take for example Ron Sun and colleagues’
work on psychologically realistic cognitive architectures employing hybrid
learning systems (which couple the two essential forms of information
processing at the heart of Dreyfus’ work, implicit and explicit) involving moral
agency, creativity, and emotion, towards an intelligence capable of performing in
informal contexts on innate autonomy (2013, 2015, 2016, in press). Consider
also ongoing research in the evolution of morality, prosociality and the winning

strategy that is cooperation by Luis Pereira and colleagues using logic
programming (cf. da Costa Cardoso and Pereira, 2015, Pereira and Saptawijaya,
2015, 2016). Rather than assume a confrontational and competitive motivation
driving AI research and by extension AI itself, Pereira’s research indicates that
anything approximating a successful natural intelligence should optimize for
cooperation and mutual beneficence instead, with Sun and colleagues working
towards instantiating such a motivation in a machine agent by understanding the
social nature of human cognition and agency first of all.16

16. Noting that the cognizance of the individual of its situation within a social collective is “emergent” in 2001, Sun remarked on research in “bridging” the two (an image also central to da Costa and Pereira, 2015) that “It helps to unearth the social unconscious embedded into social institutions and lodged inside individual minds”, with the “crucial step in establishing the micro–macro link … between individual agents and society” becoming “how we should understand and characterize the structures of the social unconscious and its ‘emergence’, computationally or otherwise.” (Sun, 2001, page 2)
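Dreyfus’ two modes can be made concrete in Sun’s terms with a deliberately minimal sketch of a hybrid agent - an invented toy in the spirit of such architectures, not Sun’s CLARION itself - in which an explicit rule layer sits over an implicit, reinforcement-trained associative layer:

```python
class HybridAgent:
    """Toy two-level agent: explicit symbolic rules over implicit,
    trained associations. Invented for illustration only."""

    def __init__(self, actions):
        self.actions = actions
        self.weights = {}  # implicit layer: (state, action) -> strength
        self.rules = []    # explicit layer: (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))  # stated, articulable knowledge

    def act(self, state):
        for condition, action in self.rules:    # explicit knowledge fires first
            if condition(state):
                return action
        # Otherwise defer to implicit associations accrued from experience;
        # unseen states simply fall back to the first listed action.
        return max(self.actions, key=lambda a: self.weights.get((state, a), 0.0))

    def reinforce(self, state, action, reward, lr=0.2):
        old = self.weights.get((state, action), 0.0)
        self.weights[(state, action)] = old + lr * (reward - old)

agent = HybridAgent(actions=["advance", "retreat"])
agent.add_rule(lambda s: s == "danger", "retreat")  # explicit and stated
agent.reinforce("open", "advance", reward=1.0)      # implicit and trained
print(agent.act("danger"), agent.act("open"))       # -> retreat advance
```

The point of the bottom-up direction in such systems is that implicit competence accrues without ever being stated as a rule, while explicit rules can later be extracted from, or imposed upon, that competence - one reading of the “bridging” of levels that Sun describes.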

As for how to finally succeed in engineering a human-like AI, Dreyfus (2012)
points to its embodiment as the “hidden obstacle that makes the first step a
fallacy” resulting in failure.

Without a body to consult, an AI program couldn’t answer
such questions as whether or not people can whistle and chew
gum at the same time, or why they have to face a problem in
order to deal with it. (Dreyfus, 2012, page 98)

Where Pereira and colleagues work from the level of evolved groups and abduced
principles, and Sun and colleagues focus on bridging “macro” with “micro” levels
of social organization, so far as ab initio development of an embodied
autonomous AI is concerned, research is for example ongoing at the Okinawa
Institute of Science and Technology, with Jun Tani’s neurorobotics group having to date
demonstrated rudimentary creative composition in relatively informal human-
mediated environments (cf. Tani, 2016, also White and Tani, 2017). Following
Dreyfus, this research may be promising if for no other reason than that it is
discontinuous with prior approaches, explicitly recognizing that situated
cognition is the necessary substrate for artificial intelligence (as he understood
it) rather than aiming to operate through brute-force computational power in
purely formal contexts instead. And on this note, his instruction is poignant. By
taking situations as primitive to cognition, grounding context-dependent goal
conditions therein rather than on “fixed isolated features” of pre-determined
problem spaces, we face “the question of changing relevance” once and for all
and open opportunities for insight into how we human beings are able to “deal
with the ill-structured data of daily life” at the same time.

Finally, with the integration of all of these approaches into psychologically
realistic large-scale social simulations, we may avoid the tragedy foreseen by so
many, above. We need not run out of tree, as if we may climb our way out of
current problems only to find ourselves out on a thin limb to suffer the fall, big
sticks and all. We may instead have a telescope, effectively advancing the fringe
of which we are conscious. With the help of AI, we may simulate peaceful
alternative futures and plot the transitions between them, thereby enabling our
collective self-determination past extinction-level human failures including
machine-mediated global nuclear war (cf. White, 2016). On this view, there is
nothing to fear from a future rich with properly articulated AI of the sort to which
Dreyfus pointed, and much to fear without it.

5. Conclusion

In summary, there is no reason to expect that the same approach which delivers
automated killing game-machines should result in anything like human
intelligence, what Dreyfus understood to be artificial intelligence, and every
reason to expect that it cannot. For example, though it may learn on its own how
to play the game, AlphaGo Zero optimizes interim results within a fixed formal
context and nothing else. Though it may in principle be exposed to other limited-
context, purely formal task environments and be expected to perform similarly -
in an update on the use of tic-tac-toe to illustrate the futility of global nuclear
war, if this be the goal of researchers, by turning war into tic-tac-toe or, in
AlphaGo Zero’s case, chess - it lacks the insight to delineate the essential from the
inessential dimensions punctuating even relatively simple informal contexts; e.g.
it will not be able to tell us why Professor Dreyfus had opened his shades when he
did, just as it will not be able to advise either why it is best to repurpose
humankind’s greatest intellectual achievements to destroy a good part of the
natural world in order to retain one political economy over another, or why it is
best not to invest in the option from the beginning. The reasons for this
limitation are the same as those to which Dreyfus had been calling our attention
since at least 1965: such machines remain unable to structure a given
problem space around an embodied situation, the condition of which is the
fundamental concern of human beings. Absent this concern, machines remain
unable to advise on which actions may be best, now.

What would it take to deliver such a capacity? In the second section of this paper,
we mentioned that Dreyfus asks why it is that at a given point, once we become
aware of objects at the fringe of consciousness, through insight into the global
situation, conscious modes of cognition take effect. We begin to reason about
these objects. Dreyfus notes that for a human being “conscious counting begins
when he has to refine this global process in order to deal with details” and with
this asks “what kind of program could convert this unconscious counting into the
kind of fringe-influenced awareness of the centers of interest, which is the way
zeroing-in presents itself in our experience?” (1965, page 23) Drawing on the
preceding discussion, we may now answer: the kind of program that shares in
the needs and aims of others like ourselves so that it is sensitive to changes in
context that bring common goals closer or draw them distant, ideally one that
shares our situation as well as our visceral concern for the way that this situation
turns out.

As to which goals we should direct AI research toward, and if artificial fringe-
consciousness should be one of them, this is very much an open question.
Worrying about what happens next doesn’t seem to suit an “autonomous”
weapons system, or perhaps the weapons system engineer for that matter, but it
was important to Dreyfus. Writing on Heidegger and the role of technology in
terms of revolutionary times under the threat of global imperialism and
mechanized warfare, Dreyfus answers Heidegger’s plea that “only a god can save
us, now” with a more tangible solution, that we reinvent instead the “cultural
paradigm” in terms of which discoveries are made and worlds created:

… we must foster human receptivity and preserve the
endangered species of pre-technological practices that
remain in our culture, in the hope that one day they will be
pulled together into a new paradigm, rich enough and
resistant enough to give new meaningful directions to our
lives. (Dreyfus, 1995, page 32)

Again, we should emphasize that these cues are not of the sort amenable to
explicit enumeration and programmable search (cf. page 31), but are rather
objects of human insight that arise at the fringes of consciousness, here in the
form of “new meaningful directions”.

So in the end, how might Hubert Dreyfus advise current researchers in AI? We should employ technologies to enhance access to fringe consciousness, so that we can mine culture and tradition for meaningful directions forward. We should do what we do best and allow machines to do what they do best. We may also benefit from Dreyfus’ observation that the salvation of what he called “Heideggerian AI” may come from a deeper appreciation of Heideggerian philosophy, specifically of authenticity as applied not only to model architectures, but to the researchers themselves, who must first understand their own authentic condition before articulating the same in artifacts. Until we are ready to reconfigure our industry into a form that encourages challenges to paradigm and what may be called “authentic inquiry” of the sort that Dreyfus also exemplified - a ‘return to the situation itself’, so to speak - it seems that we may well see his overall pattern repeated in the near future and at a grand scale, beginning with the failure of Moore’s law as the capacity to zero in on what is important for ongoing progress comes at increasingly higher human costs.17
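Footnote 17’s point admits a back-of-the-envelope illustration. The figures in the following Python sketch are hypothetical, chosen only to exhibit the pattern that Bloom et al. (2017) document: if output growth is held constant while the research effort required to sustain it compounds, then productivity per researcher collapses.

# Hypothetical figures only, illustrating the Bloom et al. (2017) pattern.
OUTPUT_GROWTH = 0.35   # fixed annual output growth (e.g., chip density)
EFFORT_GROWTH = 0.07   # annual growth in researchers needed to sustain it

researchers = 1.0
for year in range(0, 51, 10):
    print(f"year {year:2d}: effort x{researchers:5.1f}, "
          f"productivity per researcher {OUTPUT_GROWTH / researchers:.3f}")
    researchers *= (1 + EFFORT_GROWTH) ** 10

On these made-up numbers, sustaining the same growth for fifty years requires roughly thirty times the initial research effort, one concrete reading of progress that comes at increasingly higher human costs.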

Finally, we conclude not where Dreyfus left off, but rather where our story began, on the cusp of discontinuity and running out of tree. This time, however, we benefit from his example, and can feel encouraged to recast the future through authentic inquiry going forward.

Works Consulted:

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a
model of artificial intelligence development. AI & Society, 31, 2, 201-206.

17
This is the thesis driving Bloom et al. (2017), for example.

Atkinson, R. D. (2015). The 2015 ITIF Luddite Award Nominees: The Worst of the Year’s Worst Innovation Killers. Retrieved November 25, 2017, from http://www2.itif.org/2015-itif-luddite-award.pdf

Benthall, S. (2017). Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument. Preprint, arXiv:1702.08495. Retrieved November 25, 2017, from https://arxiv.org/pdf/1702.08495.pdf

Bloom, N., Jones, C. I., Reenen, J. V., & Webb, M. (2017, September). Are Ideas
Getting Harder to Find? National Bureau of Economic Research. Retrieved
November 25, 2017, from http://www.nber.org/papers/w23782

Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22, 2, 71–85.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Brey, P. (2001). ‘Hubert Dreyfus - Human versus Machine’. In Achterhuis, H. (ed.), American Philosophy of Technology: The Empirical Turn, Indiana University Press, 37-63.

Chalmers, D. J. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17, 7-65.

da Costa Cardoso, F., & Pereira, L. M. (2015). The emergence of artificial autonomy: a view from the foothills of a challenging climb. In White, J., & Searle, R. (eds.) Rethinking machine ethics in the age of ubiquitous technology. Hershey: IGI Global.

Dreyfus, H. L. (1965). Alchemy and Artificial Intelligence. Santa Monica, CA: RAND Corporation, Paper P-3244. Retrieved November 25, 2017, from https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf

Dreyfus, H. L. (1995). Heidegger on Gaining a Free Relation to Technology. In Feenberg, A. & Hannay, A. (eds.) Technology and the Politics of Knowledge. Bloomington: Indiana University Press, 25-33.

Dreyfus, H. L. (2007). Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Artificial Intelligence, 171, 18, 1137-1160.

Dreyfus, H. L. (2012). A History of First Step Fallacies. Minds and Machines, 22, 2,
87-99.

Future of Life Institute. (2015). AI Open Letter. Retrieved November 25, 2017,
from https://futureoflife.org/ai-open-letter/

Future of Life Institute. (2017). An Open Letter to the United Nations Convention
on Certain Conventional Weapons. Retrieved November 26, 2017, from
https://futureoflife.org/autonomous-weapons-open-letter-2017

Harris, S. (2016). Can we build AI without losing control over it? TED Summit, June 2016. Retrieved November 25, 2017, from https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it/transcript

ITIF. (2016). Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award.
Retrieved November 25, 2017, from
https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-
itif’s-annual-luddite-award

Kenaw, S. (2008). Hubert L. Dreyfus’ Critique of Classical AI and its Rationalist Assumptions. Minds and Machines, 18, 2, 227-238.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York, NY: Viking.

Lee, K. (2017, June 24). The Real Threat of Artificial Intelligence. The New York Times. Retrieved November 25, 2017, from https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html

Musk, E. (2014). 2014 MIT AeroAstro Centennial Symposium. Retrieved November 25, 2017, from http://aeroastro.mit.edu/aeroastro100/centennial-symposium

Musk, E. (2017, September 4). May be initiated not by the country leaders, but
one of the AI's, if it decides that a preemptive strike is most probable path to
victory. Retrieved November 25, 2017, from
https://twitter.com/elonmusk/status/904639405440323585

Pereira, L. M., & Saptawijaya, A. (2015). Bridging two realms of machine ethics. In White, J., & Searle, R. (eds.) Rethinking machine ethics in the age of ubiquitous technology. Hershey: IGI Global.

Pereira, L. M., & Saptawijaya, A. (2016). Modeling Collective Morality via Evolutionary Game Theory. In Magnani, L. (ed.) Programming Machine Ethics. Studies in Applied Philosophy, Epistemology and Rational Ethics, 26. Berlin: Springer.

Plato, Cooper, J. M. (ed.) (1997). Complete works. Indianapolis, Ind: Hackett.

Putin, V. (2017, September 1). 'Whoever leads in AI will rule the world': Putin to
Russian children on Knowledge Day. Retrieved November 25, 2017, from
https://www.rt.com/news/401731-ai-rule-world-putin/

Rouse, J. (2013). What is Conceptually Articulated Understanding? In Schear, J. K. (ed.), Mind, reason, and being-in-the-world: The McDowell-Dreyfus debate. London: Routledge, 250-271.

Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, 36, 4, 105-114.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., ... Sutskever, I. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 7587, 484-489.

Sun, R. (2001). Individual action and collective function: From sociology to multi-
agent learning. Cognitive Systems Research, 2, 1, 1-3.

Sun, R. (2013). Moral Judgment, Human Motivation, and Neural Networks. Cognitive Computation, 5, 4, 566-579.

Sun, R., & Helie, S. (2015). Accounting for Creativity Within a Psychologically Realistic Cognitive Architecture. In Besold, T., Schorlemmer, M., & Smaill, A. (eds.) Computational Creativity Research: Towards Creative Machines. Atlantis Thinking Machines, 7. Paris: Atlantis Press.

Sun, R., Wilson, N., & Lynch, M. (2016). Emotion: A Unified Mechanistic
Interpretation from a Cognitive Architecture. Cognitive Computation, 8, 1, 1-14.

Sun, R. (in press). Intrinsic Motivation for Truly Autonomous Agents.

Tani, J. (2016). Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena. Oxford: Oxford University Press.

White, J. (2016). Simulation, self-extinction, and philosophy in the service of human civilization. AI & Society, 31, 2, 171-190.

White, J., & Tani, J. (2017). From Biological to Synthetic Neurorobotics Approaches to Understanding the Structure Essential to Consciousness (Part 3). APA Newsletter: Philosophy and Computers, 17, 1, 11-22.

Williams, B., Haugeland, J., & Dreyfus, H. (1974, June 27). An Exchange on Artificial Intelligence. The New York Review of Books. Retrieved November 25, 2017, from http://www.nybooks.com/articles/1974/06/27/an-exchange-on-artificial-intelligence
