

The Implicit Mind
Cognitive Architecture, the Self, and Ethics

MICHAEL BROWNSTEIN

Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press,
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2018

All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by license, or under terms agreed with the appropriate reproduction
rights organization. Inquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above.

You must not circulate this work in any other form
and you must impose this same condition on any acquirer.

CIP data is on file at the Library of Congress


ISBN 978-0-19-063372-1

1 3 5 7 9 8 6 4 2
Printed by Sheridan Books, Inc., United States of America
CONTENTS

Acknowledgments

1. Introduction

PART ONE  MIND

2. Perception, Emotion, Behavior, and Change: Components of Spontaneous Inclinations
3. Implicit Attitudes and the Architecture of the Mind

PART TWO  SELF

4. Caring, Implicit Attitudes, and the Self
5. Reflection, Responsibility, and Fractured Selves

PART THREE  ETHICS

6. Deliberation and Spontaneity
7. The Habit Stance
8. Conclusion

Appendix: Measures, Methods, and Psychological Science
References
Index
ACKNOWLEDGMENTS

I’m so grateful for the generous, supportive, and incisive feedback I’ve received
over the years, on this and related projects, from Alex Acs, Marianna Alessandri,
Louise Antony, Mahzarin Banaji, Michael Brent, Lawrence Buell, Brent Cebul,
Jason D’Cruz, James Dow, Yarrow Dunham, Ellen Fridland, Katie Gasdaglis,
Bertram Gawronski, Maggie Gram, Daniel Harris, Sally Haslanger, Jules Holroyd,
Bryce Huebner, Zachary Irving, Gabbrielle Johnson, Eric Katz, Sean Kelly, Julian
Kiverstein, Joshua Knobe, Victor Kumar, Benedek Kurdi, Calvin Lai, Carole Lee,
Edouard Machery, Eric Mandelbaum, Kenny Marotta, Christia Mercer, Eliot
Michaelson, John Morrison, Myrto Mylopoulos, Brian Nosek, Keith Payne,
Jonathan Phillips, Jeremy Pober, Emily Remus, Luis Rivera, Laurie Rudman, Hagop
Sarkissian, Eric Schwitzgebel, Claire Seiler, Rena Seltzer, Paschal Sheeran, Robin
Scheffler, David Shoemaker, Susanna Siegel, Nico Silins, Holly Smith, Chandra
Sripada, Alex Stehn, Shannon Sullivan, Joseph Sweetman, Virginia Valian, Natalia
Washington, Thomas Webb, Alison Wylie, Sunny Yang, and Robin Zheng.
Many thanks, too, to Peter Ohlin, Lacey Davidson, Mary Becker, and two anony-
mous reviewers for Oxford University Press. You have helped to improve this book
in countless ways. Taylor Carman and John Christman gracefully tolerated ham-​
handed earlier attempts to work out the ideas contained herein. Thank you to both,
now that I’ve worked it all out flawlessly . . .
I owe extra-​special thanks to Daniel Kelly, Tamar Gendler, and Jennifer Saul,
each of whom is an inspiration and a role model. And I owe extra-​special super-​
duper thanks to Alex Madva, my one-​in-​a-​million collaborator. You flipped-​turned
my life upside down, so just take a minute, sit right there, and consider yourself the
prince of this affair.
Mom, Dad, Erick, and Carrie: I can’t promise it’s worth looking, but you’ll find
yourselves in here. Thank you for unwavering love and support.
Above all, my deepest thanks to Reine, Leda, Iggy, and Minerva. I  love  you,
I love you, I love you, and I love you too.


Parts of the book were conceived, written, and revised with support—​for
which I’m very grateful—​from the American Council of Learned Societies, the
American Academy of Arts and Sciences, and the Leverhulme Trust. Some of the
material in this book has appeared in print previously in an earlier form. In par-
ticular, some of the examples in Chapter 1 were presented in Brownstein (2014)
and Brownstein and Madva (2012a,b). Chapter 2 is a much-​expanded version of
part of Brownstein and Madva (2012b). Also in Chapter 2, an earlier version of
§4 appeared in Brownstein (2017) and an earlier version of part of §4.2 appeared
in Brownstein and Madva (2012a). Parts of §3 and §5 in Chapter 3 are derived
from Brownstein and Madva (2012a,b), and parts of §6 were published in a differ-
ent form in Madva and Brownstein (2016). Parts of Chapters 4 and 5 are derived
from Brownstein (2014). In Chapter 4 as well, an earlier version of §4.1 appeared
in Brownstein and Madva (2012a), and an earlier version of §4.2 appeared in
Brownstein (2014). Some parts of Chapter 7 are derived from Brownstein (2016)
and a few passages of the Appendix are derived from Brownstein (2015). I’m grate-
ful to these presses and to my co-​author, Alex Madva, for permission to edit and
develop our collaborative work here.
1

Introduction

Victims of violent assault sometimes say, after the fact, that “something just felt
wrong” about the person walking on the other side of the street. Or offering to help
carry the groceries into their apartment. Or hanging out in the empty hallway. But
to their great regret, they dismissed these feelings, thinking that they were just being
paranoid or suspicious. In The Gift of Fear, Gavin de Becker argues that the most
important thing people can do to avoid becoming victims of assault is to trust their
intuition when something about a person or situation seems amiss. He writes:

A woman is waiting for an elevator, and when the doors open she sees a
man inside who causes her apprehension. Since she is not usually afraid,
it may be the late hour, his size, the way he looks at her, the rate of attacks
in the neighborhood, an article she read a year ago—​it doesn’t matter why.
The point is, she gets a feeling of fear. How does she respond to nature’s
strongest survival signal? She suppresses it, telling herself: “I’m not going
to live like that; I’m not going to insult this guy by letting the door close
in his face.” When the fear doesn’t go away, she tells herself not to be so
silly, and she gets into the elevator. Now, which is sillier: waiting a moment
for the next elevator, or getting into a soundproofed steel chamber with a
stranger she is afraid of? (1998, 30–​31)

De Becker offers trainings promising to teach people how to notice their often
very subtle feelings of fear and unease—​their “Pre-​Incident Indicators”—​in
potentially dangerous situations. These indicators, he argues, are responsive to
nonverbal signals of what other people are thinking or planning. For example,
we may feel unease when another’s “micro-​expression,” like a quick sideways
glance, or rapid eye-​blinking, or slightly downturned lips, signals that person’s
intentions, even though we might not notice such cues consciously. De Becker’s
trainings have been adapted for police officers, who also often say, after vio-
lent encounters, that they could tell that something was wrong in a situation,
but they ignored those feelings because they didn’t seem justified at the time.


This approach has been influential. De Becker designed the MOSAIC Threat
Assessment System that is used by many police departments to screen threats of
spousal abuse, and is also used to screen threats to members of the US Congress,
the Central Intelligence Agency, and federal justices, including the justices of the
Supreme Court.1
But there is a problem with de Becker’s advice. Stack his recommendations
up against research on “implicit bias” and a dilemma emerges. Roughly speaking,
implicit biases are evaluative thoughts and feelings about social groups that can
contribute to discriminatory behavior even in the absence of explicitly prejudiced
motivations.2 These thoughts and feelings are captured by “indirect” measures of
attitudes. Such measures are indirect in the sense that they avoid asking people
about their feelings or thoughts directly. Instead, on the most commonly used indi-
rect measure—​the “Implicit Association Test” (IAT; Greenwald et al., 1998)—​par-
ticipants are asked to sort names, images, or words that reflect social identity as
quickly as possible. What emerges is thought to reflect mental associations between
social groups and concepts like “good,” “bad,” “violent,” “lazy,” “athletic,” and so on.
Such associations result at least in part from common stereotypes found in con-
temporary societies about members of these groups. On the black–​white race IAT,
most white people (more than 70%) demonstrate negative implicit attitudes toward
blacks, and roughly 40% of black participants do too.3 Moreover, tests like the IAT
sometimes  predict biased behavior,  and in some contexts, they  do  so  better than
traditional self-​report measures.4
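
Since the IAT figures centrally in what follows, it may help to see the bare logic of an indirect, reaction-time measure in code. The following is a minimal sketch in Python under simplifying assumptions: the function name and reaction times are hypothetical, and the scoring is a toy mean difference rather than the more involved algorithm actually used to score the IAT.

```python
from statistics import mean

def toy_iat_score(congruent_rts, incongruent_rts):
    """Toy IAT-style score: the mean reaction-time difference (in ms)
    between sorting blocks whose key pairings run against common
    stereotypes (e.g., 'black' and 'good' sharing a response key) and
    blocks whose pairings run with them. Larger positive values are
    read as stronger implicit associations."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical reaction times (milliseconds) for one participant.
congruent = [612, 580, 655, 601, 590]      # stereotype-consistent pairing
incongruent = [743, 802, 771, 760, 798]    # stereotype-inconsistent pairing

print(f"Toy score: {toy_iat_score(congruent, incongruent):.1f} ms slower")
```

The point of the sketch is only that the measure is indirect: nothing is inferred from what participants say, only from how quickly they sort.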
Consider, for example, research on “shooter bias.” In a computer simulation, par-
ticipants are quickly shown images of black and white men holding either guns or
harmless objects like cell phones. They are told to “shoot” all and only those people
depicted holding guns. The results are unsettling. Participants are more likely to
shoot an unarmed black man than an unarmed white man and are more likely to fail
to shoot an armed white man than an armed black man (Correll et al., 2002; Mekawi
and Bresin, 2015). Measures of implicit bias like the IAT can predict these results.
People who demonstrate strong implicit racial biases (in particular, strong implicit
associations between “black” and “weapons”) are more likely to make these race-​
based mistakes than people who demonstrate weaker or no implicit racial biases
(Glaser and Knowles, 2008). These findings are ominous in light of continued and
recent police shootings of unarmed black men in the United States. And there are

1. See http://gavindebecker.com/main/.
2. There is much debate about how to define implicit bias. See Brownstein and Saul (2016) for discussion.
3. See Nosek et al. (2002, 2007), Ashburn-Nardo et al. (2003), and Dasgupta (2004).
4. See Nosek et al. (2007) and Greenwald et al. (2009). See the Appendix for a more in-depth discussion of the IAT and behavioral prediction. Readers unfamiliar with the IAT may want to skip ahead to the Appendix to read the section explaining how the test works.

thousands of related studies uncovering the pervasiveness of implicit biases against blacks, women, gay people, and members of other socially stigmatized groups.5
Findings like these show that the way we perceive, and act upon our perceptions
of, micro-​expressions and subtle social signals is often influenced by stereotypes
and prejudices that most people—​or most people reading this book, at least—​
disavow. Shooter bias involves acting on the basis of subtle feelings of fear that
most white Americans are more likely to feel (but not necessarily notice themselves
feeling) when they are in the presence of a black man than when they are facing a
white man.6 Research shows that these feelings are indeed race-​based. For example,
shooter bias is exacerbated after participants read newspaper stories about black
criminals, but not after reading newspaper stories about white criminals (Correll
et al., 2007). These subtle feelings of fear pervade many mundane situations, too,
often in ways that only victims of prejudice notice. George Yancy (2008), for exam-
ple, describes the purse-​clutching, averted gazes, and general unease of some white
women when they are in an elevator with a black man, such as himself. In comment-
ing on the death of Trayvon Martin, Barack Obama made a similar point:

[T]‌here are very few African-​American men who haven’t had the experi-
ence of walking across the street and hearing the locks click on the doors
of cars. That happens [sic] to me, at least before I was a senator. There are
very few African-​Americans who haven’t had the experience of getting on
an elevator and a woman clutching her purse nervously and holding her
breath until she had a chance to get off. That happens often.7

So while one’s Pre-Incident Indicators might be justifiably set off by a potential assailant’s posture or gestures, they might also be set off by an innocent person’s skin
color, hoodie, or turban. This means that it might be both true that subtle feelings
and intuitions can act like social antennae, tuning us into what’s happening around

5. As I discuss in the Appendix, no one psychological study is sufficient for concluding the existence of some psychological phenomenon. And, sometimes, many studies appearing to demonstrate some phenomenon can all turn out to be false positives or otherwise flawed, as the so-called replication crisis in psychology has shown. So I do not mean to suggest that the sheer number of studies on implicit bias is definitive of the existence of implicit bias. But the breadth of a research program does count as evidence, however defeasible, of the program’s core findings.
6. Evidence for the fact that shooter bias involves acting on the basis of subtle feelings of fear stems in part from the fact that shooter bias can be mitigated by practicing the plan “If I see a black face, I will think ‘safe!’ ” (Stewart and Payne, 2008). But planning to think “quick!” or “accurate!” doesn’t have the same effect on shooter bias (see Chapter 7). Note also the intersection of race and gender in this stream of research, as all shooter bias studies use male targets only (so far as I know). White Americans’ implicit associations with black women are likely to be different than their implicit associations with black men.
7. See http://www.huffingtonpost.com/2013/07/19/obama-racial-profiling_n_3624881.html.

us, and also true that these very same feelings can be profoundly affected by prej-
udice and stereotypes. Our Pre-​Incident Indicators might be a valuable source of
attunement to the world, in other words, but they might also be a tragic source of
moral and rational failing.8 This is a grave point, particularly given de Becker’s rec-
ommendations to police officers to trust their intuition about potential criminal
suspects.
The juxtaposition of de Becker’s recommendations with research on implicit
bias illustrates a tension that is replicated across the sciences of the mind. On
the one hand, our spontaneous inclinations and dispositions are often attuned
to subtle elements of the world around us, sometimes even more so than our
reasoned judgments. Converging research in philosophy and psychology sug-
gests that these unreasoned (but not unreasonable) inclinations—​our instincts,
intuitions, gut feelings, sixth senses, heuristics, snap judgments, and so on—​
play key roles in thought and action and have the potential to have moral and
rational credibility. On the other hand, one might read the headline of the past
seventy-​f ive years of research in cognitive and social psychology as putting us
“on notice” about the moral and rational failings of our spontaneous inclina-
tions and dispositions. From impulsivity, to “moral myopia,” to implicit bias, it
seems as if we must be constantly on guard against the dangers of our “mere”
inclinations.
The central contention of this book is that understanding these “two faces of
spontaneity”—its virtues and vices—requires understanding what I call the
“implicit mind.”9 In turn, understanding the implicit mind requires considering
three sets of questions. The first set focuses on the architecture of the implicit mind
itself, the second on the relationship between the implicit mind and the self, and the
third on the ethics of spontaneity.
First, what kinds of mental states make up the implicit mind? Are both “virtue
cases” and “vice cases” of spontaneity products of one and the same mental system?
What kind of cognitive structure do these states have, if so? Are implicit mental
states basic stimulus-​response reflexes? Are they mere associations? Or are they
beliefs?
Second, how should we relate to our spontaneous inclinations and dispositions?
Are they “ours,” in the sense that they reflect on our character? Commonly we think
of our beliefs, hopes, values, desires, and so on in these terms, as “person-​level”
states. Do implicit mental states reflect on who we are in this sense? Relatedly, under

8. For related discussion, in which our spontaneous reactions are framed as sometimes at once epistemically rational and morally faulty, see Gendler (2011), and see also replies from Egan (2011) and Madva (2016a).
9. I will speak often of “spontaneity,” but I do not mean it in the Kantian sense of the mind making active contributions to the “synthesis” of perceptual experience. I use the term in the folk sense, to refer to actions not preceded by deliberation, planning, or self-focused thought.

what conditions are we responsible for spontaneous and impulsive actions, that is,
those that result from the workings of the implicit mind?
And, finally, how can we improve our implicit minds? What can we do to
increase the chances of our spontaneous inclinations and dispositions acting as
reliable indicators rather than as conduits for bias and prejudice? It is tempting to
think that one can simply reflect carefully upon which spontaneous inclinations to
trust and which to avoid. But in real-​time, moment-​to-​moment life, when decisions
and actions unfold quickly, this is often not plausible. How then can we act on our
immediate inclinations while minimizing the risk of doing something irrational or
immoral? How can we enjoy the virtues of spontaneity without succumbing to its
vices? A  poignant formulation of this last question surfaced in an unlikely place
in 2014. In the climactic scene of Warner Bros.’ The Lego Movie (Li et al., 2014),
with the bad guys closing in and the moment of truth arriving, the old wise man
Vitruvius paradoxically exhorts his protégé, “Trust your instincts . . . unless your
instincts are terrible!”
Indeed. A  plethora of research in the sciences of the mind shows that very
often we cannot but trust our instincts and that in many cases we should, unless
our instincts are terrible, which the sciences of the mind have also shown that they
often are.

1.  Many Virtues, Many Vices


To pursue these questions, I draw upon a wide range of examples. These have a fam-
ily resemblance, roughly all involving agents who act in some way without explicitly
thinking about what they are doing or even, in some cases, being aware of what
they are doing. I discuss these examples in more depth throughout the book. There
are, of course, many important differences between the cases, but it is my conten-
tion that a theory of the implicit mind requires consideration of the vast diversity
of relevant examples. My only aim here, in this chapter, is to flesh out the territory
that I intend my account of the implicit mind to cover. The edges of this territory
will be blurry.
Imagine that you’re walking through a museum, with your back to the wall, and
you turn around to see a very large painting from very close up.10 You will likely
have an immediate inclination to step back from the large painting. Some artists
seem to be aware of this phenomenon. Consider the following caption, which was
displayed beside one of Barnett Newman’s paintings in the Museum of Modern Art
in New York:

10. I borrow this example from Hubert Dreyfus and Sean Kelly (2007), who are in turn influenced by Maurice Merleau-Ponty (1962/2002). I presented this example, along with the example of distance-standing discussed later, with Alex Madva, in Brownstein and Madva (2012b).

Vir Heroicus Sublimis, Newman’s largest painting at the time of its com-
pletion, is meant to overwhelm the senses. Viewers may be inclined to
step back from it to see it all at once, but Newman instructed precisely
the opposite. When the painting was first exhibited, in 1951 . . . Newman
tacked to the wall a notice that read, “There is a tendency to look at large
pictures from a distance. The large pictures in this exhibition are intended
to be seen from a short distance.”11

Newman seems to be anticipating the impulse to step back in order to improve your orientation to the work. Indeed, he seems to be challenging this impulse,
perhaps in order to help you see his work in a new way. But ordinarily we step
back from a large painting in order to get the best view of it. We also ordinarily
do so spontaneously, without any explicit thought or deliberation. More gen-
erally, navigating our way around objects in the ambient environment requires
constant readjustment as we automatically seek out the “right” place to stand,
sit, walk, and so on (Kelly, 2005). This process of automatically and continually
adjusting our position in space is evident in social contexts too, such as inter-
personal interaction. Compare the behavior of the museumgoer with “distance-​
standing,” or knowing how and where to stand in relation to your interlocutor
during conversation. When you are too far or too close, you (usually) feel an
immediate need to readjust your body, by stepping backward or forward. This
isn’t a one-​and-​done reaction, but rather a process of continual readjustment.
The distance-​stander’s feel for where to stand will be informed by the mood,
the subject matter, the personal history between interlocutors, and the like. One
can be especially adept or klutzy at automatically adjusting in these situations,
particularly if one travels abroad and must learn to adjust one’s sense of appropri-
ate distances. Some “close-​talkers,” like the one parodied on Seinfeld, chronically
fail to exhibit the ordinary flexible self-​modification that typically characterizes
distance-​standing.
This kind of ability—​to spontaneously orient ourselves in physical and social
space in the right way—​certainly supports our goals in an instrumental sense (i.e.,
to get the best view of the painting or to have a good conversation). But these are
goals that we often do not even know we have. Museumgoers may not consciously
and deliberately strive to find the right place from which to see big paintings; and
few people intentionally aim to stand the right distance from others during conver-
sation. Those who do are sometimes the worst at it.
In the museumgoer and distance-​stander cases, our spontaneous moment-​
to-​moment inclinations enable us to “get  along” adequately in the world. But
in other cases, our spontaneous inclinations and dispositions play key roles in

11. Abstract Expressionist New York, Museum of Modern Art, New York, 2010–2011.

exemplary action. Peter Railton describes skilled spontaneous action in the arts and
in conversation:

A talented saxophonist can barrel into a riveting improvised solo that shifts key and rhythm fluidly to make a satisfying expressive whole, with-
out making decisions along the way. A wit can find a funny and all-​too-​
true remark escaping her lips even before she has thoughts about what to
say—​sometimes to her regret. In such cases, individuals naturally describe
themselves as acting “spontaneously” or “intuitively,” “without thinking
about it.” (2014, 815–​816)

Expert athletes exemplify similar phenomena. And often it seems as if their
magisterial skill is in some way a product of their ability to act in an unhesitating,
undoubting way. Research in sports psychology suggests that particular kinds of
self-​focused thought—​thinking about how one is going to return the oncoming
serve or consciously choosing which way to turn to avoid a defender—​can inhibit
expert performance (Beilock, 2010). As David Foster Wallace put it, this sort of
reflective hesitance “breaks the rhythm that excludes thinking.”12 This may also be
tied to why top athletes sometimes seem to have very little to say about how they
do what they do. Of course, it’s not for lack of intelligence. Rather, it’s because their
decisions are spontaneous and unplanned. As the National Football League Hall
of Fame running back Walter Payton said, “People ask me about this move or that
move, but I  don’t know why I  did something. I  just did it.”13 Wallace opined on
this facet of expertise too, suggesting that the genius of top tennis players may have
something to do with their uncanny ability to turn off the “Iago-​like voice of the
self ” (2006, 154).
Experiences like these are sometimes described in terms of athletes being “in
the flow.”14 “Flow” is a concept emerging from research in positive psychology on
well-​being. Mihaly Csikszentmihalyi argues that flow is a kind of “optimal experi-
ence,” accessible not just in sports but through a wide range of activities. People
experience flow, Csikszentmihalyi argues, when they “become so involved in what
they are doing that the activity becomes spontaneous, almost automatic; they
stop being aware of themselves as separate from the actions they are perform-
ing” (1990, 3–​4). Flow is found in periods of intense concentration, when one is
focused on and engaged in a difficult but worthwhile task.15 Athletes and artists

12. Quoted in Zadie Smith’s memoriam on David Foster Wallace. See http://fivedials.com/fiction/zadie-smith/.
13. Quoted in Beilock (2010, 224). See Chapter 6 for a more in-depth discussion of these examples.
14. See my “Rationalizing Flow” (Brownstein, 2014).
15. That is, a task that one subjectively perceives to be worthwhile. Perhaps, though, flow can sometimes also be found in activities one believes to be trivial (e.g., doing crossword puzzles). If so, this suggests that the subjective perception of worth can conflict with one’s beliefs about which activities are worthwhile. See Chapter 4 for related discussion. Thanks to Reine Hewitt for this suggestion.

and writers sometimes talk about being in the flow in this sense. Those who have
experienced something like it probably know that flow is a fragile state. Phone calls
and police sirens can take one out of the flow, as can self-​reflection. According to
Csikszentmihalyi, happiness itself is a flowlike state, and the fragility of flow explains
why happiness is elusive.
Other examples are found in research suggesting that our spontaneous inclina-
tions are central to some forms of prosocial behavior. This opens the door to consid-
ering the role of spontaneity in ethics. For example, in some situations, people are
more cooperative and less selfish when they act quickly, without thinking carefully.
David Rand and colleagues (2012), for instance, asked people in groups of four to
play a public goods game. All the participants were given the same amount of money
and were then asked if they would like to contribute any of that money to a common
pool. Whatever was contributed to the common pool, they were told, would then
be doubled by the experimenter and distributed evenly among all four participants.
It turns out that people contribute more to the common pool when they make fast,
spontaneous decisions. The same is true when spontaneous decision-​making is
manipulated; people contribute more to the common pool when they are given less
time to decide what to do. This doesn’t just reflect subjects’ calculating their returns
poorly when they don’t have time to think. In economic games in which cooperative
behavior is optimal, such as the prisoner’s dilemma, people act more cooperatively
when they act spontaneously. This is true in one-​shot games and iterated games, in
online experiments and in face-​to-​face interactions, in time-​pressure manipulation
experiments and in conceptual priming experiments, in which subjects act more
charitably after writing about times when their intuition led them in the right direc-
tion compared with times when careful reasoning led them in the right direction
(Rand et al., 2012).16
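
The incentive structure of this game is worth spelling out, because it shows that fast, generous choices are not mere calculation errors. Here is a minimal sketch in Python of the payoff arithmetic as described above (four players, pool doubled and split evenly); the ten-unit endowment and the function name are hypothetical.

```python
def payoffs(contributions, endowment=10, multiplier=2):
    """Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied common pool."""
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# If all four contribute everything, everyone ends up with 20:
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# But a lone free rider does better still, at the group's expense:
print(payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```

Since each unit contributed returns only 2/4 = 0.5 units to the contributor, withholding is always individually advantageous; quick deciders who contribute are cooperating despite, not because of, the private arithmetic.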
Consider also the concept of “interpersonal fluency,” a kind of ethical expertise
in having and acting upon prosocial spontaneous inclinations.17 Then President-​
Elect Obama exhibited interpersonal fluency during his botched attempt in 2009
to take the Oath of Office. In front of millions of viewers, Obama and Chief Justice
of the Supreme Court John Roberts both fumbled the lines of the oath, opening the
possibility of a disastrously awkward moment.18 But after hesitating for a moment,

16. These experiments are different from those used to defend so-called Unconscious Thought Advantage (UTA) theory. UTA experiments have typically been underpowered and have not been replicated. For a critique of UTA, see Nieuwenstein et al. (2015).
17. See Brownstein and Madva (2012a) for discussion of the following example. See Madva (2012) for discussion of the concept of interpersonal fluency.
18. For a clip of the event and an “analysis” of the miscues by CNN’s Jeanne Moos, see http://www.youtube.com/watch?v=EyYdZrGLRDs.

Obama smiled widely and nodded slightly to Roberts, as if to say, “It’s okay, go
on.” This gesture received little explicit attention, but it defused the awkwardness
of the moment, making it possible for the ceremony to go on in a positive atmos­
phere. Despite his nervousness and mistakes, Obama’s social fluency was on display.
Moreover, most of us know people with similar abilities (aka “people skills”). These
skills require real-​time, fluid spontaneity, and they lead to socially valuable ends
(Manzini et al., 2009).19
Similar ideas are taken up in theories of virtue ethics. Aristotle argued that the
virtuous person will feel fear, anger, pity, courage, and other emotions “at the right
time, about the right thing, towards the right people, for the right end, and in the
right way” (2000, 1106b17–​20). To become virtuous in this way, Aristotle stressed,
requires training. Like the athlete who practices her swing over and over and over
again, the virtuous person must practice feeling and acting on her inclinations over
and over and over again in order to get them right. This is why childhood is so crucial
in Aristotle’s conception of the cultivation of virtue. Getting the right habits of char-
acter at a young age enables us to be virtuous adults. The result is a picture of virtue
as a kind of skilled and flexible habit, such that the ethical phronimos—​the person
of practical wisdom, who acts habitually yet ethically—​is able to come up with the
right response in all kinds of situations spontaneously. Contemporary virtue ethi-
cists have voiced similar ideas. Julia Annas writes, “People who perform brave actions
often say afterwards that they simply registered that the person needed to be rescued,
so they rescued them; even when prodded to come up further with reasons why
they did the brave thing, they often do not mention virtue, or bravery” (2008, 22).
Courageous people, in other words, immediately recognize a situation as calling for
a brave act, and act bravely without hesitation. Sometimes these virtuous impulses
run counter to what we think we ought to do. Recall Wesley Autrey, the New York
City “Subway Hero”: Autrey saved the life of an epileptic man who, in the midst of
a seizure, had fallen onto the tracks. Autrey jumped onto the tracks and held the
man down while the train passed just inches overhead. Afterward, Autrey reported
“just reacting” to the situation. But he might well have also thought, after the fact,
that he acted recklessly, particularly given that his children had been standing on the
platform with him. If so, Autrey’s case would resemble those described by Nomy

19. Indeed, research suggests that interpersonal fluency can be impaired in much the same way as other skills. Lonely people, Megan Knowles and colleagues (2015) find, perform worse than nonlonely people on social sensitivity tasks when those tasks are described as diagnostic of social aptitude, but not when the same tasks are described as diagnostic of academic aptitude. In fact, lonely people often outperform nonlonely people on these tasks when the tasks are described as diagnostic of academic aptitude. This is arguably a form of stereotype threat, the phenomenon of degraded performance on a task when the group of which one is a member is stereotyped as poor at that activity and that stereotype has been made salient.

Arpaly (2004) as cases of “inverse akrasia,” in which a person acts virtuously despite
her best judgment.
The pride of place given to spontaneity is not limited to Western virtue ethical
theories either. Confucian and Daoist ethics place great emphasis on wu-​wei, or a
state of effortless action. For one who is in wu-​wei, “proper and effective conduct
follows as automatically as the body gives in to the seductive rhythm of a song”
(Slingerland, 2014, 7). Being in wu-​wei, in other words, enables one to be simul-
taneously spontaneous and virtuous. Or as Edward Slingerland puts it, “The goal
is to acquire the ability to move through the physical and social world in a manner
that is completely spontaneous and yet fully in harmony with the proper order of
the natural and human worlds” (2014, 14). Early Chinese scholars differed on how
to achieve wu-​wei. Confucius, for example, emphasized an exacting and difficult set
of rituals—​akin in some ways to the kind of training Aristotle stressed—​while later
thinkers such as Laozi reacted against this approach and recommended instead let-
ting go of any effort to learn how to act effortlessly.
But for all of this, spontaneity has many profound and dangerous vices, and it is
surely wrong to think that acting with abandon, just as such, promotes a well-​led life.
This is the problem with the way in which these ideas have percolated into the con-
temporary American imagination, from Obi-​Wan Kenobi teaching Luke Skywalker
to “trust your feelings,” to the Nike corporation motivating athletes to “just do it,”
to Malcolm Gladwell (2007) touting “the power of thinking without thinking.”
One could easily come away from these venues thinking all too straightforwardly, “I
should just stop thinking so much and trust all my instincts!”
In some ways, we all already know this would be a terrible idea. An understand-
ing of the dangers of acting on whims and gut feelings is embedded in common
folk attitudes (e.g., are there any parents who haven’t implored their misbehaving
kid to “think before you act”?). It is also embedded in Western cultural and phil-
osophical history (e.g., in Plato’s tripartite account of the soul). Sentiments like
Kahlil Gibran’s—​“ Your soul is oftentimes a battlefield, upon which your reason and
your judgment wage war against your passion and your appetite” (1923, 50)—​are
also familiar. These sentiments have been taken up in toned-​down form by research
psychologists too. For example, Keith Payne and Daryl Cameron (2010) channel
Hobbes in characterizing our implicit attitudes as “nasty, brutish, and short-​sighted,”
in contrast to our reasoned beliefs and judgments.
Perhaps the most familiar worry is with simply being impulsive. Indeed, we often
call a nondeliberative action spontaneous when it has a good outcome but impul-
sive when it has a bad outcome.20 The costs of impulsivity are well documented.
Walter Mischel’s “marshmallow experiments” are a locus classicus. Beginning with a
group of three- to six-year-olds at the Bing Nursery School at Stanford University in

20. See the discussion in Chapter 5 of experimental research on folk attitudes toward the “deep self.”

the late 1960s, Mischel and colleagues began to test how children delay gratification
by offering them a choice: one marshmallow now or two in fifteen minutes—but only if the child could wait the fifteen minutes without eating the first marshmallow. What’s
striking about this study only emerged over time: the children who waited for two
marshmallows were better at coping with and responding to stress more than ten
years later, and even had higher SAT scores when they applied to college (Mischel
et al., 1988).
The marshmallow experiments helped give rise to an entire subfield of research
psychology focused on self-​control and self-​regulation. The objects of study in this
field are familiar. For example, the four chapters dedicated to “Common Problems
with Self-​Regulation” in Kathleen Vohs and Roy Baumeister’s Handbook of Self-​
Regulation (2011) focus on substance addictions, impulsive eating, impulsive shop-
ping, and attention-​deficit/​hyperactivity disorder. These are all problems with
impulsivity, and some in the field simply define good self-​regulation as the inhibition
of impulses (e.g., Mischel et al., 1988). The consequences of uncontrolled impulses
can be huge, of course (but so too can the conceptualization of all impulses as things
to be controlled, as I discuss later).21
Problems like these with impulsivity are compounded, moreover, when one’s
deliberative capacities are diminished. Behavioral economists have shown, for
example, that people are much more likely to impulsively cheat, even if they iden-
tify themselves as honest people, if they are cognitively depleted (Gino et al., 2011).
This finding is an example of a broad phenomenon known as “ego depletion”
(Baumeister et al., 1998).22 The key idea is that self-​control is a limited resource,
akin to a muscle, and that when we stifle impulses or force ourselves to focus on
difficult tasks, we eventually tire out. Our self-​regulatory resources being depleted,
we then exhibit poorer than usual self-​control on subsequent tasks that require self-​
control. Once depleted (by, e.g., suppressing stray thoughts while doing arithme-
tic), individuals are more prone to break their diets (Vohs and Heatherton, 2000),
to drink excessively, even when anticipating a driving test (Muraven et al., 2002),

21. A case in point (of the idea that it is problematic to conceptualize all impulses as things to be controlled) is the marshmallow experiment itself, for which various explanations are available, only some of which have to do with impulsivity. While the most commonly cited explanation of Mischel’s findings focuses on the impulsivity of the children who couldn’t wait for a second marshmallow, another possibility is that the marshmallow experiment gauges the degree to which children trust the experimenter and have faith in a reliable world. See, for instance, Kidd and colleagues (2013). Thanks to Robin Scheffler for pointing out this possibility to me. Of course, the degree to which one trusts others may itself be a product of one’s spontaneous reactions.
22. There is increasing concern about research on ego depletion, however. One recent meta-analysis found zero ego depletion effects across more than two thousand subjects (Hagger et al., 2016). But even some who have taken the reproducibility crisis the most seriously, such as Michael Inzlicht, continue to believe in the basic ego depletion effect. See http://soccco.uni-koeln.de/cscm-2016-debate.html and see the Appendix for discussion of reproducibility in psychological science.

and to feel and express anger (Stucke and Baumeister, 2006). In cases like these,
it seems clear that we would be better at reining in our problematic impulses if our
deliberative capacities weren’t depleted.
In addition to undermining well-​being, spontaneity can lead us to act irration­
ally. For example, in the mid-​1990s, AIDS activist and artist Barton Benes caused
controversy with an exhibition called Lethal Weapons, which showed at museums
like the Chicago Art Institute and the Smithsonian. The exhibition consisted of
ordinary objects filled with Benes’s or others’ HIV-​infected blood. In an inter-
view, Benes explained how he came up with the idea for Lethal Weapons.23 He had
experienced the worst of the early days of the AIDS epidemic in the United States,
watching nearly half his friends die of the disease. Benes recalled the rampant fear
of HIV-​positive blood and how doctors and nurses would avoid contact with him.
(Benes was HIV-​positive, and like most people at the time, he knew that HIV
affected patients’ blood, but didn’t know much else.) While chopping parsley in his
apartment one night, Benes accidentally cut his finger. On seeing his own blood on
his hands, Benes felt a rush of fear. “I was terrified I would infect myself,” he thought.
The fear persisted long enough for Benes to find and wear a pair of rubber gloves,
knowing all the while that his fear of contaminating himself made no sense, since
he already had the disease. Later, Benes saw the experience of irrationally fearing
infection from his own blood as signifying the power of the social fears and cultural
associations with HIV and blood. These fears and associations resided within him
too, despite his own disavowal of them. Lethal Weapons was meant to force these
fears and associations into public view.
Benes’s reaction was spontaneous and irrational. It was irrational in that it
conflicted with what he himself believed (i.e., that he couldn’t infect himself
with HIV). In this sense, Benes’s reaction is structurally similar to other “belief-​
discordant” behaviors, in which we react to situations one way, despite seeming
to believe that we ought to react another way. Tamar Szabó Gendler has discussed
a number of vivid examples, including people who hesitate to eat a piece of fudge
molded to resemble feces, despite acknowledging that the ugly fudge has the same
ingredients as a regular piece they had just eaten (2008a, 636); sports fans who
scream at their televisions, despite acknowledging that their shouts can’t tran-
scend space and time in order to affect the game’s outcome (2008b, 553, 559);
and shoppers who are more willing to purchase a scarf that costs $9.99 than one
that costs $10.00, despite not caring about saving a penny (2008a, 662). One of
Gendler’s most striking examples is adapted from the “problem of the precipice”
discussed by Hume, Pascal, and Montaigne. The modern-​day equivalent of what
these early modern philosophers imagined exists today in Grand Canyon National

23. See http://www.poz.com/articles/217_11353.shtml. For images of Benes’s artwork, as well as discussion of his story, see http://www.radiolab.org/story/308930-lethal-weapons/.



Park (among other places, such as the Willis [Sears] Tower in Chicago). It is a
glass walkway extending seventy feet out from the rim of the canyon called the
“Skywalk.”
Tourists who venture out on the Skywalk act terrified. (Search for “Grand Canyon
Skywalk” on YouTube for evidence.) They shake, act giddy, and walk in that slow,
overly careful way that dogs do when they cross subway grates. But as Gendler
makes clear, there is something strange about these behaviors. Presumably the tour-
ists on the Skywalk believe that they are safe. Would they really venture onto it if they
didn’t? Wouldn’t they accept a bet that they’re not going to fall through the platform?
Wouldn’t you?
Research by Fiery Cushman and colleagues (2012) illuminates similar phenom-
ena in controlled laboratory settings. Diabolically, they ask subjects to perform pre-
tend horrible actions, like smashing a fake hand with a hammer and smacking a
toy baby on a table. Participants in these studies acknowledge that their actions are
pretend; they clearly know that the hand and the baby are just plastic. But they still
find the actions far more aversive than equally unharmful analogous actions, like
smashing a nail with a hammer or smacking a broom on a table.
Some of these examples lead to worse outcomes than others. Nothing particularly
bad happens if you shake and tremble on the Skywalk despite believing that you’re safe.
Even if nothing particularly bad happens as a result, however, the skywalker’s fear and
trembling are irrational from her own perspective. Whatever is guiding her fearful reac-
tion conflicts with her apparent belief that she’s safe. One way to put this is that the
skywalker’s reaction renders her “internally disharmonious” (Gendler, 2008b). Her
action-​guiding faculties are out of sync with her truth-​taking faculties, causing dishar-
mony within her. Plato might call this a failure to be just, for the just person is one who
“puts himself in order, harmonizes . . . himself . . . [and] becomes entirely one, moderate
and harmonious” (Republic, 443de in Plato, 380 bce/​1992; quoted in Gendler, 2008b,
572). Whether or not failing to be just is the right way to put it, I suspect that most
people can relate to the skywalker’s disharmony as some kind of shortcoming, albeit a
minor one.
In other cases, of course, irrational belief-​discordant behavior can lead to very
bad outcomes. Arguably this is what Benes’s exhibition was meant to illustrate. The
history (and persistence) of horrific social reactions to HIV and AIDS were very
likely driven, at least in part, by the irrational fears of ordinary people. While HIV
can be lethal, so too can mass fear. Gendler discusses research on implicit bias in a
similar vein. Not only do implicit biases cause us to act irrationally, from the perspec-
tive of our own genuinely held beliefs, but they also cause us to fail to live up to our
moral ends.
A plethora of research on the “moral myopia” (Greene and Haidt, 2002) of spon-
taneous judgment supports a similar conclusion. Consider perceptions of “moral
luck.” Most people believe that there is a significant moral difference between
intended and accidental actions, such that accidental harms ought to be judged or

punished more leniently than intended harms (or not at all).24 But when the out-
comes of accidents are bad enough, people often feel and act contrary to this belief,
and seem to do so on the basis of strong, and seemingly irrational, gut feelings.
Justin Martin and Cushman (2016) describe the case of Cynthia Garcia-​Cisneros,
who drove through a pile of leaves, only to find out later (from watching the news
on TV) that two children had been hiding under the pile. The children were tragi-
cally killed. Martin and Cushman described this case to participants in a study and
found that 94% of participants thought that Garcia-​Cisneros should be punished.
But when another group heard the same story, except they were told that the bumps
Garcia-​Cisneros felt under her car turned out to be sticks, 85% thought that she
deserved no punishment. This seems irrational, given that it was a matter of pure
bad luck that children were hiding in the leaves. Moreover, even though this particu-
lar study focused on participants’ considered verbal judgments, it seems likely that
those judgments were the product of brute moral intuitions. When experimental
participants are placed under cognitive load, and are thus less able to make delib-
erative judgments (compared to when they are not under load), they tend to judge
accidental harms more harshly, as if those harms were caused intentionally (Buon
et al., 2013). This is not due to a failure to detect harm doers’ innocent intentions.
Rather, it seems that people rely on an intuitive “bad outcome = punishment” heu-
ristic when they are under load.25
Experimental manipulations of all sorts create analogous effects, in the sense of
showing how easily our intuitive nondeliberative judgments and actions can “get
it wrong,” morally speaking. People tend to judge minor negative actions more
harshly when they are primed to feel disgust, for example (Wheatley and Haidt,
2005). Worse still, judges in Israel were shown to render harsher verdicts before
lunch than after (Danziger et  al., 2011). Both of these are examples of delibera-
tive judgments going awry when agents’ (sometimes literal) gut feelings get things
wrong. As Mahzarin Banaji (2013) put it, summarizing a half century of research on
“bounded rationality” like this, we are smart compared with some species, but not
smart enough compared with our own standards of rationality and morality.26 Often
we fail to live up to these standards because we are the kinds of creatures for whom
spontaneity can be a potent vice.

24. Although see Velleman (2015) for a discussion of interesting cross-cultural data suggesting that not all cultures distinguish between intended and accidental harms.
25. Martin and Cushman (2016) argue that judgments about moral luck are actually rationally adaptive. See also Kumar (2017) for thoughtful discussion.
26. In other words, we are error-prone in surprising ways, by our own standards of rationality and morality, for example, even when we are experts in a domain or when we have perfectly good intentions. See Kahneman (2003) for a summary of the many “deficits” of spontaneous and impulsive decision-making. For related discussion of the moral pitfalls of acting on the basis of heuristic judgments, see Sunstein (2005). For a brief reply to Sunstein that recommends a research framework similar in some respects to what I present in this book, see Bartsch and Wright (2005).

2. Implicitness

I have suggested that understanding these virtues and vices of spontaneity requires
understanding the implicit mind. But what does “implicit” mean? Ordinarily, for
something to be implicit is, roughly speaking, for it to be communicated indirectly.27
When a friend repeatedly fails to return your calls, disrespect may be implicit in her
inaction. If she returns your calls, but just to tell you that you’re not worth her time,
then her disrespect is explicit. The disrespect is obvious, intended, and aboveboard.
You don’t have to infer it from anything.28 Similarly, if I won’t make eye contact with
you, my unfocused gaze suggests that perhaps I feel uncomfortable. My discomfort
is implicit in my lack of eye contact. Or imagine a terrific painter. Her deep under-
standing of color, texture, shape, and so on may be implicit in her work. We infer
her skill from what she does on the canvas, notwithstanding whatever she might or
might not say explicitly about her paintings.
The distinction between explicit and implicit may be familiar in these examples,
but its nature is not clear. At least three senses seem to be implied when we say that
so-​and-​so’s feelings, thoughts, or skills are implicit in her behavior:

Implicit = unarticulated: Thoughts and feelings that are implicit are often not verbalized. Just as it would render her disrespect explicit if your friend
not verbalized. Just as it would render her disrespect explicit if your friend
called to say that you’re not worth her time, the discomfort bespoke by my
lack of eye contact would be obviously more explicit if I told you, “It’s hard
to look you in the eye because you make me feel uncomfortable.” Similarly,
the painter’s understanding of color and so on would be explicit if she said,
“Dabs of bright color should never superimpose hard dark lines.” The idea
here is that thoughts and feelings become explicit when they are articu-
lated, whether in verbal speech, as in these examples, or writing, or some
other form of clear communication.29
Implicit = unconscious: Perhaps the reason we don’t verbalize our
thoughts and feelings, when they are implicit, is that we are unaware of
them. When I won’t make eye contact with you, implicitly suggesting to
you that I’m uncomfortable, it could be the case that I have no idea that
I’m in this state. Who hasn’t been told by a loved one, “You seem sad” or
“You seem angry,” and realized only then that yes, it’s true, you have been

27. The Oxford English Dictionary offers two additional common uses: (1) essentially connected with (e.g., “the values implicit in the school ethos”); and (2) absolute (e.g., “an implicit faith in God”). See https://en.oxforddictionaries.com/definition/implicit.
28. I’m not referring to inference in any technical sense. Understanding explicit assertions, like “You’re not worth my time,” may require inference-making.
29. Of course, posture and eye contact and so on are forms of communication, and sometimes they can convey a message clearly.

feeling sad or angry without knowing it at the time? This way of thinking
might also elucidate cases in which people seemingly can’t articulate their
feelings or thoughts. Perhaps the painter can’t express her understanding
of how to paint simply because she is unaware of how she paints the way
she does. She isn’t choosing not to say. She can’t explain it because perhaps
it’s outside the ambit of her self-​awareness.
Implicit = automatic: A final idea is that while explicit mental states are
under the agent’s control, implicit states are automatic. That I avoid eye
contact with you seems automatic in the sense that at no point did I form
the explicit intention to “avoid eye contact.” In the right circumstances,
with the right cues, it just happens. It might even be the case that if I set
myself the intention to make eye contact with you, I’ll end up avoiding
your eyes all the more, in just the way that you’ll automatically think of a
white bear if you try not to think of one (Wegner et al., 1987). Similarly,
the painter might be able to paint successfully only when “letting it flow,”
giving up the effort to enact a premeditated plan.

Each of these senses of implicitness draws upon a long history, running from
Plato’s and St. Augustine’s ideas about parts of the soul and self-​knowledge, through
Descartes’s and Leibniz’s ideas about differences between human and nonhuman
animal minds, to contemporary dual-​systems psychology.30 Often these senses of
what it is for something to be implicit overlap, for example, when I fail to articulate
a mood that I’m unaware of being in. I also take it that while implicit mental states
are often unarticulated, unconscious, and/​or automatic, sometimes they are artic-
ulated, conscious, and, to some extent, controlled. (Similarly, in my view, explicit
mental states can sometimes be unarticulated, unconscious, and automatic.) For
these reasons, moving forward, I do not define implicit mental states as being unar-
ticulated, unconscious, or automatic, although they often have some combination
of these qualities. Rather, as I describe below (§§3.1 and 3.2), I define implicit men-
tal states in terms of their cognitive components and their distinctive place in psy-
chological architecture.31
What stand out about the history of ideas underlying implicitness, particularly
in the twentieth century, are the striking parallels between streams of research that
focused, more or less exclusively, on either the virtues or the vices of spontaneity.
Consider two examples.

30. For insightful discussion of the history of dichotomous views of the mind, see Frankish and Evans (2009).
31. I do, however, stress the automaticity of certain processes within implicit states. See Chapters 2 and 3. Note also that I do not offer a companion account of explicit mental states. Given that I argue, in Chapter 3, that implicit mental states are not beliefs, I am happy to consider explicit mental states beliefs, or belief-like, and to rely upon an established theory of belief. See Chapter 3.

Beginning roughly in the middle of the twentieth century, two bodies of research
focused on the phenomenon of, as Michael Polanyi put it, “knowing more than we can tell.” In The Tacit Dimension (1966/2009), Polanyi argued that it is a pervasive
fact about human beings that we can do many things skillfully without being able
to articulate how we do them (cf. Walter Payton’s claim in §1). Human beings are
adept at recognizing faces, for example, but few of us can say anything the least bit
informative about how we do this. Polanyi’s focal examples were scientists, artists,
and athletes who exemplified certain abilities but nevertheless often lacked an artic-
ulate understanding of those abilities. Contemporary research in the tacit knowl-
edge tradition similarly focuses on skillful but unarticulated abilities, such as the
tacit knowledge of laboratory workers in paleontology labs (Wylie, 2015).
For Gilbert Ryle (1949) and for “anti-​intellectuals” writing in his wake, the
reason that people often know more than they can tell is that knowing how to do
something (e.g., knowing how to paint) is irreducible to knowing a set of propo-
sitions (e.g., knowing that red mixed with white makes pink). Ryle’s concept of
“knowledge-​how,” the distinct species of knowledge irreducible to “knowledge-​
that,” exerts a greater influence on contemporary philosophy than Polanyi’s concept
of tacit knowledge. Perhaps this is in part due to Ryle’s stronger claim, namely, that
knowledge-​how isn’t simply unarticulated, but is in fact unavailable for linguistic
articulation.
For most contemporary psychologists, however, the phrase “knowing more
than we can tell” is probably associated with Richard Nisbett and Timothy Wilson’s
classic paper, the title of which tellingly (though unintentionally, I assume) inverts
Polanyi’s phrase. In “Telling More than We Can Know: Verbal Reports on Mental
Processes” (1977), Nisbett and Wilson captured a less admirable side of what peo-
ple often do when they don’t know how or why they do things: they make shit up.
In the case of Nisbett and Wilson’s seminal study, people confabulated their reasons
for choosing a nightgown or pair of pantyhose from a set of options (see discussion
in Chapter 6). Implied in research like this on “choice blindness” and confabulation
is that these phenomena lead people to make decisions for suboptimal reasons. This
study and the large body of research it inspired, in other words, take the potential
gulf between the reasons for which we act and the reasons we articulate to explain
our actions to be a problem. A vice.
What in the tacit knowledge and knowledge-​how traditions appears as skillful
activity without accompanying articulation appears in the confabulation literature
as a source of poor decision-making, with confabulation covering its tracks. Very sim-
ilar phenomena are described in these streams of research, the salient difference
between them being the normative status of the objects of study.
A second example stems from twentieth-​century theories of learning. William
James’s (1890) writings on habit arguably inaugurated a century-long focus on the
idea that learning involves the automatization of sequences of information. William
Bryan and Noble Harter’s (1899) study of telegraphers exemplified this idea. At
first, they observed, telegraphers had to search for the letters effortfully. As they
improved, they could search for whole words. Then they could search for phrases
and sentences, paying no attention to individual letters and words. Finally, they
could search for the meaning of the message. This idea—​of learning by making auto-
matic more and more sequences of information—​was taken up in several distinct
streams of research. In the 1960s, Arthur Reber developed an influential account
of “implicit learning,” the most well-​known aspect of which was the artificial gram-
mar learning paradigm. What Reber (1967) showed was that participants who were given strings of letters to memorize would automatically absorb the grammatical rules embedded in those strings (unbeknownst to them).32 In his account of “skilled
coping,” which he used to critique the idea of artificial intelligence, Hubert Dreyfus
(2002a,b, 2005, 2007a,b) described the development of expertise in strikingly simi-
lar ways to Bryan and Harter (1899). George Miller (1956), in his celebrated paper
on “the magical number seven, plus or minus two,” gave this process—of the autom-
atization of action through grouping sequences of action—​the name “chunking.”
Some even applied the basic idea to an account of human progress. Alfred North
Whitehead wrote:

It is a profoundly erroneous truism, repeated by all copy-books and by
eminent people making speeches, that we should cultivate the habit of
thinking of what we are doing. The precise opposite is the case. Civilization
advances by extending the number of operations which we can perform
without thinking about them. Operations of thought are like cavalry
charges in a battle—​they are strictly limited in number, they require fresh
horses, and must only be made at decisive moments. (1911, 61)

Meanwhile, as these researchers of various stripes used the concept of automaticity to illuminate habit, skill, and (perhaps a bit grandly) the progress of history
itself, psychologists interested in understanding social cognition deployed the
same concept in quite a different way. Richard Shiffrin and Walter Schneider’s twin
papers (1977) on controlled and automatic information processing are sometimes
cited as the origin of contemporary dual-​process theories of the mind. Their focus
was on distinguishing the controlled search for information from the automatic
detection of patterns (in, e.g., strings of digits and letters). Notably, though, Shiffrin
and Schneider’s papers received the most uptake in dual-​process theories of social
cognition, in particular in research on the automaticity of stereotyping and preju-
dice (see, e.g., Dovidio and Gaertner, 1986; Smith and Lerner, 1986; see Frankish
and Evans, 2009, for discussions). One of the core ideas here was that, just as skilled
typists can find the location of letter keys without effort or attention once they have

32. Reber (1989) called this implicit learning a form of tacit knowledge.

“overlearned” the location of the keys, many social judgments and behaviors are
the result of repeated exposure to common beliefs about social groups. The semi-
nal work of Russell Fazio and colleagues showing that attitudes about social groups
can be activated automatically was influenced by research in cognitive psychology
like Shiffrin and Schneider’s. Fazio’s (1995) “sequential priming” technique mea-
sures social attitudes by timing people’s reactions (or “response latencies”) to ster­
eotypic words (e.g., “lazy,” “nurturing”) after exposing them to social group labels
(e.g., “black,” “women”). Most people are significantly faster to identify a word like
“lazy” in a word scramble after being exposed to the word “black” (compared with
“white”). A faster reaction of this kind is thought to indicate a relatively automatic
association between “lazy” and “black.” The strength of this association is taken to
reflect how well learned it is.
Here, too, the core idea—​implicit learning understood in terms of automaticity—​
gives rise to structurally similar research on the virtues and vices of spontaneity.
What follows, then, constitutes a hypothesis. The hypothesis is that we can come to
have a better understanding of the virtues and vices of spontaneity by thinking of
both as reflecting the implicit mind.

3.  The Roadmap


Two chapters, following this one, are dedicated to each of the book’s three sets of
central questions—​the nature of the implicit mind; the relationship of the implicit
mind to the self; and the ethics of spontaneity. I describe each chapter below (§3.1–​
§3.3). In short, I argue in Chapters 2 and 3 that paradigmatic spontaneous inclina-
tions are causally explained by “implicit attitudes.” I argue that implicit attitudes,
properly understood, constitute a significantly broader class than is often described
in the social psychological literature. On my view, implicit attitudes are not simply
likings or dislikings outside agents’ conscious awareness and control. Chapters 2
and 3 develop my account of implicit attitudes. Then, in Chapters 4 and 5, I argue
that paradigmatic spontaneous actions are “ours,” in the sense that they reflect upon
us as agents. I develop an account of actions that are “attributable” to agents in this
sense, based on the notion that actions are attributable to agents when the person
cares—​in a broad sense—​about the intentional objects of her action. I then argue
that in paradigmatic cases of spontaneity, agents care about the intentional objects
of their implicit attitudes. I then go on to situate this claim with respect to questions
about moral responsibility. In Chapters 6 and 7, I consider the ethics of implicit
attitudes. In central cases, for both practical and conceptual reasons, we cannot sim-
ply deliberate our way to having more ethical inclinations and dispositions to act
spontaneously. Instead, I argue, we must cultivate ethical implicit attitudes through
practice and training. I  review the literature on self-​regulation and implicit atti-
tude change, and provide an “aerial map” of empirically supported techniques for improving these states. Finally, in Chapter 8, I offer a short conclusion, noting what
I take to be the book’s most significant upshots as well as key unanswered questions,
and in the Appendix, I describe measures like the IAT in more detail and discuss
challenges facing IAT research, as well as challenges facing psychological science
more broadly.

3.1 Mind
I begin in Chapter 2 with a description of the cognitive components of the mental
states that are implicated in paradigmatic cases of spontaneous action. I describe
four components: the perception of a salient Feature in the ambient environment;
the experience of bodily Tension; a Behavioral response; and (in cases of success)
a felt sense of Alleviation.33 States with these “FTBA” components causally explain
many of our spontaneous inclinations. Identifying these components, inter alia,
draws bounds around the class of spontaneous actions with which I’ll be concerned.
First, I  characterize the perception of salient features in the environment in
terms of their “imperatival quality.” Features direct agents to behave in a particu-
lar way immediately, in the sense that an imperative—​for example, “Heads up!”—​
commands a particular response. I offer an account of how ambient features of the
environment can be imperatival in this way. Second, I argue that the feelings that
are activated by an agent noticing a salient feature in the environment paradigmati-
cally emerge into an agent’s awareness as a subtle felt tension. I describe this feel-
ing of tension as “perceptual unquiet.” Felt tension in this sense can, but does not
necessarily, emerge into an agent’s focal awareness. I describe the physiological cor-
relates of felt tension, which are explicable in terms of the processes of low-​level
affect that play a core role in guiding perception. Third, I argue that tension in this
sense inclines an agent toward a particular behavior. This is why it is “tension” rather
than simply affect; it is akin to feeling (perhaps subtly, perhaps strongly) pulled or
pushed into a particular course of action. At this stage in the discussion I do not
distinguish between actions, properly speaking, and behavior (but see Chapter 5).
The important claim at this point is that particular features and particular feelings
are tightly tied to particular behaviors. They are, as I discuss later, “co-​activating.”
Features, tensions, and behaviors “cluster” together, in other words, and their tight
clustering provides much of their explanatory power. The fourth component of
FTBA states is “alleviation.” When a spontaneous action is successful, felt tension
subsides and the agent’s attention is freed up for the next task. Alleviation obtains
when an action is successfully completed, in other words, and fails to obtain when the agent is unsuccessful. My account of this process of alleviation is rooted in theories

33. My account of FTBA states is most directly inspired by Gendler’s (2008a,b) account of the RAB—Representational-Affective-Behavioral—content of “alief.” See Chapter 3. I originally presented a version of this account of FTBA states with Alex Madva in Brownstein and Madva (2012b).

of “prediction-error” learning. The way in which tension is alleviated or persists helps to explain how an agent’s dispositions to act in spontaneous or impulsive ways
change—​and potentially improve—​over time.
After describing the components of FTBA states in Chapter  2, I  discuss their
place in cognitive architecture in Chapter 3. Are they reflexes, mere associations,
beliefs, or “aliefs”? Taking a functional approach, I distinguish their place in cogni-
tive architecture from each of these possibilities. I argue that FTBA states represent
a sui generis kind of mental state: implicit attitudes. I outline the defining feature
of implicit attitudes, which is that their F-​T-​B-​A components are co-​activating.
This leads these states to be arational, in the sense that they are not suitable for
integration into patterns of inference-​making (Levy, 2014; Madva, 2012, 2016b).
Nevertheless, I argue that implicit attitudes can provide defeasible normative guid-
ance of action. I develop this claim in terms of what Hannah Ginsborg (2011) has
called “primitive normativity.”

3.2 Self
How should we relate to our own implicit minds? This is a hard question given that
implicit attitudes are unlike ordinary “personal” states like beliefs, hopes, and so
on. But implicit attitudes are also not mere forces that act upon us. This ambiguity
renders questions of praise, blame, and responsibility for spontaneity hard to assess.
For example, do people deserve blame for acting on the basis of implicit biases, even
when they disavow those biases and work to change them? Analogous questions are
no less pressing in the case of virtuous spontaneity. When the athlete or the artist
or the ethical phronimos performs well, she often seems to do so automatically and
with little self-​understanding. Recall Walter Payton’s statement that he “just acts”
on the field. Payton illuminates the way in which some spontaneous actions seem
to flow through us, in a way that seems distinct from both “activity” and “passivity”
in the philosophy of action.
In Chapter  4, I  adopt an attributionist approach to the relationship between
action and the self. Attributionism is broadly concerned with understanding the
conditions under which actions (or attitudes) reflect upon the self.34 Attributionist
theories typically argue that actions can reflect upon the self even when those
actions are outside an agent’s control, awareness, and even, in some cases, reasons-​
responsiveness. This makes attributionism well suited to the idea that spontaneous
action might reflect upon who one is. I argue for a “care-​based” conception of attri-
butionism, according to which actions that reflect upon the self are those that reflect
upon something the agent cares about. More specifically, I  argue for an affective
account of caring, according to which cares are tightly connected to dispositions to

34. My focus will be on attributability for actions, not attitudes as such.

have particular kinds of complex feelings. To care about something, on this view, is
to be affectively and motivationally tethered to it. To care about the Philadelphia
Phillies, for instance, is to be disposed to be moved in various ways when they win
or lose. My care-​based account of attributability is very inclusive. I argue that a great
many of our actions—​including those caused by our implicit attitudes—​reflect
upon our cares. The result is that implicit attitudes are indeed “ours,” and as such
they open us to “aretaic” appraisals—​evaluations of us in our capacity as agents.
It is important to note that my focus in Chapter 4 is not on making a metaphysical
claim about the ultimate nature of the self. The self composed of one’s cares is not a
thing inside one’s body; it is not the Freudian unconscious. Rather, it is a concept to be
used for distinguishing actions that are mine, in the attributability sense, from actions
that are not mine. My use of the term “self” will therefore depart from classical philo-
sophical conceptions of the self, as found in Hume, for instance. Moreover, in the sense
I will use it, the self is typically far from unified. It does not offer a conclusive “final say”
about who one is. I clarify the relationship of the self in this sense to what some theo-
rists call the “deep self.”
The upshot of Chapter 4 is that many more actions, and kinds of action, open us to
aretaic appraisal than other theorists have suggested. Spontaneous actions that conflict
with what we believe and intend to do, that unfold outside our focal awareness and
even outside our direct self-​control, still signal to others what kinds of people we are.
Chapter 5 puts these claims into dialogue with received theories of attributionism and
responsibility. What does it mean to say that we are open to praise and blame for actions
that we can’t control? What are the implications of thinking of the self as fundamentally
disunified? In answering these kinds of questions, I respond to potential objections to
my admittedly unusual conception of actions that reflect on the self and situate it with
respect to more familiar ways of thinking about agency and responsibility.

3.3 Ethics
In the broadest terms, Chapters  2 and 3 offer a characterization of the implicit
mind, and Chapters 4 and 5 argue that the implicit mind is a relatively “personal”
phenomenon, in the sense that it reflects upon us as agents. Chapters 6 and 7 then
investigate the ethics of spontaneity by considering both what works to change our
implicit attitudes and how to best conceptualize the most effective approaches.
In Chapter 6, I argue that deliberation is neither necessary nor always warranted
in the effort to improve the ethical standing of our implicit attitudes. In part this
stems from the fact that often we cannot deliberately choose to be spontaneous.
In some cases, this is obvious, as the irony of the Slate headline “Hillary Clinton
Hatches Plan to Be More Spontaneous” makes clear.35 Even in one’s own life,

35. See http://www.slate.com/blogs/the_slatest/2015/09/08/hillary_clinton_reboot_the_nyt_reports_she_will_show_morehumor_and_heart.html.

without Clintonesque political scheming, planning to be spontaneous won’t work. In order to enjoy the sense of freedom and abandon that can come from being
spontaneous, one can’t make a plan to be spontaneous next Thursday at 2:15 p.m.
This won’t work because deliberative planning to be spontaneous is often self-​
undermining. In cases like this, spontaneity is what Jason D’Cruz (2013) calls
“deliberation-​volatile.” One’s reasons to make spontaneous choices (e.g., drop-
ping one’s work and going for a walk) are volatile upon deliberation; once delib-
erated upon, they degrade as reasons for action. I expand upon D’Cruz’s account
of deliberation-​volatility and show it to be relevant to many cases of actions
involving implicit attitudes. I also consider the role of deliberation in combating
unwanted implicit biases.
Chapter 7 develops an alternative: in order to know when and how to trust our
implicit minds, and to minimize the risk of acting irrationally or immorally, we have
to cultivate a “feel” for acting spontaneously. This notion risks a regress, however.
Question: “How do I know whether to trust my gut right now?” Answer: “You have
to trust your gut about whether to trust your gut,” and so on indefinitely. Without
a deliberative route to safely acting spontaneously, it may seem that we are stuck in
this regress. The aim of Chapter 7 is to help us dig our way out of it. Repurposing
terms from Daniel Dennett (1989), I consider characterizing the self-​improvement
of our implicit attitudes as a way of taking the “intentional stance,” the “physical
stance,” or the “design stance” toward ourselves. Roughly, these mean treating our
implicit attitudes as if they were things to be reasoned with, things to be manipu-
lated as physical objects, or things to be re-​engineered. Because implicit attitude
change in fact requires a hybrid of all three of these approaches, I coin a new term
for the most effective strategy. In attempting to improve our implicit attitudes, we
should adopt the habit stance. This means treating these mental states as if they were
habits in need of (re)training. I provide an “aerial map” of the most empirically well-
supported techniques for (re)training our spontaneous inclinations and automatic
responses. The techniques I  discuss are culled from both psychological research
on self-​regulation—​including the self-​regulation of implicit bias, health behavior,
addiction, and the like—​and from research on coaching and skill acquisition. After
discussing these techniques, I consider broad objections to them, in particular the
worry that adopting the habit stance in some way denigrates our status as rational
agents.

4.  Looking Forward


Spontaneity is a pervasive feature of human decision-​making and action, and many
scholars interpret it in ominous terms. For example, in a review article, Julie Huang
and John Bargh (2014) propose understanding individual human agents as ves-
sels of independently operating “selfish goals.” These goals operate spontaneously when we encounter the right cues, and they do not require an agent’s awareness or
endorsement. Huang and Bargh draw from some of the same research I discussed
earlier, but cast it in terms that sever the springs of human action from human agents.
They focus on studies, as they describe them, that highlight “external influences that
override internal sources of control such as self-​values and personality” and models
that “[eliminate] the need for an agentic ‘self ’ in the selection of all behavioral and
judgmental responses” (2014, 122–​123).
One way to summarize this is to say, borrowing an apt phrase from Kim Sterelny
(2001), that we are “Nike organisms.” Much of what we do, we just do. This is not
to say that our actions are simple or divorced from long processes of enculturation
and learning. But when we act, we very often act spontaneously. One question at the
heart of this book is whether being Nike organisms is thoroughly ominous in the
way that Huang and Bargh and many others suggest.
A broad account of spontaneity—​focused on both its virtues and vices—​resists
this kind of ominous interpretation. The result is a book that is pluralist in spirit and
strategy. It is pluralist about what makes a particular kind of mental state a potentially
good guide for action. One way to put this is that mental states can be “intelligent”
without displaying the hallmarks of rational intelligence (such as aptness to inferen-
tial processing and sensitivity to logic, as I discuss in Chapter 3). It is also pluralist
about what makes an action “mine” and about the conditions of praise and blame
for actions. On the view I develop, the self may be conflicted and fractured, because
we often care about incompatible things. Folk practices of praise and blame mirror
this fact. And, finally, the book is pluralist about sound routes to ethical action. The
sorts of self-​cultivation techniques I discuss are thoroughly context-​bound; what
works in one situation may backfire in another. This means that there may be no
substantive, universally applicable principles for how to cultivate our spontaneous
inclinations or for when to trust them. But this does not foreclose the possibility of
improving them.
Overall, this pluralism may make for a messy story. Nevertheless, the payoffs
are worthwhile. No one (to my knowledge) has offered a broad account of the
structure of implicit attitudes, how they relate to the self, and how we can most
effectively regulate them.36 This account, in turn, contributes to our progress on a
number of pressing questions in the philosophical vicinity, such as how to concep-
tualize implicit/​automatic/​unconscious/​“System 1”-​type mental systems and how
to appropriately praise and blame agents for their spontaneous inclinations. Most
broadly, it promises to demonstrate how we might enjoy the virtues of spontaneity
while minimizing its vices.

36. Thus while I will consider many friendly and rival theories to the various parts of my account, there is no rival theory to consider to the overall view I present.



The progress I  make can be measured, perhaps, along two concurrent levels,
one broad and the other narrow.37 The broad level is represented by my claim that
understanding spontaneity’s virtues and vices requires understanding what I  call
the implicit mind. The narrow level is represented by my core specific claims about
implicit attitudes, their place in relation to the self, and the ethics of spontaneity.
I hope that these narrow claims promote the broad claim. However, it’s possible that
I’m right on the broad level, even if some of the specific views I advance turn out to
be mistaken, for either conceptual or empirical reasons (or both). I’ll be satisfied by
simply moving the conversation forward here. The concept of the implicit mind is
here to stay, which makes the effort of trying to understand it worthwhile.

37. I borrow this broad-and-narrow framework for thinking about the promise of a philosophical project from Andy Clark (2015).
PART ONE

MIND
2

Perception, Emotion, Behavior, and Change
Components of Spontaneous Inclinations

In 2003, near the Brackettsville Station north of the Rio Grande, a writer for the
New  Yorker named Jeff Tietz followed US Border Patrol agent Mike McCarson
as he “cut sign” across the desert. “Cutting sign” means tracking human or animal
movement across terrain. McCarson’s job was to cut sign of people trying to cross
the desert that separates Mexico from the United States. His jurisdiction covered
twenty-​five hundred square miles. To cover that much land, McCarson had to move
quickly and efficiently. Tietz describes his experience:

Ultimately, I began to see disturbance before I’d identified any evidence. A piece of ground would appear oddly distressed, but I couldn’t point to
any explicit transformation. Even up close, I couldn’t really tell, so I didn’t
say anything. But I kept seeing a kind of sorcerous disturbance. It didn’t
seem entirely related to vision—​it was more like a perceptual unquiet. After
a while, I pointed to a little region that seemed to exude unease and asked
McCarson if it was sign. “Yup,” he said. “Really?” “Yup,” McCarson said.
McCarson experienced this phenomenon at several additional orders
of subtlety. In recalcitrant terrain, after a long absence of sign, he’d say,
“There’s somethin’ gone wrong there.” I’d ask what and he’d say, “I don’t
know—​just disturbance.” But he knew it was human disturbance, and his
divinations almost always led to clearer sign. Once, we were cutting a cattle
trail—​grazing cows had ripped up the sign, which had been laid down with
extreme faintness—​and McCarson pointed to a spot and said, “I’m likin’
the way this looks here. I’m likin’ everything about this.” He perceived some
human quality in a series of superficial pressings no wider than toadstools,
set amid many hoof-​compressions of powerfully similar sizes and colors
and depths. I could see no aberration at all, from five inches or five feet: it was a cow path. But McCarson liked some physical attribute, and the rela-
tive arrangement and general positioning of the impressions. Maybe he
could have stood there and parsed factors—​probably, although you never
stop on live trails—​but he wouldn’t have had any words to describe them,
because there are no words. (Tietz, 2004, 99).

Cutting sign isn’t entirely spontaneous. McCarson’s skill is highly cultivated and,
moreover, it is deployed in service of an explicit (and perhaps objectionable) goal.1
Yet Tietz’s description of McCarson highlights key features of unplanned, on-​the-​
fly decision-​making. In particular, the description contains four features that point
toward the cognitive and affective components of the mental states that I will argue
are distinctive of the implicit mind.
First, McCarson “sees” what to do in virtue of extremely subtle signs in the envi-
ronment, like barely visible pressings in the dirt. Moreover, he has minimal, if any,
ability to articulate what it is he sees or what any particular sign means. This does
not mean that McCarson’s perceptual expertise is magic, despite the implication
of Tietz’s term “sorcerous disturbance.” McCarson’s is not another myth of the
chicken-sexer, who once upon a time philosophers thought could tell male from
female chicks through some act of seeming divination. Rather, McCarson notes
clues and sees evidence, but he does so on an order of magnitude far greater than
almost anyone else.
Second, McCarson’s perceptual expertise is affective, and in a distinctive way. He
seems to “see by feeling,” saying nothing more than that he “likes” the way something
in the terrain looks. Like the cues McCarson notices in the environment, however,
these action-​guiding feelings are extremely subtle. They are a far cry from full-​blown
emotions. Tietz’s term “perceptual unquiet” captures this nicely. On seeing the right
kind of disturbance, McCarson seems simply pulled one way or the other.
This experience of being pulled one way or the other leads to the third feature
of spontaneity that this vignette illuminates. McCarson’s particular perceptual and
affective experiences are tied to particular behavioral reactions. On the one hand,
McCarson just reacts, almost reflexively, upon seeing and feeling the right things.
On the other hand, though, his reactions are anything but random. They are coordi-
nated and attuned to his environment, even though McCarson never stops to “parse
factors.” That is, McCarson cuts sign nondeliberatively, yet expertly.
This is made possible by the fourth key feature, which is illuminated by Tietz’s
own experience of seeing sign. On the fly, spontaneous action involves distinctive
ways of learning. McCarson’s expertise has been honed over years through feedback
from trial and error, a process the beginning of which Tietz just barely experiences.

1. I find McCarson’s skill impressive, and, as I discuss, it exemplifies features of the implicit mind. However, I find the way he uses this skill—to enforce the immigration policies of the United States by hunting desperate people as if they were a kind of prey—deeply unsettling.

My account of the implicit mind begins with a description of these four com-
ponents of unplanned spontaneous inclinations. These are (1)  noticing a salient
Feature in the ambient environment; (2) feeling an immediate, directed, and affec-
tive Tension; (3) reacting Behaviorally; and (4) moving toward Alleviation of that
tension in such a way that one’s spontaneous reactions can improve over time.
Noticing a salient feature (F), in other words, sets a relatively automatic process in
motion, involving co-​activating particular feelings (T) and behaviors (B) that either
will or will not diminish over time (A), depending on the success of the action.
This temporal interplay of FTBA relata is dynamic, in the sense that every compo-
nent of the system affects, and is affected by, every other component.2 Crucially, this
dynamic process involves adjustments on the basis of feedback, such that agents’
FTBA reactions can improve over time.
By describing these components, I carve off the kinds of spontaneous actions
they lead to from more deliberative, reflective forms of action. The kinds of sponta-
neous actions with which I’m concerned, in other words, paradigmatically involve
mental states with the components I describe here. These components, and their
particular interrelationships, make up a rough functional profile.
I do not offer what is needed for this conclusion in this chapter alone. In this
chapter, I describe the FTBA components of spontaneous inclinations. These are
the psychological components of the phenomenon of interest in this book, in
other words. I describe how they interact and change over time. I also ground these
descriptions in relevant philosophical and psychological literature, noting alterna-
tive interpretations of the phenomena I describe. I do not, however, offer an argu-
ment in this chapter for their psychological nature. I do this in the next chapter,
where I argue that these FTBA components make up a particular kind of unified
mental state, which I call an implicit attitude. It could be, given the description I give
in this chapter alone, that the FTBA relata represent a collection of co-​occurring
thoughts, feelings, and behavior. The argument against this possibility—​that is,
the argument for the claim that these FTBA components make up a unified men-
tal state—​comes in the next chapter, where I locate states with these components
in cognitive architecture. This and the next chapter combined, then, give the argu-
ment for a capacious conception of implicit attitudes as states that have these FTBA
components.3

2. Clark (1999) offers the example of returning serve in tennis to illustrate dynamic systems: “The location[s] of the ball and the other player (and perhaps your partner, in doubles) are constantly changing, and simultaneously, you are moving and acting, which is affecting the other players, and so on. Put simply . . . [in dynamic systems] . . . ‘everything is simultaneously affecting everything else’ (Port and Van Gelder, 1995, 23).”
3. For ease of exposition, I will occasionally in this chapter distinguish states with FTBA components from an agent’s reflective beliefs and judgments. I beg the reader’s indulgence for doing so, as my argument that states with FTBA components are not beliefs comes in Chapter 3.

The distinction between spontaneous and deliberative attitudes is certainly rough, and there are ways in which these FTBA relata are affected by agents’ delib-
erative judgments, beliefs, and so on, and vice versa (i.e., these reflective states are
also affected by our implicit attitudes).4 But it is also true, I think, that this distinc-
tion between spontaneous and deliberative attitudes is fundamental to the kinds of
creatures we are. In defending his own dual-systems theory, as well as dichotomous
approaches to understanding the mind in general, Cushman offers a helpful politi-
cal analogy:

There is no sharp dividing line between Republican and Democrat. Multiple subdivisions exist within each group, and compromise, collabo-
ration, and areas of ideological agreement exist between the two groups.
So, on the one hand, it would overstate the case to claim that the American
political landscape is characterized by exactly two well-​defined ideologies
that interact exclusively in competition. But, on the other hand, without
distinguishing between the two political parties it would be impossible to
understand the [sic] American politics at all. So it goes with dual-​systems
theories of learning and decision-​making. (2013, 282)

While I do not focus on dual-​systems theories as such, I do think that it would be
nearly impossible to understand the human mind without distinguishing between
our spontaneous inclinations and dispositions, on the one hand, and our more
deliberative and reflective attitudes, on the other.5 This and the next chapter com-
bined aim to capture the distinctiveness of the first of these fundamental features
of human minds, the unplanned and immediate thoughts, feelings, and actions that
pervade our lives.

1. Features
Features are properties of the objects of perception that make certain possibilities
for action attractive. When something is seen as a feature, it has an imperatival qual-
ity. Features command action, acting as imperatives for agents, in a sense I try to
clarify in this section. In the next section (§2), I discuss the crucial affective element
of this command. Here I aim to do three things. First, I briefly clarify what I mean
by saying that agents perceive stimuli in the ambient environment in an imperatival

4. See Fridland (2015) for a thoughtful discussion of the penetration of spontaneous and automatic features of the mind by reflective states and vice versa.
5. The FTBA components I describe clearly share key elements with so-called System 1 processes. Should my account of spontaneous thought and action turn out, inter alia, to clarify dual-systems theories, so much the better.

way (i.e., perceive stimuli as features). In other words, I characterize the role of fea-
tures, which is to tie perception closely to action (§1.1). Second, I argue that stimuli
in the ambient environment are registered as features in virtue of the agent’s goals
and cares, as well as the context (§1.2). Third, I consider what it is like, in perceptual
experience, to encounter features (§1.3).

1.1  The Role of Features as Action-​Imperatives


When McCarson cuts sign, or Benes looks at his cut finger, or the museumgoer
looks at the big painting, what does each of them see? In one sense, the answer is
simple: one sees twigs; the other, blood; and the third, paint on canvas. But these
agents’ visual experiences have an important significance. They are not significant
in the way that Immanuel Kant’s Critique of Pure Reason is a significant book or in
the way that my birthday is a significant day for me. Rather, the twigs and blood and
paint are significant in the sense that they seem to command a particular action for
these agents in these contexts. Compared with other bits of grass and dirt, where
“nothing’s gone wrong,” bits of disturbance immediately compel McCarson to head
in their direction. Compared with the sight of his other, nonlacerated fingers, the
mere sight of Benes’s cut finger screams “danger!” Compared with other smaller
paintings, the big one seems to push the museumgoer away from it. In this sense,
features are significant because they act like imperatives, in the way that saying
“heads up!” commands a particular behavioral response.
To gain clarity on this idea, one can contrast “spectator-​first” and “action-​first”
theories of perception (Siegel, 2014). According to spectator-​first theories, the fun-
damental role of perception is to give the agent a view of what the world is like.6 These
theories conceive the perceiver as a spectator, primarily and fundamentally observ-
ing the world. The contents of perception, on this kind of view, are descriptive. (By
“content,” I mean what the agent’s mind represents in virtue of being in some token
perceptual state. For spectator theories of perception, the contents of perceptual
experience describe what the world is like.) According to action-​first theories, the
function of perception either includes, or is primarily, to guide action.7 Sometimes
action-​first theories of perception posit particular kinds of mental states, states that
are said to simultaneously describe how the world is and direct action. Various
accounts have been offered, including Andy Clark’s (1997) “action-​oriented” rep-
resentations, Ruth Millikan’s (1995, 2006)  “pushmi-​pullyu” representations, and
Sterelny’s (1995) “coupled” representations. These approaches suggest that percep-
tion is both for describing the world and guiding action. Other approaches sepa-
rate these roles into different functional systems, such as David Milner and Melvyn

6. See Peacocke (1995), Byrne (2009), and Siegel (2010).
7. See Noë (2006), Kelly (2005, 2010), Nanay (2013), and Orlandi (2014).

Goodale’s (1995, 2008) division between the ventral informational stream, which is said to inform descriptive visual experience (i.e., what the world is like) and the
dorsal informational stream, which is for guiding motor action. (See footnote 13 for
brief discussion of this view.)
I have no horse in the race between spectator-​first and action-​first theories, at
least to the extent that these theories debate the fundamental role of perception
(i.e., to describe the world or to guide action). But I do believe that action-​oriented
perception—​or what I call perception with imperatival quality—​exists. There are
two questions that this more minimal claim (that perception can have an imperatival quality) must answer. First, in virtue of what do elements of the ambient
environment become features for agents? That is, what ties perceptual experiences
to specific behavioral responses? Second, what is it like, in perceptual experience,
to encounter features (i.e., to perceive things in an imperatival way)? I take it that
answering these questions adds support to the claim—​supported elsewhere via
arguments from neural architecture (Milner and Goodale, 1995, 2008), phenom-
enology (Kelly, 2005, 2010), and parsimony in philosophical theories of percep-
tion (Nanay, 2013)—​that perception is sometimes imperatival.
Both questions come into focus in work on the nature of inclination by Tamar
Schapiro (2009), from whom I  borrow the term “imperatival perception.” She
writes:

Suppose all the objects in my world have little labels on them telling me
what I  ought to do in response to them. My bed says “to-​be-​slept-​in,”
and my chair says “to-​be-​sat-​upon.” All the radios say “to-​be-​turned-​
on.” . . . [But t]his is different from seeing the object and then seeing an
imperative attached to the object. When I am inclined, my attention is nec-
essarily drawn to certain features of the object that appear to me as practi-
cally salient. If I have an inclination to turn on radios, this must involve
seeing radios as to-​be-​turned-​on in virtue of certain features of radios and
of the action . . . [and such] features [must be] practically salient from the
perspective of one who has this disposition. (2009, 252)

Schapiro suggests that objects in an agent’s ambient environment command action when they are “practically salient.” I unpack this idea in §1.2, focusing on agents’
goals and cares, as well as on the context. Schapiro also suggests that seeing things
in an imperatival way does not necessarily mean literally seeing action-​imperatives,
as if beds and radios had little labels affixed to them indicating “to-​be-​slept in” and
“to-​be-​turned-​on.” Instead, Schapiro suggests that it is in the nature of inclination
to direct attention, such that seeing something in an imperatival way means being
drawn to attend to features of that thing that are relevant to the agent. This is a claim,
as I take it, about the perceptual experience of seeing things in an imperatival way.
I consider this in §1.3.

1.2  Practical Salience


One way to grasp the idea of “practical salience” is to compare ordinary with abnor-
mal cases, in which agents seem to see everything in an imperatival way. This phe-
nomenon is known as “utilization behavior” (UB; also known as bilateral magnetic
apraxia). UB is a neurological disorder that causes people to have trouble resisting
an impulse to utilize objects in their visual field in ways that would be appropriate
in other situations. Upon seeing a bed, for example, a UB patient may have trouble
resisting an impulse to pull the sheets down and get into it, even if the bed isn’t
her own and it’s the middle of the day. Or a UB patient might see a radio and feel
immediately inclined to turn it on. One way to understand UB is that it severs the
connection between perception and what’s relevant to the agent’s current tasks (i.e.,
what is practically salient to her). In other words, too much is significant, and thus
salient, for the UB patient. In the ordinary case, these connections make it so that
only particular features are imperatival for us.
The question, then, is why some stimuli become practically salient to agents while
others don’t. One way to interpret this would be in terms of spatial perspective. In
this sense, only agents standing in a certain physical orientation to the objects in
their perceptual field—​to twigs and radios and paintings—​will see them as imper-
atival. While this may be true—​a second museumgoer already standing far away
from the big painting won’t be compelled to step back—​it clearly won’t work to
explain other cases. Another possibility is that objects become imperatival in virtue
of being connected to the agent’s evaluative perspective. In this sense, only agents
who take turning on radios or stepping back from paintings to be a good thing to
do in particular contexts will see these objects as imperatival. But this doesn’t seem
like an adequate interpretation either. It doesn’t do justice to the perceptual quality
of features, in which the imperative to act seems to precede judgment. It also would
be difficult to understand a case like Benes’s in this way, since he does not think that
his fearful reaction is a good one.
A better option is to say that features are imperatival in virtue of a connection to
an agent’s goals, cares, and context. Imagine that the museumgoer is looking for the
restroom. She has a desire to find it, and thus adopts the goal of finding it. In virtue
of this goal, the restroom sign may become a feature in her perceptual field. The sign
will become salient, and she’ll respond immediately upon seeing it, perhaps by turn-
ing whichever way it points rather than ambling aimlessly. Notice, though, that nei-
ther the goal nor the desire to find the restroom is necessary for the sign to become
a feature for the museumgoer. She might be ambling aimlessly when the sight of
the sign catches her eye, making her realize that she has to go. In this case, it seems
clear that the museumgoer doesn’t have the goal of finding the restroom until she
sees the sign. And it also seems wrong to say that she has occurrent, or even stand-
ing desires, to find the restroom. Rather, she is disposed to encounter the restroom
sign as imperatival (in virtue of having a full bladder). In Chapter 4, I develop an account of cares as dispositions in a related sense. The relevant point here, though,
is that the museumgoer’s noticing the sign arises in virtue of some practical connec-
tion she has to it, through her desires, her goals, or, even in the absence of these, her
dispositional cares. If she is not so connected to the sign in a practical sense, she will
likely not notice it at all or she’ll see it in barer representational terms (i.e., it won’t
be imperatival).8
Context also plays a key role in determining whether and when stimuli become
features for agents. In a chaotic space, the museumgoer may be less likely to notice
the restroom sign. If she is with small children, who frequently need to go, she
might be more likely to notice it. These sorts of contextual effects are pervasive
in the empirical literature. Indirect measures of attitudes, like the IAT, are highly
context-​sensitive (e.g., Wittenbrink et al., 2001; Blair, 2002; see the discussion in
Chapter 7). I return to this example in more depth in §5.9

1.3  Perceptual Content


I’ve defined features as properties of the objects of perception that make certain
possibilities for action attractive. I’ve suggested that the element of features that
makes certain possibilities for action attractive is their imperatival perceptual qual-
ity. And I’ve proposed that perceptual objects come to be imperatival for an agent
in virtue of the agent’s goals, cares, and context. But what is it like to perceive things
in an imperatival way?
Ecological psychologists—​led by J. J. Gibson—​attempt to answer this question
by providing a theory of “affordances.” The affordances of an environment are “what
it offers the animal, what it provides or furnishes, either for good or ill” (Gibson,
1979, 127). By “offers the animal,” Gibson means the possibilities for action that
appear salient to a perceiving creature. For example, terrestrial surfaces, Gibson

8. These observations are consistent with findings that there are tight interactions between the neural systems subserving “endogenous” attention—attention activated in a top-down manner by internal states like goals and desires—and “exogenous” attention—attention activated in a bottom-up manner by environmental stimuli. See Corbetta and Shulman (2002) and Wu (2014b). Note also that Erik Rietveld and Fabio Paglieri (2012) present evidence suggesting that UB is the result of damage to the neural regions subserving endogenously driven attention. This, too, suggests that in the ordinary case features become imperatival for agents in virtue of connections between stimuli and agents’ goals and cares (due to interaction between the neural systems subserving endogenous and exogenous attention).
9. Jennifer Eberhardt and colleagues make a similar point in discussing the ways in which implicit bias affects perception, writing that “bidirectional associations operate as visual tuning devices by determining the perceptual relevance of stimuli in the physical environment” (2004, 877). See §5 for more on FTBA states and implicit bias. By “bidirectional,” Eberhardt means that social groups and concepts prime each other. For example, exposure to black faces increases the perceptual salience of crime-related objects, and exposure to images of crime-related objects also increases the perceptual salience of black faces (Eberhardt et al., 2004). For a fascinating discussion of the neural implementation of the way in which task-relevant intention affects the perception of features, see Wu (2015).

wrote, are seen as “get-underneath-able” or “bump-into-able” relative to particular animals (1979, 128). What-affords-what depends on the creature perceiving the
affordance. A rock that is get-​underneath-​able to a mouse is not get-​underneath-​
able to a person, although it may be step-​over-​able. In my earlier examples, the
blood on Benes’s finger might be seen as to-​be-​fled-​from, while the big painting
might be seen as to-​be-​stepped-​back-​from for the museumgoer. This generates a
proposal: features are perceived as affordances. What it is like to perceive things in
an imperatival way is what it is like to perceive things as to-​be-​φed.
Features are akin to affordances in two senses. First, theories of affordances
are clearly focused on the action-​guiding role of perception. Ecological psycholo-
gists argue that we don’t represent couches and balls and holes in the ground as
“just there” as if we were spectators viewing the world from a detached perspec-
tive. Rather, we see couches as affording sitting, balls as affording throwing, and,
for animals of a certain size and flexibility, holes in the ground as affording hiding.
Moreover, theories of affordances usually stress the spontaneity of actional percep-
tion. Environments are sufficiently rich to directly provoke behavior, without the
mediating work of inferences, beliefs, computations, or other sorts of “mental gym-
nastics,” Anthony Chemero (2009) argues.
However, I refrain from arguing that features are perceived as affordances, for
two reasons. First, I do not accept many of the claims theorists make about affor-
dances, such as their perceptibility without mental computation. Second, affor-
dances are usually characterized as properties, not of the objects of perception, but
rather of various relations between agents and those objects.10 Affordances are often
described as action-​possibilities, such that seeing the big painting means seeing
something like “back-​up-​ability.” In contrast, on my view, features are properties
of the objects of perception, not properties of the actions made salient by those
objects. My claim is about the perception of the big painting as to-​be-​backed-​away-​
from, not about a perception of the back-​up-​ability of big paintings.
With these clarifications in hand, what is it like to perceive things as to-​be-​φed?
A central feature of this kind of perceptual experience, on my view, is that it involves
“rich” properties. Rich properties of perceptual experience are any properties other
than color, texture, spatial relation, shape, luminance, and motion—​the so-​called
thin properties of perceptual experience (Siegel and Byrne, 2016). (I restrict my
discussion here to visual perceptual experience.) Examples of rich properties that
may be part of visual experience are age (i.e., being old or young), emotions (e.g.,

10. Thomas Stoffregen (2003), for example, argues that affordances are not just embedded “in” environments; they are also properties of “animal-environment systems.” Chemero (2009) claims that affordances are relations between aspects of agents, which he calls “abilities,” and features of “whole” situations (which include agents). Michael Turvey and colleagues (1981) suggest that affordances are dispositional properties of objects complemented by dispositional properties of agents, which they call “effectivities,” which are in turn explained by niche-specific ecological laws.

looking angry), and kinds (e.g., being a person or a bird). Rich property theorists
hold that we perceive these properties themselves, that is, not just by way of perceiv-
ing thin properties. For example, we may see a face as angry, rather than judging or
inferring anger from the shape, motion, and so on of the face. There are several rea-
sons to think that rich properties are part of perceptual experience.11 These include
the fact that the rich properties view is consistent with ordinary perceptual phenom-
enology, whereas the thin properties view must explain how agents get from being
presented solely with properties like shape, motion, and the like to seeming to see
anger in a face. The rich property view also appears to be supported by research in
developmental psychology suggesting that infants perceive causality (Carey, 2009).
Finally, the rich property view appears to explain perceptual learning, the effect
of which is that things look different to you (in vision) if you’ve learned about
them. It is plausible to think that a curveball, for example, looks one way to a Major
League Baseball batter and another way to a novice at the batting cage. And, further,
it seems plausible that this difference is due to the way in which the pitch looks hit-
table (or not), not due to changes in thin properties alone. Perceptual learning is
apropos in vice cases of spontaneity too, where, for example, the color of a person’s
skin comes to be automatically associated with certain traits. These associations are
culturally learned, of course, and the explanatory burden would seem to be on the
thin property view to explain how, when blackness, for example, becomes salient, it
does so only through certain thin properties coming to be salient.
The rich property of features is the way in which they are perceived as to-​be-​φed.
Some theorists suggest that perceiving things in roughly this way—​as imperatival—​
precludes the possibility that we ever see things descriptively. Imperatives exhaust
perceptual experience, on this view. According to what I’ll call “Exhaust,” when you
see the restroom sign, you don’t see any thin properties like rectangular or white and
red. Instead, all that is presented in your perceptual experience is the action-​direct-
ing content of the sign. Your experience is of something like a force compelling you
to head in the direction of the bathroom. Dreyfus’s (2002a,b, 2005, 2007a,b) influ-
ential interpretation of “skilled coping” seems to subscribe to Exhaust. On Dreyfus’s
view, which he in turn derives from Maurice Merleau-​Ponty (1962/​2012), at least
some kinds of action entail perceiving nothing but action-​directing attractive and
repellent forces. For example, Dreyfus writes,

consider a tennis swing. If one is a beginner or is off one’s form one might
find oneself making an effort to keep one’s eye on the ball, keep the racket
perpendicular to the court, hit the ball squarely, etc. But if one is expert
at the game, things are going well, and one is absorbed in the game, what

11. I present the following considerations—which are drawn largely from Siegel and Byrne (2016)—as prima facie reasons to support the rich properties view. For developed arguments in favor of this view, see Siegel (2010) and Nanay (2011).

one experiences is more like one’s arm going up and its being drawn to
the appropriate position, the racket forming the optimal angle with the
court—​an angle one need not even be aware of—​all this so as to complete
the gestalt made up of the court, one’s running opening, and the oncoming
ball. One feels that one’s comportment was caused by the perceived condi-
tions in such a way as to reduce a sense of deviation from some satisfactory
gestalt. But that final gestalt need not be represented in one’s mind. Indeed,
it is not something one could represent. One only senses when one is get-
ting closer or further away from the optimum. (2002b, 378–​379)

As I interpret Dreyfus, his claim is that when one is coping intelligently with the fea-
tures of the environment that are relevant to one’s tasks—​following signs, opening
doors, navigating around objects—​one’s attention is absorbed by an action-​guiding
feeling indicating whether things are going well or are not going well. This feeling
exhausts perceptual experience and keeps one on the rails of skilled action, so to
speak. What Dreyfus crucially denies, as I understand him, is any role to represen-
tational content, at least in skillful action-​oriented perception, that describes the
world as being a certain way. Dreyfus claims that one only senses one’s distance from
the “optimum gestalt.”12
Exhaust is not entailed by my description of features. To perceive something as
to-​be-​φed does not preclude seeing that thing as a certain way. To see the painting
as to-​be-​backed-​away-​from does not preclude seeing it as rectangular, for example.
Moreover, while I do not want to deny that Exhaust describes a possible experience,
in which one is so absorbed in an activity that all one experiences are deviations and
corrections from a gestalt, I do not think Exhaust offers an informative description
of the perceptual content of paradigmatic cases of spontaneous activity. The basic

12
  Of course, Dreyfus doesn’t think that the expert tennis player could play blindfolded. My inter-
pretation of his view is that ordinary thin perceptual properties are registered in some way by the agent,
but not represented in perceptual experience. I take Dreyfus’s view to be consistent with the “Zombie
Hypothesis,” according to which perception-​for-​action and perception-​for-​experience are distinct (see
footnote 13). But this is my gloss on Dreyfus and not his own endorsement. Dreyfus is, however, quite
clear about eschewing all talk of representations in explaining skilled action (which may leave his view
at odds with some formulations of the Zombie Hypothesis). His claim is not just that agents don’t
represent the goal of their action consciously, but that the concept of a representation as it is typi-
cally understood in cognitive science is explanatorily inert. For example, the first sentence of Dreyfus
(2002a) reads, “In this paper I want to show that two important components of intelligent behavior,
learning, and skillful action, can be described and explained without recourse to mind or brain repre-
sentations” (367). What I take Dreyfus to mean, in this and other work (though I note that he is not
always consistent), is that in skilled action, perception is saturated by action-​imperatives, and action-​
imperatives are conceptually distinct from mental or neural representational states (explicit or tacit)
that could be understood as indicating that things are thus-​and-​so. In this sense, Dreyfus would use the
term “content” differently from me, given that I defined perceptual content earlier in terms of mental
representations.

properties of disturbance presumably look like something to McCarson. For example, a twig will look brown rather than purple. This something, though perhaps not
consciously accessible to McCarson, has accuracy conditions, such that McCarson’s
perceptual experience is accurate if the twig really is brown and not purple. Others
have elaborated this criticism against Exhaust (e.g., Pautz, 2010; Siegel, 2010,
2014). I also think that Exhaust faces the difficult challenge of explaining what it
is like, in perceptually rich phenomenological terms, to experience deviations and
corrections from a gestalt.13 Minimally, my account of features is not committed to
anything like Exhaust.
A second crucial issue is whether positing rich contents in perceptual experi-
ence entails the penetration of perception by cognitive states (beliefs, intentions,
etc.). It doesn’t. Features are perceived as imperatival in perceptual experience. But
this could have a variety of causal origins, some of which would count as instances
of cognitive penetration and some of which wouldn’t. Consider the view that our
goals and intentions shift even the thin properties of visual experience. According
to “Shift,” a hill that we plan to climb will look steeper than a hill that we intend
to paint a picture of, for example. Shift is an anti-​modular theory of perception.
Perceptual properties such as distance and height are said to change given the
agent’s cognitive and emotional states. Shift is championed by, among others,
Dennis Proffitt and colleagues, who claim to have shown that hills look steeper to
people wearing heavy backpacks (Bhalla and Proffitt, 1999); distances look farther
to people who are asked to throw a heavy ball, compared with a light one (Witt et al.,
2004); balconies look higher to acrophobes than to non-​acrophobes (Stefanucci
and Proffitt, 2009); and so on. Proffitt and colleagues take these results to vindicate
Gibsonian ecological psychology. “We have expanded on Gibson’s work to show
that perceiving one affordance as opposed to another changes the perceived met-
ric properties of the environment,” Jessica Witt and Proffitt (2008, 1490) write.
According to Shift, actions and plans for action are cognitive penetrators, even of

13
  Note what might be considered a spin-​off of Exhaust, that whatever content there is in imperatival
perception plays no role in guiding action. Consider briefly Milner and Goodale’s (1995, 2008) influ-
ential dual-​streams account of vision, according to which visual processing is divided into two kinds,
the ventral stream that informs conscious visual experience and the dorsal stream that directs motor
action. Originally, Milner and Goodale supported a strong dissociation of the two streams. This entails
what is known as the Zombie Hypothesis (Koch and Crick, 2001; Wu, 2014a): the dorsal stream pro-
vides no information contributing to conscious visual experience. Persuasive arguments have been
given against the Zombie Hypothesis, however, and Milner and Goodale themselves have backed off
from it, recognizing that the ventral and dorsal streams are, to some extent, integrated (Wu, 2014a).
What this suggests is that at the level of physical implementation of visual processing, directive infor-
mation for action in the dorsal stream influences what the objects of visual perception appear to be like,
that is, the way vision describes them as being. Assuming the neural locations of the ventral and dor-
sal streams are correct, then the physical location of integration of information from the two streams
might help settle the question of what gets represented in action-​oriented perception.

thin—and so presumably rich too—properties of perception. The museumgoer’s
restroom sign will look bigger or smaller depending on what the agent is doing or
planning to do.
But as Chaz Firestone and Brian Scholl (Firestone, 2013; Firestone and Scholl,
2014, 2015) have shown, many of the experimental findings cited by defenders of
Shift are consistent with rival interpretations that do not involve cognitive penetra-
tion. The core ambiguity in Shift is in distinguishing cases of change within per-
ceptual processes from cases of change due to changing inputs into perception. For
example, as Firestone and Scholl (2015) argue, shifts in attention can account for
many examples of apparent shifts in perceptual experience. Consider viewing a
Necker cube, the appearance of which can change depending on one’s intentions
(i.e., one can choose to see which face of the cube appears in the foreground of the image).
Some have taken this to be a clear instance of the penetration of low-​level visual
properties by a higher-​order cognitive state (namely, one’s intentions). But as
Firestone and Scholl point out, citing Mary Peterson and Bradley Gibson (1991)
and Thomas Toppino (2003), changes in the Necker cube’s appearance are actually
driven by subtle changes in attention, in particular by which corner of the cube one
attends to. These shifts in attention provide different input into perceptual process-
ing. This discovery preserves the boundary between perception and cognition and
undermines Shift. Firestone and Scholl convincingly reinterpret many of the find-
ings taken to support Shift in this way.14
The upshot is that positing rich contents in perception—​like the perception of
features—​is orthogonal to the question of whether perception is penetrated by cog-
nition.15 What I accept from Shift is the broad idea that actions and action plans
change agents’ perceptual experiences. Moreover, and more specific to my thesis,
actions and action plans shape the imperatives in perceptual experience (i.e., the φ
in things being seen as to-​be-​φed), for example, as the museumgoer looks for the
restroom. But I take it that in some cases seeing things as imperatival might be due
to changes in attention, in other cases to perceptual judgment, and perhaps in other
cases to genuine cognitive penetration. But nothing depends, for me, on this last
possibility being true.
I have suggested that perceiving features involves the perceptual experience of
things as to-​be-​φed. This is a rich property of perceptual experience. I have argued

14
  Firestone and Scholl show that some of the experiments taken to support Shift have thus far
failed to replicate. But Firestone and Scholl (2015) also report many successful replications, which
they then reinterpret in terms that do not support Shift.
15
  To establish an instance of cognitive penetration, one must hold constant the object being per-
ceived, the perceiving conditions, the state of the agent, and the agent’s attentional focus (Macpherson,
2012). For an elaboration of the claim that one can accept a rich theory of perceptual content without
thereby being committed to cognitive penetration or anti-​modularism, see Siegel and Byrne (2016),
Toribio (2015), and Newen (2016).

that this claim does not deny a role for descriptive content in perceptual experience,
pace Exhaust. And I have also argued that this claim does not entail a claim about
the cognitive penetration of perception. What remains to be seen, though, is how
perceiving features is tied to behavior. For it is one thing to see the big painting
as commanding a particular response and another to actually feel commanded to
respond in that way. Why, for example, does seeing blood as to-​be-​fled-​from make
Benes flee? I now argue that, in paradigmatic cases of spontaneous behavior, affect
plays a crucial role.

2. Tension
Tension refers to the affective component of states with FTBA components, but it
is not a blanket term for any emotion. Tension has two key characteristics: (1) it is
“low-​level”; and (2) it is “geared” toward particular behavioral responses.
The term “perceptual unquiet”—​used by Tietz to describe the affective qual-
ity of McCarson’s perceptual experience—​captures the low-​level affective qual-
ity of tension. The museumgoer sees things in a way that is not quite right, or as
McCarson says, “There’s somethin’ gone wrong there.” This is a kind of “unquiet” in
the sense that things seem out of equilibrium.16 In the context of discussing intui-
tive decision-​making about how much money to give to charity, Railton describes
a similar phenomenon:

Such spontaneous affective reactions might be no more articulate than a
stronger positive feeling toward giving aid in one kind of case as opposed
to another, or a stronger negative feeling toward failing to give. Such a sum-
mative positive or negative “intuitive sense,” of greater or lesser strength,
perhaps colored by one or another emotion, is typical of the way implicit
simulation and evaluation manifest themselves in consciousness. Over
the course of experience, we acquire a “feel” for the things we know, and
an increasingly sensitive and effective “feel” for that which we know well.
(2015, 36)

Note that Railton describes these spontaneous affective reactions as comparatively
more strongly positive or more strongly negative. These two vectors—​strength
and valence—​are crucial for determining the action-​guiding “meaning” of a per-
ceived feature. First, consider the valence of a felt tension. While the person who
feels an immediate inclination to give to charity after seeing images of starving chil-
dren in impoverished countries experiences a positive imperative—​“Give!”—​the
museumgoer, by contrast, experiences a negative imperative—​“Back up!” In this

16
  See Kelly (2005) for discussion of motor action and perception being drawn toward equilibrium.

context, positive and negative are not moral terms, but rather are more closely akin
to approach and avoid tendencies.
Second, consider the strength of a felt tension. The museumgoer might feel only
the slightest unease, perhaps in the form of a barely noticed gut feeling.17 This unease
might not enter into her focal awareness, but she feels it nevertheless, in just the
way that one can be in a mood—​feeling grumpy or excitable or whatever—​without
noticing it.18 Tension in this sense is marked by an array of bodily changes in an
agent’s autonomic nervous system, including changes in cardiopulmonary param-
eters, skin conductance, muscle tone, and endocrine and immune system activities
(Klaassen et al., 2010, 65; Barrett et al., 2007). These changes in the agent’s body
may be precipitating events of full-​fledged emotions, but they need not. They are
most likely to become so in jarring or dangerous situations, such as standing on
glass-​bottomed platforms and facing diseased blood. It is an open empirical ques-
tion when, or under what conditions, felt tensions emerge into focal awareness.
One possibility is that this emergence is tied to a degree of deviation from ordinary
experience. When the features one encounters are ordinary, one may be less likely
to notice them explicitly. Perhaps the degree to which one notices them is propor-
tional to the degree to which they are unexpected.
It is also important to note that affective and action-​directive felt tensions need
not be the product of an evaluative judgment or be consistent with an agent’s evalu-
ative judgments. The museumgoer likely has no commitment to the claim that stepping back from the big painting is the correct thing to do.19 A candy
bar in the checkout lane of the grocery store might call to you, so to speak, pulling
you to grab it, despite your judging that you don’t want to buy it. The dissociation
of felt tension and evaluative judgment can go in the other direction too. You might
think that spoiling yourself a little with a spontaneous impulse purchase is a good
idea, but feel no inclination to buy the candy bar.20
Research by Lisa Feldman Barrett can be understood as exploring the processes
involved in the unfolding of felt tension. She argues that all visually perceived

17
  James Woodward and John Allman (2007) propose that von Economo neurons (VENs), which
are found in humans and great apes but not in other primates, may be integral in the processing under-
lying these experiences. They write, “The activation of the serotonin 2b receptor on VENs might be
related to the capacity of the activity in the stomach and intestines to signal impending danger or pun-
ishment (literally ‘gut feelings’) and thus might be an opponent to the dopamine D3 signal of reward
expectation. The outcome of these opponent processes could be an evaluation by the VEN of the rela-
tive likelihood of punishment versus reward and could contribute to ‘gut feelings’ or intuitive decision-​
making in a given behavioral context” (2007, 188).
18
  On unconscious affect, see Berridge and Winkielman (2003) and Winkielman et al. (2005).
19
  These affective states are also typically nonpropositional, and so differ from the concept-​laden
emotional evaluations that Richard  Lazarus (1991) calls “appraisals.” See Colombetti (2007) and
Prinz (2004); see also Chapter 4.
20
  I take this example from Susanna Siegel (2014).

objects have “affective value” and that learning to see means “experiencing visual
sensations as value added” (Barrett and Bar, 2009, 1327). Value is defined in terms
of an object’s ability to influence a person’s body, such as her breathing and heart
rate (Barrett and Bar, 2009, 1328). Objects can have positive or negative value,
indicating an approach or avoid response. In a novel way, Barrett’s research extends
models of associative reward-​learning—​in which the brain assigns values to actions
based on previous outcomes—​to the role of affect in visual perception. In short,
when features of the environment become salient, the brain automatically makes
a prediction about their value for us by simulating our previous experience with
similar features (see §4). Barrett calls these predictions “affective representations”
that shape our visual experiences as well as guide immediate action.21 In her words:

When the brain detects visual sensations from the eye in the present
moment, and tries to interpret them by generating a prediction about
what those visual sensations refer to or stand for in the world, it uses not
only previously encountered patterns of sound, touch, smell and tastes, as
well as semantic knowledge. It also uses affective representations—​prior
experiences of how those external sensations have influenced internal sen-
sations from the body. Often these representations reach subjective aware-
ness and are experienced as affective feelings, but they need not. (Barrett
and Bar, 2009, 1325)

The two roles of affective representation—​enabling object recognition and stimu-
lating immediate action—​occur in virtue of integration of sensory information in
the orbitofrontal cortex (OFC). The two functional circuits of the OFC are tied to
the dorsal and ventral streams for information processing.22 While the medial OFC,
which is tied to the dorsal stream, “crafts an initial affective response” to a perceived
feature, the lateral OFC integrates information from the other senses in order to cre-
ate a unified experience of an object in context. Or as Barrett and Moshe Bar (2009,
1330) describe these two roles:

The medial OFC estimates the value of gist-​level representations. A small,
round object might be an apple if it is resting in a bowl on a kitchen
counter, associated with an unpleasant affective response if the perceiver
does not enjoy the taste of apples, a pleasant affective response if he or
she is an apple lover and is hungry, and even no real affective change in
an apple lover who is already satiated. The medial OFC not only realizes

21
  Note that this picture of affective representation does not necessarily entail cognitive penetra-
tion of perception. For discussion of the relationship between predictive coding, reward learning, and
cognitive penetration, see Macpherson (2016).
22
  See footnote 13.

the affective significance of the apple but also prepares the perceiver to
act—​to turn away from the apple, to pick it up and bite, or to ignore it,
respectively.23

Barrett’s claim that all visually perceived objects have affective value is consistent
with work on “micro-​valences,” or the low-​level affective significance with which
perceived entities can be tagged. Sophie Lebrecht and colleagues (2012) also pro-
pose that valence is an intrinsic component of all object perception. On their view,
perceptions of “everyday objects such as chairs and clocks possess a micro-​valence
and so are either slightly preferred or anti-​preferred” (2012, 2). Striking demon-
strations of micro-​valences are found in social cognition, for example, in a dem-
onstration of the evaluative significance of stereotypes in Natasha Flannigan et al.
(2013). They find that men and women who are perceived in counterstereotypical
roles (e.g., men nurses, women pilots) are “implicitly bad”; that is, the sheer fact that
these stimuli are counterstereotypical leads individuals to implicitly dislike them.24
My claim, however, is not that all perception is affect-​laden. An ultimate theory
of perception should clarify what role affect does and does not play in object rec-
ognition and action initiation. I cannot provide such a theory. It is clear that ordi-
nary cases of explicit dissociation between affect and perception, such as seeing the
candy bar but having no desire for it, don’t cut strongly one way or the other on the
question of the role of affect in perception. For the view of Barrett and colleagues
is that “the affect-​cognition divide is grounded in phenomenology” (Duncan and
Barrett, 2007); thus even apparently affectless perception may be affectively laden
after all. Other cases of dissociation may be harder to explain, though. Patients with
severe memory deficits due to Korsakoff ’s syndrome, for example, appear to retain
affective reactions to melodies that they have previously heard, even if they can’t
recognize those melodies; similarly, while they cannot recall the biographical infor-
mation about “good” and “bad” people described in vignettes, they prefer the good
people to the bad people when asked twenty days later (Johnson et al., 1985).25
In this critical vein, if there is no such thing as affectless perception, it is unclear how an agent who has “no real affective response” to an apple because she is satiated (as Barrett and Bar describe in the earlier quote) recognizes the object as an apple.
For my purposes, what I take Barrett and colleagues’ research to offer is a plau-
sible account of how specific affective values can combine with specific features

23
  For more on “affective force,” see Varela and Depraz (2005, 65).
24
  This and other findings regarding evaluative stereotypes are inconsistent with the view that
“implicit stereotypes” and “implicit prejudices” represent separate constructs, reflecting separate men-
tal processes and neural systems (Amodio and Devine, 2006, 2009; Amodio and Ratner, 2011). See
Madva and Brownstein (2016) for critique of this view. See also §5.
25
  Thanks to an anonymous reviewer for suggesting this example.

in the environment to instigate specific behavioral responses. What a percep-
tual sensation means to the agent—​what sort of action guidance or imperative
it might generate—​may be created, as Barrett and Bar say, using feedback from
previously encountered patterns of sound, touch, smell, taste, semantic knowl-
edge, and previous affective representations. The point is not that perception
must work this way. The point is that this offers a way to explain the imperati-
val quality of perceived features. More specifically, it helps to show how features
do not generate merely generic feelings—​mere liking or disliking—​but, rather,
particular feelings of tension, which are geared toward particular actions. Benes’s
blood, for example, elicits an embodied reaction of fear that repels him from the
scene, much as the skywalker’s fear repels her from the walkway. The same is true
of the museumgoer, whose feelings of tension compel her to back up, not just to
move around in any direction. This “directedness” of felt tensions is apparent in
research on implicit bias too. For example, Brandon Stewart and Payne (2008)
found that racial weapon bias could be reduced when people rehearsed the plan to
think the word “safe” upon seeing a black face. This is, on its face, both a semanti-
cally relevant and a highly affect-​laden word to think in this context (in contrast
to the more cognitive terms “quick” and “accurate,” which failed to reduce weapon
bias). This suggests that the potential for affect to influence weapon bias is not
via a generic dislike, as if people would be more likely to “shoot” anything they
disliked. A more relevant emotion is clearly fear. Thinking the word “safe” likely
activates both thoughts and feelings that interfere with the association of black
men with weapons.26
I will have more to say about theories of emotion later, in particular in Chapter 4.
Here I  mean only to characterize the kind of low-​level affective force that paradig-
matically (on my view) accompanies the perception of salient features in the ambient
environment. These relata—​perceived Features and felt Tension—​are closely tied to
particular Behavioral responses.

3. Behavior
Perceived features and felt tensions set a range of anatomical and bodily reactions
in motion, including limb movements, changes in posture, and vocalizations. In this
context, behavior refers to motor activity, rather than mental action. The anatomical
and bodily reactions to features and tension I describe feed into not only the explicit
behavior of people like Benes, museumgoers, skywalkers, and the like, but also into
marginal actions, like nonverbal communicative acts. I consider whether the motor

26
  I take this point about weapons bias from Madva and Brownstein (2016).

routines set in motion by perceived features and felt tension rise to the level of full-​
fledged action in Chapters 4 and 5.27
It is clear, though, that the ordinary bodily changes and movements associated
with the action-​guiding states of agents in these cases are, at least minimally, integral
parts of coordinated response patterns.28 The perceptual and affective components
of agents’ automatic states in these cases are oriented toward action that will reduce
their subtle feelings of discomfort. Upon seeing the big painting, the museumgoer
feels that something is not quite right and moves in such a way as to retrieve equilib-
rium between herself and her environment.29 Agents’ spontaneous behavioral reac-
tions, in other words, are typically oriented toward the reduction of felt tension. I call
this state of reduced tension “alleviation” (§4). In a crude functional sense, allevia-
tion is akin to the process whereby a thermostat kicks the furnace off. The tempera-
ture that has been set on the thermostat is the optimum point for the system, and
deviations from this point trigger the system to pull the temperature back toward
the optimum, at which point the system rests. Alleviation is the “resting point” with
respect to spontaneous action. The particular bodily changes and motor routines
that unfold in a moment of spontaneity are coordinated to reach this point, which
is signaled by the reduction of felt tension (i.e., a returning to perceptual quiet, so
to speak). When this point is reached, the agent is freed to engage in other activities
and motor routines.
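
To make the crude functional analogy concrete, here is a minimal Python sketch of alleviation as a thermostat-like feedback loop. It is illustrative only: the names (alleviation_loop, optimum, rest_threshold) are hypothetical rather than the author's formalism, and, like the analogy itself, it omits the reciprocal dynamics discussed next.

```python
# A minimal sketch of alleviation on the thermostat analogy (illustrative only).
# Felt tension is modeled as deviation from an optimum; behavior runs until the
# deviation, and with it the tension, falls below a resting threshold.

def alleviation_loop(position: float, optimum: float, step: float = 0.1,
                     rest_threshold: float = 0.05) -> float:
    """Adjust position until felt tension drops below the resting point."""
    tension = abs(optimum - position)      # deviation from equilibrium
    while tension > rest_threshold:        # the system is "unquiet"
        # corrective movement, e.g., the museumgoer stepping back:
        position += step if position < optimum else -step
        tension = abs(optimum - position)  # behavior feeds back into tension
    return position                        # alleviation: the system rests

# e.g., alleviation_loop(position=0.0, optimum=1.0) settles near 1.0
```
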
The thermostat analogy is inapt, however, because the interactions of FTB relata
are temporally extended and influence each other reciprocally. The salience of a per-
ceived feature of the environment influences the duration and vivacity of tension,

27
  As I do not distinguish in this chapter between behavior and action, I also do not distinguish
between mental activity and mental action. In Chapters 4 and 5, I address the conditions under which
behavior reflects upon the character of an agent. This is distinct from—​albeit related to—​the basic
conditions of agency. To foreshadow: I intend my account of actions that reflect on the character of an
agent to cover both motor and mental action. But here, in §3, I restrict my discussion to motor reac-
tions to perceived features and felt tension.
28
  Woodward and Allman (2007) argue that insular cortex is the prime input into this process. They
write, “Just as the insula has the capacity to integrate a large array of complex gustatory experience into
visceral feelings leading to a decision to consume or regurgitate, so fronto-​insular cortex integrates a
vast array of implicit social experiences into social intuitions leading to the enhancement or withdrawal
from social contact” (2007, 187).
29
  Rietveld (2008) captures the directedness of action-​guiding tension by distinguishing what he
calls “directed discontent” from “directed discomfort.” Directed discontent—​which Rietveld derives
from an analysis of Ludwig Wittgenstein (1966)—​is characterized by affective experiences of attrac-
tion or repulsion that unfold over time and are typically not reportable in propositional form (Klaassen
et al., 2010, 64). States of directed discontent are typically accompanied immediately by perceptions of
opportunities for action (or affordances). Directed discomfort, by contrast, characterizes a “raw, undif-
ferentiated rejection” of one’s situation, which does not give the agent an immediate sense of adequate
alternatives (Rietveld, 2008, 980). The kind of affect I am describing is akin to directed discontent, not
directed discomfort.

which in turn influences the strength of the impulse to act. The salience of features
and vivacity of tension is itself influenced by how the agent does act. For example,
the degree of salience of Benes’s blood would be influenced by the state of his goals
and cares; if he had just finished watching a fear-​inducing news report about HIV,
the sight of his own blood might dominate his attention, setting in motion a strongly
aversive feeling. This pronounced sense of tension would cause him to feel a rela-
tively stronger impulse to act. How he acts—​whether he grabs a nearby towel to
stanch the bleeding from his cut or whether he screams and rushes to the sink—​will
affect the duration of his tension, and in turn will affect the degree to which the
blood continues to act as an action-​imperative to him. Similarly, the museumgoer
who is distracted, perhaps by overhearing a nearby conversation, may be slower to
step back from the large painting, in which case the impulse to step back may per-
sist. Or the museumgoer who encounters the large painting in a narrow hallway
may be unable to step far enough back, thus needing to find other ways to reduce
her felt tension, perhaps by turning around to ignore the painting or by moving on
to the next one. Conflicts in behavioral reactions to felt tensions are possible too.
The frightened moviegoer who simultaneously covers her eyes with her hands but
also peeks through spread fingers to see what’s happening may be reacting to two
opposed felt tensions. How these conflicting tensions are (or are not) resolved will
depend on the success of the agent’s behavioral reaction, as well as the changing
stimuli in the theater.30
Thus while the bodily movements and readjustments involved in spon-
taneity may be relatively isolated from the agent’s reflective beliefs and judg-
ments, they constantly recalibrate as FTB relata dynamically shift in response
to one another over tiny increments of time. This is the process I describe as
Alleviation.

4. Alleviation
States with FTBA components modify themselves by eliminating themselves. In
response to perceived features and felt tensions, agents act to alleviate that ten-
sion by changing their bodily orientation to the world. Gordon Moskowitz and

30
  I thank an anonymous reviewer for this last example. The same reviewer asks about the case of
people who are drawn to the feeling of tension. Horror movie aficionados, for example, seem to enjoy a
kind of embodied tension, as perhaps do extreme sports junkies and daredevils. While the psychology
of agents like these is fascinating, I take them to be outliers. Their inclination toward felt tension illumi-
nates the paradigmatic case in which agents’ spontaneous behavioral reactions are aimed at reducing
felt tension. I also suspect that, in some cases at least, agents like these make explicit, unspontaneous
decisions to put themselves in a position in which they will feel extreme tension (e.g., making a delib-
erative decision to skydive). They might then represent an interesting case of discordance between
spontaneous and deliberative inclinations. See Chapter 6 for discussion of such cases.

Emily Balcetis (2014) express a similar notion when talking about goals more
broadly (and offering a critique of Huang and Bargh’s [2014] theory of “selfish
goals”):

[G]‌oals are not selfish but instead suicidal. Rather than attempting to pre-
serve themselves, goals seek to end themselves. To be sure, goals are strong;
they remain accessible as time passes (Bargh et al. 2001) and do not dissi-
pate if disrupted (e.g., Zeigarnik 1927/​1934). However, goal striving decays
quickly after individuals attain their goals (e.g., Cesario et al. 2006; Förster
et al. 2007; Martin & Tesser 2009; Moskowitz 2002; Moskowitz et al. 2011;
Wicklund & Gollwitzer 1982). That is, goals die once completed. They do not
seek to selfishly propagate their own existence, but instead seem compelled to
work toward self-​termination and, in so doing, deliver well-​being and need-​
fulfillment. (2014, 151)

In this sense, states with FTBA components are suicidal. How does this work?
As behavior unfolds, one’s felt sense of rightness or wrongness will change in turn,
depending on whether one’s initial response did or did not alleviate the agent’s sense
of tension. These feelings suggest an improvement in one’s relation to a perceived envi-
ronmental feature or a relative failure to improve. The temporal unfolding and interplay
between behavior and tension can be represented as a dynamic feedback system. It is a
dynamic system in the sense, as I have said, that every component of the system affects,
and is affected by, every other component. It is a feedback system in the sense that,
ceteris paribus, the agent’s spontaneous inclinations change and improve over time
through a process of learning from rewards—​reduced tension, enabling the agent to
notice new features and initiate new behavior—​and punishments—​persisting tension,
inhibiting the agent’s ability to move on to the next task. Feelings of (un)alleviated ten-
sion feed back into further behavior, as one continues to, say, crane one’s neck or shift
one’s posture in order to get the best view of the painting. This concept of dynamic
feedback is crucial for two reasons: (1) it shows how states with FTBA components
improve over time; and (2) as a result of (1), it shows how these states guide value-​
driven action spontaneously, without requiring the agent to deliberate about what to
do. These states are self-​alleviating.
Research in computational neuroscience illuminates how spontaneous incli-
nations like these may change and improve over time, both computationally and
neurologically. I  refer to “prediction-​error” models of learning (Crockett, 2013;
Cushman, 2013; Huebner, 2015; Seligman et  al., 2013; Clark, 2015). Later I’ll
describe two distinct mechanisms for evaluative decision-​making: “model-​based”
and “model-​free” systems (§4.1). My aim is not to provide anything like a com-
prehensive account of these systems, but rather to suggest that model-​free systems
in particular provide a compelling framework for understanding how spontaneous
inclinations self-​alleviate (i.e., change and improve over time without guidance from

an agent’s explicit goals or beliefs) (§4.2).31 This is not to say, however, that model-​
free learning is invulnerable to error, shortsightedness, or bias (§4.3). Indeed, the
abilities and vulnerabilities of model-​free learning make it a compelling candidate
for the learning mechanisms underlying both the virtues and vices of our spontane-
ous inclinations.

4.1  Model-​Based and Model-​Free Evaluative Learning


Model-​based evaluative learning systems produce maplike representations of the
world.32 They represent the actions that are available to an agent, along with the
potential outcomes of those actions and the values of those outcomes. This con-
stitutes what is often called a “causal model” of the agent’s world. In evaluating an
action, a model-​based system runs through this map, calculating and comparing the
values of the outcomes of different choices, based on the agent’s past experiences,
as well as the agent’s abstract knowledge. A simple example imagines a person navi-
gating through a city, computing and comparing the outcomes of taking one route
versus another and then choosing the fastest route to her destination. The agent’s
internal map of the city comprises a causal model of the agent’s action-​relevant
“world.” This model can also be thought of as a decision tree. Running through
the “branches” of a decision tree is what many commonly refer to as “reasoning”
(Cushman, 2013).33 That is, model-​based systems are thought to subserve the pro-
cess in which agents consider the outcomes of various possible actions, and com-
pare those outcomes, in light of what the agent cares about or desires. This sort of
reasoning is inherently prospective, since it requires projecting into the likely out-
comes of hypothetical actions. For this reason, model-​based systems are sometimes
referred to as “forward models.”
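
For illustration, here is a minimal Python sketch of model-based evaluation on a toy version of the city-navigation example. The causal model, the costs, and the names (CITY_MAP, best_route) are all hypothetical; the point is only that the system prospectively runs through the branches of a decision tree and compares outcomes before choosing.

```python
# A toy causal model of the agent's action-relevant "world": each state maps
# available actions to the states they lead to (illustrative values only).
CITY_MAP = {
    "home":   {"left": "park", "right": "bridge"},
    "park":   {"straight": "office"},
    "bridge": {"straight": "office"},
}
TRAVEL_TIME = {"park": 5, "bridge": 12, "office": 0}   # minutes spent per state

def best_route(state: str) -> tuple[float, list[str]]:
    """Run through the decision tree; return (total time, action sequence)."""
    if state == "office":                     # destination reached
        return 0.0, []
    options = []
    for action, nxt in CITY_MAP[state].items():
        cost, rest = best_route(nxt)          # prospectively simulate the branch
        options.append((TRAVEL_TIME[nxt] + cost, [action] + rest))
    return min(options)                       # compare outcomes, pick the best

print(best_route("home"))   # (5.0, ['left', 'straight']): the faster route wins
```
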
In contrast, a model-​free system computes the value of one particular action,
based on the agent’s past experience in similar situations. Model-​free systems
enable value-​based decision-​making without representing complex maplike causal
links between actions and outcomes. Essentially, model-​free systems compute the
value of one particular action, without modeling the “world” as such. The compu-
tation is done on the basis of a comparison between the agent’s past experience in
similar situations and whether the current action turns out better or worse than
expected. For example, a model-​free system can generate a prediction that turn-
ing left at a familiar corner will or will not be valuable to the agent. The system

31
  As I have already mentioned, my argument for why these processes do not issue in beliefs them-
selves comes in Chapter 3.
32
  Parts of this section are adapted from Brownstein (2017).
33
  I am indebted to Cushman for the metaphor of navigating through the city. Note that there are, of
course, important differences between maps and decision trees. Thanks to Eliot Michaelson for point-
ing this out.

does this based on calculations of the size of any discrepancy between how valuable
turning left was in the past and whether turning left this time turns out better than
expected, worse than expected, or as expected. Suppose that in the past, when the
agent turned left, the traffic was better than she expected. The agent’s “prior” in this
case for turning left would be relatively high. But suppose this time, the agent turns
left, and the traffic is worse than expected. This generates a discrepancy between
the agent’s past reinforcement history and her current experience, which is negative
given that turning left turned out worse than expected. This discrepancy will feed
into her future predictions; her prediction about the value of turning left at this
corner will now be lower than it previously was. Model-​free systems rely on this
prediction-​error signaling, basing new predictions on comparisons between past
reinforcement history and the agent’s current actions. While there is no obvious
folk-​psychological analogue for model-​free processing, the outputs of this system
are commonly described as gut feelings, spontaneous inclinations, and the like.34
This is because these kinds of judgments do not involve reasoning about alternative
possibilities,35 but rather they offer agents an immediate positive or negative sense
about what to do.
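
The turning-left example can be put in schematic form. The following minimal Python sketch, with hypothetical numbers and a hypothetical learning rate, shows a cached action value being revised by a prediction error when the outcome is worse than expected.

```python
# A minimal sketch of a model-free update (illustrative values throughout).
# The cached value of one action is nudged by the prediction error: the
# difference between the experienced outcome and the expected outcome.

def update_value(expected: float, experienced: float, lr: float = 0.2) -> float:
    """Return a revised action value after one prediction-error update."""
    prediction_error = experienced - expected   # negative: worse than expected
    return expected + lr * prediction_error     # new "prior" for this action

value_turn_left = 0.8   # high prior: turning left has gone well in the past
value_turn_left = update_value(value_turn_left, experienced=0.2)  # bad traffic
print(value_turn_left)  # roughly 0.68: the next prediction at this corner is lower
```
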
It is important to note a distinction between two views about these kinds of
learning systems. A “single-​system” view claims that all mental functioning can be
(or ultimately will be) explained via these mechanisms, including deliberative rea-
soning, conscious thought, and so on. This seems to be what Clark (2015) thinks.
By contrast, a “multiple-​system” view (Crockett, 2013; Cushman, 2013; Railton,
2014; Huebner, 2016) aims to show how different mechanisms for evaluative learn-
ing underlie different kinds of behavior and judgment. The single-​system view is
more ambitious than the multiple-​system view. I adopt the latter.
Proponents of multiple-​system views disagree about how many systems there
are, but they agree that, in Daniel Gilbert’s words, “there aren’t one.” For example,
in the domain of social cognition, Cushman (2013) holds that a prediction-​error
system subserves representations of actions, but not of outcomes of actions. Recall, for instance, the work of Cushman and colleagues (2012) on pretend harm-
ful actions, discussed in Chapter 1. This research shows that people find pretend
harmful actions, like smacking a toy baby on a desk, more aversive than analogous

34
  It is important to note that while model-​free processing involves mental representations of the
expected outcomes of actions, it is not necessary that these representations be propositionally struc-
tured or otherwise formatted in belief-​like ways. Indeed, some view prediction-​error models as com-
prising a “store” of practical know-​how, only on the basis of which we are able to form states like beliefs
and intentions. Railton, for example, argues that “it is thanks to our capacity for such nonpropositional,
bodily centered, grounded mental maps and expectations that we are able to connect human proposi-
tional thought to the world via de re and de se beliefs and intentions” (2014, 838). Of course, this view
of the relationship between model-​free processing and belief depends upon what one takes beliefs to
be. I address this issue in Chapter 3.
35
  But see §4.2 for a discussion of model-​free systems and counterfactual reasoning.

harmless actions, like smacking a broom on a desk. Cushman (2013) suggests that
the key difference between these scenarios is that the first triggers model-​free rep-
resentations of a harmful action, regardless of its consequences, while the second
does not trigger model-​free representations of a harmful action, thus enabling
agents to represent the action’s outcome. Swinging something perceived as lifelike
(i.e., a toy baby), in other words, carries a negative value representation—​a “gist”-​
level impression—​which has been refined over time via prediction-​error feedback.
Swinging something perceived as an object doesn’t carry this kind of negative value
representation. This enables the broom-​smacking to be represented in terms of
its outcome. Representations of outcomes, on Cushman’s view, are model-​based.
These representations resemble models of decision trees, on the basis of which dif-
ferent actions, and the values of their outcomes, can be compared.36
Model-​based systems are often described as flexible, but computationally costly,
while model-​free systems are described as inflexible, but computationally cheap.
Navigating with a map enables flexible decision-​making, in the sense that one can
shift strategies, envision sequences of choices several steps ahead, utilize “tower of
Hanoi”–​like tactics of taking one step back in order to take two more forward, and
so on. But this sort of navigation is costly in the sense that it involves computing
many action–outcome pairs, the number of which expands exponentially even in
seemingly simple situations. On the other hand, navigating without a map, on the
basis of information provided by past experiences for each particular decision, is
comparatively inflexible. Model-​free systems enable one to evaluate one’s current
action only, without considering options in light of alternatives or future conse-
quences. But navigating without a map is easy and cheap. The number of options
to compute are severely constrained, such that one can make on-​the-​fly decisions,
informed by past experience, without having to consult the map (and without risk-
ing tripping over one’s feet while one tries to read the map, so to speak).
It is important not to oversell the distinction between model-​free and model-​
based learning systems. There is increasing evidence to suggest mutual interaction.37
But if one accepts the aforementioned rough distinction—​that model-​based sys-
tems are flexible but costly and that model-​free systems are inflexible but cheap—​
then one might be disinclined to think that model-​free learning provides an apt
framework for the mechanisms underlying the process of alleviation. For it is this
process, ex hypothesi, that accounts for the persistent recalibration and improve-
ment over time of spontaneous inclinations. An inflexible system for evaluative

36
  My aim is not to endorse Cushman’s view per se, but to illustrate how a multiple-​systems view
explains various kinds of judgment and behavior in terms of multiple mechanisms for evaluative
learning.
37
  See Kishida et al. (2015) and Pezzulo et al. (2015) for evidence of information flow between
model-​free and model-​based evaluative learning systems. Kenneth Kishida and colleagues, for exam-
ple, offer evidence of fluctuations in dopamine concentration in the striatum in response to both actual
and counterfactual information.

learning seems like a bad model for the process I have called alleviation. But, pace
the common characterization of model-​free learning systems as dumb and inflex-
ible, there is reason to think that this kind of system can support socially attuned,
experience-​tested, and situationally flexible behavior.

4.2  Model-​Free Learning and Self-​Alleviation


Evidence for the adaptive improvement over time of model-​free learning systems
is found across varied domains of activity. Statistical competence, for example, has
been traced to model-​free learning mechanisms (Daw et al., 2011).38 Findings like
these are consistent with wide-​ranging research suggesting that agents’—​even non-
human agents’—​spontaneous judgments are surprisingly competent at tracking
regularities in the world (e.g., Kolling, 2012; Preuschoff et al., 2006). Kielan Yarrow
and colleagues (2009) combine this with research on motor control to understand
expertise in athletics. They focus on experts’ ability to make predictive, rather than
reactive, decisions, on the basis of values generated for particular actions.
It is relatively uncontroversial, however, to say that model-​free learning helps to
explain the competencies of spontaneous judgment and behavior in cases in which
the relevant variables are repeatedly presented to the agent in a relatively stable and
familiar environment. In cases like batting in baseball, this repeated presentation of
outcomes in familiar situations enables the agent to update her predictions on the
basis of discrepancies between previous predictions and current actions. But what
about cases in which an agent spontaneously displays appropriate, and even skilled,
behavior in unfamiliar environments? Interpersonal social fluency, for example,
requires this. Offering a comforting smile can go terribly awry in the wrong circum-
stance; interpersonal fluency requires deploying the right reaction in changing and
novel circumstances. The question is whether model-​free systems can explain a kind
of “wide” competence in spontaneous decision-​making and behavior.39 Wide com-
petencies are not limited to a particular familiar domain of action. Rather, they can
manifest across a diverse set of relatively unfamiliar environments. One reason to
think that model-​free systems can subserve wide rather than narrow (context-​bound)
abilities alone is that these systems can treat novel cues that are not rewarding as pre-
dictive of other cues that are rewarding. Bryce Huebner describes this process:

For example, such a system may initially respond to the delicious taste
of a fine chocolate bar. But when this taste is repeatedly preceded by

38
  Nathaniel Daw and colleagues (2011) do find, however, that statistical learning typically involves
the integration of predictions made by both model-​based and model-​free systems. To what extent the
integration of these systems is necessary for statistical learning and other spontaneous competencies is
an open and important empirical question.
39
  So far as I  know, Railton (2014) first discussed wide competence in spontaneous decision-​
making in this sense.

seeing that chocolate bar’s label, the experience of seeing that label will
be treated as rewarding in itself—​so long as the label remains a clear
signal that delicious chocolate is on the way. Similarly, if every
trip to the chocolate shop leads to the purchase of that delicious choco-
late bar, entering the shop may come to predict the purchasing of the
chocolate bar, with the label that indicates the presence of delicious
chocolate; in which case entering the shop will come to be treated
as rewarding. And if every paycheck leads to a trip to the chocolate
shop . . . (2016, 55)40

This kind of “scaffolding” of reward prediction is known as “temporal difference
reinforcement learning” (TDRL; Sutton, 1988; Cushman, 2013). It enables
model-​free systems to treat cues in the environment that are themselves not
rewarding, but are predictive of rewards, as intrinsically rewarding. The chocolate
shop is not itself rewarding, but is predictive of other outcomes (eating chocolate)
that are rewarding. (Better: the chocolate shop is predictive of buying chocolate,
which is predictive of eating chocolate, which is predictive of reward.) The key
point is that the chocolate shop itself comes to be treated as rewarding. The agent
need not rely on a causal map that abstractly represents A leading to B, B leading to
C, and C leading to D.
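
A minimal Python sketch can illustrate this scaffolding. The chain of states, the reward, and the parameters below are hypothetical; what matters is that repeated prediction-error updates let each cue inherit value from the cue it predicts, with no map representing the whole chain.

```python
# A minimal temporal-difference sketch of reward-prediction scaffolding
# (illustrative states and values). Only eating chocolate is rewarding, but
# the cues that predict it come to carry cached value themselves.

STATES = ["paycheck", "shop", "label", "chocolate"]   # chain of predictive cues
REWARD = {"chocolate": 1.0}                           # the only primary reward
V = {s: 0.0 for s in STATES}                          # cached values, initially 0

def td_episode(lr: float = 0.5) -> None:
    """One pass down the cue chain, updating each cached value by TD error."""
    for s, nxt in zip(STATES, STATES[1:]):
        r = REWARD.get(nxt, 0.0)              # reward received on entering nxt
        V[s] += lr * (r + V[nxt] - V[s])      # prediction-error update

for _ in range(20):
    td_episode()
print(V)   # label, then shop, then paycheck approach 1.0: cues become rewarding
```
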
This helps to explain how a spontaneous and socially attuned gesture like
Obama’s grin might rely on model-free learning.41 Smiling-at-chief-justices-

40
  See also Cushman (2013, 280).
41
  My view is much indebted to Huebner (2016), who argues that implicit attitudes are constructed
by the aggregate “votes” cast by basic “Pavlovian” stimulus–​reward associations, model-​free reward
predictors, and model-​based decision trees. Pavlovian stimulus–​reward associations are distinguished
from model-​free reward predictors in that the former passively bind innate responses to biologically
salient rewards, whereas the latter compute decisions based on the likelihood of outcomes and incor-
porate predictors of predictors of rewards. The view I present here is different from Huebner’s, how-
ever, as I argue that states with FTBA components—​in particular the process of alleviation—​are best
explained by model-​free learning in particular. My view is that behavior is the result of the competition
between Pavlovian, model-​free, and model-​based systems. That is, behavior is the result of the com-
bined influence of our reflexive reactions to biologically salient stimuli (which are paradigmatically
subserved by Pavlovian mechanisms), our FTBA states (paradigmatically subserved by model-​free
mechanisms), and our explicit attitudes (paradigmatically subserved by model-​based mechanisms).
Now, this picture is surely too simplistic, given evidence of mutual influence between these systems
(see footnote 37). But to accept the mutual penetration of learning systems is not tantamount to adopt-
ing the view that these systems mutually constitute FTBA states. It is one thing to say that these states
paradigmatically involve model-​free learning, which is in important ways influenced by other learning
systems. It is another thing to say that these states are produced by the competition between these
learning systems themselves. Architecturally, my somewhat loose way of thinking is that biologically
attuned reflexes are the paradigmatic causal outputs of Pavlovian mechanisms, FTBA states are the
paradigmatic outputs of model-free learning mechanisms, and explicit attitudes are the paradigmatic
outputs of model-​based learning mechanisms.

during-presidential-inaugurations is not itself rewarding. Or, in any case, Obama
did not have past experiences that would have reinforced the value of this par-
ticular action in this particular context. But presumably Obama did have many
experiences that contributed to the fine-​tuning of his micro-​expressions, such that
these spontaneous gestures have come to be highly adaptive. Indeed, this adap-
tiveness is unusually salient in Obama’s case, where he displayed a high degree of
interpersonal social fluency. And yet he might have very little abstract knowledge
that he should smile in situations like this one (Brownstein and Madva, 2012a).
As Cushman (2013) puts it, a model-​free algorithm knows that some choice feels
good, but it has no idea why.
Martin Seligman and colleagues (2013) make a similar claim. In contrast to
researchers who view unconscious and automatic processes as crude and short-
sighted, they describe prediction-​error models as illuminating a kind of unreflective
“prospection.” They write:

The implicit processes of prospection we have emphasized appear to
be selective, flexible, creative, and fine-​grained. They fit into a view in
which subcortical affective processes are integral with cognition (Pessoa,
2008) and serve to “attune” the individual intelligently and dynamically
in response to a changing environment (Ortony et al. 1988; Reynolds &
Berridge 2008; Schwartz 2002; Schwartz & Clore 2003). (2013, 126)

Note that Seligman and colleagues say that this learning process attunes individuals
to changing environments. This is saying more than that prediction-​error learning
attunes agents to previously encountered regularities in the environment. Consider
the museumgoer, whose impulse to step back from the large painting isn’t fixed to
any one particular painting. Any painting of sufficient size, in the right context, will
trigger the same response. But what counts as sufficient size? Does the museum-
goer have a representation of a particular square footage in mind? Perhaps, but then
she would also need representations of metric units tied to behavioral routines for
paintings of every other size too, not to mention representations of size for every
other object she encounters in the environment, or at least for every object that
counts as a feature for her. This does not seem to be how things work. Instead, as
the museumgoer learns, she becomes more sensitive to reward-​predicting stimuli,
which over time can come to stand in for other stimuli.
The key point is that prediction-​error models of learning like these help to show
how states with FTBA components can improve over time.42 Experiences of tension
initiate coordinated responses directed toward their own alleviation. In turn, token

42
  This and the paragraphs following in this section are adapted from Brownstein and Madva (2012b).

experiences of (un)alleviation contribute to a gradual “fine-​tuning” of affective-​
behavioral associations to specific types of stimuli. This tuning process can happen
rapidly or slowly. Consider Aaron Zimmerman’s (2007) example of Hope, who
continues to look for the trash can under the sink even after she replaced it with
a larger bin by the stove. The first, rapid type of change is visible just after Hope’s
automatic impulses drive her to the wrong place. A salient feature of Hope’s envi-
ronment (F: “coffee grounds!”) induces tension (T: “yuck!”) and behavioral reac-
tions (B: “dispose under sink!”), but the response misfires and her feelings persist
unalleviated (A: “still yuck!”). This state of un-​alleviation represents a discrepancy
between Hope’s expectation about the location of the bin—​of which she may or may
not be aware—​and the outcome of her action, which failed to eliminate her FTBA
state. These unresolved feelings in turn automatically activate further responses
(“Not a trash can!” “Argh!” “Move to the stove!” “Ah, garbage dispensed”). The
immediacy and vivacity of these responses depend upon the size of the discrepancy
between her expectations and the outcome of her behavioral response. Large dis-
crepancies between expectations and outcomes will cause tension to persist; small
discrepancies will lead to diminished tension; and no discrepancy will lead to alle-
viation. Her automatic reactions to these discrepancies, in the form of affective and
motor responses, are not just one-​and-​done reactions to a salient cue, but integrally
related components that work in concert to guide her toward alleviation and that
in turn update her priors and act as a teaching signal for future automatic reactions.
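
Read this way, Hope’s rapid updating can be given a toy rendering. In the following minimal Python sketch (hypothetical names and numbers throughout), a misfired disposal leaves her tension unalleviated, and the resulting prediction error retrains her expectation about the bin’s location.

```python
# A toy rendering of Hope's rapid FTBA updating (illustrative values only).
# She acts on her strongest expectation; a miss leaves tension unalleviated
# and lowers that expectation, while the eventual hit raises the correct one.

expected_bin = {"sink": 0.9, "stove": 0.1}   # priors over the bin's location

def dispose(actual_bin: str, lr: float = 0.6) -> None:
    """One disposal episode: try locations by expectation until alleviated."""
    for tried in sorted(expected_bin, key=expected_bin.get, reverse=True):
        outcome = 1.0 if tried == actual_bin else 0.0
        # prediction-error update on each location as it is tried:
        expected_bin[tried] += lr * (outcome - expected_bin[tried])
        if outcome == 1.0:                   # alleviation: tension eliminated
            break

dispose("stove")
print(expected_bin)  # roughly {'sink': 0.36, 'stove': 0.64}: stove now comes first
```
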
In cases of slower change, an agent’s sense of (un-​)alleviation in one context will
contribute to the degree of tension she feels toward related actions in the future.
This will strengthen or weaken the associative connections that obtain between
perceptions of salient features, experiences of tension, initiation of motor routines,
and new experiences of (un-​)alleviation. In this way, better or worse responses in
particular situations guide agents toward better and better responses to further ten-
sions over time. This is true even when no apparent belief revision is necessary.
While Hope’s beliefs about the trash can’s location update immediately, she need
not have, and may be unable to form, accurate beliefs about the different amounts
of force required to toss out coffee grounds, banana peels, and peach pits (though
perhaps she should be composting these anyway). The improvement in her abil-
ity to navigate through the kitchen efficiently rather than awkwardly is due to her
gradually self-​modifying FTBA states. Her visceral sense of frustration when she
errantly tosses some of the coffee grounds on the floor instead of in the bin will,
ceteris paribus, make her less likely to repeat the mistake in the future.

4.3  Model-​Free Learning and Failure


Seligman and colleagues (2013) name four related reasons for thinking that
prediction-​error learning can support wide competence in spontaneous decision-​
making. First, these systems enable agents to learn from experience, given some
prior expectation or bias. Second, they enable prior expectations to be overcome
by experience over time, through the “washing out” of priors. Third, they are set up
such that expected values will, in principle, converge on the “real” frequencies found
in the environment, so that agents really do come to be attuned to the world. And
fourth, they adapt to variance when frequencies found in the environment change,
enabling relatively successful decision-​making in both familiar and relatively novel
contexts.
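Each of these four properties can be exhibited with the same toy delta rule. The following sketch is again mine rather than Seligman and colleagues'; the environment is an invented Bernoulli process whose reward frequency the learner tracks. A strong prior washes out, the estimate converges near the true frequency, and when that frequency changes the estimate re-adapts.

```python
import random

random.seed(0)  # for a reproducible illustration

def track_frequency(estimate, true_freq, trials, learning_rate=0.1):
    """Delta-rule tracking of a (hypothetical) binary environment: each
    observed outcome nudges the running estimate toward itself."""
    for _ in range(trials):
        outcome = 1.0 if random.random() < true_freq else 0.0
        estimate += learning_rate * (outcome - estimate)
    return estimate

prior = 0.9  # a strong initial expectation (bias)
estimate = track_frequency(prior, true_freq=0.2, trials=500)
print(f"prior {prior} washed out; estimate near true frequency: {estimate:.2f}")

# Fourth property: when the environment's frequency changes, the same
# rule re-attunes the agent to the new regularity.
estimate = track_frequency(estimate, true_freq=0.7, trials=500)
print(f"after the world changes, the estimate re-converges: {estimate:.2f}")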
Of course, our spontaneous inclinations can get things wrong. One kind of failure
can occur within the dynamics of FTBA relata. Suppose a friend leans in to whisper
something important. The whisperee might rightly perceive the inward lean as an
action-​imperative but under-​or overreact affectively, by being coldly indifferent or
disconcertingly solicitous. That would be a failure of feeling: tension that is excessive
or deficient, or that has the wrong valence. Alternatively, the whisperee
might feel tension in the right way but under-​or overcompensate behaviorally, by
leaning in too little to hear the secret or leaning in so close that she bumps heads
with the whisperer. Yet another possibility for error arises even if the whisperee feels
the right kind of tension and responds with an appropriate behavioral adjustment,
but fails to feel the right alleviation. She might continue to fidget awkwardly after
finding the optimal posture for listening to the whisperer. The whisperee can fail
in any of these respects even when all of the conditions are right for her impulse
to reduce her feelings of tension appropriately (even when the conditions are, as
Gendler [2008b] and Huebner [2009] would say, stable and typical).43
Similarly, if one restrains one’s behavior in some way, the sense of tension will,
ceteris paribus, persist. The persistent force of unalleviated feelings is evident in
how the skywalker continues to shiver and tremble while restraining the impulse
to flee or how the museumgoer might feel if she encountered the large painting in
a narrow hallway that prevented her from stepping back. When tension persists,
it has the potential to affect a person’s thoughts and behavior in vari-
ous problematic ways. One speculative possibility is that unalleviated FTBA states
contribute to the Zeigarnik Effect, which describes the way in which agents experi-
ence intrusive thoughts and feelings when they do not complete a task or activ-
ity (Zeigarnik, 1927/​1934). Hope might decide to leave the coffee grounds on the
floor to clean up another time, but if she does so, she might feel hurried later on, as
if she had too many items to cross off her list for the day. A second possibility is that
unalleviated FTBA states contribute to experiences of vague and undirected anxi-
ety, of the sort described by writers from Jean-​Paul Sartre (1956/​1993) to David
Foster Wallace (2011). Sometimes tension lingers in our experience and seems to
detach from any particular task. Metaphorically, we just can’t figure out what the

43   This possibility of FTBA states failing even in stable and typical environments distinguishes them
from rote reflexes. See Chapter 3.
analogue to stepping back from the large painting should be.44 Perhaps repeated
failures to alleviate tension are additive, leading to a kind of (physiological and/​or
metaphorical) hypertension.
Other kinds of failure reflect conflicts with agents’ reflective values and/​or with
moral goods. As Huebner (2009, 2016) emphasizes, these learning systems will be
only as good as the environment in which they are trained.45 Prediction-​error learn-
ing is vulnerable to small and unrepresentative samples of experience. A person who
lives in a world in which most high-​status jobs are held by men and most low-​status
jobs are held by women is likely to come to implicitly associate men with concepts
like competence, leadership, authority, and so on. This association will likely be rein-
forced by experience, not (rapidly or slowly) undermined by it. Similar outcomes
will happen in the case of merely perceived environmental regularities, for example,
when a person is bombarded with images in the media of black or Arab men acting
in violent ways. In cases like these, agents’ spontaneous social inclinations may still
be thought of as attuned to the social world, but to the wrong features of it.
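The same update rule makes the point about training environments vivid: nothing in prediction-error learning distinguishes a representative sample from a skewed one. In this invented sketch (the frequencies stand in for a biased environment or media diet, and are not data from any study), a learner exposed mostly to men in high-status roles converges on precisely that skewed association, which further exposure then reinforces.

```python
import random

random.seed(1)

def learn_association(observed_freq, trials=1000, learning_rate=0.05):
    """Track how often a group is observed in high-status roles; the
    learner faithfully converges on whatever its sample contains."""
    strength = 0.5  # start indifferent
    for _ in range(trials):
        observed = 1.0 if random.random() < observed_freq else 0.0
        strength += learning_rate * (observed - strength)
    return strength

# A skewed sample: 90% of observed high-status roles are held by men.
print(f"men–leadership association:   {learn_association(0.9):.2f}")
print(f"women–leadership association: {learn_association(0.1):.2f}")
# The learner is attuned to its sample; when the sample misrepresents the
# world, the resulting attunement tracks the wrong features of it.
```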
I consider the significance of these kinds of successes and failures in Chapter 3,
where I  argue that states with paradigmatic FTBA components are implicit atti-
tudes, a sui generis kind of mental state. My aim here has been to describe a plau-
sible model of the evaluative learning system underlying the process of alleviation,
that is, the process through which our spontaneous inclinations change over time
and (sometimes) attune us to the realities and demands of the local environment.

5.  Two Case Studies


I summarize my case for the FTBA components of paradigmatic spontaneous incli-
nations by considering two examples: skilled athletic action and implicit bias.
Imagine an expert tennis player in the midst of playing a point. At the right
moment she notices a salient feature, perhaps the underspin and arc of an oppo-
nent’s dropshot, and reacts immediately (F: “dropshot!”). In virtue of her goal of
winning the point, the opponent’s shot is perceived as imperatival, indicating to
her not just low-​level properties such as spin and arc, but an action-​oriented com-
mand to charge the net. This command is tied to her perception of the shot through
a felt tension pulling the player toward the net (T: “get there!”), just as a deep shot
will repel the player backward (T: “get back!”). Perhaps the contents of her percep-
tion are swamped by the imperative (although I doubt it; see discussion of Exhaust
in §1.3); perhaps they shift depending on whether the dropshot is gettable or too
good (as per Shift); or perhaps they shift only because of a change in her atten-
tion. Any of these is consistent with my account, and none are required by it. The

44   Thanks to Daniel Kelly for suggesting this metaphor.
45   But this is not to say that intra-​individual differences don’t exist or don’t matter.
player’s behavioral reaction to these features and tensions—​a hard charge in the
case of the dropshot—​is, as I  emphasized, highly context-​sensitive. If she is up
40–​love, the player’s felt tension might be relatively anemic, thus diminishing the
imperatival force of the dropshot, considering that she cares little about the point.
By comparison, at four games all and deuce in the third set, the dropshot will likely
trigger pronounced tension, which the player is almost sure to notice explicitly, as she
is immediately inclined to give it everything she has. In other contexts, as in the
museumgoer case, the player’s goals and cares might be revealed to her through
these reactions; she might find herself charging harder than she realized or dogging
it when she believes she ought to be playing her hardest.
In all of these cases, the successes and failures of the player’s reaction feed back
into future FTBA reactions. The player who overreacts to dropshots, perhaps by
misperceiving the arc of the shot, feeling too much tension, or charging too hard,
such that she fails to have her racquet ready to play her own shot, is left with a dis-
crepancy between the expectation associated with these F-​T-​B reactions and the
actual outcome of her response. This discrepancy will update her priors for expecta-
tions of outcomes for future reactions, in much the same way that prediction-​error
learning has been shown to underlie batters’ ability to predict where pitches will
cross the plate (Yarrow et al., 2009). And it will do so in a very finely individuated
way, such that a love–​40 context might generate different expectations than a deuce
context. In the moment, the success or failure of her responses will or will not allevi-
ate tension, as the player either is able to move on to the next shot (A1: “ready!”) or
dwells with the feeling of having misplayed the dropshot (A2: “damn!”). Of course,
the player’s improvement is not guaranteed. Her reactions may fail to self-​alleviate
for a host of endogenous or exogenous reasons (as in the case of hearing someone
whisper to you; see §4.3). Perhaps the player feels too much tension because she
cares excessively about the point; maybe money is on the line, and her concern for
it interferes with the successful unfolding of her FTBA reaction. Perhaps she can-
not avoid ruminating on her fear of failure and, like Richie Tenenbaum, takes her
shoes off and simply gives up on the game (but hopefully not on life too).
Of course, different interpretations of this scenario are possible, some of which
are rivals and some of which are simply open questions. For example, as I discussed
earlier (§1.3), there are multiple possible interpretations of the perceptual content
of perceived features. Likewise, there are open questions about whether some forms
of perception are relatively less entwined with affect than what I have depicted, or
perhaps are entirely nonaffective (§2). Other, broader strategies for understanding
these cases are not consistent with my account, however. One is that the tennis play-
er’s reaction to the dropshot can be explained in terms of a cultivated disposition to
react to particular shots in particular ways. On this interpretation, there is no occur-
rent state, with FTBA components or otherwise, that explains her action. Another
rival interpretation is that the player’s action is best explained in terms of ordinary
belief–​desire psychology. On this interpretation, upon seeing the oncoming ball
and judging it to be a dropshot, the player forms a set of beliefs, to the effect of “That
is a dropshot,” “One must charge to retrieve dropshots,” or the like, and in addition,
the player forms a desire (or accesses a standing desire), to the effect of “I want to
retrieve the dropshot” or “I want to win the point.” These beliefs and desires would
rationalize her action, with no need of mentioning special mental states with FTBA
components. These are the sort of rival interpretations of cases of spontaneity that
I consider in depth in the next chapter. I don’t deny that these rival interpretations
describe possible cases, but I deny that they offer the best explanation of the para-
digmatic cases I’ve described.
Much the same can be said in cases involving implicit bias. Consider a professor
evaluating a stack of CVs. A salient feature (F: “female name on CV!”) induces ten-
sion (T: “risky hire!”) and behavioral reactions (B: “place CV in low-​quality pile!”),
which either do (A1: “next CV!”) or do not self-​alleviate (A2: “something wrong!”).
Or consider a standard shooter bias case: a salient feature (F: “black face!”) induces
tension (T:  “danger!”) and behavioral reactions (B:  “shoot!”), which either do
(A1: “fear diminished!”) or do not self-​alleviate (A2: “still afraid!”).
As in the analysis of the tennis player, there are many open questions and unsaid
details here. What exactly the CV evaluator or shooter bias test participant perceives
is not clear. Indirect measures like the IAT don’t typically distinguish behavior from
underlying mental processes, although some suggestive evidence regarding partici-
pants’ perceptual experience exists. Using eye-​trackers, for example, Joshua Correll
and colleagues (2015) found that participants in a first-​person shooter task require
greater visual clarity before responding when faced with counterstereotypic targets
(i.e., armed white targets and unarmed black targets) compared with stereotypic
targets (i.e., unarmed white targets and armed black targets). This suggests that
stereotypic targets are relatively imperatival for these agents. The imperatival qual-
ity of the perceived feature in these trials appears to be stronger, in other words.
Whether this means that the feature is easier to see, or relatively more affective, or
something else is not clear from this experiment. Future research could perhaps
try to replicate Correll and colleagues’ results while testing whether salient feel-
ings of fear on stereotype-​consistent trials moderate the outcome. Such a finding
would build on previous research showing that people can and do make use of their
implicit biases when directed to focus on their “gut feelings” ( Jordan et al., 2007;
Ranganath et al., 2008; Richetin et al., 2007; Smith and Nosek, 2011).
This finding would speak to an open question in the literature on implicit
bias: how putatively “cold” affectless stereotypes relate to “hot” affective prejudices.
A number of researchers hold a “two-​type” view of implicit biases, that is, that there
are fundamentally two kinds: implicit stereotypes and implicit evaluations (Correll
et  al., 2007, 2015; Glaser, 1999; Glaser and Knowles, 2008; Stewart and Payne,
2008). On this view, a white person (for example) might have warm feelings toward
black people (i.e., positive implicit evaluations) but stereotype them as unintelligent
(Amodio and Hamilton, 2012). There is, however, both conceptual and empirical
evidence against the two-​type view.46 For example, Jack Glaser (1999) and Bertram
Gawronski and colleagues (2008) found that retraining implicit stereotypes changes
agents’ implicit evaluations. David Amodio and Holly Hamilton’s (2012) own data
reflected a difficulty participants had in associating blacks with intelligence. This
suggests that, rather than tracking coldly cognitive stereotypes alone, their implicit
stereotyping measure tracks the insidious and plainly negative stereotype that black
people are unintelligent. This negative evaluative stereotype combines the perception
of perceived features with particular affective responses in order to induce action; a
salient feature (F: “black face!”) induces tension (T: “danger!”) and specific behav-
ioral reactions (B: “shoot!”). Research supports similar conclusions in the case of
biased CV evaluations. Dan-​Olof Rooth and colleagues, for example, found that
implicit work-​performance stereotypes predicted real-​world hiring discrimination
against both Arab Muslims (Rooth, 2010) and obese individuals (Agerström and
Rooth, 2011) in Sweden. Employers who associated these social groups with lazi-
ness and incompetence were less likely to contact job applicants from these groups
for an interview. In both cases, measures of evaluative stereotypes were used, and
these predicted hiring discrimination over and above explicit measures of attitudes
and stereotypes.
As in the case of the tennis player, these co-​activating responses will vary with
agents’ cares and context. Joseph Cesario and Kai Jonas (2014) find, for example,
differential effects of priming white participants with the concept “young black
male” depending on the participants’ physical surroundings and how threatening
they perceive black males to be. Using as an outcome measure the activation of fight
versus flight words, they compared the effects of the prime on participants who
were seated in an open field, which would allow a flight response to a perceived
threat, with the effects of the prime on participants seated in a closed booth, which
would restrict a flight response. Cesario and Jonas write:

Consistent with predictions, we found that the relative activation of fight-​
versus escape-​related words differed as a function of whether participants
were in the booth or the field and the degree to which participants associ-
ated blacks with danger. When participants were in the booth, the more
they associated black males with danger, the greater the activation of fight-​
related words. In the open field, however, the more participants associated
black males with danger, the more they showed increased activation of
escape-​related words. (2014, 130)47

46   See Madva and Brownstein (2016) and brief discussion in the Appendix. “One-​type” theo-
rists hold that only one type of implicit mental state exists (Greenwald et al., 2002; Gawronski and
Bodenhausen, 2006, 2011). Different one-​type theories conceptualize the relationship between ste-
reotypes and affect within implicit mental states in different ways.
47   For more on the context sensitivity of implicit mental states, see Chapter 7.
I discuss in the Appendix how findings like these speak to concerns about the
predictive validity of indirect measures of attitudes—​like the IAT—​as well as to
broader concerns about replicability in psychological science. The point here is that
familiar implicit biases are activated and affect agents’ behavior in the way that is
paradigmatic of our spontaneous inclinations.
This is true, too, of the ways in which implicit biases change over time, incorpo-
rating successes and failures through the feedback processes of alleviation. “Success”
and “failure” are tricky notions in this context, however. Within the narrow terms of
model-​free learning, an agent’s spontaneous reaction to a situation will be success-
ful when her expectations match the outcome of her actions and her feelings of felt
tension subside, thus eliminating the FTBA response (§§4.2 and 4.3). A boy who
holds gender–​math stereotypes and who also lives in an environment that appears
to confirm these stereotypes—​for example, he might go to a school in which no
girls take or succeed in math classes, because of implicit or explicit social norms,
long-​standing discriminatory organizations of career paths, and so on—​may auto-
matically think of his father and not his mother when he needs help with his math
homework. In this context, this reaction will be rewarded, updating and strengthen-
ing the boy’s prior association of males with math. The boy’s reaction might also be
unsuccessful, in (at least) two different senses. His FTBA reaction might
go awry, as in the whispering case. He might misperceive a crucial detail of the math
problem, or over-​or underreact affectively, or fail to behave at all. But in the second,
weightier sense, his FTBA reaction might be (or is!) an
epistemic and moral failure. His mother might be better suited to helping him with
his math homework than his father. More broadly, his spontaneous reaction fails to
track the truth about women’s mathematical ability as well as the overriding reasons
to treat people with equal respect.48 I discuss the relationship between successful
FTBA reactions and moral goods in Chapter 7.
Finally, as I said in the case of the tennis player, there are rival interpretations on
offer of the aforementioned claims. The ones potentially most threatening to my
account interpret implicit biases in entirely different ways than I have, for example,
as ordinary beliefs rather than as implicit associations. I address these rival interpre-
tations in the next chapter.

48   Regarding his epistemic failure, I am assuming that the boy’s gender–​math association is general-
ized to women’s innate mathematical ability. His gender–​math association could be thought of as accu-
rate, if applied more narrowly to the girls and women in his town, contingent on their being raised in
this particular environment. But implicit attitudes aren’t sensitive to contingencies like this, as I discuss
in Chapter 3. Regarding the boy’s moral failure, I am here assuming a kind of moral realism, such that
his FTBA reactions are morally problematic regardless of his explicit moral beliefs.
6. Conclusion
The descriptions given in this chapter are meant to show how a range of seemingly
disparate examples of spontaneity can be identified in terms of their FTBA com-
ponents. I have not yet given an argument that these components are structured in
a unified way. I do so in the next chapter, where I consider cognitive architecture.
I now turn to my argument that states with co-​activating FTBA components consti-
tute implicit attitudes. They are not akin to physiological reflexes, mere associations,
beliefs (whether conscious or nonconscious, truth-​tracking or modular), or traits
of character. While they share a number of properties with what Gendler (2008a,b)
calls “aliefs,” implicit attitudes are different in crucial respects from these too.
3

Implicit Attitudes and the Architecture of the Mind

In the preceding chapter, I argued that paradigmatic spontaneous inclinations are
characterized by four components: (1) noticing a salient Feature in the ambient
environment; (2) feeling an immediate, directed, and affective Tension; (3) reacting
Behaviorally; and (4) moving toward Alleviation of that tension, in such a way that
one’s spontaneous reactions can improve over time. In this chapter, I argue that these
co-​activating FTBA reactions make up a unified and sui generis mental state, which
I call an implicit attitude. After making some clarifying remarks about terminology,
I begin by ruling out more mundane possibilities. First, I argue that implicit attitudes
are not basic stimulus-​response reflexes (§1). Nor are they mere mental associations
that are explainable in terms of the laws of spatiotemporal contiguity (§2). Neither are
they best understood as beliefs or as traits (§§3 and 4). While they are closest to what
Gendler calls “aliefs,” implicit attitudes are distinct from these as well (§5). I conclude
the chapter by offering considerations that favor interpreting them as a unified mental
kind (§6).
I call the sui generis type of mental state implicated in paradigmatic spontane-
ous action an “implicit attitude” for several reasons. I began Chapter 1 by talking
about “instincts,” as in “gut instincts,” but these terms are imprecise and also have
the connotation of being innate. The kind of state under discussion here is, by
contrast, strongly affected by social learning. Another option, following Schapiro
(Chapter 2), would be to call this kind of state an “inclination.” This has the benefit
of simplicity, but its simplicity comes at the cost of leaving out—​to my ear at least—​
any sense of the temporally unfolding quality of states with FTBA components.
Inclinations as such might change over time, but not as a result of becoming more
attuned to feature(s) of the world. Another option is “mental habits” or, perhaps
better, “habitudes,” which one might define as something like multitrack habits1

1   I use this term in the way that theorists of virtue use it, to describe patterns of action that are
durable and arise in a variety of contexts, and are also connected to a suite of thoughts, feelings, etc.
(e.g., Hursthouse, 1999).


with cognitive and affective components.2 I discuss limitations of this dispositional
approach in §4.
While I’ll argue that implicit attitudes are sui generis, I  don’t mean to sug-
gest that I’ve discovered anything radically new about the mind or am positing
a category that must be included in a “final theory” of the mind.3 The history of
Western philosophy is suffused with discussion of the relationship between, on the
one hand, animal spirits, appetite, passion, association, habit, and spontaneity and,
on the other hand, reason, deliberation, reflection, belief, and judgment.4 Implicit
attitudes are sui generis in the sense that they don’t fit neatly into any one of these
familiar categories. But they fit naturally within this long history of dichotomous
ways of thinking of the mind and represent, inter alia, a proposal for understand-
ing key commonalities among the phenomena variously called passion, habit, and
so on.5
In calling states with FTBA components implicit attitudes, I  draw upon the
relevant empirical literature on social cognition. While there is no clearly agreed-​
upon definition of implicit attitudes in this literature, this term stems from what
“attitudes” are understood to be in psychology, paired with a general sense of what
it means for a mental state to be “implicit.” In psychology, attitudes are preferences
(or “likings” and “dislikings”). This is, of course, different from the understanding
of attitudes in philosophy, which usually refers to states with propositional content
(e.g., beliefs). “Implicitness,” in psychology, is generally taken to mean outside of
awareness and control. So implicit attitudes are thought to be, roughly, unconscious
preferences that are difficult to control.
My use of the term “implicit attitudes” is broader. In contrast to the typical
usage in the psychological literature (as well as much of the relevant philosophi-
cal literature), I skirt the issues of awareness and control. Or, rather, I skirt these
issues in this chapter. I  address them in subsequent chapters, where I  consider

2   See Railton (2011) for discussion of “habitudes.” Pierre Bourdieu’s “habitus” presents a related
option. Bourdieu defines the habitus as being composed of “[s]‌ystems of durable, transposable dis-
positions, structured structures predisposed to function as structuring structures, that is, as principles
which generate and organize practices and representations that can be objectively adapted to their
outcomes without presupposing a conscious aiming at ends or an express mastery of the operations
necessary in order to attain them” (1990, 53). While I think Bourdieu’s concept of habitus has much
to offer, I decline to adopt it, largely because I think it is too wedded to Bourdieu’s broader sociologi-
cal theories.
3   See Gendler (2008b, 557, n 5) for a similar point about the sui generis nature of alief.
4   See Evans and Frankish (2009) for more on this history.
5   Neil Levy (2014, 2016) argues that we cannot answer questions about the implicit mind—​for
example, whether we are responsible for implicit biases—​by consulting our intuitions, because the way
the implicit mind works is radically foreign to the perspective of folk psychology. While I am sympa-
thetic with this point and think that our intuitions about implicit attitudes are likely to be challenged by
empirical facts, I find more continuity between empirical research and folk psychology than Levy does.
the relationship between implicit attitudes and the self, and the ethics of sponta-
neity. But in this chapter, where my focus is on psychological architecture, I focus
elsewhere. Awareness and control can be understood as conditions under which
implicit attitudes sometimes operate. Instead of focusing on these conditions, my
aim is to provide an account of the psychological nature of implicit attitudes.6
Moreover, while there is research on the role of implicit cognition in relatively
banal contexts like brand preferences (e.g., Coke vs. Pepsi), by far the greatest focus
in the field is on pernicious and unwanted biases. One often sees the terms “implicit
attitudes” and “implicit bias” used interchangeably. This is perhaps the most obvi-
ous sense in which my use of the term “implicit attitudes” is broader than the stand­
ard usage. My focus is on the explanatory role of implicit attitudes in the context of
both the virtues and vices of spontaneity.
With these caveats in mind, I now move on to consider where to locate implicit
attitudes in the architecture of the mind. Given that these states are activated rela-
tively automatically when agents encounter the relevant cues, one possibility is that
they are basic stimulus-​response reflexes.

1. Reflexes
One sense of “reflex” describes involuntary motor behavior.7 In this context, invol-
untariness often means “hard-​wired.” For example, infants’ “primary reflexes”—​
such as the palmar grasp reflex and the suckling reflex—​are present from birth (or
before) and then disappear over time. Some somatic reflexes that are present at
birth, like the pupillary reflex, don’t disappear over time, while others not present
at birth emerge through development, such as the gag and patellar reflexes. All of
these reflexes are “hard-​wired” in the sense that they involve no social learning. They
are, however, involuntary to various degrees. While infants will suckle at any stimuli
with the right properties, adults can control their somatic reflexes, albeit indirectly.
To control the patellar reflex, one can wear a stiff leg brace that prevents one’s leg
from moving. To control the pupillary reflex, one can put dilation drops in one’s
eyes. This suggests that involuntariness alone is not sufficient for carving reflexes off
from non-​reflexes. Better are the following four properties.
Reflexes are (1)  instantaneous, taking very little time to unfold. They are
(2) involuntary, not in a grand sense that threatens free will, but in the local sense

6   See Payne and Gawronski (2010) for similar ideas.
7   A different sense of “reflex” is used in physiology, in which “reflex arcs” describe the neural path-
ways that govern autonomic and somatic processes. These pathways are reflexive in the sense that they
are physically located outside the brain. It is not this sense of reflexes that I have in mind when con-
sidering whether and how implicit attitudes are reflex-​like states. What makes these states potentially
reflex-​like has little to do with their physical implementation in (or outside of) the brain.
that they are relatively isolated from the agent’s intentions, goals, or desires.8 Note
the term “relatively.” Reflexes are not isolated from an agent’s intentions and so on
in the sense that one’s intentions are utterly powerless over them. As I said, while
one cannot directly prevent one’s pupil from constricting in bright light, one can put
hydroxyamphetamine drops in one’s eyes to force the iris dilator muscles to relax.
A related way to put this is that reflexes are (3) relatively isolated from changes in
context. To put it bluntly, the patellar reflex doesn’t care if it’s morning or night, if
you’re alone or in the company of others. When the right nerve is thwacked, the
response unfolds. Of course, on a very capacious interpretation of context, one
could construe leg braces and hydroxyamphetamine drops as elements of the con-
text that affect how a reflex unfolds (or doesn’t). But in the narrower sense I mean it,
contextual factors affect behavioral events when they bear some relationship to the
meaning of the event for the agent. This need not be a conscious relation, nor one
involving the agent’s beliefs. Moderation by context can mean that some feature of
the situation is related to the agent’s goals or cares. Finally, reflexes are (4) ballistic.
Once a reflexive reaction starts, it unfolds until it’s complete (unless its unfolding is
otherwise prevented).
Implicit attitudes display some of these characteristics. They are instantaneous,
in the sense that they unfold rapidly, and they are involuntary, in the sense I just
described, namely, that they can be initiated without being intended, planned, or the
like. However, implicit attitudes are neither isolated from the agent’s context nor are
they ballistic.9
Recall that agents are constantly confronted by myriad stimuli, many of which
could in principle stand as features for them. The large painting stands as a fea-
ture for the museumgoer, but the light switch on the wall, ceteris paribus, doesn’t.
Benes’s blood sets off his reaction, but someone else’s blood probably wouldn’t. This
is because the restroom sign and Benes’s own blood are connected to the agent’s rel-
evant goals or cares, as I discussed in Chapter 2 (also see Chapter 4). Paradigmatic
cases of reflexive behavior lack this connection. Much as in the case of reflexes,
implicit attitudes are set in motion more or less instantaneously and automatically
when the agent encounters particular stimuli. But which stimuli set them in motion
has everything to do with other elements of the agent’s psychology (her goals, cares,
and her past value-​encoding experiences). While the skywalker’s knee-​knocking
and quivering might persist despite her beliefs, she does stay on the Skywalk after
all. Similarly, while implicit biases are recalcitrant to change by reason or willpower

8   Of course, I haven’t said anything to convince a determinist that free will is unthreatened in these
cases. All I mean is that reflexes are involuntary in a sense distinct from the one raised by questions
about free will.
9   For a broadly opposing view, focused on understanding intuitions as primitive and relatively
automatic “alarm signals” that paradigmatically lead us astray, both prudentially and normatively, see
Greene (2013).
alone, they are not altogether isolated from agents’ explicit attitudes and are in fact
quite malleable (see Chapters 6 and 7).
Psychological research on implicit attitudes also demonstrates that these states
are highly affected by context. Both physical and conceptual context cues shape
the way in which they unfold.10 For example, Mark Schaller and colleagues (2003)
show that the relative darkness or lightness of the room in which participants
sit shifts scores on implicit racial evaluations across several indirect measures,
including the IAT.11 Conceptual elements of context influence implicit attitudes
primarily in light of perceived group membership and social roles. Jamie Barden
and colleagues (2004), for example, varied the category membership of targets
by presenting the same individual in a prison context dressed as a prisoner and
dressed as a lawyer, and found that implicit evaluations of the person dressed
as a prisoner were considerably more negative. Similarly, Jason Mitchell and col-
leagues (2003) showed that implicit evaluations of the same individual—​Michael
Jordan—​depended on whether the individual was categorized by race or occupa-
tion. The agent’s own emotional state also affects the activation of implicit atti-
tudes. Nilanjana Dasgupta and colleagues (2009) found that participants who
were induced to feel disgust had more negative evaluations of gay people on an
IAT, although their implicit evaluations of Arabs remained unchanged. However,
participants who were induced to feel anger had more negative evaluations of
Arabs, while their evaluations of gay people remained unchanged.12 Context simi-
larly influences other cases. Being on the Skywalk on a blustery day might increase
a typical tourist’s aversive reaction, while taking a stroll when the wind is perfectly
still might make things slightly more tolerable.
Finally, implicit attitudes are not ballistic. This is because they involve dynamic
feedback learning, which enables the agent’s behavior to adjust on the fly as her
tension is or is not alleviated and as circumstances change. Some reflexes similarly
shift and adjust, such as pupillary dilation in a room with the lights being turned
off and on. But there is no learning going on there. Each reaction of the iris dilator
muscles to the brightness of the room is independent of the others. Moreover, the
dilator muscles will find the optimum point unless interrupted by further stimuli.
In the case of spontaneous action, however, the agent’s reaction is a reaction to felt
tension, and its alleviation (or lack of alleviation). The distance-​stander adjusts how
far she stands from her interlocutor in respect of her feelings of tension and allevia-
tion. This is an integrated and ongoing process, not a series of one-​and-​done reflex-
ive reactions. This process is, as I discussed in the preceding chapter, what allows

10   What follows in this paragraph is adapted from Brownstein (2016). See Chapter 7 for further
discussion of context and implicit attitude malleability.
11   Although I note the relatively small number of participants in this study (n = 52 in experiment
2). I note also that these results obtained only for subjects with chronic beliefs in a dangerous world.
12   See Gawronski and Sritharan (2010) for a summary and discussion of these data.
implicit attitudes to improve over time—​instrumentally, epistemically, or morally,
depending on the case—​and this, too, distinguishes them from mere reflexes.

2. Associations
While the F, T, B, and A relata of implicit attitudes are associatively linked, these
states themselves are not mere associations. By “mere” associations, I mean mental
states that are activated and that change exclusively according to associative prin-
ciples (Buckner, 2011). Mere associations are wholly insensitive to the meaning of
words and images, as well as to the meaning of an agent’s other thoughts. They are
thoroughly “subpersonal.”
In the usual sense, associations are mental states that link two concepts, such as
salt and pepper or donald and duck. But associations can also stand between
concepts and a valence—​for example, in the association between snake and
“bad”—​as well as between propositions—​for example, in the association between
“I don’t want to grow up” and “I’m a Toys R Us kid.” To say that these are asso-
ciations or, more precisely, to say that they have associative structure is to say that
thinking one concept, like salt, will activate the associated concept, pepper.13 Why
does this happen? In these examples, it is not typically thought to be because the
subject has a structured thought with content like “Salt and pepper go together.”14
The proposition “Salt and pepper go together” reflects the way an agent takes the
world to be. Propositions like this often express one of an agent’s beliefs. An asso-
ciation of salt with pepper, however, simply reflects the causal history of the agent’s
acquisition of the concepts salt and pepper. More specifically, it reflects what
Hume (1738/​1975) called contiguity: salt and pepper (either the concepts or the
actual substances) were frequently paired together in the agent’s learning history,
such that every time, or most times, the agent was presented with salt, she was also
presented with pepper. Eric Mandelbaum (2015b) distinguishes between mental
states with propositional and associative mental structure this way:  “Saying that
someone has an associative thought green/​toucan tells you something about the
causal and temporal sequences of the activation of concepts in one’s mind; saying
that someone has the thought there is a green toucan tells you that a person
is predicating greenness of a particular toucan.” Hume’s other laws of associative
learning—​resemblance and cause and effect—​similarly produce mental asso-
ciations (i.e., mental states with associative structures). These ways of coming to
associate concepts, or concepts with valences, are well known to be exploited by
behaviorist theories of learning.

13   See Mandelbaum (2015b) for discussion of associative structure, compared with associative
theories of learning and thinking.
14   For an exception, see Mitchell et al. (2009).
Behavioral evidence tends to be a poor tool for determining which mental states
are structured associatively. One approach is to focus on how states change over
time. Propositional states are thought to change in virtue of the meaning of words
or images. This is because there are specific relations between the constituents of
propositional states (Levy, 2014). If I  believe that “salt and pepper go together,”
I have placed the constituents of this thought—​salt and pepper—​together in virtue
of some meaningful relation (e.g., that they are well-​paired seasonings). One would
expect this state to change upon encountering thoughts or images that are relevant
to this relation (e.g., the thought that tarragon complements salt better than pep-
per). This can be thought of as “content-​driven” change. Content-​driven changes are
evident when a mental state changes in virtue of the content of some other mental
state (Levy, 2014).15 Here’s another example: I might go from the thought that “I
don’t need a hat today” to the thought that “I need a hat today” in virtue of acquiring
the belief that “it’s cold outside.” My thought about needing a hat counts as content-​
driven if it is true that it changed in virtue of the meaning of the thought “It’s cold
outside.” My hat-​attitude counts as content-​driven, in other words, and therefore
isn’t a mere hat–​cold association, if it demonstrates sensitivity to the meaning
of my, or others’, thoughts about the weather. Merely thinking the concept cold
wouldn’t change my hat-​attitude in this case. Nor would my hat-​attitude change
in virtue of the association cold–​today coming to mind, since this association
doesn’t predicate the quality of coldness onto the weather today.
In contrast, associations are thought to change exclusively in terms of associa-
tive principles. Traditionally, two such principles are recognized (Mandelbaum,
2015a). The first is extinction, in which one part of an associative pair is repeatedly
presented without the other and the agent unlearns the association over time. The
second is counterconditioning, in which the valence of an association is reversed
(e.g., by frequent pairing of snake with positively valenced stimuli).
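Both principles can be pictured with the same sort of delta-rule toy used in Chapter 2. This is an editorial schematic with invented valence values, not a model taken from the conditioning literature: extinction presents the cue without its valenced partner, so the stored valence decays toward neutral, while counterconditioning pairs the cue with stimuli of the opposite valence, eventually reversing its sign.

```python
def condition(valence, paired_value, trials, learning_rate=0.2):
    """Repeatedly pair a cue with an outcome of a given value, nudging
    the cue's stored valence toward that value on each trial."""
    for _ in range(trials):
        valence += learning_rate * (paired_value - valence)
    return valence

snake_valence = -1.0  # an invented, strongly negative snake association

# Extinction: present the cue with no valenced partner (value 0.0);
# the negative association decays toward neutral.
print(f"after extinction:          {condition(snake_valence, 0.0, trials=10):+.2f}")

# Counterconditioning: pair the cue with positively valenced stimuli
# (value +1.0); the association's valence eventually reverses.
print(f"after counterconditioning: {condition(snake_valence, 1.0, trials=10):+.2f}")
```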
Implicit attitudes appear to change in both associative and propositional ways.
Implicit attitudes (in the ordinary sense) change through counterconditioning, for
instance, by exposing agents to images of counterstereotypes (Blair et al., 2001;
Dasgupta and Greenwald, 2001; Weisbuch et  al., 2009). This process shifts the
agent’s exposure to stereotypes that in turn form the basis for associative learning
through contiguity (i.e., the frequent pairing of stimuli). The literature on evalu-
ative conditioning is vast, suggesting similar processes (Olson and Fazio, 2006).
However, these same implicit attitudes also seem to be sensitive to the mean-
ing of words and images. The context effects I  discussed earlier (§1) suggest as
much. For example, Jennifer Kubota and Tiffany Ito (2014) showed that happy
versus angry emotional facial expressions moderate the activation of black–​danger

15   This is not to say, though, that states like belief can’t change due to a change in an agent’s associa-
tions. Culinary beliefs about seasonings are surely influenced or even driven by frequency of pairings.
No claim of modularity is being made here.
implicit stereotyping. Smiling black faces elicit lower levels of implicit stereotyping
of black men as dangerous compared with angry black faces (for white subjects).
This change in implicit attitudes, due to changes in the perception of the emotional
expression of others’ faces, appears to be driven by sensitivity to the meaning of the
relationship between facial expression and danger (i.e., to the thought that people
who smile are less likely to be dangerous than people who look angry).16 Other
evidence for content-​driven changes in implicit attitudes stems from research on
“halo” and “compensation” effects. These are adjustments along particular dimen-
sions that reflect quite a lot of complexity in agents’ social attitudes. “Benevolent
sexism” is a classic compensation effect, for example. Stereotyping women as
incompetent leads to greater feelings of warmth toward them (Dardenne et  al.,
2007). Perceptions of competence and feelings of warmth are sometimes inversely
related like this (i.e., a compensation effect), but other times are positively related,
such as when liking one’s own group leads one to think of it as more competent
(i.e., a halo effect). These phenomena are found in research on implicit attitudes.
Rickard Carlsson and Fredrik Björklund (2010), for example, found evidence for
implicit compensation effects toward out-​groups, but not toward in-​groups. While
psychology students implicitly stereotyped lawyers as competent and cold, and
preschool teachers as incompetent and warm, preschool teachers implicitly stereo-
typed their own group as both warm and competent.
Other cases that I’ve discussed display similar patterns. Consider the skywalker’s
acrophobia. Exposure therapies appear to be effective for treating fear of heights.
Over time, simply being exposed to the Skywalk is likely to cause the agent to
cease her fear and trembling. This looks to be a case of extinguishing a high up–​
danger association by repeatedly presenting one element of the association to the
agent without the other. But most exposure therapies don’t just expose the agent
to one element of the associative pair without the other. Rather, most also include

16   Alternative explanations are available here, however. One is that presentation of the stimuli
changes subjects’ moods, and their implicit attitudes shift in response to their change in mood. In this
case, the change wouldn’t be due to sensitivity to the meaning of the relationship between facial expres-
sion and danger. (Thanks to an anonymous reviewer for suggesting this possibility.) An associative
explanation is possible here as well. Gawronski and colleagues (2017) argue that multilayer neural
networks involving both excitatory and inhibitory links are capable of explaining these phenomena.
This is an open question for future research, one which depends upon experimental designs that can
isolate changes in implicit attitudes themselves rather than changes in controlled processes that affect
the activation of implicit attitudes. New multinomial “process dissociation” models, which are used
for breaking down the contributing factors to performance on tests such as the IAT, are promising for
this purpose. Note also that Gawronski and colleagues’ proposal regarding multilayer neural networks
departs from the basic claim that I am evaluating here, which is that implicit attitudes are mere associa-
tions. So if their interpretation of this kind of experiment were to be vindicated, it wouldn’t entail that
implicit attitudes are mere associations.
cognitive elements, such as teaching the agent to replace problematic thoughts with
helpful ones. This looks to be a case of content-​driven change.17
Of course, it’s difficult, if not impossible, to tell what’s happening on a case-​
by-​case basis. Apparent content-​driven change, for example, can sometimes be
explained using associative tools. This is possible, for example, in cases in which
implicit attitudes appear to demonstrate “balance” and “cognitive dissonance”
effects. The thought “I am a good person” (or a good–​me association), paired with
the thought “Bad people do bad things” (or a bad people–​bad things associa-
tion), can shift one’s assessment of something one just did from bad to good, thus
“balancing” one’s identity concept with a positive self-​assessment. This sort of effect
is found in implicit attitudes (Greenwald et al., 2002). Mandelbaum (2015b) argues
that this sort of effect is evidence that implicit attitudes cannot be associatively
structured, since it appears that these are content-​driven transitions. But Anthony
Greenwald and colleagues (2002) offer a model purporting to explain precisely
these balance effects in implicit cognition using associative principles. More evi-
dence is needed to sort through these interpretations, particularly given the reliance
of each interpretation on behavioral measures, which are only proxies for changes in
the attitudes themselves.
That implicit attitudes appear to change in both content-​driven and associative
ways is suggestive of my claim that these states are sui generis. All I hope to have
shown so far, though, is that these states are not mere associations, in the strong
sense that they are implemented exclusively by associative structures.18 Implicit
attitudes are sensitive to the meaning of words and images. Does this mean that
they are sensitive to the meaning of words and images in the way that beliefs
paradigmatically are?

3. Belief
In all of the cases I’ve discussed, it’s clear that agents have many beliefs about what
they’re doing that are relevant to explaining their actions. Benes believes that HIV is
carried in blood; the skywalker believes that she really is suspended over the Grand
Canyon; athletes believe that they are trying to score goals and hit winners; and

17   See Chapter 7 for more on cognitive therapies. Of course, acrophobia is complex, involving not
only implicit attitudes, but also explicit reasoning. One possibility is that the counterconditioning
involved in exposure therapy shifts the agent’s implicit attitude, while the cognitive elements of therapy
affect the agent’s beliefs. This leaves open the possibility that the agent’s implicit attitude is functioning
in a merely associative way, that is, in response to counterconditioning alone and not in response to the
meaning of one’s desired thoughts about heights. More research is needed in order to learn how these
kinds of interventions actually work, as I mentioned in footnote 16 as well. See also Chapter 6.
18   I give more detail in §3 about hypothesized conditions under which changes in attitudes are in
fact content-​driven changes.

so on. On closer inspection, though, in many of these cases, agents seem to lack
crucial beliefs about what they’re doing, or even seem to have mutually contradic-
tory beliefs. Ask the museumgoer what the optimal distance from which to view an
8′ × 18′ painting is, and your question will likely be met with a puzzled stare.19 And
Benes’s and the skywalker’s behaviors seem to conflict with other of their beliefs.
These agents seem to believe one thing—​that my own blood can’t infect me with
a disease I already have, or that I am safe on the glass platform—​but their behav-
ior suggests otherwise. These cases raise questions about belief-​attribution—​that
is, about determining what agents really believe—​and questions about belief-​
attribution depend in turn on questions about what beliefs are. So one way to deter-
mine whether implicit attitudes are beliefs is to consider whether the kinds of action
in which they are implicated are explainable in terms of the agents’ beliefs.
On the one hand, beliefs are something like what one takes to be true of the
world.20 On the other, beliefs are also thought to guide action, together with one’s
desires and ends. Belief-​attribution in the case of spontaneous and impulsive action
becomes tricky on the basis of these two roles that states of belief are traditionally
thought to play in the governance of thought and action. Many people can relate to
the feeling of having beliefs but failing to live by them, particularly when they act
spontaneously or impulsively. One potential explanation for this is that beliefs are
truth-​taking but not necessarily action-​guiding. This explanation would be consist­
ent with a truth-​taking view (a), given the following interpretive options in cases of
apparent belief–​behavior discord:

(a) A truth-​taking view, which attributes beliefs on the basis of agents’ reflective
judgments and avowals (Gendler, 2008a,b; Zimmerman, 2007; Brownstein
and Madva, 2012b)21
(b) An action-​guiding view, which attributes beliefs on the basis of agents’ sponta-
neous actions and emotions (Hunter, 2011)
(c) A context-​relative view, which takes both judgment and action to be relevant
to belief-​attribution, and attributes to agents beliefs that vary across contexts
(Rowbottom, 2007)
(d) A contradictory-​belief view, which takes both judgment and action to be inde-
pendently sufficient for belief-​attribution, and attributes to agents contradictory

19   My point is not that beliefs must be articulable, but that an agent lacking an articulable answer to
a question like this offers prima facie reason to suspect that the agent lacks the relevant belief.
20   See Gilbert (1991) for a psychological discussion of belief. See Schwitzgebel (2006/​2010) for
a review of contemporary philosophical approaches to belief. The following presentation and some
of the analysis of interpretive options for belief-​attribution are adapted from Brownstein and Madva
(2012b).
21   By judgment, I mean what the agent takes to be true. While judgments are typically tied to avow-
als, I do not define judgment (or belief) in terms of avowals. See footnote 19.
beliefs (Egan, 2008; Gertler, 2011; Huddleston, 2012; Huebner, 2009; Muller
and Bashour, 2011)
(e) An indeterminacy view, which takes neither judgment nor action to be independ­
ently sufficient, and attributes to agents no determinate belief at all, but rather
some “in-​between” state.22

Each of these views fits more naturally with some cases than others, but (a), the truth-​
taking view, which attributes belief on the basis of agents’ reflective judgments and
avowals, outperforms the alternatives in paradigmatic cases. First consider (b)–​(e) in
the context of the skywalker case.
Proponents of (b), the action-​guiding view, which attributes belief on the basis of
agents’ spontaneous actions and emotions, would have to argue that the skywalker
merely professed, wished, or imagined the platform to be safe. But in this case, if the
skywalker simply professed, wished, or imagined the platform to be safe, and thus failed
to believe that the platform was safe, she would have to be clinically ill to decide to walk
on it. Harboring any genuine doubt about the platform’s safety would keep most people
from going anywhere near it. Perhaps some agents venture onto the Skywalk in order to
look tough or to impress a date. In this case, perhaps the agent does indeed believe that
the platform is only probably safe and yet walks onto it anyway, due to the combination
of some other set of beliefs and desires. But this explanation can’t be generalized.23
Are the skywalker’s beliefs just (c) unstable across contexts? While it is surely the
case that agents’ beliefs sometimes flip-​flop over time, the skywalker seems to treat
the platform as both safe and unsafe in the same context. Perhaps, then, she (d) both
believes and disbelieves that the Skywalk is safe. But in attributing contradictory
beliefs to her in this case, one seems to run up against Moore’s paradox, in the sense
that one cannot occurrently endorse and deny a proposition.24 Is the skywalker then

22   This sketch of the possible responses is drawn from Eric Schwitzgebel (2010), who points out
that a similar array of interpretive options arises in the literature on self-​deception (see Deweese-​Boyd,
2006/​2008). Also see Gendler (2008a,b) for some discussion of historical predecessors of these
contemporary views.
23   See Gendler (2008a, 654–​656). Another possibility, consistent with the action-​guiding view, is
that the skywalker believes both that the platform is safe and that the platform is scary to experience.
But this seems to posit unnecessary extra beliefs in the skywalker’s mental economy. The skywalker’s
fear and trembling do not seem to be a reaction to a belief that being on the platform is scary. Rather,
they seem to be a reaction to being on the platform. In other words, it seems likely that the skywalker
does indeed believe that the platform is scary to experience, but this belief seems to be a result of the
skywalker’s fear-​inducing experience, not a cause of it. Thanks to Lacey Davidson for suggesting this
possibility.
24   I take slight liberty with Moore’s paradox, which more accurately identifies a problem with
asserting P and asserting disbelief in P. I take it that at least one reason this is problematic is that these
assertions suggest that the agent occurrently believes and disbelieves the same thing. And this causes
problems for belief-​attribution; it seems impossible to know what the agent believes or it seems that
the agent doesn’t have a relevant belief. One might avoid Moore’s paradox by positing unconscious
contradictory beliefs. See below for unconscious belief approaches. But see also Huddleston (2012)
and Muller and Bashour (2011).

(e) in an irreducibly vague state of “in-​between belief ” (Schwitzgebel, 2010)? While
problems associated with vagueness are ubiquitous and apply as much to ascriptions
of belief and desire as they do to ascriptions of tallness and baldness, any positing
of “in-​between” states seems to defer those problems rather than solve them.25 Why
not draw the distinctions as precisely as we can and just acknowledge that there are
outliers? Another problem facing the contradictory-​belief (d) and indeterminacy
(e) views is imagining how one could reasonably distinguish between them, a task
comparable to distinguishing the view that a person is both tall and not tall from
the view that the person is neither tall nor not tall. What could, in this context, be
evidence for one over the other?
The truth-​taking view (a) countenances the fact that the safety of the Skywalk is
both what the agent judges to be true all things considered and what guides the agent’s
intentional decision to walk across it. At the same time, her emotional responses
and behavioral inclinations “go rogue.” The difference between her explicit judg-
ment and intentional decision, on the one hand, and her affective-​behavioral incli-
nations, on the other, can be seen in the way that she would or would not react to
information. No amount of data about the Skywalk’s weight-​bearing load capacities
or the number of visitors who haven’t fallen through the platform is likely to still one’s
knocking knees and racing heart. Yet the slightest credible report indicating that the
platform isn’t safe would cause the skywalker to revise her beliefs and get the hell off
the platform immediately. Affective-​behavioral states like the skywalker’s are, in this
sense, irredeemably yoked to how things perceptually seem to the agent, independ­
ently of whether she has ample reason to judge that the seeming is misleading.26
Such “seemings” should not be confused with beliefs, as Yuri Cath points out:

We should not confuse mere seemings with beliefs. Even if one knows that
the two lines in a Müller-​Lyer figure are of the same length, it will still seem
to one that they differ in length. And as Bealer (1993) has pointed out, the
same point applies not only to perceptual but also to intellectual seemings,
[sic] it can still seem to one that the naïve axiom of set theory is true even

contradictory beliefs. See below for unconscious belief approaches. But see also Huddleston (2012)
and Muller and Bashour (2011).
25
  For more on the indeterminacy view (e), see §4. See also Zimmerman (2007, 73–​75).
26
  Christopher Peacocke (2004, 254–257) similarly endorses what he calls the “belief-​indepen-
dence” of emotional responses, citing Gareth Evans’s (1982, 123–​124) discussion of the belief-​inde-
pendence of perception (evident in perceptual illusions like the Müller-​Lyer illusion). While Peacocke
(2004) seems to defend (a), the truth-​taking view of belief, he is sometimes cited as a defender of (b),
an action-​guiding view, because of his (1999, 242–​243) discussion of a case akin to aversive racism. It is
natural to interpret Peacocke’s considered position as privileging the role of judgment in belief attribu-
tion, while acknowledging that in some cases so many of an agent’s other decisions and actions may fail
to cohere with her reflective judgments that it would be wrong to attribute the relevant belief to her.

though one does not believe that it is true, because one knows that it leads
to a contradiction. (2011, 124)

Further reasons to support the truth-​taking view of belief stem from common
reactions to dispositionally muddled agents.27 Although we might judge the sky-
walker to be phobic or lacking in (some ideal form of) self-​control, we would not
charge her with ignorance or irrationality. The skywalker is met with a different
kind of reaction than one who both believes and disbelieves P.
The same considerations apply to other relevant cases. Proponents of the action-​
guiding view (b) with respect to cases of implicit bias would have to argue that an
agent’s egalitarian values aren’t really reflective of her beliefs. Perhaps in some cases
this is true. Perhaps some agents just say that they believe, for instance, that men and
women are equally qualified for a lab-​manager position, but don’t really believe it.
But in many cases this is implausible. Activists and academics who have spent their
entire careers fighting for gender and racial equity have implicit biases. These agents
clearly genuinely believe in egalitarianism (broadly understood). As in the case
of the skywalker, these genuine beliefs also guide agents’ intentional actions. One
might, for instance, attend diversity trainings or read feminist philosophy. Similarly,
the concerns I  raised earlier about the context-​relative (c), contradictory-​belief
(d), and indeterminacy (e) views obtain here too, regardless of whether the target
behavior is the skywalker’s trembling or the biased agent’s prejudiced evaluations
of CVs. This leaves the truth-​taking view (a) again. Here, too, the way in which the
agent in question reacts to information is crucial. Paradigmatic implicit biases are
remarkably unaffected by evidence that contradicts them (see discussion below of
Tal Moran and Yoav Bar-​Anan [2013] and Xiaoqing Hu et al. [2017]). Just as in the
skywalker’s case, implicit biases are yoked to how things seem to an agent, regardless
of what the agent judges to be true.
But this is too quick, one might argue, given the evidence that beliefs can be
unconscious, automatic, and unresponsive to an agent’s reflective attitudes and
deliberations (Fodor, 1983; Gilbert, 1991; Egan, 2008, 2011; Huebner, 2009;
Mandelbaum, 2011). This evidence suggests a picture of “fragmented” or “compart-
mentalized” belief that supports the contradictory-​belief view (d) in the kinds of
cases under discussion. Andy Egan offers a helpful sense of what a mind full of frag-
mented beliefs looks like. “Imagine two machines,” he suggests.

Both do some representing, and both do some acting on the basis of their
representations. One is a very fast, very powerful machine. It keeps all
of its stored information available all of the time, by just having one big,
master representation of the world, which it consults in planning all of its

27
  See Zimmerman (2007).

behavior. It also updates the single representation all at once, to maintain
closure and consistency: as soon as it starts representing P, it stops rep-
resenting anything incompatible with P, and starts representing all of the
consequences of P. It’s just got one, always-​active, instantly-​updated repre-
sentation of the world.
The other machine is a smaller, slower, less impressive machine. It can’t
hold a lot of information in its working memory. So it divides up its store
of information about the world into lots of little representations, and calls
up different ones at different times to guide particular bits of its behavior.
Not all of its information is available to it all the time, or for every pur-
pose, and so it sometimes fails to act on things that, in some sense or other,
it “knows” or “believes.” Since its information about the world is divided
into many different representations, updates to one representation don’t
always percolate all the way through the system. Sometimes this machine
“learns” that P, but fails to erase all of its old representations of things that
are incompatible with P. Sometimes it represents that P (in one represen-
tational item), and that if P then Q (in another representational item),
but fails to represent that Q, since the two representational items don’t
communicate with one another in the right sort of update-​inducing way.
(2008, 48–​49)

A fragmented mind is like the second machine, and “we are, of course, machines of
the second kind,” Egan writes. If this is correct, then implicit attitudes may simply be
one kind of belief, which guides particular bits of behavior but fails to respond to the
rest of what the agent knows and believes.
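Egan’s contrast can be made concrete with a toy computational sketch. What follows is my own illustration, not anything Egan provides; the class names and update rules are invented purely for exposition.

    # A toy of the first machine: one master representation, updated globally.
    class UnifiedMachine:
        def __init__(self):
            self.world = set()

        def learn(self, p, incompatible=()):
            # Updating is holistic: anything incompatible with p is erased at once.
            self.world -= set(incompatible)
            self.world.add(p)

        def consult(self, p):
            return p in self.world

    # A toy of the second machine: many small stores, consulted piecemeal.
    class FragmentedMachine:
        def __init__(self):
            self.fragments = {}

        def learn(self, task, p):
            # An update lands in one fragment; stale contents elsewhere survive.
            self.fragments.setdefault(task, set()).add(p)

        def consult(self, task, p):
            # Which stored content guides behavior depends on the fragment called up.
            return p in self.fragments.get(task, set())

    u = UnifiedMachine()
    u.learn("P", incompatible=("not-P",))
    print(u.consult("P"), u.consult("not-P"))  # True False

    m = FragmentedMachine()
    m.learn("navigation", "P")
    m.learn("planning", "not-P")  # the contradiction is never detected
    print(m.consult("navigation", "P"), m.consult("planning", "not-P"))  # True True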
Our minds are like the second machine, but this doesn’t necessarily support a
contradictory-​belief (d) interpretation of the present cases (i.e., that paradigmatic
implicit attitudes are beliefs). As I’ve presented them, implicit attitudes are belief-​
like. They encode risk, reward, and value. They update in the face of an agent’s chang-
ing experiences and have the potential to provide normative guidance of action (see
§6). And they reflect some connection to an agent’s practical concerns (i.e., her
goals and cares). However, beliefs as such have additional properties.28 Most central
among these is the way in which beliefs communicate and integrate inferentially
with one another and with other mental states.

28
  R ailton makes a similar point in arguing that soldiers who are skilled in fearing the right
things—​in “picking up” signs of danger—​express belief-​like states that are not, nevertheless, just
as such beliefs: “Well-​attuned fear is not a mere belief about the evaluative landscape, and yet we
can speak of this fear, like belief, as ‘reasonable,’ ‘accurate,’ or ‘justified.’ It thus can form part of
the soldier’s ‘practical intelligence’ or ‘intuitive understanding,’ permitting apt translation from
perception to action, even when conscious deliberation is impossible and no rule can be found”
(2014, 841).

One conception of this property is that beliefs are “inferentially promiscuous”
(Stich, 1978). This is the idea that any belief can serve as a premise for a huge array
of other possible beliefs. For example, my belief that it’s sunny outside today (S)
can lead me inferentially to believe just about anything else (X), so long as I infer
that if S then X.29 The action-​guiding representations involved in the cases I’ve
described don’t appear to display this property. Considering the skywalker’s rogue
affective-​behavioral reactions as premises in an inference works only in a very
restricted sense, namely, as premises of the inference that she should get off the
platform. These states are “inferentially impoverished.” This is akin to what Stephen
Stich (1978, 515) says about subdoxastic states: it is not that inferences obtain only
between beliefs; rather, it is that the inferential capacities of subdoxastic states are
severely limited. Stich writes that there is a “comparatively limited range of poten-
tial inferential patterns via which [subdoxastic states] can give rise to beliefs, and a
comparatively limited range of potential inferential patterns via which beliefs can
give rise to them” (1978, 507). (Of course, Stich is not describing implicit attitudes.
He is describing subdoxastic states generally. My claim is that implicit attitudes are
paradigmatically subdoxastic in this way. While I accept that human minds are frag-
mented in the way Egan describes, I am arguing that those fragments that deserve to
be considered beliefs have properties, such as inferential promiscuity, which para-
digmatic implicit attitudes lack.)
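Put schematically (the gloss is mine, not Stich’s own formalism), where ⊢ marks an available inferential transition and F stands for the skywalker’s rogue affective-behavioral state:

    \[
    \textit{promiscuous (belief):} \quad S,\ S \rightarrow X \ \vdash\ X \quad \text{for more or less arbitrary } X
    \]
    \[
    \textit{impoverished (subdoxastic):} \quad F \ \vdash\ \text{``get off the platform''} \quad \text{(and little else)}
    \]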
In discussing the difference between subdoxastic states and beliefs proper, Neil
Levy (2014) similarly draws upon the concept of inferential impoverishment. He
argues that states that count as beliefs must demonstrate the right systematic pat-
terns of inference in response to the semantic content of other mental states. For
example, a person’s implicit associations of men with leadership qualities might
shift upon hearing a persuasive argument about the history of sexism, and this
would appear to show that the implicit association is shifting because the agent has
inferred, in a rationally respectable way, the conclusion. But implicit attitudes dem-
onstrate this kind of inference pattern in only a “patchy” way, according to Levy. The
shifts in attitude do not necessary last long or persist outside the context in which
the persuasion manipulation was applied. This suggests that they aren’t beliefs.
Earlier I discussed some ways in which implicit attitudes are responsive to the
meaning of words and images (e.g., in exhibiting halo and compensation effects; see
§2). But sometimes implicit attitudes are demonstrably unresponsive to prompts in
“intelligent” ways. Moran and Bar-​Anan (2013), for example, presented participants
with a neutral stimulus (drawings of alien creatures in various colors) that either

29
  By inferences, I mean transitions between mental states in which the agent in some way takes the
content of one state (e.g., a premise) to support the content of another (e.g., a conclusion). I take this
to be roughly consistent with Crispin Wright’s view that inference is “the formation of acceptances for
reasons consisting of other acceptances” (2014, 36). For discussion of a comparatively less demanding
conception of inference than the one I use here, see Buckner (2017).

started or stopped a pleasant or unpleasant sound.30 When asked directly, partici-
pants unsurprisingly reported that they preferred the drawings that started pleas-
ant sounds and stopped unpleasant sounds to the drawings that started unpleasant
sounds and stopped pleasant sounds. But when participants’ attitudes toward the
drawings were assessed using an IAT, their implicit preferences merely reflected co-​
occurrence. That is, their implicit attitudes simply reflected whether the drawings
co-​occurred with the pleasant or the unpleasant sound. Their implicit attitudes did
not seem to process whether the drawing started or stopped the sound.
Even more striking are results from Hu and colleagues (2017), who showed that
implicit attitudes appear to fail to process the difference between the words “causes”
and “prevents.” Participants were shown pairings of pharmaceutical products and
information about positive or negative health outcomes. They were also explicitly
told that some of the products cause the outcome and some of the products prevent
the outcome. Again, when asked directly, participants unsurprisingly favored the
products that they were told cause positive health outcomes and those that prevent
negative health outcomes. But their implicit evaluations didn’t show this pattern.
Instead, they simply reflected whether the products were paired with positive or
negative outcomes, regardless of whether they were told that the product caused or
prevented the outcome.
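The shared structure of these findings can be captured in a toy sketch. This is my own illustration of the co-occurrence/relational contrast, not the authors’ analysis; the stimulus labels and scoring functions are invented for exposition.

    # Each learning episode pairs a stimulus with a relation word and a valenced event.
    episodes = [
        ("alien_A", "starts",   "pleasant"),
        ("alien_B", "stops",    "unpleasant"),  # relationally good, co-occurs with bad
        ("drug_C",  "prevents", "negative"),    # relationally good, co-occurs with bad
        ("drug_D",  "causes",   "negative"),
    ]

    GOOD_EVENTS = {"pleasant", "positive"}
    REVERSING = {"stops", "prevents"}  # relation words that flip the event's import

    def explicit_score(relation, event):
        # Processes the relation word: stopping or preventing a bad event is good.
        good = (event in GOOD_EVENTS) != (relation in REVERSING)
        return +1 if good else -1

    def cooccurrence_score(relation, event):
        # Mere pairing: the relation word is ignored entirely.
        return +1 if event in GOOD_EVENTS else -1

    for stimulus, relation, event in episodes:
        print(stimulus, explicit_score(relation, event), cooccurrence_score(relation, event))
    # alien_B and drug_C dissociate: +1 by the explicit scoring, -1 by mere co-occurrence.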
Results like these seem to support Levy’s contention that implicit attitudes inte-
grate into respectable patterns of inference only in a “patchy” way. Can anything
be said about the conditions under which integration occurs in the right ways,
compared with when it doesn’t? Alex Madva (2016b) has proposed more specific
conditions—​which he calls “form-​sensitivity”—​according to which states that
do and do not respect systematic patterns of inference can be identified. Madva’s
view is also developed as an account of the structure of implicit attitudes. Form-​
sensitivity has two necessary conditions. First, a mental state is form-​sensitive only
if it responds to other states with different form differently. For example, a form-​
sensitive state will respond to P and not-​P differently, but a state that fails to be
form-​sensitive might treat P and not-​P similarly. In this example, sensitivity to
negation stands in for sensitivity to the logical form of a state with semantic con-
tent. A wealth of evidence suggests that spontaneous reactions are driven by form-​
insensitive states in this way (e.g., Wegner, 1984; Deutsch et al., 2006; Hasson and
Glucksberg, 2006). For example, similar implicit reactions can be elicited by prim-
ing people with the words “good” and “not good.” Madva’s second condition is that
a mental state is form-​sensitive only if it responds to other states with similar form
similarly. According to this condition, a form-​sensitive state will respond to “I will
Q, if I see P” in the same way as it will respond to “If I see P, I will Q.” These two

30
  I note the relatively small number of participants in Moran and Bar-​Anan’s (2013) study. I take
the conceptual replication of this study in Hu et al. (2017), discussed in the next paragraph, as some
reason for placing confidence in these findings.

conditional statements have the same logical form; they mean the same thing. But
there are cases in which it appears that implicit attitudes treat statements with minor
differences in form like this differently. This is particularly the case in the literature
on “implementation intentions,” or “if–​then plans.”31 These sorts of plans appear to
respond to semantically identical but syntactically variable formulations differently.
For example, in a shooter bias scenario, rehearsing the plan “I will always shoot a
person I  see with a gun” appears to have different effects on one’s behavior than
does rehearsing the plan “If I see a person with a gun, then I will shoot” (Mendoza
et al., 2010).
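Madva’s first condition can be illustrated with a toy evaluator of my own devising (not Madva’s model): it discards logical operators and reacts only to the valenced residue of a prime, so that “good” and “not good” elicit the same response.

    def form_insensitive_response(prime):
        # Operators like negation are discarded; only the valenced residue registers.
        residue = [w for w in prime.lower().split() if w not in ("not", "no", "never")]
        return "approach" if "good" in residue else "avoid"

    print(form_insensitive_response("good"))      # approach
    print(form_insensitive_response("not good"))  # approach: the negation is ignored

A form-sensitive state, by contrast, would treat “not good” as it treats “bad,” while treating syntactic variants that share a logical form alike.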
These points touch on the ongoing “associative” versus “propositional” debate
in research on implicit social cognition. Findings such as those reported by Moran
and Bar-​Anan (2013) and Hu et al. (2017) have been taken to support the associa-
tive view, which is most prominently spelled out in the “Associative-​Propositional
Evaluation” model (APE; Gawronski and Bodenhausen, 2006). Meanwhile, research
I discussed in the preceding section, suggesting that implicit attitudes process the
meaning of propositions, has been taken to support the propositional view (e.g.,
Mitchell et al., 2009; De Houwer, 2014). One reason this debate is ongoing is that
it is unclear how to interpret key findings. For example, Kurt Peters and Gawronski
(2011) find that the invalidation of evaluative information doesn’t produce disso-
ciation between implicit and explicit evaluations when the invalidating information
is presented during encoding. This suggests that implicit attitudes are processing
negation in the way that Madva (2016b) suggests they don’t. But when the invali-
dating information is presented after a delay, implicit and explicit attitudes do disso-
ciate. This complicates both the associative and propositional pictures. And it does
so in a way that suggests that neither picture has things quite right.32 In my view, this
supports Levy’s contention that while implicit attitudes integrate into patterns of
inference in some ways, they don’t do so in ordinary and fully belief-​like ways.
Alternatively, one might bypass the claim about the difference between infer-
entially promiscuous and inferentially impoverished states by adopting a radically
expansive view of belief.33 The theory of “Spinozan Belief Fixation” (SBF), for exam-
ple, argues that minds like ours automatically believe everything to which they are
exposed (Huebner, 2009; Mandelbaum, 2011, 2013, 2014, 2015b). SBF rejects the
claim that agents are capable of evaluating the truth of an idea before believing or
disbelieving it. Rather, as soon as an idea is encountered, it is believed. Mandelbaum
(2014) illustrates SBF vividly: it proposes that one cannot entertain or consider or
imagine or even encounter the proposition that “dogs are made out of paper” with-
out immediately and unavoidably believing that dogs are made out of paper. Inter
alia, SBF provides a doxastic interpretation of implicit biases, for as soon as one

31
  See Chapter 7 for more in-​depth discussion of implementation intentions.
32
  See Gawronski, Brannon, and Bodenhausen (2017) for discussion.
33
  W hat follows in this paragraph is adapted from Brownstein (2015).

encounters, for example, the stereotype that “women are bad at math,” one puta-
tively comes to believe that women are bad at math. The automaticity of believing
according to SBF explains why people are likely to have many contradictory beliefs.
In order to reject P, one must already believe P (Mandelbaum, 2014).
Proponents explain the unintuitive nature of SBF by claiming that most peo-
ple think they have only the beliefs that they consciously know they have. So, for
example, a scientifically informed person presented with the claim that dinosaurs
and humans walked the earth at the same time will both believe that dinosaurs and
humans walked the earth at the same time and that dinosaurs and humans did not
walk the earth at the same time, but they will only think they have the latter belief
because it is consistent with what they consciously judge to be true.
But this doesn’t go far enough in explaining the unintuitive nature of SBF. At
the end of the day, SBF requires abandoning the entire normative profile of belief.
It scraps the truth-​taking role of belief altogether. This in turn means that there is
no justified epistemic difference between believing P and considering, testing, or
rejecting P. For when we reject P, for instance, we come to believe not-​P, and accord-
ing to SBF we will have done so just upon encountering or imagining not-​P.
SBF asks a lot in terms of shifting the ordinary ways in which we think about
belief and rationality. More familiar conceptions of belief don’t shoulder nearly so
heavy a burden. If one accepts some existing account of belief that preserves belief ’s
two principal roles—​of action-​guiding and truth-​taking—​then it appears that the
truth-​taking view (a) of belief-​attribution in the case of spontaneous and impulsive
action is still preferable to the other options. If the truth-​taking view is best, then
these actions are not most persuasively explicable in terms of agents’ beliefs, since
the agents’ affective and behavioral reactions in these cases don’t reflect what they
take to be true. The upshot of this is that implicit attitudes are not beliefs.

4. Dispositions
In §3, I cited Eric Schwitzgebel (2010) as a defender of the indeterminacy view.
This view takes neither judgment nor action to be independently sufficient for
belief-​attribution. Instead, in cases of conflict between apparent belief and behavior,
this view attributes to agents no determinate belief at all, but rather an “in-​between”
state. Schwitzgebel (2010) explicitly calls this state “in-​between belief.” This follows
from Schwitzgebel’s (2002, 2010) broader account of attitudes, in the philosophi-
cal sense, as dispositions. All propositional attitudes, on this view—​beliefs, inten-
tions, and so on—​are broad-​based multitrack dispositions. To believe that plums
are good to eat, for example, is to be disposed to have particular thoughts and feel-
ings about eating plums and to behave in particular ways in particular situations.
These thoughts, feelings, and behaviors make up what Schwitzgebel calls a “disposi-
tional profile.” You are said to have an attitude, on this view, when your dispositional
profile matches what Schwitzgebel (2013) calls the relevant folk-​psychological ster­
eotype. That is, you are said to have the attitude of believing that plums are good
to eat if you feel, think, and do the things that ordinary people regard as broadly
characteristic of this belief.
This broad dispositional approach to attitudes—​particularly to the attitude of
belief—​is hotly contested (see, e.g., Ramsey, 1990; Carruthers, 2013). One core
challenge for it is to explain away what seems to be an important disanalogy between
trait-​based explanations of action and mental state–​based explanations of action.
As Peter Carruthers (2013) points out, traits are explanatory as generalizations. To
say that Fiona returned the money because she is honest is to appeal to something
Fiona would typically do (which perhaps matches a folk-​psychological stereotype
for honesty). But mental states are explanatory as token causes (of behavior or
other mental states). To say that Fiona returned the money because she believes
that “honesty is the best policy” is to say that this token belief caused her action.
Schwitzgebel’s broad dispositional approach seems to elide this disanalogy (but for
replies see Schwitzgebel, 2002, 2013).
In §3, I noted several challenges this approach faces specifically in making sense
of cases of conflict between apparent belief and behavior. One challenge is that
the broad dispositional approach seems to defer the problem of belief-​attribution
rather than solve it. Another is that it underestimates the intuitive weight of avowals,
for example, in the case of a social justice activist who explicitly decries racism but
nevertheless holds implicit racial biases. It seems appropriate to attribute the belief
that racism is wrong to her (which is not the same as saying that she is blameless; see
Chapters 4 and 5). In addition, because this approach takes folk-​psychological ste-
reotypes as foundational for attitude-​ascription, it must be silent on cases in which
folk-​psychological stereotypes are lacking. This is precisely the case with implicit
attitudes. Our culture lacks a stereotypical pattern of thought, feeling, and action
against which an agent’s dispositional profile could be matched. Earlier I noted that
implicit attitudes are a version of what have historically been called animal spirits,
appetite, passion, association, habit, and spontaneity, but there is no obvious folk-​
psychological stereotype for these either.
The problem with the dispositional approach, in this context, is that it ultimately
tells us that we cannot attribute an implicit attitude to an agent, at least not until
ordinary people recognize an implicit attitude to have signature features. But in
some cases, I think, science and philosophy can beat folk psychology in attitude-​
ascription. Consider by analogy the question of ascribing beliefs to nonhuman
animals. As Carruthers (2013) puts it, there is good (scientific) reason to think
that apes share some basic concepts with human beings, concepts like grape and
ground. And so there is good reason to think that apes have the capacity to believe
that “there is a grape on the ground.” But my sense is that there is no discernible folk-​
psychological stereotype for belief-​attribution in apes. Here it seems obvious that
folk psychology is outperformed by science and philosophy for attitude-​ascription.
Schwitzgebel’s is not the only dispositional approach to theorizing the nature of
implicit attitudes. According to Edouard Machery (2016), attitudes in the psycho-
logical sense are dispositions. He retains a mental state account of propositional atti-
tudes, according to which states like beliefs can occur and are realized in brain states.
But attitudes in the psychological sense—​likings or dislikings—​are multitrack dis-
positions, akin to personality traits, on Machery’s view. They do not occur and are
not realized in the brain (though they depend in part on brain states). Machery
argues that there are various bases that together comprise an attitude. These include
feelings, associations, behavior, and propositional attitudes like beliefs. Indirect
measures like the IAT, on this picture, capture one of the many psychological bases
of the agent’s overall attitude. Explicit questionnaire measures capture another psy-
chological basis of the agent’s attitude, behavioral measures yet another basis, and
so on. An important upshot of this view is that there are no such things as implicit
attitudes. The reason for this is that the view asserts that attitudes are traits, and traits
(like being courageous or honest) do not divide into implicit and explicit kinds.
A concept like “implicit courageousness” is meaningless, Machery argues. Rather,
associations between concepts—​which may be implicit in the sense of unconscious
or difficult to control—​are simply one of the bases of an attitude, as are low-​level
affective reactions and the like.
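Machery’s picture can be rendered as a simple data sketch. This is my own construction for exposition, not Machery’s formalism: the attitude is the whole multitrack profile, and each measure reads off just one of its bases.

    from dataclasses import dataclass, field

    @dataclass
    class TraitAttitude:
        # On the trait view, the attitude is the whole profile, not any one basis.
        beliefs: set = field(default_factory=set)         # propositional-attitude basis
        associations: dict = field(default_factory=dict)  # the basis an IAT would tap
        affect: float = 0.0                               # low-level affective basis
        behaviors: list = field(default_factory=list)     # behavioral basis

    profile = TraitAttitude(
        beliefs={"men and women are equally competent"},
        associations={("men", "competence"): 0.8},  # dissociates from the belief basis
    )
    # An indirect measure like the IAT reads only one basis of the putative attitude:
    print(profile.associations)

On this rendering, when the bases dissociate, as they do in this profile, there is no single multitrack disposition for the agent to count as having, which is just the challenge raised in the next paragraph.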
One challenge for Machery’s version of the trait view is the ubiquity of disso-
ciations between agents’ propositional attitudes, implicit associations, affective
reactions to stimuli, and behavior. If these are dissociated, then the agent lacks
the attitude in question. For example, on Machery’s view, a person who believes
that men and women are equally competent for a job, but associates men with
competence on an IAT, fails to display a multitrack disposition and thus fails
to have a relevant attitude. The putative bases of their attitude are dissociated.
This would be akin to a person who speaks highly of honesty but implicitly
associates dishonesty with herself (and perhaps also fails to act honestly). Such
a person would fail to display the multitrack trait of honesty (or dishonesty).
Because dissociations between thoughts, feelings, and behavior are ubiquitous
(cf. Chapter 1), it will be rare to find agents who have attitudes. This is an unwel-
come result if the concept of attitude is to play a role in our understanding of
human thought and action.
Moreover, one of the central premises of Machery’s trait view rests on the
claim that various indirect measures dissociate. For example, there are only weak
relations between the IAT and variations of evaluative priming (Sherman et al.,
2003). Machery argues that if implicit attitudes were a distinct kind of mental
state, then we should expect different measurement tools for assessing them to
be more strongly correlated with each other. The trait view doesn’t predict close
relations between these various measures, on the other hand, because rather than
tapping into one kind of mental state—​an implicit attitude—​they are said to tap
into various psychological bases of an agent’s attitude. But there are plausible
alternative explanations for the correlational relations between indirect mea-
sures. Bar-​Anan and Brian  Nosek (2014) find that correlations between seven
of the most commonly used indirect measures are highly content-​specific.34
Correlations are weak for self-​esteem, for example, moderate for race, and strong
for politics. This mirrors the content-​specific relations between direct and indi-
rect attitude measures. That is, correlations are strongest between direct and
indirect political attitude measures and are weakest between direct and indirect
self-​esteem attitude measures. Bar-​Anan and Nosek (2014, 683) suggest that this
may be due to the complexity of the attitude target. The concept self is multifac-
eted, while a concept like republican or democrat is comparatively univocal.
If this is right, then the variability between indirect attitude measures is no threat
to the standard mental state view of implicit attitudes. Such variability is content-​
rather than measure-​specific, and for predictable reasons (i.e., the complexity of
the attitude target).
In a related vein, Machery appeals to other apparent weaknesses in measures
of implicit attitudes in support of his trait view. For example, the test-​retest sta-
bility of the IAT is relatively low (r  =  .5–​.6; Gschwendner et  al., 2008). This
means that the same person taking the IAT at two different times is relatively
likely to have different results. This is consistent with the high context-​sensitiv-
ity of implicit attitudes (see §1.2 in Chapter 2 and the discussion in Chapter 7).
Machery argues that this context-​sensitivity is predicted by the trait picture but
not by the mental state picture. But this is debatable. Some models of implicit
attitudes, which take them to be mental states, make specific predictions about
the contexts in which token attitudes will and will not be activated (see, e.g.,
Tobias Gschwendner et al. [2008]—​who demonstrate conditions under which
the test-​retest validity of the IAT is significantly boosted—​and Cesario and
Jonas’s [2014] “Resource Computation Framework,” both of which I discuss in
the Appendix). The same goes for Machery’s claim that measures like the IAT
are poor predictors of behavior and that this is predicted by the trait picture but
not the mental state picture. A  full sweep of the literature on implicit attitude
measurement does not suggest that the IAT is altogether a poor predictor of
behavior. Just how to interpret the predictive power of the (certainly imperfect)
IAT is a matter of current debate.35

34
  Bar-​Anan and Nosek use several criteria: “internal consistency, test-​retest reliability, sensitivity
to known-​groups effects, relations with other indirect measures of the same topic, relations with direct
measures of the same topic, relations with other criterion variables, psychometric qualities of single-​
category measurement, ability to detect meaningful variance among people with nonextreme attitudes,
and robustness to the exclusion of misbehaving or well-behaving participants” (2014, 682). See the
Appendix for definitions and discussion.
35
  See the Appendix.

5. Alief
If implicit attitudes are not reflexive processes, mere associations, beliefs, or dis-
positional traits, what are they? Perhaps they are what Gendler (2008a,b, 2011,
2012) calls “aliefs.” More primitive than belief, an alief is a relatively inflexible dis-
position to react automatically to an apparent stimulus with certain fixed affective
responses and behavioral inclinations (Gendler, 2008b, 557–560). While aliefs are
said to dispose agents to react to stimuli in particular ways, aliefs are not proposed
to be dispositions in the sense of traits of character. Rather, they are proposed to
be a genuine mental state, one that deserves to be included in the taxonomy of the
human mind. In Gendler’s parlance, a firm believer that superstitions are bogus
may yet be an abiding aliever who cowers before black cats and sidewalk cracks. An
agent may be sincerely committed to anti-​racist beliefs, but simultaneously harbor
racist aliefs. Indeed, Gendler proposes the concept of alief in order to make sense
of a wide swath of spontaneous and impulsive actions. She goes so far as to argue
that aliefs are primarily responsible for the “moment-​by-​moment management” of
behavior (2008a, 663). Moreover, she proposes an account of the content of alief
in terms of three tightly co-​activating Representational, Affective, and Behavioral
components. This account of the RAB content of alief is clearly closely related to my
description of implicit attitudes.36
I will elaborate on the similarities between my account of implicit attitudes and
alief later. However, I will also argue that implicit attitudes are not aliefs.37
Gendler proposes the notion of alief in order to elucidate a set of cognitive
and behavioral phenomena that appear to be poorly explained by either an
agent’s beliefs (in concert with her desires) or her mere automatic reflexes. She
writes:

We often find ourselves in the following situation. Our beliefs and desires
mandate pursuing behaviour B and abstaining from behaviour A, but we
nonetheless find ourselves acting—​or feeling a propensity to act—​in A-​like
ways. These tendencies towards A-​like behaviours are highly recalcitrant,
persisting even in the face of our conscious reaffirmation that B-​without-​A

36
  Note that Gendler uses the term “content” in an admittedly “idiosyncratic way,” referring to the
suite of RAB components of states of alief and leaving open whether aliefs are propositional or concep-
tual, as well as insisting that their content in some sense includes “affective states and behavioral dispo-
sitions” (2008a, 635, n. 4). This is a different usage of the term “content” than I deployed in Chapter 2.
37
  It would not be far off, though, to say that what I am calling implicit attitudes are aliefs, and
that my characterization of alief simply differs from Gendler’s. In previous work (Brownstein and Madva,
2012a,b), I have presented my view as a revised account of the structure of alief. Ultimately, I think this
is a terminological distinction without much substance. “Alief ” is a neologism after all, and Gendler is
notably provisional in defining it. What matters ultimately is the structure of the state.

is the course of action to which we are reflectively committed. It seems
misleading to describe these A-​like behaviours as fully intentional: their
pursuit runs contrary to what our reflective commitments mandate. But
it also seems wrong to describe them as fully reflexive or automatic: they
are the kinds of behaviours that it makes sense—​at least in principle—​to
attribute to a belief–​desire pair, even though in this case (I argue) it would
be mistaken to do so. (2012, 799)

Aliefs are, Gendler argues, relations between an agent and a distinctive kind of inten-
tional content, with representational, affective, and behavioral (or RAB) components.
They involve “a cluster of dispositions to entertain simultaneously R-​ish thoughts,
experience A, and engage in B” (2008a, 645). These components are associatively
linked and automatically co-​activating. On the Skywalk, for instance, the mere percep-
tion of the steep drop “activates a set of affective response patterns (feelings of anxi-
ety) and motor routines (muscle contractions associated with hesitation and retreat)”
(2008a, 640). The RAB content of this alief is something like “Really high up, long long
way down. Not a safe place to be! Get off!” (2008a, 635). Likewise, the sight of feces-​
shaped fudge “renders occurrent a belief-​discordant alief with the content: ‘dog-​feces,
disgusting, refuse-​to-​eat’ ” (2008a, 641).
Aliefs share an array of features. Gendler writes:

To have an alief is, to a reasonable approximation, to have an innate or habit-
ual propensity to respond to an apparent stimulus in a particular way. It is
to be in a mental state that is . . . associative, automatic and arational. As a
class, aliefs are states that we share with non-​human animals; they are devel-
opmentally and conceptually antecedent to other cognitive attitudes that
the creature may go on to develop. Typically, they are also affect-​laden and
action-​generating. (2008b, 557)

It should be clear that my description of the components of implicit attitudes in
Chapter 2 is indebted to Gendler’s formulation of alief. In particular, I have retained
the ideas that the states in question—​whether they are aliefs or implicit attitudes—​
dispose agents to respond automatically to apparent stimuli with certain affective
responses and behavioral inclinations and that these states can be relatively insensitive
to what agents themselves take to be definitive evidence for or against a particular
response.
One of the promises of Gendler’s account of alief is that it represents a proposal
for unifying philosophical belief-​desire theories of mind and action with contem-
porary dual-​systems cognitive psychology. Most dual-​systems theories suggest that
fast, associative, and affective processes—​so-​called System 1—​are developmentally
antecedent to slow, inferential, and/​or cognitive processes—​System 2. And most
dual-​systems theories also suggest that human beings share at least some elements
of System 1 processes with nonhuman animals.38 Crudely, the suggestion would
be, then, that System 1 produces aliefs while System 2 produces beliefs. As Uriah
Kriegel notes, this connection would then provide important explanatory unity in
the sciences of the mind:

Given the prevalence of dual-​process models in cognitive science, another
model positing a duality of causally operative mechanisms in a single cogni-
tive domain, a belief-​producing mechanism and an alief-​producing mech-
anism, integrates smoothly into a satisfyingly unified account of human
cognition and action. Indeed, if we hypothesize that the alief-​producing
mechanism is one and the same as Sloman’s associationist system and the
belief-​producing mechanism one and the same as his rationalist (“rule-​
based”) system, the explanatory unification and integration of empirical
evidence and firm conceptual underpinnings becomes extraordinarily
powerful. (2012, 476)

But as formulated, I do not think the concept of alief can fulfill the promise of
integrating belief-​desire psychology with the dual-​systems framework. One rea-
son for this is based on a concern about dual-​systems theorizing. It is hard to find
a list of core features of System 1 states or processes that are not shared by some
or many putative System 2 states or processes. As in Hume’s example of hearing a
knock on the door causing me to believe that there is someone standing outside
it, beliefs can be formed and rendered occurrent automatically, without attention,
and so on.39 This may not be an insurmountable problem for dual-​systems theo-
rists.40 Regardless, there is a broader interpretation of Kriegel’s point about inte-
grating the concept of alief with dual-​systems theorizing. What is more broadly at
stake, I think, is whether the concept of alief is apt across the full suite of the virtues
and vices of spontaneity. Does alief—​as formulated—​provide a satisfying account
of the psychology of both poles of spontaneity? Note how this is not an exogenous
demand I  am putting to the concept of alief. Rather, it follows from Gendler’s
formulation of the concept. If aliefs are causally implicated in much of moment-​
to-​moment behavior, then they must be operative in both Gendler-​style cases of

38
  For reviews of dual-​systems theories, see Sloman (1996) and Evans and Frankish (2009). In the
popular press, see also Kahneman (2011).
39
  Thanks to an anonymous reviewer for pushing me to clarify this and for reminding me of the
example. See Mandelbaum (2013) for a discussion.
40
  Jonathan Evans and Keith Stanovich (2013) respond to counterexamples like this by presenting
the features of System 1/​2 as “correlates” that should not be taken as necessary or sufficient. On the
value of setting out a list of features to characterize a mental kind, see Jerry Fodor’s (1983) discussion
of the nine features that he claims characterize modularity. There, Fodor articulates a cluster of traits
that define a type of mental system, rather than a type of mental state. A system, for Fodor, counts as
modular if it shares these features “to some interesting extent.”

belief-​discordant alief and in ordinary forms of norm-​concordant and even skilled
automaticity.
Several commentators have pointed out how alief seems poised to explain cases
of reasonable and apparently intelligent, yet spontaneous and unreflective behavior.
Of “Gendler cases” (e.g., the Skywalk), Greg Currie and Anna Ichino write:

Notice that Gendler cases are by and large characterised by their conserva-
tiveness; they are cases where behaviour is guided by habit, often in ways
which subvert or constrain the subject’s openness to possibilities. There is
a class of human behaviours at the opposite extreme: cases where, with-
out the mediation of conscious reasoning or exercise of will, we generate
creative solutions to practical or theoretical problems. Creative processes
involve unexpected, unconventional and fruitful associations between
representations and between representations and actions—​without the
subjects generally having access to how that happens. We suggest that both
creative and habitual processes are triggered by an initial representation—​
not always specifiable at the personal level and sometimes occurring
beyond conscious will—​that leads, through a chain of barely conscious
associations, to various other states, the last of which can be either another
representative state or an action. While in Gendler cases the associations
lead to familiar patterns of thought and behaviour, in creative processes
they lead to unpredictable, novel outcomes. Paradigmatic instances of
those outcomes—​jazz improvisations, free dance/​speech performances—​
are unlikely to derive from some kind of (creative) conceptual represen-
tation, since they exceed in speed and fineness of grain the resources of
conceptual thought. (2012, 797–​798)

While Currie and Ichino suggest that automaticity is implicated in creative
action, Jennifer Nagel (2012) argues in a related vein that our “gut responses” can,
in some cases, be “reality-​sensitive,” and perhaps sometimes be even more sensitive
to reality than our conscious explicit judgments. Nagel cites the “Iowa Gambling
Task” (Bechara et al., 1997, 2005) as an example.41 This is a task in which partici-
pants are presented with four decks of cards and $2,000 in pretend gambling money.
The participants must choose facedown cards, one at a time, from any of the decks.
Some cards offer rewards and some offer penalties. Two of the decks are “good”
in the sense that choosing from them offers an overall pattern of reward, despite
only small rewards offered by the cards at the top of the deck. Two of the decks are
“bad”; picking from them gives the participant a net loss, despite large initial gains.
The interesting finding, Nagel notes, is that it takes subjects on average about eighty

41
  See Lewicki et al. (1987) for a related experiment.

card turns before they can say why they prefer to pick from the good decks. But after
about fifty turns, most participants can say that they prefer the good decks, even if
they aren’t sure why. And, most strikingly, after about only twenty turns, while most
participants do not report having any reason for distinguishing between the decks
(i.e., they don’t feel differently about them and say that they don’t see any difference
between them), most participants do have higher anticipatory skin conductance
responses before picking from the bad decks. This suggests that most participants
have some kind of implicit sensitivity to the difference between the decks before
they have any explicit awareness that there is such a difference. Nagel takes this to
mean that agents’ putative aliefs—​represented outwardly by a small change in affect
(and measured by anticipatory skin conductance)—​may be, in some cases, more
sensitive to reality than their explicit beliefs.
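The deck structure of the task is easy to sketch. The payoff figures below are illustrative stand-ins chosen to reproduce the pattern just described (large per-card rewards but a net loss from the bad decks, small rewards but a net gain from the good decks); they are not Bechara and colleagues’ actual schedule.

    import random

    random.seed(0)  # fixed seed, purely so the illustration is reproducible

    def draw(deck):
        # "Bad" decks (A, B): large rewards, occasional heavy penalties, net loss.
        # "Good" decks (C, D): small rewards, occasional light penalties, net gain.
        if deck in ("A", "B"):
            return 100 - (1250 if random.random() < 0.1 else 0)
        return 50 - (250 if random.random() < 0.1 else 0)

    # Participants start with $2,000 of pretend money; over repeated draws the
    # bad decks bleed it away while the good decks grow it.
    for deck in ("A", "C"):
        print(deck, "net over 100 draws:", sum(draw(deck) for _ in range(100)))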
Gendler concedes in reply to these commentators that alief-​like processes may
be implicated in creative and reality-​sensitive behavior, and thus seems to allow
room for “intelligent aliefs” (2012, 808). This is not a surprising concession, given
Gendler’s earlier claim that “if alief drives behavior in belief-​discordant cases, it
is likely that it drives behavior in belief-​concordant cases as well” (2008a, 663).
Gendler understandably emphasizes vivid cases of belief-​discordance—​skywalkers
and fudge-​avoiders and implicitly biased agents—​because these are the cases in
which an agent’s aliefs are forced into the light of day. These cases are a device for
the presentation of alief, in other words.
But Gendler’s claim that alief is primarily responsible for the moment-​to-​moment
management of behavior entails that these states ought to be attuned to changes in
the environment and be implicated in (some forms of) intelligent behavior. While
“conservative” aliefs—​like those implicated in the behaviors of skywalkers, fudge-​
avoiders, and the like—​are pervasive, they don’t dominate our moment-​to-​moment
lives. Mostly, agents cope more or less successfully with the world around them
without constantly doing combat with gross conflicts between their automatic and
reflective dispositions. We move closer to see small objects and step back from large
ones in appropriate contexts; we extend our hands to shake when others offer us
theirs (in some cultures); we frown sympathetically if someone tells us a sad story
and smile encouragingly if the story is a happy one; and so on. Ordinary success-
ful coping in cases like these requires rapid and efficient responses to changes in
the environment—​exactly those features of alief that one would expect. If aliefs
are responsible for the moment-​to-​moment management of behavior, then these
states must be sufficiently flexible in the face of changing circumstances and attuned
to subtle features of the environment to drive creative, reality-​sensitive, intelligent
behavior.
The deeper concern is not about Gendler’s emphasis on particular kinds of cases,
that is, cases of norm-​and belief-​discordant automaticity. The deeper concern is
that the RAB structure of alief cannot make sense of putative intelligent aliefs. It

cannot do so because it lacks a mechanism for dynamic learning and internal feed-
back, in short, what I called “alleviation” in Chapter 2.
Another related concern about Gendler’s formulation of alief stems from what
I will call the “lumpiness” question. Gendler argues that aliefs are a unified men-
tal kind. The RAB components are operative, as she says, “all at once, in a single
alief ” (2008b, 559). But why make the strong claim that alief represents a distinct
kind of psychological state rather than a set of frequently co-​occurring appearances
or beliefs, desires, and behaviors?42 Why say, for example, that the skywalker has
a singular occurrent alief—​as Gendler does—​with content like “Really high up,
long long way down. Not a safe place to be! Get off!” rather than something like
a co-​occurring representation of being high up, a feeling of being in danger, and a
response to tremble and perhaps flee? What is the value, as Nagel (2012) asks, of
explaining action in terms of “alief-​shaped lumps?”
Gendler’s answer is that aliefs are “lumpy” in a way that reflective states like
beliefs are not. Belief–​desire–​action trios are composites that can be combined
in any number of ways (in principle). One way to see this is to substitute a differ-
ent belief into a common co-​occurrence of a belief, desire, and action (Gendler,
2012). For instance, around noon each day I may experience this belief–​desire–​
action trio: I believe that cheese is reasonably healthy for me; I desire to eat a
healthy lunch; I make a cheese sandwich. Should I overhear on the radio, how-
ever, just as I am reaching in the fridge, that the kind of cheese I have is defini-
tively linked to an elevated risk for cancer and will soon be banned by the FDA,
and I have credence in the source of the information, it is likely that I’ll avoid
the cheese and eat something else. If the acquisition of a new belief changes
my behavior on the spot like this, then I  know that this particular composite
of a belief, desire, and action is, as Gendler puts it, psychologically contingent.
By contrast, no new belief, it seems, will immediately eliminate the skywalker’s
fear and trembling. The skywalker’s representations (“high up!”), feelings (“not
safe!”), and behavior (“get off!”) are too tightly bound, forming a fairly cohesive
lump, as it were.
This reply to the lumpiness question is partly successful. It offers a clear criterion
for marking the alief–​belief distinction:  the constituent RAB relata of alief auto-
matically co-​activate, while ordinary beliefs, feelings, and behavior do not (they are
“psychologically contingent” or, as Gendler also puts it, “fully combinatoric”). In a
related sense, Gendler’s reply to the lumpiness question provides a test of sorts for
determining when this criterion is met. Provide a person with some evidence that
runs contrary to her putative alief and see what happens. If it interrupts the auto-
matic co-​activation of the RAB relata, as in the case of the cheese sandwich, then this
was not an alief-​guided action.

42
  See Nagel (2012), Currie and Ichino (2012), and Doggett (2012).

But several open questions remain. The first is how and why the presentation of
“evidence” affects the agent in such a way as to mark the alief–​belief divide. On a
broad interpretation, evidence might just refer to any information relevant to the
agent. But on this interpretation, an evidence-​insensitive state would seem inher-
ently uncreative and unresponsive to changing environments. It would by defini-
tion be incapable of playing a role in the cases of virtuous spontaneity discussed by
critics like Currie and Ichino (2012) and Nagel (2012). Moreover, it is clear that
the presentation of evidence that contradicts agents’ beliefs sometimes utterly fails
to dislodge their attitude. Something more specific has to be said about the condi-
tions under which evidence does and does not affect the agent’s occurrent mental
states and her related affective and behavioral reactions. Finally, Gendler’s reply to
the lumpiness question relies on the idea that the RAB relata of alief automatically
co-​activate. But what properties does the process of co-​activation have? Do all puta-
tive aliefs co-​activate in the same way?

6. Implicit Attitudes
Implicit attitudes are conceptual cousins of aliefs. But understanding implicit atti-
tudes as states with FTBA components offers the better formulation for capturing
the psychological structure of our spontaneous inclinations. This represents, inter
alia, a step forward on the question of whether implicit attitudes are a distinctive
mental kind. Here I summarize and highlight the key features of implicit attitudes
that favor this view.
The first is that the FTBA relata of implicit attitudes, as I  described them in
Chapter 2, are apt in cases of both the virtues and vices of spontaneity. Alleviation,
in particular, describes how implicit attitudes are capable of dynamic learning and
revising in the face of internal and external feedback. I also described in Chapter 2
how the process of alleviation can fail, such that one’s spontaneous inclinations can
run afoul of either one’s reflective beliefs or the normative demands of the environ-
ment. This feature of implicit attitudes situates them to provide an account of how
our spontaneous inclinations can guide much of moment-​to-​moment behavior.
Second, in contrast to the evidence-​sensitivity criterion for distinguishing alief
from belief, in §3 I  discussed more specific criteria for distinguishing implicit
attitudes from beliefs. These focused on the inferential promiscuity and form-​
sensitivity of beliefs. Drawing on Levy (2012, 2014) and Madva (2016b), I argued
that implicit attitudes are inferentially impoverished and form-​insensitive. That is,
there is a limited range of patterns through which implicit attitudes can give rise to
other mental states and to behavior. Paradigmatically, these do not include doing so
through inference. Madva’s account of form-​sensitivity provides additional detail
for why this may be the case with implicit attitudes; they are insensitive to the log-
ical form of the presentation of information, as it is structured by operators like
negation or the order of words in a proposition. Specifying why implicit attitudes
appear to be evidence-​insensitive connects with explaining the role of these states in
cases of both the virtues and vices of spontaneity. For it is not that implicit attitudes,
once formed, are wholly resistant to evidence that contradicts them. Rather, they
are insensitive to the forms of learning characteristic of states like belief. The way in
which they respond to evidence is different, in other words. But they nevertheless
display characteristic forms of value-​guided learning and improvement over time,
which makes them capable of figuring in to skilled and even exemplary behavior.
Finally, more has to be said than that the components of our immediate reaction
to stimuli co-​activate. Understanding the processes of co-​activation between FTBA
relata underscores why implicit attitudes are a unified state. I summarize the follow-
ing central characteristics.
First, while I have emphasized that particular associations are activated in vary-
ing contexts, when an FTBA is set off, a “spread” of automatic activation always
occurs. When an agent learns or modifies her implicit attitudes, she replaces one
token relation with another (e.g., one behavioral response with another). To the extent
that a smoker can weaken the co-​activating association between nicotine cravings
and smoking-​initiation behaviors, for example, she does so not by eliminating all
behavioral responses to nicotine cravings, but by replacing one behavioral response
with another. The tennis player who learns to hit a drop volley instead of a stab
volley similarly replaces one motor response with another, effectively adding a
new spontaneous inclination to her repertoire. This is the sense in which Gendler
(2008a, 635, n.  4)  includes behavior as part of the “content” of alief. This point
is not intended as a defense of behaviorism. The proposal is that a psychological
motor routine is activated, which is functionally connected to the agent’s activated
thoughts and feelings. The activation of this motor routine may not issue in a full-​
fledged action—​see Chapters 4 and 5—​although there is always some behavioral
expression, such as changes in heart rate, eyeblink response, or muscle tone.
This omnipresent spread of activation is not encapsulated within any particular
mental system or type of mental state, but is rather cross-​modal. It is not the case
that semantic associations only co-​activate other semantic associations. Thoughts
of birthdays activate thoughts of cake, as well as feelings of hunger or joy (or sad-
ness, as the case may be) and behavioral impulses to eat or sing or blow out candles.
A spontaneous response to a stimulus does not just induce thoughts, or induce feel-
ings, or induce behaviors. It induces all three. Moreover, activation of this sort may
include inhibition (e.g., thoughts of safety may inhibit thoughts or feelings of fear;
see Smith, 1996).
Implicit attitudes are heterogeneous (Holroyd and Sweetman, 2016), but they
vary in degree, not in kind. As I discussed briefly in Chapter 2, in the lit-
erature on implicit social cognition, it is common to distinguish between, on the
one hand, “coldly” cognitive implicit beliefs, cognitions, or stereotypes and, on
the other hand, “hot” affective implicit attitudes, evaluations, or feelings. Some
researchers have advanced theoretical models of implicit social cognition based
on this very distinction, arguing that there are two fundamental kinds of implicit
mental state:  cold implicit stereotypes and hot implicit prejudices (Amodio and
Devine, 2006; Amodio and Ratner, 2011). The picture I  offer here of implicit
attitudes rejects this “two-​type” view (for discussion see Madva and Brownstein,
2016). Some implicit attitudes are comparatively more affective than others, but all
involve some degree of emotional response. Affect continues to exert causal effects
even at very low intensities, although low-​intensity effects will typically escape an
individual’s focal attention (and may require more fine-​grained measures than, for
example, the IAT).
The affective intensity of implicit attitudes is a function of several (a) content-​
specific, (b)  person-​specific, and (c)  context-​specific variables. Regarding (a),
implicit attitudes with certain contents (e.g., automatic evaluative responses to
perceptions of spiders or vomit) will tend to be more affectively intense than oth-
ers (e.g., perceptions of lamps or doorknobs). Regarding (b), some individuals will
have stronger affective dispositions toward certain stimuli than others. For exam-
ple, people who were obese as children or had beloved, obese mothers exhibit less
negative automatic evaluations of obesity (Rudman et al., 2007); individuals with
a strong dispositional “need to evaluate” exhibit more intense automatic evalua-
tions in general ( Jarvis and Petty, 1996; Hermans et al., 2001); and low-​prejudice
people tend to be less susceptible than high-​prejudice people to negative affective
conditioning in general (Livingston and Drwecki, 2007). Regarding (c), certain
contexts, such as being in a dark room, amplify the strength of automatic evalua-
tions of threatening stimuli (Schaller et  al., 2003; see Cesario and Jonas [2014],
discussed in Chapter 2 and in the Appendix, for a theoretically elaborated model
of which contexts will activate which kinds of affective responses). Future research
should focus on identifying the mediators, moderators, and downstream effects of
such differences in degree.
It is also important to differentiate among distinctive types of affective-​
motivational implicit responses. For example, anxiety and anger are both “nega-
tive” affective responses, but research suggests that while anxiety often activates
an implicit avoidance orientation, anger typically activates an implicit approach
orientation (Carver and Harmon-​Jones, 2009). Affective responses can therefore
differ along at least two dimensions—​positive versus negative, and approach ver-
sus avoid—​although future research should attempt to identify further types of
approach motivation. Anger, joy, hunger, and sexual arousal may each activate dis-
tinctive sorts of implicit approach orientations. This means that, instead of model-
ing affect intensity as a bipolar continuum between positive and negative valence,
we must differentiate among distinctive types of affective-​motivational responses.
These co-​activation relations of FTBA components demonstrate two key features
of implicit attitudes. One, as I’ve argued, is that implicit attitudes are sui generis
states. Token implicit attitudes can be changed or eliminated when these patterns
of co-activation are broken, but as I argued earlier, a distinctive type-identifying
spread of activation between components always occurs. The automaticity of this
spread of activation helps to explain why these states do not take part in the infer-
ential processes definitive of states like beliefs. This in turn suggests that implicit
attitudes cannot figure in to practical reasoning in a way that any standard account
of practical normativity would require. That is, any account of ordinary practical
normativity would not admit of actions based on implicit attitudes, given their
inferential impoverishment and form-​insensitivity.
As I’ve argued, however, implicit attitudes do not merely cause action. In some
sense, they recommend it. This is a deep point of contrast between implicit attitudes
and aliefs. Aliefs are, on Gendler’s view, neither good nor bad in and of themselves,
but only good or bad to the extent that they are in “harmony” with an agent’s con-
sidered beliefs and intentions. In contrast, implicit attitudes have a pro tanto guid-
ing function, one that is also seen in the temporal interplay between co-​activating
FTBA relata. As an agent moves and attends to perceived ambient features, her low-​
level feelings of tension are or are not alleviated, giving her a sense of lingering ten-
sion or not. This affective sense of lingering tension is action-​guiding, signaling to
the agent to continue to attend or behave in some particular way. The skywalker’s
aversive feelings persist so long as she perceives the chasm below; the museum-
goer feels repelled from the painting until she reaches the right distance from it;
the implicitly biased agent feels as if the CV with the male name on it is just a little
better, and (unfortunately) this tension resolves as she moves on to the next one
in the batch.43 These subtle action-​guiding feelings offer the agent a sense of better
and worse that is indexed to her actual behaviors. Of course, this guiding sense of
better and worse is not tantamount to the guidance provided by her beliefs, values,
and evaluative judgments. Those provide what might be called full-​blown moral or
rational normativity.
I suggest that the pro tanto guidance function of implicit attitudes may be
captured in terms of what Ginsborg (2011) calls “primitive normativity.” At
a high level of generality, this is a distinguishing feature of implicit attitudes.
Primitive normativity offers agents a sense of how to “go on” without requiring
the agent to recognize (explicitly or implicitly) any rules for “going on” correctly.
Ginsborg identifies three requirements for an agent to “make a claim” to primi-
tive normativity:  (1) having the necessary training and experience; (2)  having

43. I have in mind here the situation in which a CV is not quite worthy of one evaluation but a bit
too good for another. Cases of ambiguity like this—​when one says something like “Hmmm . . . not
this pile . . . but not this other pile . . . hmmm . . . I guess this pile”—​are those in which gut feelings and
the like are most evident. These are the situations in which implicit attitudes are most clearly affecting
behavior. Ambiguous situations like this are also those in which measures of implicit attitudes like the
IAT are most effective for making behavioral predictions.
sufficient motivation; and (3) experiencing some sense of appropriateness when
getting things right.44 Imagine a child, Ginsborg suggests, who has not yet mas-
tered color concepts but who, by following the example of an adult, is sorting green
objects by placing them into a bin, while leaving objects of other colors aside.45 It
is unlikely that this child is following a rule, like “Place all the green objects in the
bin,” since the child does not seem to possess the concept green. But the child
is not simply being caused to sort the objects as she does. Rather, a normative
claim is embedded in her behavior. The adult who showed her what to do has pro-
vided the necessary learning experience. She is sufficiently motivated to follow the
adult’s lead, in virtue of her innate disposition to learn by imitation. And, arguably,
she experiences some sense of appropriateness in the act. The green objects pre-
sumably feel to the child as if they “fit” with the other objects in the bin. The child
may become upset upon finding a red object in the bin and experience some sense
of pleasure in completing the task.46
Museumgoers and skywalkers and people with implicit biases also seem to
make claims to primitive normativity. They would not display the behaviors they

44. I follow Ginsborg’s formulation that agents “make a claim” to primitive normativity. I take
Ginsborg’s idea to be that there is a normative claim implicit in the agent’s behavior. The agent acts as
she does because doing so is, for her, in some sense, appropriate. This is not to say that she endorses
what she does or consciously takes her behavior to be appropriate. Rather, the idea is that her behavior
is in part motivated by a sense of appropriateness, which is akin, I suggest, to the felt sense of rightness
or wrongness embedded in the process of alleviation (see Chapter 2, §4).
45. In this example, it is not just that the child lacks verbal mastery of the concept green. Rather,
the child (by hypothesis) lacks the concept itself (some evidence for which may be that the child can-
not use the word “green” correctly). That is, the child may be able to discriminate between green and
not-​green things without having the concept green. Note that this is a controversial claim. See Berger
(2012).
46. See the discussion in Chapter 4 of theories of skill learning. Interestingly, this sense of appropri-
ateness may be found in nonhuman animals, or so suggests Kristen Andrews (2015), who makes use
of Ginsborg’s concept of primitive normativity in discussing what she calls “naive normativity.” Naive
normativity offers agents a specifically social sense of “how we do things around here.” Andrews and
Ginsborg seem to differ on whether, or when, nonhuman animals make claims to primitive or naive
normativity. Ginsborg discusses the case of a parrot that has been trained to say “green” when shown
green things. She argues that the parrot does not make a claim to primitive normativity, because, while
it has had the necessary learning experiences and is suitably motivated to go on in the right way, it
does not, Ginsborg thinks, experience a sense of appropriateness when saying “green” at the right time.
Andrews discusses the case of goslings having imprinted on a human being rather than their biological
mother, and she thinks these animals are in fact likely to experience a sense of appropriateness when
following their adopted “mother” around (in addition to having the right experiences and motivation).
The goslings will appear to feel calm and comforted when the human “mother” is around and upset in
the “mother’s” absence. This empirical question about what parrots and goslings do or don’t experience
provides an interesting route to considering whether or how claims to primitive normativity are shared
by human beings and nonhuman animals.
do without the necessary learning experiences;47 they are sufficiently motivated by
innate or learned mechanisms having to do with object perception, bodily safety,
and intergroup categorization; and they experience, if my previous arguments are
correct, low-​level affective feelings that are indexed to the immediate success or fail-
ure of their actions. Moreover, they do all this without, it seems, needing to grasp
rules for their behavior, rules like “Stand this many feet from paintings of that size,”
“Avoid glass-​bottomed platforms,” and “Value men’s CVs over women’s.” Indeed, as
these cases illustrate, sometimes primitive normativity and full-​blown normativity
provide contradictory guidance.
In developing the concept of primitive normativity, Ginsborg takes herself to be
providing a middle ground for interpreting a certain class of presumably pervasive
phenomena, that is, behavior that seems intelligently guided but also seems not to
be guided by a conceptual grasp of rules. The middle ground is between thinking
that the child sorting green objects is, on the one hand, just being caused to behave appro-
priately by some subpersonal mechanism and, on the other hand, is acting on the
basis of rules that she implicitly understands. These two options mirror my earlier
discussion about the place of implicit attitudes in cognitive architecture. To think
that the child is being caused to behave appropriately by nothing other than some
subpersonal mechanism is akin to saying that her behavior is merely reflexive or
associative, as reflexes and mere associations provide no normative guiding force to
action. And to surmise that the child is acting on the basis of rules she understands,
but only implicitly, is analogous to thinking that she is behaving on the basis of non-
conscious doxastic states (i.e., putative implicit beliefs, operating on the basis of
unconscious inferences, etc.). Ginsborg’s claim for the middle ground derives from
considerations about how the child ultimately masters the concept green. If the
child is caused to act appropriately only by subpersonal mechanisms to discriminate
green things from not-​green things, then it is unclear how her competence at sort-
ing would ever lead to her acquisition of the concept. The ability to sort things by
color wouldn’t at any point offer the agent a basic sense of the appropriateness of
which things belong with which, which is presumably an important step in acquir-
ing the relevant concept. And if the child sorts on the basis of an implicit rule, then
it is equally unclear how her learning to sort would have any bearing on her acquir-
ing the concept, since, on this interpretation, the sorting presupposes the child’s
having the concept. Ginsborg’s middle ground is appealing, then, as a model for
how certain kinds of complex abilities that involve perceptions of better and worse
interact with higher-​order conceptual abilities.48 More specifically for my purposes,

47. Although it is important to note that these learning experiences can be very brief in some cases.
See, for instance, research on the “minimal group paradigm” (Otten and Moskowitz, 2000; Ashburn-​
Nardo et al., 2001).
48. Although, of course, much more would have to be said in order to substantiate this claim. My
aim is not to defend Ginsborg’s account of skill learning. Rather, my aim is to show why it is plausible
and how it could then play a role in illuminating the peculiar nature of implicit attitudes, as states that
neither merely cause behavior nor rationalize it in any full-blooded sense.
Ginsborg’s middle ground also provides a model for the sort of normative guidance
implicit attitudes offer agents via their spontaneous inclinations and dispositions.
As the kinds of cases I’ve discussed show, sometimes our implicit attitudes go
awry from the perspective of moral or rational normativity. Implicit bias illustrates
this, as do the other kinds of cases I discussed in Chapter 1 of impulses and intu-
itions leading to irrational and immoral ends. Other times, our implicit attitudes
are better, but in mundane ways that we barely notice, such as in the distance-​
standing and museumgoer cases. And other times still, our implicit attitudes get it
right in spectacular ways, such that they seem to provide action guidance in ways
that our reflective beliefs and values don’t. The cases of spontaneous reactions and
decisions in athletic and artistic domains illustrate this. Here it seems as if agents’
claims to primitive normativity are not just defeasibly warranted. Rather, they seem
to be illustrative of a kind of action at its best. The world-​class athlete’s spontane-
ous reactions show the rest of us how to go on in a particular context; the master
painter’s reactions reveal features of perception and emotion we wouldn’t otherwise
experience.

7. Conclusion
I’ve argued in this chapter that states with FTBA components are best understood
as implicit attitudes, which are distinct from reflexes, mere associations, beliefs, dis-
positions, and aliefs. Along the way, and in summary, I’ve described what makes
implicit attitudes unique. By no means have I given a complete account of implicit
attitudes, but I hope to have shown why they deserve to be considered a unified and
distinct kind of mental state. With these points in mind, I now turn to the relation-
ship between these states and agents themselves. In what sense are implicit attitudes
features of agents rather than features of the cognitive and affective components
out of which agents’ minds are built? Are implicit attitudes, and the spontaneous
actions for which they seem to provide guidance, really ours?

PART TWO

SELF
4

Caring, Implicit Attitudes, and the Self

In “The View from in Here,” an episode of the This American Life podcast, reporter
Brian Reed (2013) plays recorded conversations between prisoners and prison
guards at the Joseph Harp Correctional Center in Lexington, Oklahoma. The pris-
oners and guards had both just watched a documentary film critical of the prison
industry in the United States. The conversations were meant to allow prisoners and
guards to reflect on the film. At the center of Reed’s story is a conversation between
Antoine Wells, an inmate serving a nine-​year sentence, and Cecil Duly, a guard at
the prison. Their conversation quickly turned from the film itself to personal prob-
lems between Wells and Duly. Wells speaks at length about his perception of being
treated disrespectfully by Duly. He resents being ignored when asking for help,
being told to do meaningless tasks like picking daisies so that he’ll remember “who’s
in charge,” and generally being viewed as if he’s “nothing.” In response, Duly holds
his ground, offering a different interpretation of each of the events Wells describes
and claiming that he never shows disrespect to Wells. Duly explicitly defends the
notion that Wells, as a prisoner, has “no say over anything.” That’s just what being in
prison means. The conversation is heart-​wrenching, in part because Wells and Duly
speak past one another so profoundly, but also because both Wells and Duly see that
Wells’ experiences will do him no good. They both recognize that Wells is likely to
leave prison bereft of resources and skills, angry and indignant toward the society
that locked him up, and anything but “rehabilitated.”1
The episode takes a surprising turn when Reed returns to the prison several
months later to see if anything has changed between Wells and Duly as a result of
their conversation. Duly, the guard, states unequivocally that nothing has changed.
He stands by his actions; he hasn’t thought about the conversation with Wells much
at all; he feels no different; and he continues to claim that he treats all of the pris-
oners fairly and equally. Wells, however, feels that quite a lot has changed since he
expressed his feelings to Duly. And Wells claims that it is Duly who has changed.

1. To his credit, Lieutenant Duly also laments that much of the contemporary prison system in the
United States is a for-​profit industry, seemingly incentivized to fail to rehabilitate prisoners.


Duly is less dismissive, he says. Duly takes the time to answer questions. He looks at
Wells more like a human being.
If we accept both of their testimonies at face value, it seems that Duly has begun
to treat Wells kindly on the basis of inclinations that run contrary to Duly’s explicit
beliefs. These inclinations seem to unfold outside the ambit of Duly’s intentions
and self-​awareness.2 In this sense, Duly’s case resembles those discussed by Arpaly
(2004), in particular that of Huckleberry Finn. On Arpaly’s reading, Huck’s is a case
of “inverse akrasia,” in which an agent does the right thing in spite of his all-​things-​
considered best judgment. Huck’s dilemma is whether to turn in his friend Jim, an
escaped slave. On the one hand, Huck believes that an escaped slave amounts to a
stolen piece of property and that stealing is wrong. On the other, Huck is loyal to his
friend. He concludes from his (less than ideal) deliberation that he ought to turn
Jim in. But Huck finds himself unable to do it.3
While Huck is fictional, there is good reason to think that he is not, in the rel-
evant respects, unusual. Consider, for example, one of the landmark studies in
research on intergroup prejudice. In 1934, Richard LaPiere published “Attitudes
vs. Actions,” a paper about the two years he spent visiting 251 hotels and restaurants
with a Chinese couple. LaPiere and the couple would enter establish-
ments asking for accommodations, and were turned down only once. However,
when LaPiere followed up the visits with phone calls to the hotels and restaurants,
asking, “Will you accept members of the Chinese race in your establishment?” 92%
of the 128 responses were “no.” LaPiere’s study is usually discussed in terms of the
gap between attitudes and actions, as well as the importance of controlled labora-
tory studies.4 (For instance, it is not clear whether LaPiere spoke to the same per-
son at each establishment when he visited and when he phoned subsequently.) But
what is perhaps most striking about LaPiere’s study is that the hoteliers and restaura-
teurs seemed to act ethically when LaPiere visited, despite their explicit prejudiced
beliefs. That is, their explicit beliefs were prejudiced but their behavior (in one con-
text, at least) was comparatively egalitarian. It seems that when these people were
put on the spot, with the Chinese couple standing in front of them, they reacted
in a comparatively more egalitarian way, despite their beliefs to the contrary.

2. As I say, this depends on our accepting Duly’s testimony—that he denies feeling any differently
toward Wells—​at face value. It is possible that Duly is misleading Reed, the interviewer. It is also quite
possible that Duly is aware of his changes in feeling and behavior, and has intended these changes in an
explicit way, despite his avowals to the contrary. Perhaps Duly is conflicted and just doesn’t know what
to say. This interpretation also runs contrary to his testimony, but it doesn’t quite entail his misleading
the interviewer in the same way. The story of Wells and Duly is only an anecdote, meant for introducing
the topic of this chapter, and it is surely impossible to tell what is really driving the relevant changes.
See §4 for further discussion.
3. The case of Huckleberry Finn was discussed in this context earlier, most notably by Jonathan
Bennett (1974). See Chapter 6.
4. See, e.g., the discussion in Banaji and Greenwald (2013).
These are all cases in which agents’ spontaneous behavior seems to be praise-
worthy, relative, at least, to their explicit beliefs. But are agents’ spontaneous reac-
tions really praiseworthy in these cases? Praise typically attaches to agents, and many
people associate agents with their reflective beliefs and judgments.
Similar questions arise with respect to the many vice cases in which agents’ spon-
taneous inclinations seem to be blameworthy. In these cases it is equally difficult to
assess the person who acts. Does the skywalker’s fear and trembling tell us anything
about her? Are judges who are harsher before lunch (Chapter  1) on the line for
the effects of their glucose levels on their judgment, particularly assuming that they
don’t know or believe that their hunger affects them? These questions are perhaps
clearest in the case of implicit bias. Philosophers who think of agential assessment
in terms of a person’s control over or consciousness of her attitudes have rightfully
wondered whether implicit biases are things that happen to people, rather than fea-
tures of people (as discussed later).
Across both virtue and vice cases, spontaneity has the potential to give rise to
actions that seem “unowned.” By this I mean actions that are lacking (some or all
of) the usual trappings of agency, such as explicit awareness of what one is doing or
one’s reasons for doing it; direct control over what one is doing; responsiveness to
reasons as such; or perceived consistency of one’s behavior with one’s principles or
values. But these are actions nevertheless, in the sense that they are not mere hap-
penings. While agents may be passive in an important sense when acting spontane-
ously, they are not thereby necessarily victims of forces acting upon them (from
either “outside” or “inside” their own bodies and minds).
The central claim of this chapter is that spontaneous actions can be, in central
cases, “attributable” to agents, by which I mean that they reflect upon the charac-
ter of those agents.5 Attributability licenses (in principle) what some theorists call
“aretaic” appraisals. These are evaluations of an action in light of an agent’s charac-
ter or morally significant traits. “Good,” “bad,” “honorable,” and “childish” are all
aretaic appraisals. I distinguish attributability—​which licenses aretaic appraisals—​
from “accountability,” which I take to denote the set of ways in which we hold one
another responsible for actions.6 Accountability may license punishment or reward,
for example. I develop an account of attributability for spontaneous action in this
chapter, in virtue of the place of implicit attitudes in our psychological economy.

5. Note that I use the terms “character” and “identity” interchangeably. Note also that while some
theorists focus on attributability for attitudes as such, my focus is on attributability for actions.
However, as will be clear, I think that judgments about attributability for actions are, and should be,
driven primarily by considerations of the nature of the relevant agent’s relevant attitudes. This stands in
contrast to questions about accountability, or holding responsible, as I discuss here and in more depth
in Chapter 5.
6. For related discussion on distinguishing attributability from accountability for implicit bias, see
Zheng (2016).
In the next chapter, I situate this account with respect to broader questions about
accountability and holding responsible.

1. Attributability
At its core, attributability is the idea that some actions render the person, and not
just the action itself, evaluable. Imagine that I inadvertently step on your toes in a
crowded subway car. It’s the New York City subway, which means the tracks are old,
and the cars shake and bump and turn unpredictably. I’m holding onto the overhead
rail, and when the car jostles me, I shift my feet to keep my balance, and in the pro-
cess step on your toes. Chances are that you will be upset that I’ve stepped on your
toes, because something bad has happened. And, perhaps momentarily, you will be
upset at me, not just unhappy in general, since I’m the cause of your pain. Had I not
been on the train this day, your toes wouldn’t be throbbing. Or perhaps you have
negative feelings about people on the subway in general, because you think they’re
clumsy or self-​absorbed and now I’ve just confirmed your beliefs. None of these
responses, though, is really about me. The first is about me as nothing more than the
causal antecedent of your pain; you’d be equally angry if an inanimate object fell on
your foot. The second is about me as an anonymous member of a general class you
dislike (i.e., subway riders). In neither case would your reaction signal taking the
outcome as reflecting well or poorly on my personality or character. It is doubtful,
for example, that you would think that I’m a thoughtless or selfish person, if you
granted that my stepping on your toes was really an accident. A contrasting case
makes this clear too. Imagine that I stomped on your toes because I felt like you
weren’t listening to me in a conversation. In this case, you might understandably
feel as if you know something about what kind of person I am.7 You might think I’m
a violent and childish person. The charge of childishness is telling, since this kind of
behavior isn’t unusual in children, and it carries a different kind of moral assessment
in their case. A five-​year-​old who stomps on his teacher’s toes because she’s ignoring
him is behaving badly,8 of course, but it is far less clear what we should think about
what kind of a person the child is compared with the adult who stomps on your toes
on purpose.
Back to the subway for a second: it is also not hard to imagine cases in which
something significant about my character is open for evaluation, even if I’ve done
nothing more than inadvertently step on your toes. If I aggressively push my way
toward my favorite spot near the window on the subway and crunch your toes on
the way, then something about my character—​something about who I  am as an
agent in the world—​gets expressed, even if I didn’t mean to step on your toes. In

7. You might know about what Peter Strawson (1962) called my “quality of will.”
8. Mrs. Sykes, wherever you are, I’m sorry.
this last case, I am open for evaluation, even if other putatively “exculpating condi-
tions” obtain.9 In addition to lacking an intention to step on your toes, I might also
not know that I’ve stepped on your toes, and I might even have tried hard to avoid
your toes while I raced to my favorite spot. Regardless, what I do seems to express
something important about me. I’m not just klutzy, which is a kind of “shallow” or
“grading” evaluation (Smart, 1961). Rather, you would be quite right to think “what
a jerk!” as I push by.10
Theorists have developed the notion of attributability to make sense of the
intuition that agents themselves are open to evaluation even in cases where
responsibility-​exculpating conditions might obtain. Here’s one standard example:

Loaded Gun Fifteen year old Luke’s father keeps a shotgun in the house
for hunting, and last fall started to teach Luke how to shoot with it. Luke
is proud of the gun and takes it out to show his friend. Since the hunting
season has been over for months, it doesn’t occur to Luke that the gun
might be loaded. In showing the gun to his friend, unfortunately he pulls
the trigger while the gun is aimed at his friend’s foot, blowing the friend’s
foot off.11

To some people at least, there is a sense in which Luke himself is on the line, despite
the fact that he didn’t intend to shoot his friend and didn’t know that the shotgun
was loaded. Luke seems—​to me at least—​callow or reckless. This sense is, of course,
strongly mitigated by the facts of the case. Luke would seem monstrous if he meant
to shoot his friend in the foot, and he would seem less reckless if he were five rather
than fifteen. But in Luke’s case, as it stands, the key facts don’t seem to exculpate all
the way down. It should have occurred to Luke that the gun might be loaded, and his
failure to think this reflects poorly on him. Some kind of aretaic appraisal of Luke
seems appropriate.
Of course, I’m trading in intuitions here, and others may feel differently about
Luke. There is a large body of research in experimental moral psychology testing the

9. I borrow the term “exculpating conditions” from Natalia Washington and Daniel Kelly (2016).
10. I say this loosely, though, without distinguishing between the thought “What a jerk!” and “That
guy is acting like a jerk!” The latter reaction leaves open the possibility that the way in which this action
reflects on my character might not settle the question of who I really am, at the end of the day. As I dis-
cuss in Chapter 5, in relation to the idea of the “deep self,” I am skeptical about summative judgments of
character like this. But I recognize that some assessments of character are deeper than others. Perhaps
I am having an extraordinarily bad day and am indeed acting jerky, but I don’t usually act this way. As
I discuss in §3, dispositional patterns of thought, feeling, and behavior are an important element of
characterological assessment.
11. This version of Loaded Gun is found in Holly Smith (2011). Smith’s version is an adaptation
from George Sher (2009). For related discussions, see Smith (2005, 2008, 2012), Watson (1996),
Hieronymi (2008), and Scanlon (1998).
conditions under which people do and don’t have these kinds of intuitions. I discuss
some of this research in the next chapter. For now, all I need is that there exists at
least one case in which an action seems rightly attributable to an agent even if the
action is nonconscious in some sense (e.g., “failure to notice” cases), nonvoluntary,
nontracing (i.e., the appropriateness of thinking evaluatively of the person doesn’t
trace back to some previous intentional or voluntary act or omission), or otherwise
divergent from an agent’s will.
The difficulty for any theory of attributability is determining that on the basis of
which actions are attributable to agents. What distinguishes those actions that are
attributable to agents from those that aren’t? Influential proposals have focused on
those actions that reflect an agent’s reflective endorsements (Frankfurt, 1988); on
actions that reflect an agent’s narrative identity or life story (Schechtman, 1996);
and on actions that stem from an agent’s rational judgments (Scanlon, 1998; Smith,
2005, 2008, 2012).12 However, a case like Loaded Gun seems to show how an action
can render an agent open to evaluative assessment even if that action conflicts with
the agent’s reflective endorsements, narrative sense of self, and rational judgments.
Similarly, something about the “moral personality” (Hieronymi, 2008) of agents like
Duly, Huck, and the skywalker seems to be on the line, even if these agents’ actions
don’t have much to do with their reflective endorsements, narrative sense of self,
and rational judgments, and as such aren’t “owned” in the full sense. I’ll say more
about these other approaches to attributability in Chapter 5, although nowhere will
I offer a full critical assessment of them. Rather, in this chapter I’ll lay out my own
view, and in the next I will situate it with respect to other prominent theories.

2. Caring and Feeling
In what follows, I adopt a “care”-​based theory of attributability. Care theories argue
that actions that reflect upon what we care about are actions that are attributable to
us.13 Caring is intuitively tied to the self in the sense that we care about just those
things that matter to us. We care about quite a lot of what we encounter in our day-​
to-​day lives, even if we judge the objects of our cares to be trivial. It might matter to

12. Properly speaking these are theories of “identification.” Identification names the process through
which some trait, experience, or action is rendered one’s own (Shoemaker, 2012, 117). Naming the
conditions of identification has proved difficult to say the least; it has been called a “holy grail” in the
philosophy of action (Shoemaker, 2012, 123). I focus on the conditions of attributability rather than
identification because of the line I want to draw between attributability and accountability. That is, I am
making a case for how certain “minimal” (unintentional, nontracing, etc.) actions reflect on agents’
character; I am not making a case for the conditions of full ownership of action or for what it means to
be responsible for an action.
13. Of course, one can provide a theory of caring itself, with no concern for questions about attrib-
utability. My claim is more narrowly focused on the relationship between caring and attributability.
me that the ice cubes in my drink are square rather than round, even if, reflectively,
I think this is a trivial thing to care about. Or I might care about whether I get to the
next level on my favorite phone app, all the while wishing that I spent less time star-
ing at a little screen. Both the ice cubes and the phone app might even matter to me
if I’m focally unaware of caring about them. All I might notice is that something feels
off about my drink, or that something feels off after I’ve put the phone down. It might become clear to me
later that it was the ice cubes or the app that were bothering me, or it might not ever
become clear to me that I care about these things. Conversely, sometimes we don’t
care about things, even when we think we should. Despite believing that I should
probably care about the free-​will debate, usually it just makes me feel . . . meh.14
It is crucial to note that I am referring to what Agnieszka Jaworska (2007b)
calls ontological, rather than psychological, features of caring. In the psychologi-
cal sense, what one cares about are the things, people, places, ideas, and so on that
one perceives oneself as valuing. More specifically, the objects of one’s cares in the
psychological sense are the things that one perceives as one’s own. Cares in the psy-
chological sense track the agent’s own perspective. In the ontological sense, cares
are, by definition, just those attitudes (in the broad sense) that belong to an agent, in
contrast to the rest of the “sea of happenings” in her psychic life ( Jaworska, 2007b,
531; Sripada, 2015).15 One can be wrong about one’s cares in the ontological sense,
and one can discover what one cares about too (sometimes to one’s own surprise).
For example, I might discover that I have, for a long time, cared about whether other
people think that I dress stylishly. All along I might have thought that I don’t care
about such a trivial thing. But on days that people complimented my clothes, or
even merely cast approving glances at me, I felt peppier and more confident, with-
out consciously recognizing it. Both the content of my care (looking stylish) and
the emotions connected to it (e.g., feeling confident) might have escaped my con-
scious awareness. (Recall the point I made in Chapter 2, that emotion is often suf-
ficiently low-​level as to escape one’s own focal awareness.) The care-​based view of
attributability is concerned with cares in the ontological sense. Of course, it is often
the case that what one cares about in the ontological sense coincides with
what one cares about in the psychological sense. But this is not a requirement. (In

14. Might believing that you should care about φ suffice for showing that you care about φ, per-
haps just a little? This is entirely possible. As I note later, my view is ecumenical about various routes
to attributability. Believing that φ is good might very well be sufficient for demonstrating that I care
about φ. Similarly, avowing that φ is good might also be sufficient for demonstrating that I care about
φ. Indeed, I suspect that beliefs and avowals about what one ought to care about often do reflect what
one cares about. I intend here only to point to the possibility of there being at least one case of a person
believing that she ought to care about φ but failing to care about φ.
15. The psychological sense of caring is perhaps more clearly labeled the “subjective” sense, but for
the sake of consistency I follow Jaworska’s (2007b) usage. Note also that the psychological–​ontological
distinction is not between two kinds of cares, but rather between two senses in which one might refer
to cares.
the next chapter, I discuss in more detail the relationship between cares in the onto-
logical sense and the agent’s own perspective.)
All things considered, a care-​based theory of attributability is unusually inclusive,
in the sense that, by its lights, a great many actions will end up reflecting on agents,
potentially including very minor actions, like the museumgoer’s shifting around to
get the best view of the painting, as well as actions that conflict with an agent’s reflec-
tive endorsements, narrative identities, and rational judgments.16 One virtue of this
inclusiveness, as some care theorists have put it, is that cares are conceptually well
suited to make sense of actions “from the margins,” for example, the actions of small
children and Alzheimer’s patients ( Jaworska, 1999, 2007a; Shoemaker, 2015) or
agents with severe phobias or overwhelming emotions (Shoemaker, 2011). Small
children, for example, may not be able to make rational judgments about the value
of their actions, but they may nevertheless care about things. These sorts of actions
don’t easily fit into familiar philosophical ways of conceptualizing attributability.
The hard question is what it means to care about something. A persuasive answer
to this question is that when something matters to you, you feel certain ways about it.
According to the affective account of caring (e.g., Shoemaker, 2003, 2011; Jaworska,
1999, 2007a,b; Sripada, 2015), to care about something is to be disposed to feel cer-
tain complex emotions in conjunction with the object of one’s cares. When we care
about something, we feel “with” it. As David Shoemaker (2003, 94) puts it, in caring
about something, we are emotionally “tethered” to it. To care about your firstborn,
your dog, the local neighborhood park, or the history of urban beekeeping is to be
psychologically open to the fortunes of that person, animal, place, or activity.
Caring in this sense—​of being emotionally tethered to something because
it matters to you—​is inherently dispositional. For example, I can be angry about
the destruction of the Amazon rainforest—​in the sense that its fate matters to
me—​without experiencing the physiological correlates of anger at any particular
moment. To care about the Amazon involves the disposition to feel certain things

16
  To clarify, the care-​based theory of attributability that I endorse is unusually inclusive. Other care
theorists’ views tend to be more restrictive. For example, Arpaly and Timothy Schroeder (1999) argue
that “well-​integrated” beliefs and desires—​those beliefs and desires that reflect upon what they call the
“whole self ”—​must be both “deep” (i.e., deeply held and deeply rooted) and unopposed to other deep
beliefs and desires. I address the issue of opposition in Chapter 5; I’ll argue that we can in fact have con-
flicting cares. The point I am making here can be fleshed out in terms of depth. Actions that are attrib-
utable to agents need not stem from “deep” states, on my view. Theorists like Arpaly and Schroeder
are focused only on morally relevant actions and attitudes, however, whereas I am focused on actions
that reflect character in general. It is also crucial to keep the scope of my claim in mind. Attributability
licenses aretaic attitudes—​reactions stemming from an evaluation of a person’s character—​but it does
not necessarily license actively blaming or punishing a person (in the case of bad actions). Keeping in
mind that attributability licenses aretaic attitudes alone should help to render the inclusiveness of my
claim less shocking. My view can be understood as a reflection of the way in which we perceive and
evaluate each other’s character constantly in daily life.
at certain moments, like anger when you are reminded of the Amazon’s ongoing
destruction. (This isn’t to say that the disposition to feel anger alone is sufficient for
caring, as I discuss later.)
Caring in this sense is also a graded notion. Imagine that you and I both have pet
St. Bernards, Cupcake and Dumptruck. If you tell me that Cupcake has injured her
foot, I might feel bad for a moment, and perhaps I will give her an extra treat or rec-
ommend a good vet. Maybe I care about Cupcake, but not that much. Or perhaps
I care about you, and recognize that you care about Cupcake. If Dumptruck hurts his
foot in the same way, however, my reaction will be stronger, more substantive, and
longer-​lasting. It seems that I care more about Dumptruck than I do about Cupcake.
But what, really, are cares, such that feelings can constitute them? I  follow
Jaworska’s (2007b) Neo-​Lockean approach, which builds upon work by Michael
Bratman (2000). The core of this approach is that a mental state or attitude that
belongs to an agent—​those that are “internal” to her—​must support the cohesion
of the agent’s identity over time. Bratman famously argues that plans and policies
play this role in our psychological economy. But it is possible that more unreflec-
tive attitudes play this role too. As Jaworska shows, Bratman’s elaboration of how
plans and policies support identity and cohesion over time is applicable to more
unreflective states, like certain emotions. Bratman argues that internal states display
continuity over time and referential connectedness. Continuity distinguishes, for
example, a fleeting desire from a desire that plays a more substantive role in one’s
mind. The concept of continuity also helps to distinguish one and the same state
occurring at two points in time from two distinct states occurring at two points
in time. In order to be identity-​bearing, my desire for ice cream today must be the
same desire as my desire for ice cream tomorrow. Referential connectedness is a
relatively more demanding condition. Connections are referential links between
mental states or between attitudes and actions. A  referential link is a conceptual
pointer. The content of my desire for ice cream, for example, points to the action
I take, and the action I take is an execution of this particular desire. The content of
the state and the action are co-​referring. This clarifies that it is not enough for inter-
nality that a state like a desire causes an action; the content of my desire—​that it is a
desire for ice cream—​must point, through a conceptual connection, to my decision
to go to the store.
Jaworska argues that some, but not all, emotions display continuity and connec-
tion. Grief, for example, meets these conditions. On her view, grief involves painful
thoughts, a tendency to imagine counterfactuals, disturbed sleep, and so on. These
patterns display continuity and, by pointing referentially to the person or thing for
which one grieves, display connections as well. “In this way,” Jaworska writes, “emo-
tions are constituted by conceptual connections, a kind of conceptual convergence,
linking disparate elements of a person’s psychology occurring at different points in
the history of her mental life” (2007b, 553). Other emotions, like a sudden pang of
fear, do not display continuity and connection, on her view.
On this rendering of the affective account of caring, the thing that you grieve for
is something you care about, while whatever made you spontaneously shrink in fear
is not. In turn this suggests that your grieving is something reflective of your charac-
ter, while your fearing is not.
I think it is right to say that feelings are tied to “mattering” in this way and that
affective states must display continuity and connection to count as mattering. But
I find the distinction between grief and fear—​and, more important, the distinction
between putative kinds of emotions on which it is based—​unconvincing.

3. Feeling and Judging
The affective account of cares is often based on a distinction between so-​called pri-
mary and secondary emotions. Only secondary emotions, the argument goes, are
appropriate for underwriting cares. For Jaworska (2007b, 555), secondary emotions
are those that involve self-​awareness, deliberation, an understanding of one’s situation,
and systematic thought about the significance and consequences of one’s feelings.
Jaworska offers gratitude, envy, jealousy, and grief as examples of secondary emotions.
A paradigmatic example of a primary emotion, on the other hand, is fear, of the sort
that precipitates a driver’s slamming on the brake when the car in front of her stops
short or precipitates a mouse’s crouching when it detects the shadow of a hawk pass
overhead. Other ostensible primary emotions are disgust, rage, and surprise. All of
these, on Jaworska’s view, are more or less stimulus-​bound reflex-​like responses that
require no understanding of one’s situation. Much in the same vein, Shoemaker writes:

Primary emotions are like reflexes, responses triggered with no mediation
by the cognitive mechanisms of the prefrontal cortex, and the emotional
reactions falling under this rubric are the familiar “automatic” ones of
fear, anger, and surprise. Secondary emotions, on the other hand, require
the cognitive ability consciously to entertain a variety of mental images
and considerations with respect to the myriad relationships between self,
others, and events, i.e., to engage in “a cognitive evaluation of the con-
tents of the events of which you are a part” (Antonio Damasio, Descartes’
Error: Emotion, Reason, and the Human Brain [New York: Putnam’s, 1994],
p. 136). This, I take it, is the capacity that infants (and nonhuman animals)
lack. The analytic relation between emotions and cares discussed in the
text, therefore, holds only with respect to these secondary, more robust or
developed emotions. (2003, 93–​94)17

17. Shoemaker and Jaworska appear to disagree about whether “marginal” agents, such as young
children, can care about things, but I do not pursue this question. For skepticism about whether non-
human animals lack the capacity to care, see Helm (1994).
Jaworska and Shoemaker both stress the importance of complex cognitive
capacities—deliberation, self-awareness, conceptual understanding, systematic
evaluative thought, and so on—​in anything that counts as a secondary emotion. In
addition, Shoemaker adds the condition of a secondary care-​underwriting emotion
being “mediated” by the prefrontal cortex.
I do not think the affective account of caring can hang on this distinction between
primary and secondary emotion. I offer three reasons.
First, some of the core features of the apparent distinction rest on unpersua-
sive assumptions. This is the case when Shoemaker claims that primary emotions
are not mediated by the cognitive mechanisms of the prefrontal cortex. Jaworska
(2007b, 555) makes a similar claim in arguing that the defining feature of primary
emotions is that they are not processed by “higher” cognitive functions. On a
neurofunctional level, the evidence for the traditional association of the amyg-
dala with affect and the prefrontal cortex with cognition is mixed (e.g., Salzman
and Fusi, 2010). Moreover, research suggests that affect may be involved in all
forms of thought. In a review of recent literature showing that affect plays a key
role in the processing of conscious experience, language fluency, and memory,
Seth Duncan and Barrett (2007) argue that “there is no such thing as a ‘nonaf-
fective thought.’ Affect plays a role in perception and cognition, even when peo-
ple cannot feel its influence” and conclude that “the affect-​cognition divide is
grounded in phenomenology.”18
Second, the affective account of caring is meant as an alternative to both volun-
taristic and judgment-​based accounts of attributionism. I discuss judgment-​based
accounts of attributionism in more detail in Chapter 5, especially Angela Smith’s
“rational-​relations” view. Here the relevant point is that the appeal of affective
accounts of caring is their ostensible ability to make sense of actions that appear
to reflect upon agents even if those actions conflict with the agent’s deliberation,
explicit judgment, and so on.19 The affective account does so by tracing the source
of internality to something other than these cognitively complex reflective capaci-
ties. Both Jaworska and Shoemaker stress this point. Jaworska (2007b, 540) writes,
“Caring cannot be an attitude derived from an evaluative judgment,” and Shoemaker

18. My point is not that Duncan and Barrett’s strong claim is necessarily correct, but rather that the
data’s attesting to the pervasiveness of affect in cognition makes it unlikely that there is a clear line
between cognitive and noncognitive emotion.
19. I follow these authors in focusing on cognitively complex reflective capacities in a broad sense. Of
course, deliberation, self-​understanding, evaluative judgment, and so on can and sometimes do come
apart. Certainly some emotions involve self-​understanding but not deliberation, for example. Perhaps
a good theory of secondary emotions would focus on some (or one) of these capacities but not oth-
ers. A care theorist interested in maintaining the distinction between her approach and voluntarist or
judgment-​based approaches to attributionism would then need to show how the particular cognitively
complex capacity associated with secondary emotions is functionally distinct from what voluntarist
and judgment-​based approaches posit as the source of internality.
(2003, 2011) goes to great lengths to distinguish his view from those that find the
source of internality in an agent’s reflective judgment (such as Gary Watson [1975]
and Smith [2005, 2008]). However, if secondary emotions alone can underwrite
cares, and secondary emotions are just those emotions that are partly constituted
by cognitively complex reflective capacities like evaluative judgment, then it looks
as if the affective account of cares collapses into one based on cognitively complex
reflective capacities.
Third, the precise relationship between feelings and judgments in so-​called sec-
ondary emotions is vague. Jaworska says that secondary emotions “involve” self-​
awareness, deliberation, and so on. Shoemaker says that secondary emotions are
“mediated” by the prefrontal cortex and require certain cognitive abilities, such
that agents can engage in processes of evaluation. It is hard to situate these claims
precisely within theories of emotion (see below for more on theories of emotion).
Are evaluative judgments, according to Jaworska and Shoemaker, causes of emo-
tions? Or do they simply co-​occur with emotions? Are they, perhaps, components
of emotional states? These differences matter. For example, in the case of envy, it
is one thing to say that the object of one’s envy matters to a person because envy
involves co-​occurring feelings and judgments, such as the belief that the person who
is envied is smarter than oneself. It is another thing to say that the object of one’s
envy matters to a person because envy is triggered by the judgment that the person
who is envied is smarter than oneself, and subsequently one experiences certain
feelings. In the latter case, in which the agent’s judgment is taken to be a causal pre-
cursor to her feelings, it seems clear that it is the judgment itself that fixes the agent’s
cares. The judgment is responsible for showing what matters to the person, in other
words. But, alternatively, if feelings and judgments are taken to co-​occur in a state of
envy, it is less clear which of these co-​occurring states is responsible for constituting
a care. This becomes particularly salient in cases where feelings and judgments seem
to diverge. If emotions are constituted by co-occurring feelings and judgments,
it’s hard to tell whether and how to attribute a care to me in the case in which I feel
envious despite believing that there is no reason for me to feel this way.
Instead of distinguishing between primary and secondary emotions, the affec-
tive account of cares should take emotions to represent features of the world that an
agent values, even if those representations do not constitute judgments about what
the agent finds valuable. Here I build upon Jesse Prinz’s (2004) theory of emotion.20
Prinz (2004) aims to reconcile “feeling” (or “somatic”) theories of emotion with
“cognitive” theories of emotions. Crudely, feeling theories argue that emotions are

20. See also Bennett Helm (2007, 2009), who develops an affective theory of caring that does not
rely on the distinction between primary and secondary emotions. Helm argues that to experience an
emotion is to be pleased or pained by the “import” of your situation. Something has import, according
to Helm, when you care about it, and to care about something is for it to be worthy of holistic patterns
of your attention and action.

feelings and that particular emotions are experiences of particular changes in the
body (James, 1884; Lange, 1885; Zajonc, 1984; Damasio, 1994). For example,
elation is the feeling of one’s heart racing, adrenaline surging, and so on. As Prinz
argues, feeling theories do a good job of explaining the intuitive difference between
emotions and reflective thought. Evidence supporting this intuition stems from the
fact that particular emotions can be induced by direct physiological means, like tak-
ing drugs. Similarly, particular emotions can be elicited by simple behaviors, such
as holding a pencil in your mouth, which makes you feel a little happier by forcing
your lips into a smile (Strack et al., 1988).21 It is hard to give an explanation of this
result in terms of an agent’s beliefs or evaluative judgments; why should mechani-
cally forcing your lips into a smile make you think about things differently and thus
feel happier? Relatedly, feeling theories do a good job of explaining the way in which
our feelings and reflective thoughts can come into conflict, as in the case where
I think that I shouldn’t be jealous, but feel jealous nevertheless. Feeling theories
have trouble, however, making sense of the way in which some emotions are some-
times closely tied to our thoughts. Consider the difference between feeling fearful
of a snake, for example, and feeling fearful of taking a test (Prinz, 2004). While your
bodily changes may be equivalent in both cases—​racing heart, sweating, and the
like—​in the case of the test your fear is tied to your belief that the test is important,
that you value your results, and so on, while in the snake case your fear can unfold
regardless of whether you believe that snakes are dangerous or that fearing snakes is
rational or good. There are many cases like this, wherein our emotions seem closely
tied to our beliefs, judgments, values, self-​knowledge, and so on. Parents prob-
ably wouldn’t feel anxious when their kids get sick if they didn’t wonder whether
what seems by all rights like a common cold might actually be a rare deadly virus.
Similarly, jealous lovers probably wouldn’t feel envy if they didn’t harbor self-​doubt.
And so on. Traditional feeling theories of emotion—​those that define emotions
strictly in terms of the experience of particular changes in the body—​have trouble
explaining these common phenomena.
Cognitive theories of emotion (Solomon, 1976; Nussbaum, 2001) and related
“appraisal” theories (Arnold, 1960; Frijda, 1986; Lazarus, 1991) do better at
explaining the link between emotions and reflective thought. Traditional cognitive
theories identify emotions as particular kinds of thoughts. To be afraid of some-
thing is to believe or imagine that thing to be scary or worthy of fear, for example.
These theories founder on precisely what feeling theories explain: the ways in which
emotions and thoughts often seem to come apart and the ways in which emotions
can form and change independently of our evaluative judgments. Appraisal theo-
ries are a more plausible form of cognitive theory. Appraisal theories argue that

21
  But see Wagenmakers et al.’s (2016) effort to replicate Strack et al. (1988). See the Appendix for
discussion of replication in psychological science.

emotions are at least in part caused by thoughts, but they do not identify emotions
as such with beliefs, judgments, and the like. Rather, appraisal theories hold that
particular emotions are caused by evaluative judgments (or appraisals) about what
some entity in the environment means for one’s well-​being. These appraisals are of
what Richard Lazarus (1991) calls core relational themes. Particular emotions are
identified with particular core relational themes. For example, anger is caused by an
appraisal that something or someone has demeaned me or someone I care about;
happiness is caused by an appraisal that something or someone has promoted my
reaching a goal; guilt is caused by the appraisal of my own action as transgressing a
moral imperative; and so on.
In one sense, appraisal theories are hybrids of feeling theories and cognitive the-
ories of emotions (Prinz, 2004, 17–​19). This is because appraisals themselves are
said to cause agents to experience particular feelings. Appraisals are precursors to
feelings, in other words, and emotions themselves can then be thought of as com-
prising both judgments and feelings. This does not represent a happy reconciliation
of feeling theories and cognitive theories, however. If appraisals do not constitute
emotions, but are instead causal precursors to feelings, then there should be no
cases—or, at least, only predictable and relatively exceptional cases—in which agents
experience the feelings component of emotions without appraisals acting as causal
precursors. But the very cases that threaten traditional cognitive theories of emo-
tion illustrate precisely this phenomenon. These are cases of direct physical induction
of emotions, for example, by taking drugs or holding a pencil in one’s mouth. These
acts cause emotions without appraisals of core relational themes. And these kinds
of cases are pervasive.22
Prinz (2004) suggests a new kind of reconciliation. He retains the term
“appraisal” but deploys it in an idiosyncratic way. On a view like Lazarus’s, apprais-
als are judgments (namely, judgments about how something bears on one’s interest
or well-​being). On Prinz’s view, appraisals are not judgments. Rather, they are per-
ceptual representations of core relational themes. Prinz draws upon Fred Dretske’s
(1981, 1986) sense of mental representations as states that have the function of car-
rying information for a particular purpose. In the case of emotions, our mental rep-
resentations are “set up” to be “set off” by (real or imagined) stimuli that bear upon
core relational themes. For example, we have evolved to feel fear when something is
represented as dangerous; we have evolved to feel sadness when something is rep-
resented as valued and lost. But just as a boiler’s being kicked on by the thermostat

22
  Appraisal theorists might add additional conditions having to do with the blocking, swamping,
or interruption of appraisals. These conditions would be added in order to explain away cases of appar-
ent dissociation between appraisals and emotions. Given the pervasiveness of such cases, however, the
onus would be on appraisal theorists to generate these conditions in a non–​ad hoc way. Moreover, it is
unclear if such conditions would cover cases in which the putative appraisal is missing rather than in
conflict with the resulting emotion.

need be reliably caused only by a change in the room’s temperature, fear or sadness
need be reliably caused only by a representation of the property of being dangerous
or being valued and lost, without the agent judging anything to be dangerous or
valued and lost. This is the difference between saying that an appraisal is a represen-
tation of a core relational theme and saying that an appraisal is a judgment about a
core relational theme. The final key piece of Prinz’s view is to show how representa-
tions of core relational themes reliably cause the emotions they are set up to set off.
They do so, Prinz argues, by tracking (or “registering”) changes in the body.23 Fear
represents danger by tracking a speeded heart rate, for example. This is how Prinz
reconciles feeling theories of emotion with cognitive theories. Emotions are more
than just feelings insofar as they are appraisals (in the sense of representations) of
core relational themes. But emotions are connected to core relational themes not
via rational judgments about what matters to a person, but instead by being set up
to reliably register changes in the body.24
My aim is not to defend Prinz’s theory as such (nor to explicate it in any rea-
sonable depth), although I take it to provide a plausible picture of how emotions
“attune” agents’ thoughts and actions around representations of value.25 What I take
from Prinz’s account is that emotions can represent features of the (real or imag-
ined) world that the agent values without the agent thereby forming rational judg-
ments about that thing. This is, of course, a “low-​level” kind of valuing. In this sense,
agents can have many overlapping and possibly conflicting kinds of values (see dis-
cussion in Chapter 5).
Understood this way, emotions are poised to help indicate what matters to
us, in a broader way than other theorists of affective cares have suggested. (That
is, some emotions that theorists like Jaworska and Shoemaker think do not tell us

23
  For a related account of emotion that similarly straddles feelings theories and cognitive theories,
see Rebekka Hufendiek (2015). Hufendiek argues, however, that the intentional objects of emotions
are affordances, not changes in the body.
24
  The idea that the emotional component of implicit attitudes is “set up to be set off” by repre-
sentations of particular stimuli may seem to sit uncomfortably with my claim that the objects of these
representations matter to agents, in the attributability sense. That is, some are inclined to see a conflict
between saying that a response is both “programmed” by evolution and attributable to the agent. I do
not share this intuition, both because it follows from voluntaristic premises about attributability that
I do not endorse and because I do not know how to draw a distinction between responses that are and
are not “programmed” by evolution. This is, of course, a question about free will and compatibilism,
which I leave to philosophers more skilled than myself in those dark arts.
25
  According to Randolph Nesse and Phoebe Ellsworth, “Emotions are modes of functioning,
shaped by natural selection, that coordinate physiological, cognitive, motivational, behavioral, and
subjective responses in patterns that increase the ability to meet the adaptive challenges of situations that
have recurred over evolutionary time” (2009, p. 129; emphasis in original). See also Cosmides and
Tooby (2000) and Tooby and Cosmides (1990) for related claims. I take it as counting in favor of
Prinz’s theory that treating emotions as perceptual representations of core relational themes helps to
show how emotions are adaptive in this sense.

anything about what an agent cares about will, on my view, indicate what an agent
cares about.) But my claim is not that all emotions indicate cares. The relevant
states must be continuous across time and referentially connected. I’ll now argue
that when emotional reactions are components of implicit attitudes, they can meet
these conditions.

4.  Caring and Implicit Attitudes


Cross-​temporal continuity is a form of persistence through time, such that states A1
and A2 count as tokens of the same attitude (i.e., A at T1 and A at T2). Connections
are referential links between mental states or between attitudes and actions that
refer to each other. On the Neo-​Lockean view, states induce organization and
coordination within the agent by way of continuities and connections. This is why
these are proposed as conditions of agency. On Neo-​Lockean views like Bratman’s
(2000), reflective states like plans and policies paradigmatically induce organiza-
tion and coordination by way of being cross-​temporally continuous and referen-
tially connected. Affective care theorists like Jaworska show that certain emotions
also do this. My claim is also that certain emotions do this, not because they are
caused by or co-​occur with evaluative judgments, but rather because they are part
of integrated FTBA response patterns.
As I argued in Chapter 2, implicit attitudes change over time, both quickly and
slowly. The intentional object of the implicit attitude remains the same through
this process, so that successive instances count as tokens of the same state. While
stray emotions can fail to display continuity
over time, emotional reactions that are part of an implicit attitude are shaped by
the agent’s learning history as well as by the feedback she receives on her behavior.
Implicit attitudes are also anticipatory states; the museumgoer, for example, seeks
alleviation by looking for the right spot to view the painting.
Emotions—understood in the way I have proposed—also display conceptual
connections when they are components of implicit attitudes. They do so in two
ways. First, the FTBA relata of implicit attitudes converge upon the same inten-
tional object. The museumgoer notices the big painting, feels tension about the big
painting, steps back from the big painting, and registers the success of this reaction
in feed-​forward learning mechanisms stored in memory for future encounters with
big paintings.26 Second, implicit attitudes internally co-​refer. Recall my argument in
Chapter 3, building upon Gendler (2012), that implicit attitudes are unlike ordi-
nary beliefs and desires in the sense that beliefs and desires are “combinatoric.”
That is, any belief can, in principle, be paired with any desire. In contrast, implicit

26
  As I have noted, cares are graded. You might care about the big painting only a little bit. It might
also be right to say that what you care about are paintings like this one. The object of your care in this
case is the type, not the token.

attitudes are individuated at the cluster level. Their relata co-​activate rather than
merely frequently co-​occur. This is why implicit attitudes are singular entities rather
than a collection of frequently co-​occurring but fundamentally separate represen-
tations, feelings, actions, and changes over time. Given this, it seems right to say
that the FTBA relata of implicit attitudes are co-​referring in the sense that they are
conceptually connected. It is this bonding between the relata that connects these
states to an agent’s cares. More precisely, it is the tight connection between particu-
lar combinations of dispositions to notice, feel, behave, and adjust future automatic
reactions to similar stimuli that displays referential connections of the sort that indi-
cate a genuine connection to what the agent cares about.27
These conditions are not met in contrast cases of isolated perceptual, affective, or
behavioral reactions. On its own, an agent’s tendency to notice particular features
of the ambient environment, for example, bears little or no relation to the agent’s
cares. The odd shape of a window might catch your eye while you walk to work, or
a momentary feeling of disgust might surface when you think about having been
sick last week. So what? Whatever fleeting mental states underlie these changes in
attention display neither continuity nor connection. The states do not persist
through time, nor do they refer to any of the agent’s other mental states. But when
you notice that the window causes a coordinated response with FTBA components,
and you see and act and learn from your actions, perhaps stepping back to see it
aright, something about what you care about is now on display. Or when you recoil
from the feces-​shaped fudge, not just feeling disgust but also reacting behaviorally
in such a way as to stop the fudge from causing you to feel gross, something about
what you care about is now on display.
Consider two cases in more detail. First, ex hypothesi, Cecil Duly displays
belief-​discordant attitudes toward Antoine Wells. While Duly denies that he thinks
or feels differently about Wells after their conversation, he begins to treat Wells

27
  Developing a point from John Fischer and Mark Ravizza (1998), Levy (2016) identifies three
criteria for demonstrating a “patterned” sensitivity to reasons. Meeting these criteria, he argues, is suf-
ficient for showing that an agent is directly morally responsible for an action, even if the agent lacks
person-​level control over that action. Levy’s three criteria are that the agent’s apparent sensitivity to
reasons be continuously sensitive to alterations in the parameters of a particular reason; that the agent
be sensitive to a broad range of kinds of reasons; and that the agent demonstrate systematic sensitivity
across contexts. Levy then goes on to argue that the mechanisms that have implicit attitudes as their
components fail to be patterned in this way; they fail to meet these three criteria. While I think these
criteria represent a promising way to think about the relationship between implicit attitudes and the
self, I would frame and analyze them differently. Being patterned in this way strikes me as a promising
avenue for understanding actions that are attributable to agents. In any case, I would contest—​as much
of the discussion in this and the preceding chapter suggests—​Levy’s claim that implicit attitudes para-
digmatically fail to be patterned in this way. Elsewhere, Levy (2016) writes that implicit attitudes are
not “entirely alien to the self” and that “that suffices for some degree of attributability.” One could inter-
pret the argument of this chapter to be my way of analyzing the degree of attributability that implicit
attitudes paradigmatically deserve.

more kindly and humanely nevertheless. It’s plausible to think that Duly’s action-​
guiding attitudes are implicit. In interacting with Wells, Duly might now see,
without explicitly noticing, a particular pained look on Wells’s face (F: “human
suffering!”), feel disquieted by it (T: “bad to cause pain!”), move to comfort Wells
in some subtle way, perhaps by saying “please” and “thank you” (B: “offer reas-
surance to people in pain!”), and absorb the feedback for future instances when
this behavior alleviates his felt tension (A: “suffering reduced!”). Focusing on the
emotional component of Duly’s reaction in particular, in Prinz’s terms, Duly seems
to have formed an emotion-​eliciting appraisal of Wells. The core relational theme
Duly represents is perhaps guilt; Wells’s perceived suffering conveys to Duly that
he has transgressed a moral value. This appraisal will be reliably caused by par-
ticular somatic changes in Duly’s autonomic nervous system associated with rep-
resentations of others as suffering or as in pain (De Coster et  al., 2013). These
processes are value-​signaling, even if Duly forms no evaluative judgments about
Wells or even if Duly’s evaluative judgments of Wells run in the opposite direction
of his implicit attitudes.
Of course, other interpretations of this anecdote are possible. Perhaps, despite
his testimony, Duly simply thought to himself that Wells deserves better treatment
and changed his behavior as a result. Nothing in the story rules this interpretation
out. But it seems unlikely. Duly’s testimony should be taken as prima facie evi-
dence for what he believes.28 People do not always report their beliefs truthfully
or accurately, but Duly’s explicit denial of feeling or acting differently toward Wells
is not irrelevant to determining what has changed in him. Moreover, while Duly
was forced to listen to Wells during the discussion of the film, and thus might have
simply been persuaded by Wells’s reasoning, in a way indicative of a belief-​based
interpretation of Duly’s changed attitudes, it is not as if Wells’s arguments are new
to Duly. Duly has heard Wells complaining for a long time, he says. It is plausible to
think that something other than the force of Wells’s arguments had an impression
on Duly.29 Perhaps there was something in the look on Wells’s face that affected
Duly in this particular context. Or perhaps the presence of a third party—​Reed—​
acted as an “occasion setter” for a change in Duly’s implicit attitudes toward Wells.30
For example, Duly probably had never before been forced to interact with Wells in

28
  See Chapter 3.
29
  An experiment to test these questions might change the strength of Wells’s arguments and look
for effects on Duly (or, more precisely, would do so on subjects performing tasks operationalized to
represent Wells’s and Duly’s positions). Or it might change crucial words in Wells’s testimony and
compare implicit and explicit changes in Duly’s attitudes. See Chapter 3 for a discussion of analogous
experiments. See also the discussion in Chapter 6 of research on changing unwanted phobic reactions
(e.g., to spiders) without changing explicit attitudes.
30
  Occasion setting is the modulation of a response to a given stimulus by the presence of an addi-
tional stimulus, usually presented as part of the context (Gawronski and Cesario, 2013). See the discus-
sion in Chapter 7 of renewal effects on implicit learning in the animal cognition literature.

anything like an equal status position, and perhaps by virtue of being in this posi-
tion, Duly’s associations about Wells shifted.31
If this is right, then the question is whether the object of Duly’s implicit attitude
constitutes something that Duly cares about. It does to the extent that Duly’s atti-
tude appears to display both cross-​temporal continuities and the right kind of ref-
erential connections. Duly’s interactions with Wells are sustained over time. Wells
himself experiences Duly’s actions as manifestations of one and the same attitude.
Indeed, it would be hard to understand Duly’s actions if they were interpreted as
manifesting distinct attitudes rather than as various tokens of one and the same atti-
tude. This makes sense, also, given that Duly’s relevant attitudes all point toward
the same intentional object (i.e., Wells). Not only does Duly’s attitude point toward
Wells, but the various components of this attitude also all seem to refer to Wells.
Duly perhaps noticed a look on Wells’s face, feels disquieted by the look on Wells’s
face, acts in such a way as seems appropriate in virtue of his perception of and feel-
ings about Wells, and incorporates feedback from this sequence into his attitude
toward Wells in future interactions.
All of this suggests that Duly cares about Wells, if only in a subtle and conflicted
way. Indeed, Duly’s caring attitude is conflicted. He explicitly states the opposite of
what I’ve suggested his implicit attitudes represent. But this can be the case, given
that cares, in the sense I’ve discussed, are ontological, not psychological.
Second, consider an implicit bias case. Here it seems perhaps more difficult to
show that the objects of an agent’s implicit attitudes are things she cares about. In
“shooter bias” studies (see Chapter 1), for instance, participants try hard to shoot
accurately, in the sense of “shooting” all and only those people shown holding guns.
Participants in these experiments clearly care about shooting accurately, in other
words. But this fact does not foreclose the possibility that other cares are influenc-
ing participants’ judgments and behavior. Indeed, it seems as if participants’ shoot-
ing decisions reflect a care about black men as threatening (to the agent). Again, this
is a care that the agent need not recognize as her own.32 The emotional component
here is fear, which is said to signal to the agent how she is faring with respect to
personal safety. This value-​laden representation speaks to what matters to her. To
see this, it is important to note that the shooter bias test does not elicit generic or
undifferentiated fear, but rather fear that is specifically linked to the perception of
black faces. This linkage to fear is made clear by considering the interventions
that do and do not affect shooter bias. For example, participants who adopt the con-
ditional plan “Whenever I see a black face on the screen, I will think ‘accurate!’ ” do
no better than controls. However, participants who adopt the plan “Whenever I see

31
  Changes in social status have been shown to predict changes in implicit attitudes (e.g., Guinote
et al., 2010). See Chapter 7.
32
  In Chapter 5, I discuss the way in which inconsistent caring is not problematic in the way that
inconsistent believing is.

a black face on the screen, I will think ‘safe!’ ” demonstrate significantly less shooting
bias (Stewart and Payne, 2008). Fear is one of Lazarus’s core relational themes; it is
thought to signal a concrete danger. And in Prinz’s terms, it is reliably indicated by
specific changes in the body (e.g., a faster heart rate).
Evidence for the cross-​temporal continuity of participants’ implicit attitudes
in this case comes from the moderate test-​retest reliability of measures of implicit
associations (Nosek et al., 2007).33 Moreover, Correll and colleagues (2007) find
little variation within subjects in multiple rounds of shooter bias tests. It is also fairly
intuitive, I think, that agents’ decisions across multiple points in time are manifesta-
tions of the same token attitude toward black men and weapons rather than mani-
festations of wholly separate attitudes.
The case for conceptual convergence is strong here too. The agent’s various
spontaneous reactions to white/​black and armed/​unarmed trials point referen-
tially to what matters to her, namely, the putative threat black men pose. The find-
ing of Correll and colleagues (2015) that participants in a first-​person shooter
task require greater visual clarity before responding to counterstereotypic targets
compared with stereotypic targets (see Chapter  2) is conceptually connected to
pronounced fear and threat detection in these subjects (Stewart and Payne, 2008;
Ito and Urland, 2005) and to slower shooting decisions. Note that this conceptual
convergence helps to distinguish the cares of agents who do and do not demon-
strate shooter bias. Those who do not will not, I suggest, display the co-​activating
perceptual, affective, and behavioral reactions indicative of caring about black men
as threatening.
Of course, other cases in which agents act on the basis of spontaneous inclina-
tions might yield different results. I have focused on particular cases in order to show
that implicit attitudes can count as cares. The upshot of this is that actions caused by
implicit attitudes can reflect on character. They can license aretaic appraisals. This
leaves open the possibility that, in other cases, what an agent cares about is fixed by
other things, such as her beliefs.

5.  Conclusion

One virtue of understanding implicit attitudes as having the potential to count as
cares, in the way that I’ve suggested, is that doing so helps to demystify what it means
to say that a care is ontological rather than psychological. It’s hard to see how one can
discover what one cares about, or be wrong about what one cares about, on the basis
of an affective account of caring too closely tied to an agent’s self-​understanding,
beliefs, and other reflective judgments. To the extent that these accounts tie care-​
grounding feelings to self-​awareness of those feelings or to evaluative judgments

33
  See the discussion of test-​retest reliability in Chapter 3 and the Appendix.

about the object of one’s feelings, cares in the ontological and psychological senses
seem to converge. But implicit attitudes are paradigmatic instances of cares that
I can discover I have, fail to report that I have, or even deny that I have. This is most
relevant to the vices of spontaneity, where, because of potential conflict with our
explicit beliefs, we are liable to think (or wish) that we don’t care about the objects
of our implicit attitudes. But, of course, cases like those of Huck and Duly show that
this is relevant to the opposite kind of case too, in which our spontaneous reactions
are comparatively virtuous.
This in turn gives rise to at least one avenue for future research. In what ways do
changes in implicit attitudes cause or constitute changes in personality or character?
Two kinds of cases might be compared: one in which a person’s implicit attitudes
are held fixed and her explicit attitudes (toward the same attitude object) change
and another in which a person’s implicit attitudes change and her explicit attitudes
(toward the same attitude object) are held fixed. How would observers judge
changes in personality or character in these cases? Depending, of course, on how
changes in attitudes were measured (e.g., what kinds of behavioral tasks were taken
as proxies for changes in attitudes), my intuition is that the former case would be
associated with larger changes in perceived personality than would the latter case,
but that in both cases observers would report that the person had changed.34
Relatedly, the arguments in this chapter represent, inter alia, an answer to—or
an alternative to—the question of whether implicit attitudes are personal or subper-
sonal states. Ron Mallon, for example, argues that personal mental states occur
“when a situation activates propositional attitudes (e.g. beliefs) about one’s category
and situation,” while subpersonal mental states occur “when a situation activates
mere representations about one’s category and situation” (2016, p.  135, emphasis
in original). On these definitions, implicit attitudes are not personal mental states,
because they are not propositional attitudes (like beliefs; see Chapter  3).35 Nor
are they subpersonal mental states, because they are not mere representations of
categories and situations (Chapter 2).36 Neither are implicit attitudes what Mallon

34
  Loosely related research shows that measures of explicit moral identity tend to make better pre-
dictions of moral behavior than do measures of implicit moral identity (Hertz and Krettenauer, 2016).
35
  Levy (2017) offers an even more demanding definition of the related concept of “person-​level
control.” On his view, adapted from Joshua Shepherd, “[p]‌erson-​level control is deliberate and delib-
erative control; it is control exercised in the service of an explicit intention (to make it the case that
such and such)” (2014, pp.  8–​9, emphasis in original). On the basis of this definition, Levy argues
that we generally lack person-​level control over actions caused by implicit attitudes. I discuss Levy’s
argument—​in particular the role of intentions to act in such and such a way in assessing questions
about attributability and responsibility—​in Chapter 5.
36
  Implicit attitudes are not mere representations of categories and situations, because they include
motor-​affective and feedback-​learning components (i.e., the T-​B-​A relata). Perhaps, however, Mallon
could posit action-​oriented or coupled representations as subpersonal states (see Chapter 2). Even so,
Mallon suggests that however subpersonal states are defined, they are connected with one another in
merely causal ways. Implicit attitudes, by contrast, are sensitive to an agent’s goals and cares.

calls personal mental processes—“rational processes of thought and action”—nor
subpersonal mental processes—merely “automatic, causal processes.” Implicit atti-
tudes seem to share features of both personal and subpersonal states and processes.
They are about us, as persons, but not necessarily about our beliefs or our rational
processes of thought and action.
More broadly, the upshot of what I’ve said here is that implicit attitudes can
constitute cares, even though these states may involve nothing more than tenden-
cies to notice salient features in the environment, low-​level feelings, “mere” motor
behavior, and subpersonal dynamic feedback systems. The relation to one’s identity
is not in these mental state relata themselves, but in their mutual integration. Most
important, then, the kind of mental state that I have identified with our spontaneous
inclinations and dispositions—​implicit attitudes—​is a source of identity over time.
These inclinations are therefore attributable to us and license aretaic appraisals of
us. When we act on the basis of these attitudes, our actions are not just instrumen-
tally valuable or regrettable, or good and bad as such. Evaluations of these actions
need not be shallow or merely “grading.” In virtue of them, we are admirable or
loathsome, compassionate or cruel, worthy of respect or disrespect.
Admittedly, this entails a very broad account of the self. On my view, a great
many more actions and attitudes will reflect on who we are as agents, compared
with what other attributionist views suggest. This raises questions about the precise
relationship between cares and actions, about responsibility for this broad array of
actions that are attributable to us, and about the fundamentality of the “deep self.”
I address these and other questions in the next chapter.
5

Reflection, Responsibility,
and Fractured Selves

The preceding chapter established that paradigmatic spontaneous actions can be “ours”
in the sense that they can reflect upon us as agents. They may be attributable to us in
the sense that others’ reactions to these actions are sometimes justifiably aretaic. I have
defended this claim about spontaneous actions being ours on the basis of a care-​based
theory of attributability. In short, caring attitudes are attributable to agents, and implicit
attitudes can count as cares.
But a number of questions remain. First, what exactly is the relationship between
cares and action, such that actions can “reflect upon” cares (§1)? Second, when an
action reflects upon what one cares about, is one thereby responsible for that action?
In other words, are we responsible for the spontaneous actions in which our implicit
attitudes are implicated (§2)? Third, how “deep” is my conception of the self? Do our
implicit attitudes reflect who we are, really truly deep down (§3)? Finally, my account of
attributable states is not focused on agents’ intentions to act or on their awareness of
their actions or their reasons for action. Nor does my account focus on agents’ rational
judgments. What role do these elements play (§4)?

1. Reflection
While I hope to have clarified that implicit attitudes can count as cares, I have not
yet clarified what it takes for an action to reflect upon one’s cares. As Levy (2011a)
puts it in a challenge to attributionist theories of moral responsibility, the difficulty
is in spelling out the conditions under which a care causes an action “in the
right sort of way.”1 The right sort of causality should be “direct and nonaccidental”

1
  Most researchers writing on attributionism accept that a causal connection between an agent’s
identity-​grounding states and her attitudes/​actions is a necessary condition for reflection (e.g., Levy,
2011a; Scanlon, 1998; Sripada, 2015). Note also two points of clarification regarding Levy’s (2011a)
terminology. First, he identifies a set of propositional attitudes that play the same role in his discussion


and must also be a form of mental causality, as others have stressed (Scanlon, 1998;
Smith, 2005, 2008; Sripada, 2015). These conditions are meant to rule out deviant causal
chains, but also to distinguish actions that reflect upon agents from actions as such.
For an action to say something about me requires more than its simply being an
action that I perform (Shoemaker, 2003; Levy, 2011a). Answering my ringing cell
phone is an action, for example, but it doesn’t ipso facto express anything about me.
In the terms I have suggested, this is because answering the phone is not something
I particularly care about. My cares are not the effective cause of my action, in this
case. (I will return to this example later.)
Causal connection between cares and actions is a necessary condition for the
latter to reflect the former, but it doesn’t appear to be sufficient. To illustrate this,
Chandra Sripada describes Jack, who volunteers at a soup kitchen every week-
end. Jack cares about helping people in need, but Jack also cares about Jill, who
also works at the soup kitchen and whom Jack wants to impress. Sripada stipulates
that the reason Jack volunteers is to impress Jill. The problem, then, is that “given
his means-​ends beliefs, two of Jack’s cares—​his Jill-​directed cares and his charity-​
directed cares—​causally promote his working at the soup kitchen. But working at
the soup kitchen, given the actual purposes for which he does so, expresses only his
Jill-​directed cares” (Sripada, pers. com.). The problem, in other words, is that agents
have cares that are causally connected to their actions yet aren’t reflected in those
actions.2
We can overcome this problem by considering Jack’s actions in a broader con-
text.3 Does he go to the soup kitchen on days when Jill isn’t going to be there? Does
he do other things Jill does? Has he expressed charity-​directed cares in other ven-
ues, and before he met Jill? These kinds of questions not only illuminate which of an
agent’s cares are likely to be effective (as in the Jack and Jill case), but also whether an

as what I have been calling cares. That is, he refers to an agent’s identity-​grounding states as “attitudes.”
Second, Levy distinguishes between an action “expressing,” “reflecting,” and “matching” an attitude
(or, as I would put it, an action or attitude expressing, reflecting, and matching an agent’s cares). For
my purposes, the crucial one of these relations is expression. It is analogous to what I call “reflection.”
2
  How can two cares “causally promote” the same action? Shouldn’t we focus on which singular
mental state actually causes the action, in a proximal sense? This kind of “event causal” conception of
cares—​that is, the view that the event that is one’s action must be caused by a singular state—​runs
into the problem of long causal chains (Lewis, 1973; Sripada, 2015). Cares are not proximal causes
of action. For example, Jack’s cares about Jill don’t directly cause him to volunteer at the soup kitchen.
Rather, these cares cause him to feel a particular way when he learns about the soup kitchen, then to
check his schedule, then call the volunteer coordinator, then walk out the door, and so on. “Because
the connections between one’s cares and one’s actions are so indirect and heavily mediated,” Sripada
writes, “there will inevitably be many cares that figure into the causal etiology of an action, where the
action does not express most of these cares” (pers. com.). Thanks to Jeremy Pober for pushing me to
clarify this point.
3
  Sripada (2015) proposes a different solution, which he calls the Motivational Support Account.

agent’s cares are effective in any particular case. Consider again the case of answer-
ing the cell phone. Imagine a couple in the midst of a fight. Frank says to work-​
obsessed Larry, “I feel like you don’t pay attention to me when I’m talking.” Larry
starts to reply but is interrupted by his ringing phone, which he starts to answer
while saying, “Hang on, I might need to take this.” “See!” says Frank, exasperated. In
this case, answering the phone does seem to reflect upon Larry’s cares. His cares are
causally effective in this case, in a way in which they aren’t in the normal answering-​
the-​phone case. We infer this causal effectiveness from the pattern toward which
Frank’s exasperation points. The fact that this is an inference bears emphasizing.
Because we don’t have direct empirical access to mental-​causal relations, we must
infer them. Observations of patterns of behavior help to justify these inferences.
Patterns of actions that are most relevant to making inferences about an agent’s
cares are multitrack. Jack’s and Larry’s actions appear to reflect their cares because
we infer counterfactually that their Jill-​directed cares (for Jack) and work-​directed
cares (for Larry) would manifest in past and future thought and action across a vari-
ety of situations. Jack might also go to see horror movies if Jill did; Larry might
desire to work on the weekends; and so on. Patterns in this sense make future actions
predictable and past actions intelligible. They do so because their manifestations are
diverse yet semantically related. That is, similar cares show up in one’s beliefs, imagi-
nations, hopes, patterns of perception and attention, and so on. Another way to put
all of this, of course, is that multitrack patterns indicate dispositions, and inferences
to dispositions help to justify attributions of actions to what a person cares about.
Levy articulates the basis for this point in arguing that people are responsible for
patterns of omissions:

Patterns of lapses are good evidence about agents’ attitudes for reasons to
do with the nature of probability. From the fact that there is, say, a 50%
chance per hour of my recalling that it is my friend’s birthday if it is true
that I care for him or her and if internal and external conditions are suitable
for my recalling the information (if I am not tired, stressed, or distracted;
my environment is such that I am likely to encounter cues that prompt me
to think of my friend and of his or her birthday, and so on), and the fact that
I failed to recall her birthday over, say, a 6-​hour stretch, we can conclude
that one of the following is the case: either I failed during that stretch to
care for him or her or my environment was not conducive to my thinking
of him or her or I was tired, stressed, distracted, or what have you, or I was
unlucky. But when the stretch of time is much longer, the probability that
I failed to encounter relevant cues is much lower; if it is reasonable to think
that during that stretch there were extended periods of time in which I was
in a fit state to recall the information, then I would have had to have been
much unluckier to have failed to recall the information if I genuinely cared
for my friend (Levy 2011[b]‌). The longer the period of time, and the more

conducive the internal and external environment, the lower the probabil-
ity that my failure to recall is a product of my bad luck rather than of my
failure to care sufficiently. This is part of the reason why ordinary people
care whether an action is out of character for an agent: character, as mani-
fested in patterns of response over time, is good evidence of the agent’s
evaluative commitments in a way that a single lapse cannot be. (2011a,
252–​253)
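
To make the arithmetic in Levy’s example concrete, here is a minimal worked
calculation. It is my illustration rather than Levy’s own, and it idealizes by
assuming that each suitable hour offers an independent chance of recall. If the
probability of recalling the birthday in any given suitable hour is 0.5, then the
probability of failing to recall it across six consecutive suitable hours, despite
genuinely caring, is

(1 − 0.5)^6 = (0.5)^6 = 1/64 ≈ 0.016

that is, roughly a 1.6 percent chance that the lapse is sheer bad luck. More
generally, on these assumptions, the probability of an innocent lapse shrinks as
(0.5)^n over n suitable hours, which is why longer stretches of forgetting license
stronger inferences about what an agent cares about.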

In practice, this notion of reflection as inferred on the basis of dispositional
patterns of behavior works as follows. Consider again the case of shooter bias
(Chapters 1 and 4). The fact that behavioral responses on a shooter bias test are pre-
dicted by measures of black–​violence implicit associations (e.g., an IAT; Glaser and
Knowles, 2008) suggests that the agent’s behavior is indeed caused by her attitudes
(i.e., those attitudes that reflect upon what she cares about). These behavioral pre-
dictions are buttressed by studies in which manipulations of black–​violence implicit
associations lead to changes in shooting behavior (Correll et al., 2007). Moreover,
the multitrack patterns of behavior that tests of black–​danger associations appear to
predict rule out the possibility that participants’ behavior is caused by their cares
but that their behavior doesn’t reflect their cares (as in the Jack and Jill case). Test-​
retest reliability (Nosek et al., 2007), particularly within specific and shared con-
texts, demonstrates that the relevant associations are relatively durable.4 Also, caring
about the purported violent tendencies of black men doesn’t manifest in shooting
behavior alone, but gives rise to a pattern of related results, such as ambiguous
word and face detection (Eberhardt et  al., 2004), social exclusion (Rudman and
Ashmore, 2007), and the allocation of attention (Donders et al., 2008).5 These pat-
terns may be hard to notice in daily life, since social norms put pressure on people
to behave in unprejudiced ways, and most people are fairly good at regulating their

4
  See the Appendix for discussion of test-retest reliability and boosting it using relevant context cues.
5
  See Chapter 2, footnote 9, for discussion of Eberhardt et al.’s (2004) finding of how black–
criminal associations create attentional biases and shape word and face detection. Laurie Rudman and
Richard Ashmore (2007) used an IAT measuring associations between white and black faces and nega-
tive terms specifically (although not exclusively) related to violence. The terms were violent, threaten,
dangerous, hostile, unemployed, shiftless, and lazy. (Compare these with the terms used on a standard
attitude IAT:  poison, hell, cancer, slime, devil, death, and filth.) Rudman and Ashmore found that
black–​violent associations on this IAT predicted reports of engaging in socially excluding behavior
(specifically, how often participants reported having made ethnically offensive comments and jokes,
avoided or excluded others from social gatherings because of their ethnicity, and displayed nonver-
bal hostility, such as giving someone “the finger,” because of their ethnicity). The black–​violent IAT
outpredicted the attitude IAT specifically with respect to discriminatory behavior. Finally, Nicole
Donders and colleagues (2008) show that black–​violent associations—​but neither danger-​irrelevant
stereotypes nor generic prejudice—​predict the extent to which black faces, versus white faces, capture
the attention of white subjects.

implicit attitudes.6 This is precisely the reason controlled experiments are impor-
tant. They have the ability to create and control the conditions under which multi-
track patterns of behavior emerge.
These points seem to be building to the claim that people should be held respon-
sible for their implicit attitudes. And, more broadly, that people are responsible for
any spontaneous action that fits the characterization I’ve given. But responsibility is
not a monolithic concept.

2.  Caring and Responsibility


Theories of attributability are generally meant as theories of responsibility. The
usual idea is that actions that are attributable to agents are actions for which agents
are responsible. If attributability is sufficient for responsibility, and if my arguments
in the preceding chapter were correct, then it would follow that having an implicit
attitude toward φ, which causes one to act toward φ in some way that meets the con-
ditions described in that chapter, would be sufficient for holding an agent respon-
sible for those actions. This would be a problematic upshot, for at least two related
reasons. First, it seems strange to say that agents are responsible for minor acts like
stepping back from large paintings and smiling at friends who look like they need
reassuring. These actions just seem too minor and inconsequential to raise ques-
tions about responsibility (let alone questions about moral responsibility). Second,
this result would seem to entail an incredibly expansive conception of responsibil-
ity, one that would mean holding people responsible for seemingly unfree behav-
iors, like phobic reactions (e.g., to spiders), actions resulting from brainwashing,
addictions, and so on. One might worry that my account of attributability is far too
inclusive, such that on its lights we end up caring about—​and seemingly responsible
for—​just about everything we do (and maybe even everything that is done to us).
I suspect this worry is driven by conflating the concept of being responsible with
the concept of holding responsible. More specifically, I  think it is the reasonable

6
  The truth of each of these three claims—that patterns of biased behavior are hard to
notice in daily life; that social norms put pressure on people to behave in unprejudiced ways; and that
people are fairly good at regulating their implicit attitudes—is highly dependent on many contex-
tual factors. People in different epistemic positions, due to education and social privilege, will likely
be variably susceptible to the “blind spot bias” (i.e., the tendency to recognize biases more easily in
others than in oneself). Social norms favoring egalitarianism are highly geographically variable. And
people’s untutored skill in regulating their implicit attitudes is content-​dependent. This can be seen in
average correlations between implicit and explicit attitudes. Generally implicit and explicit attitudes
converge for attitude-​objects like brand preferences, for which there are not strong social incentives to
regulate one’s implicit attitudes, and diverge for attitude-​objects like racial preferences, for which there
are strong incentives to regulate one’s implicit attitudes (although more so in some places than others).
Thanks to Lacey Davidson for pushing me to clarify this.

thought that it would be strange to hold a person responsible for some minor
spontaneous action—​in the sense of holding the person to some obligation that
she ought to discharge—​that drives the worry about my account of attributability
being too inclusive. First I’ll explain the important difference between being and
holding responsible, as I understand it, and what this difference means for the rela-
tionship between the self and spontaneity (§2.1). My basic claim will be that agents
are often responsible for their spontaneous actions in the attributability sense but
that this does not license judgments about how we ought to hold one another
responsible.7 I’ll then use this distinction to contextualize my claims with recent
research in empirical moral psychology (§2.2). To be clear, through all of this, my
claims have to do with responsibility for actions. But in many cases (though not all,
as I argue later), whether and how one is responsible for a given action—​or even
for a mere inclination—​has to do with the psychological nature of the attitudes
implicated in it.

2.1  Being Responsible and Holding Responsible


As I argued in Chapter 4, attributability opens agents in principle to aretaic apprais-
als. This is, roughly, what I take being responsible for some action to mean. I say
“roughly” because it is often the case that “being responsible” is used as a catch-​
all for a collection of related concepts. One might say, “Lynne is responsible,” and
mean by it that Lynne (and not Donna) stole the money or that Lynne is a responsible
person or that Lynne should be punished. On my view, no one of these meanings of the
concept of responsibility is basic or fundamental.8 Attributability captures just one
of these meanings, namely, that some action expresses something about the agent
herself who acted. It captures the sense in which, for example, we might say that
Lynne is responsible for this; she’s really greedy sometimes. In this attributability sense,
we don’t say that “Lynne is responsible” simpliciter. Rather, we use the concept of
responsibility as a way of opening space for characterological or aretaic assessment.
In contrast, holding responsible is essentially actional, in the sense that it levies a
behavioral demand upon oneself or another. To hold oneself or another responsible
is to demand that one do something. I think there are basically two ways to hold an
agent responsible: borrowing terms from Shoemaker (2011), I call these “answer-
ability” and “accountability.”9 I describe each in turn. My overall point is to show

7
  Because my primary interest in this and the preceding chapter is the relationship between the self
and spontaneity (which I take the concept of attributability to address), I will leave aside the question
of when, or under what conditions, we ought to hold one another responsible for actions driven by
implicit attitudes.
8
  See John Doris (2015) for a sustained argument that there is no one monotonic way of setting the
conditions of responsible agency.
9
  Shoemaker does not understand answerability and accountability as two forms of holding
responsible. For Shoemaker, there are three kinds of moral responsibility, each equiprimordial, so to

that it is plausible that agents are responsible for paradigmatic spontaneous actions
in the attributability sense, because being responsible in this sense does not neces-
sarily mean that one is obliged to answer for one’s actions or be held to account for
them in some other way.
Answerability describes the condition of being reasonably expected to defend
one’s actions with justifying (and not just explanatory) reasons. These must be the
reasons that the agent understood to justify her action (Shoemaker, 2011, 610–​
611). Holding Lynne responsible in the answerability sense means conceptualizing
the question “Why did you φ?” as appropriate in this context. When we demand an
answer to this kind of question, we intrinsically tie the action to the agent’s judg-
ment; we tie what Lynne did to the presumed considerations that she judged to
justify what she did. Demanding an answer to this question is also a way of exact-
ing a demand on Lynne. Being answerable in principle means being open to the
demand that one offer up one’s reasons for action. In this sense, being answerable
is a way of being held responsible, in the sense of being held to particular interper-
sonal obligations.10
Being “accountable” is another way of being held responsible for some action. It
is different from answerability in the sense that being accountable does not depend
solely or even primarily on psychological facts about agents. Whether it is appro-
priate to demand that Lynne answer for her actions depends, most centrally, on
whether her action stemmed from her evaluative judgments or was connected to
them in the right way (see §4 for further discussion). In contrast, there are times
when it seems appropriate to hold someone responsible notwithstanding the status
of that person’s evaluative judgments or reasons for action. These are cases in which
we incur obligations in virtue of social and institutional relationships. Watson (1996,
235), for example, defines accountability in terms of a three-​part relationship of one
person or group holding another person or group to certain expectations, demands,
or requirements. For example, the ways in which I might hold my daughter account-
able have to do with the parenting norms I accept, just as what I expect of a baseball
player has to do with the social and institutional norms governing baseball. In a sim-
ilar vein, Shoemaker (2011) defines accountability in terms of actions that involve
relationship-​defining bonds. Understood this way, practices of holding accountable
can be assessed in particular ways, for example, whether they are justified in light of
the promises we have made within our relationships.

speak: attributability, answerability, and accountability. My view is much indebted to Shoemaker’s, and
the ways I define these terms are similar to his. But I take answerability and accountability to be two
species of the same kind, namely, as two ways of holding responsible, and my usage of these terms
departs from his accordingly.
10
  In §4.3, I  will discuss Smith’s influential “rational-​relations” view, according to which moral
responsibility is just answerability. On Smith’s view, in other words, attributability and answerability
range over precisely the same actions.

I am, of course, running somewhat roughshod over important disagreements
between other theorists’ conceptions of answerability and accountability. My aim
is only to make the case that these forms of holding responsible are separate from
responsibility as attributability.
Consider the difference between being responsible in the attributability sense
and holding responsible in the answerability sense. Both Shoemaker (2011) and
Jaworska (1999, 2007a) have offered extensive discussion of cases that are meant
to make this distinction clear. Shoemaker discusses agents who suffer from pho-
bias, such as an irrational fear of spiders, which cause them to feel and behave in
spider-​averse ways that they reflectively disavow. He also discusses agents with
strong emotional commitments that persist in spite of what they think they ought
to do, such as a parent who continues to love a child despite judging that the child
is a bad person for having done something unforgivable. Shoemaker argues that
evaluations of agents like these are more than “shallow.” They are, rather, deep in
the sense that we treat phobias and strong emotions as reflective of agents’ endur-
ing traits and identity. Feelings and actions like these “reflect on me,” Shoemaker
writes, “and in particular on who I am as an agent in the world, but they are not
grounded in any evaluative reasons” (2011, 611–​612). Shoemaker defends his
argument in three ways: via an appeal to intuition; an appeal to folk practices; and
an appeal to his own account of cares as identity-​grounding attitudes. Jaworska’s
examples are young children and Alzheimer’s patients. Both, she argues, have
deep, identity-​grounding attitudes that nevertheless do not reflect rational judg-
ments. This is because rational judgments require commitments to correctness,
and young children and Alzheimer’s patients are often incapable of being so com-
mitted. Before a certain age, for example, young children appear not to grasp
the notion of a belief being correct or incorrect.11 If so, then they cannot form rational judgments. Yet they are moral agents, for whom actions can be, in at least
an attenuated way, expressive of who they are.
Whether or not the agents in these cases should be held responsible in the
accountability sense depends on further facts about each case. The spider-​phobe
can’t reasonably be held accountable for her phobic reactions just as such, but if
she takes a job at the zoo, she might incur institutional obligations that require her
to work in the proximity of spiders. In this case, notwithstanding her psychology,
she might be justifiably asked to undergo treatment for her phobia, if she wants to keep her job.12 In a more intimately interpersonal context, you might hold a friend accountable for returning your phone calls in virtue of the obligations of friendship.

11 Jaworska’s claim here is ambiguous between young children being unable to distinguish between true and false beliefs and young children failing to possess the concept of a false belief as such. Very young children—as young as two or three—demonstrate in their own behavior the ability to recognize a belief as mistaken. It is trickier to determine when children come to possess the concept of a false belief as such. One proxy may be a child’s ability to fluently distinguish between appearance and reality. John Flavell and colleagues (1985) report that a fragile understanding of the appearance–reality distinction emerges somewhere around ages four to six. Thanks to Susanna Siegel for pushing me to clarify this point.

2.2  Responsibility in Practice


I have been arguing for particular conceptions of action, the self, and responsibility
more or less from the armchair. The growing literature in empirical moral psychol-
ogy is relevant to these concepts, however. Many moral psychology studies suggest
that people do not always (or often) conceptualize action, the self, and responsi-
bility in the ways philosophers have suggested.13 More specifically, some studies
suggest that the proposals I’ve made are at odds with dominant folk judgments.
A telling example is the investigation by Cameron and colleagues (2010) into the
conditions under which people are more or less likely to hold others responsible for
implicit racial biases.
Cameron and colleagues compared participants’ reactions across three condi-
tions. In each condition, participants read about “John,” a white manager of a com-
pany who has to decide whom to promote among various candidates on the basis
of merit. (Two other analogous scenarios were used as well: one describing “Jane,”
who has to decide to whom to rent an apartment; and one describing “Jim,” who
has to assign grades to students’ work.) In the “consciousness” condition, partici-
pants are told that John consciously believes that people should be treated equally,
regardless of race, but subconsciously dislikes black people. John isn’t consciously aware that he dislikes black people, and his subconscious dislike ends
up influencing his hiring decisions, such that he sometimes unfairly denies black
employees promotions. In the “automaticity” condition, John is aware of his nega-
tive “gut feelings” toward black people; he tries, but fails, to control this feeling and
ends up unfairly denying black employees promotions. Finally, in the third condi-
tion, participants are given no details about John’s biases. They are simply told that
he believes that people should be treated equally, regardless of race, but sometimes
he unfairly denies black employees promotions. Cameron and colleagues’ question
was whether and how participants would assign moral responsibility differently
across these conditions. Their finding was that participants cut John significantly
more slack in the first condition than in the second and third conditions. That is,
when participants understood John to be unaware of his biases, they considered
him to be less morally responsible for his actions.
This is an interesting finding in its own right, given that lack of awareness dimin-
ished moral responsibility judgments significantly more than lack of control. On
the face of it, this study could also be understood to contradict my claim that agents are responsible, in the attributability sense, for paradigmatic spontaneous actions, including actions influenced by implicit biases, even when agents are unaware of the reasons why they act as they do.

12 But the facts about her psychology might make her answerable for her decision to take the job at the zoo in the first place.
13 For an overview, see Doris (2015).

But a closer look at how judgments about moral responsibility were assessed in Cameron and colleagues’ study is telling. After reading the vignette about John, participants were asked to agree
or disagree (on a 5-​point Likert scale) with four statements, which together con-
stituted the study’s “Moral Responsibility Scale.” The statements were: (1) “John
[or Jane or Jim, depending upon the assigned content domain of the scenario] is
morally responsible for his treating African Americans unfairly”; (2) “John should
be punished for treating African Americans unfairly”; (3) “John should not be blamed for treating African Americans unfairly” (reverse-coded); and (4) “John should not be held accountable for treating African Americans unfairly” (reverse-coded) (Cameron et al., 2010, 277). While intuitive, these items lump together
moral responsibility as such, punishment, blame, and accountability. Participants’
overall responses to this scale don’t tell us whether they distinguished between
John being responsible, in the attributability sense, and holding John responsible,
in the sense of demanding that John explain himself or demanding that John be
punished or actively blamed. It’s possible that participants’ judgments were driven
entirely by their assessments of whether John ought to be held responsible for his
actions, particularly since none of the four items on the Moral Responsibility Scale
single out aretaic appraisals.14
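
To make the scoring mechanics concrete, here is a minimal sketch in Python of how reverse-coded Likert items are typically rescored and averaged into a composite. The reverse-coding flags follow the four items above, but the response values and the helper name score_item are invented for illustration.

    # Rescore a 5-point Likert response; reverse-coded items are flipped
    # (1 -> 5, 2 -> 4, ..., 5 -> 1) so that higher always means more
    # ascribed responsibility.
    def score_item(response, reverse=False, lo=1, hi=5):
        return (lo + hi - response) if reverse else response

    # Hypothetical responses to items (1)-(4); items (3) and (4) are
    # reverse-coded, as in the scale described above.
    responses = [4, 5, 2, 1]
    reverse_flags = [False, False, True, True]
    scored = [score_item(r, rev) for r, rev in zip(responses, reverse_flags)]
    composite = sum(scored) / len(scored)  # here: (4 + 5 + 4 + 5) / 4 = 4.5
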
More recent research by Liz Redford and Kate Ratliff (2016) appears to sup-
port this possibility. Using Cameron and colleagues’ Moral Responsibility Scale,
Redford and Ratliff also found that participants were more willing to ascribe
responsibility to a person described in a vignette as being aware, rather than
unaware, of his implicit biases. But Redford and Ratliff also found that participants’
judgments about the target’s obligation to foresee the behavioral consequences of his
own implicit biases mediated their moral responsibility judgments.15 The more a
participant thought that the person in the vignette had an obligation to foresee his
discriminatory behavior, the more the participant held the person in the vignette
responsible for that behavior. It is important to note that participants’ perceptions of

14 Cameron and colleagues (2010) report that the four items on the Moral Responsibility Scale
had an “acceptable” internal consistency (Cronbach’s α = .65). This means that participants’ answers
to the four items on the scale were somewhat, but not strongly, correlated with each other. While
this suggests that participants’ conceptions of moral responsibility as such, punishment, blame, and
accountability are clearly related to one another, it also suggests that these may be distinct concepts.
Another possibility is that participants’ answers are internally consistent, to the degree that they are,
because participants treat the four questions as basically equivalent when taking the survey. When
enacting related judgments in the wild, these same participants might make more discrepant judg-
ments. It would be interesting to see the results of a similar study in which the questions were presented
separately between subjects.
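
For readers unfamiliar with the statistic: Cronbach’s α for a k-item scale is k/(k − 1) times (1 − the sum of the item variances divided by the variance of the summed scale). A minimal sketch in Python; the function name and the use of NumPy are my choices, and the input data are left hypothetical.

    import numpy as np

    def cronbach_alpha(items):
        # items: an (n_respondents, k_items) array of scored responses
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        scale_variance = items.sum(axis=1).var(ddof=1)
        # alpha = k/(k-1) * (1 - sum of item variances / scale variance)
        return (k / (k - 1)) * (1 - item_variances.sum() / scale_variance)
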
15 See also Washington and Kelly (2016).

targets’ actual foresight of their own behavior, rather than participants’ perceptions
about what the targets ought to foresee, did not mediate their moral responsibility
judgments. This suggests that participants’ views about holding responsible—​the
obligations they ascribe to others—​explain their answers on the moral responsi-
bility scale. This suggests in turn that we do not know how participants in these
studies would make specifically aretaic judgments about actions affected by implicit
attitudes. A study on folk attitudes toward responsibility for implicit bias could care-
fully distinguish between participants’ willingness to evaluate others’ character, to
demand others’ reasons for action, and to hold others accountable in various ways.
These same outcome measures could be applied to other cases of spontaneous and
nondeliberative actions as well, such as subtle expressions of interpersonal fluency
and even athletic improvisation.

3.  The Deep Self


It is easy to think that talk about actions that reflect on one’s “self” implies that each of
us is defined by one core unified thing, our fundamental or “deep” self. I want to resist
this implication, as it surfaces in both the attributionist literature (§3.1) and the empiri-
cal moral psychology literature (§3.2).

3.1  Attributionism and Fundamentality


In most of the attributionism literature, the fundamentality of the self has to do with
the essence of one’s evaluative attitudes.16 Conceptualizing the self in this sense offers
an answer to the question “But what do you truly think or feel about φ, at the end of the
day?” Here I use “think” loosely, not to describe a cognitive state per se, but an agent’s
overall evaluative sense of things or, put another way, an agent’s summary judgment,
including her thoughts and feelings. Perhaps another way to put this would be that the
self for attributionism represents how you regard things, deep down, both cognitively
and affectively. For example, Holly Smith writes:

It appears that to blame someone is to negatively appraise something like that person’s whole evaluative structure insofar as it bears on his act . . . [and
i]ntroducing the term “moral personality,” we can say that a person’s full
moral personality with respect to an act is the entirety of the person’s evalu-
ative attitudes (including both attitudes with explicitly moral content as
well as ones without explicit moral content) toward the features the per-
son supposes the act to have. Blameworthiness, then, seems to turn on the extent to which a person’s full moral personality is involved in choosing to perform the act. (2015, 190)17

16 See, e.g., Levy (2011).

Levy also articulates the view that attributionism is concerned with the fun-
damental self in this sense. In writing about responsibility for implicit bias, he
argues that attributionists are “committed to excusing agents’ responsibility for
actions caused by a single morally significant attitude, or cluster of such attitudes
insufficient by themselves to constitute the agent’s evaluative stance, when the
relevant attitudes are unrepresentative of the agent. This is clearest in cases where
the attitude is inconsistent with the evaluative stance” (2011, 256). In other
words, attributionists must excuse agents’ responsibility for actions caused by
implicit biases because these attitudes, while morally significant, don’t constitute
the agent’s fundamental stance. He then goes on to describe implicit biases in
these terms, stressing their arationality (or “judgment-​insensitivity”), concluding
that attributionists “ought to excuse [agents] of responsibility for actions caused
by these attitudes.” Levy takes this as a problem for attributionism. But for my
purposes, the important point is that Levy assumes a fundamentally unified con-
ception of an agent’s evaluative stance. This is what I take him to mean when he
says that an attitude or cluster of attitudes “constitutes” the agent’s “global” evalu-
ative stance.18
One challenge for this conception of the self is that conflicts occurring within
one’s self are difficult to understand, on its lights. There is no problem for this view
with the idea of conflict between an agent’s superficial attitudes, of course. But the
notion that agents have a fundamental evaluative stance—​a unified deep self—​
means that one of their attitudes (or set of attitudes) must represent them, at the end
of the day. Conflicts within the self—​for example, both favorable and unfavorable
deep evaluations of φ—​would make answering the question “But what do you truly
think or feel about φ, at the end of the day?” seemingly impossible.
If this understanding of the self is correct, then it will be hard to see how sponta-
neous action will ever reflect on the agent. If there can be no conflicts within one’s
fundamental self, then in cases of conflict between spontaneous inclination (of the kind I’ve described, and not just mere reflexes or mere associations) and reflective judgment, it is hard to see how attributability would ever “go with” one’s spontaneous inclinations.

17 Emphasis added. Smith credits the term “moral personality” to Pamela Hieronymi (2008), but uses it in a different way.
18 In concluding their presentation of what they call the “Whole Self” view of moral responsibility, Arpaly and Schroeder write, “We agents are not our judgments, we are not our ideals for ourselves, we are not our values, we are not our univocal wishes; we are complex, divided, imperfectly rational creatures, and when we act, it is as complex, divided, imperfectly rational creatures that we do so. We are fragmented, perhaps, but we are not some one fragment. We are whole” (1999, pp. 184–185). I agree with this eloquent description of what it is to be an agent. The question I am here considering is whether the fact that we are whole agents means that attributability judgments necessarily target each other’s “whole” self.
But must we think of the self (in the attributability sense) as fundamental and
unified? Why assume that there is some singular way that one regards φ at the end
of the day? It strikes me as coherent to think of the self as tolerating internal hetero-
geneity and conflict. Sripada (2015), for example, writes of a “mosaic” conception
of the deep self. The mosaic conception accepts conflicts internal to the deep self.
Recall that Sripada’s conception of the self is, like mine, constituted by one’s cares.
Unlike a set of beliefs, Sripada argues, it is not irrational to hold a set of inconsistent cares. “To believe X, believe that Y is incompatible with X, and believe Y is
irrational,” he writes, but “to care for X, believe that Y is incompatible with X, and
care for Y is not irrational.” In other words, cares need not be internally harmoni-
ous, in contrast with (an ideally rational set of) beliefs or reflective judgments,
which reflect what the agent takes to be true. Thus while a belief-​based conception
of the deep self would not be able to tolerate internal conflict between evaluative
attitudes, a care-​based conception can, because it is not irrational to have an incon-
sistent set of cares.
On my view, a theory of attributability need not try to pinpoint a person’s
bottom-​line evaluative regard for φ, at the end of the day, so to speak. Instead, it can
aim to clarify the conditions under which an action reflects upon a person’s substan-
tive evaluative regard for φ, even if that regard conflicts with other deep attitudes
of the agent. Everything hangs on what “substantive” means. I endorse a capacious
conception of “substantive.” Character is reflected in a great many more actions than
most theories suggest. In spirit my claim is close to Shoemaker’s when he offers
aretaic praise for his demented grandfather:

Suppose that when I visit my demented grandfather, he reaches for a chocolate in the candy jar and brings it over to me, despite the great physical
effort it requires. After accomplishing this, he is on to the next things: mov-
ing a bowl on his kitchen counter an inch to the left, gazing out the win-
dow at a preening cardinal, turning to ask me who I am. My expressing real
gratitude to him at that point has no communicative point (it may well
be highly confusing). He cannot remember what he just did, he will not
remember how I respond to what he just did, he cannot identify with that
moments-​ago agent, and he cannot deliberate about the future (and so
identify with some expected future self). He lacks the sense of his agential
identity required to be accountable. Nevertheless, what he did was kind—​
perhaps he took the fact that it would give me pleasure to be a reason to
bring me a chocolate—​and if he does this generally, it reflects on a kind-
ness that is a trait of his character. He is kind. (2015, 98)

In Chapter 4, I offered an account of cares meant to capture this intuition. Here
my claim is that these cares may be disunified and that no one set need be considered
fundamental. Note that this allows that some of an agent’s evaluative attitudes are
more substantive than others. Thus the kinds of aretaic appraisals directed toward
an action reflecting implicit bias, for example, are likely to be considerably weaker
than the kinds of aretaic appraisals directed toward an action reflecting explicitly
endorsed bias. As before (§2), I think the intuition that there must be some funda-
mental attitude, or set of attitudes, that “takes the rap” at the end of the day makes
perfect sense when one is considering the question of how to hold agents respon-
sible for their actions. But it plays no necessary role in thinking about the nature of
the self to which those actions are attributable.19

3.2  The Deep Self in Practice


As in the case of Cameron and colleagues’ study of folk attributions of responsibil-
ity for implicit bias (§2.2), empirical research on this question is illuminating, while
also leaving some questions open. This is particularly true in the moral psychol-
ogy literature focused on the theory of the “good true self ” (Newman et al., 2014,
2015). According to this theory, people commonly believe that everyone has a “true
self,” who they really are “deep down,” and, moreover, that the true self is fundamen-
tally good (according to one’s own subjective perception of “good”). Evidence for
this view is found in studies in which observers are more likely to say that another
person’s true self is reflected in behavior that the observer thinks is morally good
rather than morally bad (Newman et al., 2014). For example, liberals’ versus con-
servatives’ values toward homosexuality predict different judgments about whether
a person’s actions stem from her true self. Liberals, who are more inclined to find
homosexuality morally acceptable, are more likely to attribute same-​sex attraction
to a person’s true self, compared with conservatives, who are more inclined to find
homosexuality morally unacceptable (Inbar et  al., 2012). Moreover, and what is
important for my purposes, George Newman and colleagues (2014) find that these
results are independent of whether people are described as acting on the basis of
their “feelings” or their “beliefs.” This is meant to dispel the idea that the folk more
commonly associate who one “really is” with one’s unmediated, automatic, affective
states (one’s “feelings”) or with one’s reflective, considered avowals (one’s “beliefs”).
Instead, Newman and colleagues find that people associate others’ true selves with
whatever behavior they (the observers) think is morally good.
The findings of this research seem to be bad news for my view, in the sense that
they might warrant, in this context, displacing discussion of the differences between spontaneous and reflective action with discussion of moral judgments as such. In other words, I have spilled a lot of ink arguing about the conditions under which spontaneous action reflects upon who one is. But research on the good true self seems to suggest—so far as questions about the relationship between action and the self are concerned—that whether an action is spontaneous or impulsive matters less than whether the action is perceived as good. This sort of displacement of the central topics of traditional discussion in the literature is argued for by revisionists about moral responsibility (Vargas, 2005; Faucher, 2016; Glasgow, 2016).

19 For research suggesting that the folk hold a related pluralist conception of the self, see Tierney et al. (2014).
But as in Cameron and colleagues’ study, here too we have to look closely at
the measures. Participants in Newman and colleagues’ (2014) study 3, the one
comparing the roles of feelings and beliefs, were directly asked to what extent the
person described in the vignette would “still be true to the deepest, most essential
aspects of his being.” Participants were directly asked about their attitudes toward
the person’s “deepest” self, in other words. Whatever the answers people give to this
question, those answers won’t tell us whether people think that particular kinds of
action reflect upon the agent herself, independent of questions about the agent’s
deepest self. In other words, what research on the “good true self ” shows is that
when people are asked about others’ deepest selves, they primarily use their own
moral values for guidance. But assume that I’m correct that the “deep” self can be
fractured and mosaic, that both my positive and negative attitudes toward φ can be
part of my deep self (or, as I would put it, can count among my cares). The good true
self literature doesn’t illuminate how the folk would make judgments about others’
selves in this sense.
This is not a knock against the theory of the good true self. It would be interest-
ing to investigate the role of participants’ moral values in judgments about others’
fractured selves, that is, judgments about others’ selves in the attributability sense
I developed in Chapter 4. For example, in testing intuitions about a Huck Finn–​like
character, participants might be asked whether deep down in Huck’s soul he truly
cares about Jim, truly believes in slavery, or is truly divided. It would be unsurprising
to find that moral values play a key role in judgments like these, given the large body
of moral psychology research demonstrating the pervasiveness of moral values in
judgments of all sorts, such as judgments about intentional action (Knobe, 2003),
moral luck (Faraci and Shoemaker, 2014), and the continuity of personal identity
(Strohminger and Nichols, 2014). I  suspect that this research can be integrated
with, rather than represent a challenge to, the kind of attributionist theory for which
I have been arguing.

4.  Intention, Awareness, and Rational Relations


Thus far, I’ve argued that actions that reflect upon our cares are attributable to us
and that the conditions under which an action is attributable to an agent are not the
same as the conditions under which an agent ought to be held responsible for an
action. Moreover, I’ve argued that our cares can be in conflict with one another. This
leads to a pluralistic or mosaic conception of the self.
But surely, a critic might suggest, there must be something to the idea that an
agent’s intentions to act, and her awareness of those intentions or of other relevant
features of her action, play a core role in the economy of the self. When we consider
one another in an aretaic sense, we very often focus on each other’s presumed inten-
tions and conscious states at the time of action. Intentions and self-​awareness seem
central, that is, not just to the ways in which we hold one another responsible (e.g.,
in juridical practices), but also to the ways in which we assess whether an action
tells us anything about who one is. What role do these considerations play in my
account? I consider them in turn: intentions first, then self-​awareness. Then, I con-
sider an influential alternative: if neither intentions to act nor awareness of one’s
reasons for action are necessary features of attributable actions, what about the
relatively weaker notion that for an action to be attributable to an agent, the agent’s
relevant attitudes must at least be in principle susceptible to being influenced by the
agent’s rational judgment? Mustn’t attributable actions stem from at least in prin-
ciple judgment-​sensitive states?

4.1 Intentions
Judging whether or not someone acted on the basis of an intention is indeed very
often central to the kind of aretaic appraisals we offer that person.20 The subway
rider who steps on your toes on purpose is mean-​spirited and cruel, whereas the one
who accidentally crunches your foot while rushing onto the train is more like moder-
ately selfish. As it pertains to my account of spontaneous action, the question is not
whether an agent’s intention to φ often matters, but rather whether it must matter,
in the sense of playing a necessary role in the way attributability judgments ought to
be made. But what about long-​term intentions? Consider again Obama’s grin. One
might concede that this micro-​action was not reflective of an occurrent intention
to reassure the chief justice, but insist instead that Obama’s automatic dispositions
to smile and nod were reflective of a long-​term, standing intention, perhaps to be
a social person. In this case, one might say that Obama’s spontaneous act of inter-
personal fluency was attributable to him, but on the basis of its reflecting upon his
long-​term intentions. But emphasizing long-​term intentions in this kind of context
is problematic, for at least three reasons.
First, it is not obvious how to pick out the relevant intention with which one’s
gestures are to be consistent without making it either impracticably specific or vacuously general. Of which long-term intentions are Obama’s reactions supposed to be reflective? “To be sociable” is too general to tell an agent much about how to handle a flubbed Oath of Office; and “Reassure chief justices with a smile when they flub Oaths of Office” is uselessly specific.

20 Here I mean “intentional” in the ordinary sense, of doing A in order to bring about B (e.g., smiling intentionally in order to put someone at ease).
Second, the over- or underspecificity of such intentions suggests their causal
inefficacy. In order for the objection I’m considering to get a grip, some appropriate
causal relationship must hold between a long-​term intention and one’s spontane-
ous action. One’s intentions must be in charge. Perhaps a long-​term intention could
make itself felt “in the moment” if an agent had cultivated habits concordant with
it in advance. With enough practice, a good Aristotelian can make it the case that
the right response springs forth in the face of some salient contextual cue. But mak-
ing it the case that the intention to be sociable guided one in the moment would
radically underdetermine the range of possible ways an agent might respond dur-
ing a flubbed Oath of Office. The intention is simply too general to help an agent
respond appropriately to the contextual particulars. And an agent could not feasibly
avoid this problem by identifying and practicing for every possible contingency. Of
course, both Obama and Roberts did practice beforehand, and presumably they did
consider ways things might go wrong during the inauguration. Indeed, to the extent
that Obama holds a relevant long-​term intention to be sociable, doesn’t just about
everyone who aspires to public office, including Roberts? (Doesn’t almost every-
one?) And yet many people, similarly situated, would have handled the situation
much less gracefully. A similarly situated agent who did not respond as Obama did
would not thereby have failed to accomplish the intention to be sociable. Nor should
we attribute Obama’s success to an ability to harmonize such an intention with his
spontaneous reactions. What makes this implausible is the difficulty of imagining
how such intentions could be discernibly operative in the moment.
Third, to the extent that you find something admirable in how Obama handled
the situation, consider what the actual source of that admiration is. Is Obama praise-
worthy because he is really good at harmonizing his winks and nods with his long-​
term intentions? Is this the source of our admiration in such cases? Even if some
relevant causal connection does obtain between Obama’s long-​term intentions and
his spontaneous reactions, we would be forced to think that, in this case, the fact of
that connection is the reason we find the spontaneous reaction admirable. It is more
likely that we notice Obama’s social adeptness because we sense that, if placed in a
similar situation, we would be far less skillful.

4.2 Awareness
Aretaic appraisals of spontaneous action don’t necessarily trace back to an agent’s
apparent long-​term intentions to act in some way. But what about an agent’s self-​
awareness? In some sense it may seem strange to say that character judgments
of others are appropriate in cases where the other person is unaware of what
she is doing. But my account of attributability does not include conscious self-​
awareness among the necessary conditions of an action’s reflection on an agent.
Should it?
It is important to separate different senses in which one might be self-​aware in
the context of acting. One possibility is that one is aware, specifically, of what one is
doing (e.g., that I’m staring angrily at you right now). Another possibility is that one
is aware of the reasons (qua causes) for which one is doing what one is doing (e.g.,
that I’m staring angrily at you because you insulted me yesterday). A third possibility
is that one is aware of the effects of one’s actions (e.g., that my staring will cause you
to feel uncomfortable).21 Each of these senses of self-​awareness bears upon attribut-
ability. When one is self-​aware in all three senses, the relevant action surely reflects
upon who one is in the attributability sense. But is awareness in any of these senses
necessary?
In advancing what he calls the “Consciousness Thesis,” Levy (2012, 2014) argues
that a necessary condition for moral responsibility for actions is that agents be
aware of the features of those actions that make them good or bad. They must, in
other words, have a conscious belief that what they are doing is right or wrong. For
example, in order for Luke to be responsible for accidentally shooting his friend in
Loaded Gun (Chapter 4), it must be the case that Luke was aware, at the time of the
shooting, that showing the gun to his friend was a bad thing to do, given the possibil-
ity of its being loaded. If Luke was completely unaware of the facts that would make
playing with the gun a stupid idea, then Luke shouldn’t be held morally responsible
for his action. Levy’s argument for this derives from his view of research on the
modularity of the mind. In short, the argument is that minds like ours are modular,
that is, they are made up of independently operating systems that do not share infor-
mation (Fodor, 1983). Modularity presents an explanatory problem: How is it that
behavior so often seems to integrate information from these ostensibly dissociated
systems? For example, if our emotions are the products of one mental module, and
our beliefs are the products of another, then why doesn’t all or most of our behavior
look like an incomprehensible jumble of belief-​driven and emotion-​driven outputs
that are at odds with one another? Everyone recognizes that this is sometimes the
case (i.e., sometimes our beliefs and our feelings conflict). But the question is how
the outputs of mental modules ever integrate. Levy’s answer is that consciousness
performs the crucial work of integrating streams of information from different men-
tal modules. Consciousness creates a “workspace,” in other words, into which dif-
ferent kinds of information are put. Some of this information has to do with action
guidance, some with perception, some with evaluative judgment, and so on. This
workspace of consciousness is what allows a person to put together information
about an action that she is performing with information about that action being right or wrong. In Luke’s case, then, what it means to say that he is aware of the features of his action that make the action good or bad is that his mind has integrated information about what he’s doing—playing with a gun—with information about that being a bad or stupid thing to do—because it might be loaded.

21 I adapt these three possibilities from Gawronski and colleagues’ (2006) discussion of the different senses in which one might be aware of one’s implicit biases.
One of Levy’s sources of evidence for this view is experiments showing that self-
awareness tends to integrate otherwise disparate behaviors. For example, in cognitive
dissonance experiments (e.g., Festinger, 1956), agents become convinced of particular
reasons for which they φed, which happen to be false. But after attributing these rea-
sons to themselves, agents go on to act in accord with those self-​attributed reasons.
Levy (2012) interprets this to show that becoming aware of reasons for which one is
acting (even if those reasons have been confabulated) has an integrating effect on one’s
behavior. Levy ties this integrating effect of self-​awareness to the conditions of respon-
sible agency. Put simply, his view is that we can be thought of as acting for reasons only
when consciousness is playing this integrating role in our minds. And we are morally
responsible for what we do only when we are acting for reasons.
Levy doesn’t distinguish between being responsible and holding respon-
sible in the same way as I do, so it is not quite right to say that Levy’s claim is
one about the necessary conditions of attributability (as I’ve defined it). But
the Consciousness Thesis is meant as a claim about the minimal conditions of
agency. Since all attributable actions are actions as such, any necessary condition
of agency as such should apply mutatis mutandis to the minimal conditions of
attributability. So could some version of the Consciousness Thesis work to show
that actions are attributable to agents just in case they consciously believe that
they are doing a good or bad thing?
One worry about the Consciousness Thesis is that it relies upon Levy’s global
workspace theory of consciousness, and it’s not clear that this theory is cor-
rect. Some have argued that no such global workspace exists, at least in the terms
required by Levy’s account (e.g., Carruthers, 2009; King and Carruthers, 2012).
Even if the global workspace theory is true, however, I think there are reasons to
resist the claim that consciousness in Levy’s sense is a basic condition of attribut-
ability (or of responsible agency in the attributability sense).
Consider star athletes.22 In an important sense, they don’t seem to be aware of
the reasons their actions are unique and awe-​inspiring. As I noted in Chapter 1, Hall
of Fame NFL running back Walter Payton said, “People ask me about this move or
that move, but I don’t know why I did something. I just did it.”23 Kimberly Kim, the
youngest person ever to win the US Women’s Amateur Golf Tournament, said, “I

don’t know how I did it. I just hit the ball and it went good.”24

22 See Brownstein (2014) and Brownstein and Michaelson (2016) for more in-depth discussion of these examples. See also Montero (2010) for critical discussion of cases that do not fit the description of athletic expertise that I give.
23 Quoted in Beilock (2010, 224).

Larry Bird, the great
Boston Celtic, purportedly said, “[A lot of the] things I  do on the court are just
reactions to situations . . . A lot of times, I’ve passed the basketball and not realized
I’ve passed it until a moment or so later.”25 Perhaps the best illustration arises in an
interview with NFL quarterback Phillip Rivers, who fails pretty miserably to explain
his decisions for where to throw:

Rivers wiggles the ball in his right hand, fingers across the laces as if ready
to throw. He purses his lips, because this isn’t easy to articulate. “You
always want to pick a target,” he says. “Like the chin [of the receiver]. But
on some routes I’m throwing at the back of the helmet. A lot of it is just
a natural feel.” Rivers strides forward with his left leg, brings the ball up
to his right ear and then pauses in midthrow. “There are times,” he says,
“when I’m seeing how I’m going to throw it as I’m moving my arm. There’s
a lot happening at the time. Exactly where you’re going to put it is still
being determined.” Even as it’s leaving the fingertips? More head-​shaking
and silence. Finally: “I don’t know,” says Rivers. “Like I said, there’s a lot
going on.” (Layden, 2010)

What’s perplexing is that these are examples of highly skilled action. A form of
action at its best, you might even say.26 This suggests that skilled action lies within
the domain of attributability. Certainly we praise athletes for their skill and treat
their abilities as reflecting on them. And yet these athletes report having very little
awareness, during performance, of why or how they act. They disavow being aware
of the reasons that make their moves jaw-​dropping. Thus there seems to be tension
between these athletes’ reports and the claim of the Consciousness Thesis (i.e., that
they must be aware of their reasons for action, and in particular their right-​making
reasons for action, if they are to deserve the praise we heap on them). Star athletes
present a prima facie counterexample to the Consciousness Thesis.
One way to dissolve this tension would be to say that star athletes’ articulacy
is irrelevant. What matters is the content of their awareness during performance,
regardless of whether these agents can accurately report it. I agree that the content of
awareness is what matters for evaluating the proposal that attributability requires
consciousness of the right-making reasons for one’s action. But articulacy is not irrelevant to this question. What agents report offers a defeasible indication of their experience.

24 Quoted in Beilock (2010, 231), who follows the quote by saying, “[Athletes like Kim] can’t tell you what they did because they don’t know themselves and end up thanking God or their mothers instead. Because these athletes operate at their best when they are not thinking about every step of performance, they find it difficult to get back inside their own heads to reflect on what they just did.”
25 Quoted in Levine (1988).
26 See Dreyfus (2002b, 2005, 2007a,b) for discussion of the idea that unreflective skilled action is action “at its best.”

Coaches’ invocations to “clear your mind” help to corroborate these
reports. For example, cricketer Ken Barrington said, “When you’re playing well you
don’t think about anything and run-​making comes naturally. When you’re out of
form you’re conscious of needing to do things right, so you have to think first and act
second. To make runs under those conditions is mighty difficult.”27 Barrington’s is
not an unusual thought when it comes to giving instructions for peak performance
in sports. Branch Rickey—​the Major League Baseball manager who helped break
the color barrier by signing Jackie Robinson—​purportedly said that “an empty head
means a full bat.”28 Sports psychologists have suggested as much too, attributing
failures to perform well to overthinking, or “paralysis by analysis.”
Surely some of these statements are metaphorical. Barrington’s mind is certainly
active while he bats. He is presumably having experiences and focusing on some-
thing. But research on skill also appears to corroborate what star athletes report and
coaches teach. One stream of research focuses on the role of attention in skilled
action. Do experts attend—​or could they attend—​to what they are doing during
performance? According to “explicit monitoring theory” (Beilock et al., 2002), the
answer is no, because expert performance is harmed by attentional monitoring of
the step-​by-​step components of one’s action (e.g., in golf, by attending to one’s back-
swing). Experts can’t do what they do, in other words, if they focus on what exactly
they’re doing. Dual-​task experiments similarly suggest that, as one becomes more
skilled in a domain, one’s tasks become less demanding of attention (Poldrack et al.,
2005).29
Critics of explicit monitoring theory have pointed out that expert performance
might not suffer from other forms of self-​directed attention, however. For exam-
ple, David Papineau (2015) argues that experts avoid choking by focusing on their
“basic actions” (i.e., what they can decide to do without having to decide to do any-
thing else) rather than on the components of their actions. But focusing on one’s
basic actions—​for example, that one is trying to score a touchdown—​isn’t the same
as focusing on the right-​making features of one’s action. “Scoring a touchdown” isn’t
what makes Rivers’s actions good or praiseworthy; any hack playing quarterback is
also trying to score a touchdown (just as many people aim to be sociable but few
display interpersonal fluency like President Obama). Awareness of these coarse-grained reasons isn’t tantamount to awareness of the right-making features of one’s action. Nor is it tantamount to awareness of the reasons that explain how these agents perform in such exceptionally skilled ways, which is what would be required to reconcile skilled agency with the consciousness requirement.

27 Quoted in Sutton (2007, 767).
28 Reported in “Manuel’s Lineups Varied and Successful,” http://www.philly.com/philly/sports/phillies/20110929_Manuels_lineups_varied_and_successful.html.
29 Dual-task experiments involve performing two tasks at once, one of which requires skill. People who are skilled in that domain perform better at both tasks because, the theory suggests, the skill-demanding task requires less attention for them. Russell Poldrack and colleagues (2005) support this interpretation with neural imaging data.
Research on improvisational action in other domains of expertise is suggestive
too. In a review of Levy (2014), Arpaly (2015) points out that improvisational jazz
musicians simply don’t have enough time to attend to and deliberate about the
goodness or badness of their decisions.30 Brain research seems to tell a related story.
Neuroimaging studies of improvisational jazz musicians show substantial downreg-
ulation of the areas of the brain associated with judgment and evaluation when the
musicians are improvising (and not when they are performing scored music; Limb
and Braun, 2008). One might assume that awareness of the right-​and wrong-​mak-
ing features of one’s actions is tied to evaluation and judgment.
Another way to dissolve the tension between these exemplars of skill and the
Consciousness Thesis might be to focus on what athletes do when they practice.
Surely in practice they focus on their relevant reasons for action. Kim might adjust
her grip and focus explicitly on this new technique. Indeed, she’ll have to do so
until the new mechanics become automatic. So perhaps the role of right-​making
reasons is indexed to past practice. Perhaps star athletes’ reasons for action have
been integrated in Levy’s sense in the past, when they consciously practiced what
to do. But this won’t work either. As Arpaly puts it, if it were the case that past prac-
tice entirely explains performance, then all one would need to do to become an
expert performer is “work till they drop and then somehow shake their minds like a
kaleidoscope” (2015, 2). But clearly this isn’t the case, as those who have practiced
till they drop and still failed to perform can attest. So we are still lacking an under-
standing of what makes star performance distinct. It’s not the integration of reasons
for action of which one was aware when practicing. (Relatedly, sometimes people
exhibit expertise without ever having consciously or intentionally practiced at all.
Arpaly discusses social fluency—​like Obama’s grin—​in similar terms.)
Levy’s Consciousness Thesis is rich and intriguing, and represents, to my mind,
the most persuasive claim on the market for a consciousness requirement for moral
responsibility (and, inter alia, attributability).31 Nevertheless, barring a more per-
suasive alternative, I proceed in thinking that actions can reflect on agents even if
those agents aren’t focally conscious of what they’re doing.

30 I also discuss musical improvisation in Brownstein (2014).
31 Levy (2016) discusses cases of skilled action like these and argues that in them agents display a kind of subpersonal control sufficient for holding them responsible. This is because, he argues, the agent displays a “patterned” sensitivity to reasons. See Chapter 4, footnote 27, for an elaboration on this claim. Levy applies this argument about patterned sensitivity to reasons to implicit attitudes, pursuing the possibility that even if agents lack conscious awareness of the moral quality of their action of the kind that renders them morally responsible for that action, perhaps they can still be held responsible because their attitudes display the right kind of patterned subpersonal control. Levy ultimately rejects both possibilities.

4.3  Rational Relations


Some might think that intentions and awareness are important for attributing
actions to the self because intentions and awareness are reliable representatives
of rational cognitive processes. Others have argued more directly that only those
actions that reflect upon rational cognitive processes are attributable to agents.
One prominent version of this kind of theory is Smith’s “Rational Relations View”
(2005, 2008, 2012). Her view is a version of what some have called “Scanlonian”
responsibility, in reference to Thomas Scanlon’s (1998) influential work. I focus on
Smith’s work rather than Scanlon’s because Smith speaks directly to the challenge,
for any view that understands responsible agency in terms of an agent’s rational
judgments, of explaining actions that conflict with agents’ apparent judgments and
beliefs. After describing Smith’s view, I’ll argue that it does not adequately explain
these kinds of actions.32 The upshot is that my account of cares in Chapter 4—​as
including implicit attitudes that may be functionally isolated from agents’ rational
judgments—​remains unthreatened. The broader upshot is that actions that stem
from states that are isolated from agents’ rational faculties can nevertheless reflect
upon who those agents are.
Smith understands responsibility in terms of answerability. To be clear, she does
not endorse the distinction I drew earlier (§2.1) between three kinds of responsi-
bility (attributability, answerability, and accountability). For Smith, responsibility
just is answerability. The terminology can be confusing, however, because Smith
takes her view to be an attributionist theory of moral responsibility. As I  under-
stand her view, it is that actions for which we are answerable are just those actions
that reflect on us as moral agents. Attributability is answerability is responsibility, in
other words.
Smith argues that answerability—​the ability, in principle, to answer for one’s atti-
tudes with justifying reasons—​acts as a litmus test for determining whether that
attitude stems from the agent’s evaluative—​or what she calls rational—​judgments.
Your attitudes stem from rational judgments, in other words, when you can in prin-
ciple be asked to defend that attitude with reasons, according to Smith. Notice that
I said “attitudes” here. Smith focuses on responsibility for attitudes, not actions. On
her view, then, those attitudes that stem from rational judgments can be thought of
as making up a person’s responsibility-​bearing self. They do so because evaluative or
rational judgments represent an agent’s reasons for holding a particular attitude. She
writes, “To say that an agent is morally responsible for some thing . . . is to say that
that thing reflects her rational judgment in a way that makes it appropriate, in prin-
ciple, to ask her to defend or justify it” (2008, 369). As I understand the “Rational
Relations View” (RRV) of responsibility as answerability, then:

(1) RRV: An agent A is morally responsible for some attitude B iff B reflects A’s rational judgments about reasons for holding B.

Despite the fact that Smith is focused on attitudes, RRV can be unproblematically extended to cover responsibility for actions too:

(2) RRV(Action): An agent A is morally responsible for some action B iff B reflects A’s rational judgments about reasons for performing B.

32 Smith develops her view in terms of responsibility for attitudes, not actions per se, as discussed later.

Moreover, RRV is also connected to a claim about what it means for an atti-
tude to reflect upon an agent. It is incumbent upon Smith to clarify this, as one
might wonder what distinguishes the focus on rational judgments in RRV from
the voluntarist’s way of tying moral responsibility to an agent’s deliberative or
conscious choices.33 This is important because RRV is meant as an alternative
to voluntarism about moral responsibility. Smith’s tactic is to offer an expansive
definition of those states that reflect our rational assessments of what we have
reason to do, a definition sufficiently expansive to include deliberative and con-
scious choices but also desires, emotions, and mere patterns of awareness. For
example, Smith writes, “Our patterns of awareness—​e.g., what we notice and
neglect, and what occurs to us—​can also be said to reflect our judgments about
what things are important or significant, so these responses, too, will count as
things for which we are responsible on the rational relations view” (2008, 370).
Sripada (pers. com.) labels Smith’s expansive definition of states that reflect an
agent’s rational judgments “in principle sensitivity” (IPS). The claim of IPS is
that mere patterns of awareness and the like reflect agents’ rational judgments
because the psychological processes implicated in those patterns of awareness
are, in principle, open to revision in light of the agent’s reason-​guided faculties.
According to Smith:

In order for a creature to be responsible for an attitude . . . it must be the kind of state that is open, in principle, to revision or modification through
that creature’s own processes of rational reflection. States that are not even
in principle answerable to a person’s judgment, therefore, would not be
attributable to a person in the relevant sense. (2005, 256)

As I formulate it:

(1) IPS: Some attitude B reflects agent A’s rational judgments iff B is in principle causally susceptible to revision by A’s processes of rational reflection.

33 I am indebted to Holly Smith’s (2011, 118, n 9) articulation of this worry.

And for actions:

(2) IPS(Action): Some action B reflects agent A’s rational judgments iff B is in principle causally susceptible to revision by A’s processes of rational reflection.
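
One way to display the logical shape of the combined view is as a chain of biconditionals; the notation here is mine, not Smith’s:

    \begin{align*}
    \text{RRV:}\quad & \mathit{Resp}(A,B) \leftrightarrow \mathit{Reflects}(B,A)\\
    \text{IPS:}\quad & \mathit{Reflects}(B,A) \leftrightarrow \Diamond\,\mathit{Revisable}(B,A)\\
    \text{Hence:}\quad & \mathit{Resp}(A,B) \leftrightarrow \Diamond\,\mathit{Revisable}(B,A)
    \end{align*}

Here Resp(A, B) says that A is morally responsible for B, Reflects(B, A) that B reflects A’s rational judgments, and the diamond marks “in principle”: Revisable(B, A) need only be possibly brought about by A’s processes of rational reflection, not occurrently exercised. On this rendering, an occurrent failure of revision does not falsify the right-hand side; only in-principle causal isolation does.
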

IPS clarifies how one might think that agents who act spontaneously might still
fall within the purview of RRV. On Smith’s view, it does not undermine attributabil-
ity to show that some attitude is occurrently unresponsive to the agent’s reasoned
judgments. Rather, attributability is undermined only if an attitude is in principle
causally isolated from the agent’s rational reflections. What is crucial to see is how
IPS saves RRV (and their correlative postulates for action) in cases of spontane-
ous action. It isn’t enough for Kim to say, “I don’t know how I did it,” on this view;
Kim might still be answerable for her putting nonetheless. Analogously, an avowed
egalitarian who behaves in some discriminatory way cannot be exculpated from
responsibility simply by saying that her discriminatory behavior conflicts with what
she judges she ought to do. In other words, the implicitly biased avowed egalitarian
might be tempted to say, “My evaluative judgment is that discrimination is wrong;
therefore, it is wrong in principle to ask me to defend what I have done, because
what I have done does not reflect my evaluative judgments.” IPS(Action) cuts off
this line of defense.
There is something deeply intuitive about this approach. There does indeed
seem to be a connection between those actions that I can rightly be asked to defend,
explain, or answer for in some way and those being the actions that reflect on me. But
if something like this is right—​that attributability is answerability—​then something
must be wrong with the care-​based account of attributability I offered in Chapter 4.
In the case of virtuous spontaneity, it makes no sense to ask President-​Elect Obama
to answer for his reassuring grin or Kimberly Kim to answer for the perfection of
her putts. Precisely what agents like these say is that they have no answer to the
question “Why did you do that?”34 So, too, in cases of the vices of spontaneity. As
Levy puts it, in the case of implicit bias “it makes no sense at all to ask me to justify
my belief that p when in fact I believe that not-​p; similarly, it makes no sense to ask
me to justify my implicit racism when I have spent my life attempting to eradicate
it” (2011, 256).
Despite its intuitive appeal, I do not think attributability should be understood
as answerability in Smith’s sense. My first reason for saying this stems from cases in
which it seems right to attribute an attitude or action to an agent even if that attitude
or action fails to reflect her rational judgment. These are the same cases I discussed
earlier (§2.1), proposed by Shoemaker (2011) and Jaworska (1999, 2007b), having

to do with spider-phobes, parents who continue to love their terrible children, Alzheimer’s patients, and young children.

34 See Brownstein (2014).
In reply to this kind of critique, Smith discusses the case of implicit racial bias.
She shares my view that these implicit attitudes are attributable to the agent, even if
they conflict with the agent’s reflective egalitarian judgment. But Smith thinks this
is the case because our nonconscious “takings” and “seemings” are often, in fact,
judgment-​sensitive:

I think it is often the case . . . that we simply take or see certain things as counting in favor of certain attitudes without being fully aware of these
reasons or the role they play in justifying our attitudes. And I think these
normative “takings” or “seemings” can sometimes operate alongside more
consciously formulated judgments to the effect that such considerations
do not serve to justify our attitudes. So, for example, a person may hold
consciously egalitarian views and yet still find herself taking the fact of a
person’s race as a reason not to trust her or not to hire her. In these cases,
I think an answerability demand directed toward her racist reactions still
makes perfect sense—​a person’s explicitly avowed beliefs do not settle
the question of what she regards as a justifying consideration. (2012,
581, n 10)

I think Smith’s conclusion is right but her justification is wrong. Implicit biases like
these do reflect on agents (as I  argued in the preceding chapter). But this is not
necessarily because these biases are judgment-​sensitive. Why think that Smith’s
vignette describes someone who takes a person’s race as a reason not to trust or
hire her?
Part of the problem, I think, is that Smith’s claim that attitudes that are attribut-
able to agents must be in principle causally susceptible to revision by an agent’s pro-
cesses of rational reflection (i.e., IPS) is both too vague and too strong. It is vague
because it is in principle hard to tell what counts as a process of rational revision.
Indeed, it is hard to see how anything at all can be ruled out as being in principle
susceptible to rational revision. Suppose at some point in the future psychologists
develop techniques that allow agents to overcome the gag reflex, or even to stop
the beating of their own heart, using nothing but certain “mindfulness” exercises.
Would this indicate that these processes, too, are in principle susceptible to rational
revision?
IPS is too strong because it is sometimes the case that agents can change their
attitudes and behavior in such a way that seems to reflect upon them as agents, with-
out those changes counting as responsiveness to rational pressure per se. Sometimes
we do this by binding ourselves by force, like Ulysses’ tactic for resisting the Sirens.
But this action is attributable to Ulysses simply because the change it wrought in his
behavior traces back to his “pre-​commitment” (Elster, 2000). That is, Ulysses made
a deliberative choice to instruct his crew to bind him to the mast when they were to
pass the island of the Sirens. This case wouldn’t speak against Smith’s view. But there
are many cases that do. I’ll call these cases of self-​regulation by indirect means.35
I discuss these in more depth in Chapter 7. Briefly, though, there is, for example, a
slew of “evaluative conditioning” techniques that enable agents to shift the valence
of their stored mental associations through repeated practice. A  person who has
learned to associate “Bob” with “selfishness” and “anger” can, through training,
learn to associate Bob with “charity” and “placidness,” for example. People can also
shift their “statistical map” of social stereotypes by repeated exposure to counterste-
reotypes. One striking field study found, for example, that women undergraduates’
implicit associations between leadership and gender were significantly altered in
proportion to the number of women science professors they had (i.e., women in
counterstereotypic leadership roles in the sciences; Dasgupta and Asgari, 2004). At
the start of the year, the students implicitly associated women with concepts related
to nurturance more than with concepts related to leadership. (This is a common
finding.) After one year, the women undergraduates who took more classes with
women scientists more strongly associated women with leadership concepts on
measures of their implicit attitudes.
In these examples, there is little reason to think that agents’ implicit attitudes
shift in response to rational pressure. That is, there is little reason to think that these
successful cases of implicit attitude change are successful because the relevant atti-
tudes are judgment-​sensitive, in Smith’s sense.36 Indeed, in the case of Dasgupta and
Shaki Asgari’s study, while women undergraduates’ implicit attitudes changed when
they took more classes from women scientists, their explicit attitudes didn’t change.
Some of the women undergraduates continued to explicitly endorse the view that
women possess more nurturance than leadership qualities, even after they ceased to
display this association on indirect measures.

5. Conclusion
In this chapter, I’ve situated my account of the relationship between implicit atti-
tudes and the self with respect to familiar ways of thinking about attributability
and responsibility. I’ve described how I  understand what it means for an action
to “reflect on” the self; I’ve distinguished between being responsible and various
forms of holding oneself or others responsible for actions; I’ve offered a picture of
the deep self as fractured rather than necessarily unified; and I’ve explained why
I think spontaneous actions can reflect on agents even when agents have no relevant

35  I borrow the term “indirect self-regulation” from Jules Holroyd (2012). See also Washington and
Kelly (2016) on “ecological control,” a term they borrow from Clark (2007).
36  See the discussion in Chapter 3 on implicit attitudes and rational processing, such as inference.

intentions to act as they do, when they are unaware of what they are doing and why,
and even when their attitudes are isolated from their rational judgments. I’ve tried
to make sense of these claims in terms of both freestanding philosophical theories
and research on relevant folk attitudes. Of course, arguably the most salient con-
cerns about spontaneous and unplanned actions that arise in folk conversation are
ethical concerns. How can we improve our implicit attitudes? Under what condi-
tions can they become more morally credible? These are the questions I address in
the next two chapters.
PART THREE

ETHICS
6

Deliberation and Spontaneity

D’Cruz (2013, 37–​38) imagines the following conversation:

Alfred: What shall we have for supper tonight, dear?
Belinda: I have an idea: let’s forget about cooking supper and just eat ice-cream!
Alfred: But we have plenty of groceries in the fridge that we should use before
they spoil.
Belinda: They won’t spoil in one day. We can cook with them tomorrow.
Alfred: I guess you’re right. But surely eating ice-​cream for supper isn’t good for
our cholesterol levels?
Belinda: But Alfred, we so rarely do such a thing. Skipping supper just once isn’t
going to kill us.
Alfred: I guess you’re right. But what if the kids come home and there’s no ice-​
cream left? They might be cross.
Belinda: Alfred, they’ll understand when we tell them that their parents have
decided to go on a little binge. They’ll probably find it quite funny.
Alfred: I guess you’re right again, Belinda, all things considered. Our diet won’t
be seriously compromised, the groceries won’t be wasted, and the children
won’t be cross. Yes, you’re quite right. Ice-​cream for dinner it is!
Belinda: Oh, Alfred, forget about it. We’ll just put in a roast and boil up some
cabbage.

As D’Cruz puts it, Belinda has better “instincts” for when to act on a whim, while Alfred
seems overly cautious and even stultified. He acts like “a Prufrock.” By thinking so care-
fully about what to do, Alfred undermines Belinda’s suggestion, which at its core was to
do something without thinking too carefully about it.
D’Cruz’s vignette must be understood against the backdrop of a familiar and
appealing idea. The idea is that deliberation is necessary and central to ethical
action. By ethical action, I mean action that promotes goodness in the agent’s life,


considered in the broadest possible terms.1 By deliberation, I mean “bringing to
mind ideas or images meant to have some rational relation to the topic being con-
sidered, in the service of reaching a conclusion about what to think or do” (Arpaly
and Schroeder, 2012, 212). The appeal of the idea that deliberation is necessary and
central to ethical action is reflected in Alfred’s dilemma. How can he know whether
it is really a good idea to eat ice cream for dinner without thinking carefully about it?
His careful consideration of the reasons for and against taking Belinda’s suggestion
seems, to many people, like an obvious thing to do. He must “step back” from his
immediate inclinations in order to consider them in the light of his broader goals
and values.
But the contrast between Alfred and Belinda appears to challenge this idea.
Belinda doesn’t deliberate and seems better off for it. This suggests that, in some
contexts at least, deliberation may have a limited role to play in making our sponta-
neous reactions more virtuous. More specifically, deliberation may be neither nec-
essary nor always recommended for acting well.2
The aim of this chapter is to argue that this is so. I begin by arguing that delib-
eration cannot be foundational for action (§1). Here I  synthesize arguments
made previously by Railton and by Arpaly and Timothy Schroeder. Next, I elab-
orate on cases in which agents appear to act ethically in spite of their deliberative
reasoning (§2). People like Cecil Duly, LaPiere’s hoteliers and restaurateurs, and,
most notably, Huckleberry Finn, present prima facie counterexamples to the idea
that deliberation is necessary for ethical action. However, the prima facie sugges-
tion these cases present is limited, because these agents plausibly would be better
off, ethically speaking, if they were better deliberators. Cases like Huck’s show
that deliberation can go awry, in other words, but not that practical deliberation,
when done properly, is problematic. But I  introduce another set of cases that
show that deliberation itself—​even perfect deliberation—​can undermine ethical
action (§3). The considerations I introduce in my discussion of these cases apply
as well to the question of how to combat the vices of spontaneity, like implicit
bias. Here, too, the relationship between spontaneity and deliberation is fraught
(§4). This suggests the need for a different perspective on implicit attitude

1  I will not delve into theories of what constitutes goodness in an agent’s life. All I mean by “good-
ness” is what appears to me to be good. For example, it appears to me that implicit attitudes that pro-
mote egalitarian social practices are good, while implicit attitudes that promote discriminatory social
practices are bad. Bennett takes a similar approach in his discussion of the relationship between “sym-
pathy” and “bad morality” (in cases like those of Huckleberry Finn). Bennett writes, “All that I can
mean by a ‘bad morality’ is a morality whose principles I deeply disapprove of. When I call a morality
bad, I cannot prove that mine is better; but when I here call any morality bad, I think you will agree with
me that it is bad; and that is all I need” (1974, 123–​124).
2  Thus the spirit in which my argument is made is similar to that of Doris (2015), who writes, “I insist
that reflection is not necessary for, and may at times impede, the exercise of agency, while self-ignorance
need not impede, and may at times facilitate, the exercise of agency.”

change, one not centered on deliberation. I develop this alternative framework
in Chapter 7.
Before moving on, however, I want to emphasize the scope of my claim. I will
not argue that deliberation is useless or unimportant. For one, a lot of delibera-
tion went into the writing of this book—​although you will have to judge to what
end. And surely stepping back to evaluate one’s reasons for action, as Alfred does,
is sometimes the right thing to do. More broadly, practical deliberation is inelim-
inably central to human experience. One may be persuaded that human beings have
a great deal more in common with nonhuman animals than was previously thought
(cf. de Waal, 2016), yet recognize the distinctiveness in human experience of step-
ping back from a situation, contemplating a suite of reasons for and against doing
something, forecasting the outcomes of different decisions, discussing those hypo-
thetical outcomes in language with others, making sophisticated inferences about
combinations of likelihoods, weighting competing norms, and so on. The impor-
tance of these sorts of conceptual capacities for our identity as a species, and for
ethical decision-​making, has been long emphasized, and rightly so.
But this is not to say that deliberating in these ways is always the right thing to
do. Deliberation can be faulty, and even when it isn’t, it can have costs. Moreover,
as I argue later, even when deliberation appears to be playing a central role in guid-
ing our decisions and behavior, things may in fact be otherwise. It is important to
emphasize the background that frames this discussion. In the Western tradition
at least, practical deliberation has mostly been understood as necessary for ethi-
cal action and, not infrequently, as representing the paradigmatic route to doing
what we ought. There are important historical exceptions, of course. Aristotelian
phronesis might be one, depending on how you interpret it. Moral sentimentalism
is another. But the overwhelming emphasis otherwise has been on the importance
of thinking carefully in order to act rightly. My aim is to push back against this in
order to highlight the sometimes positive relationship between being spontaneous
and ethical. As Arpaly put it, “A moral psychology that feeds on an unbalanced diet
of examples in which people form beliefs by reflection and practical judgments by
practical deliberation risks missing out on important parts of moral life” (2004, 25).

1. Nondeliberative Action
My claim that deliberation is neither necessary nor always recommended for ethi-
cal action would be a nonstarter if deliberation were foundational for action. If we
had to deliberate in order to act for reasons, in other words, then we would have to
deliberate in order to act for ethical reasons. This idea—​that practical deliberation
is necessary for action (properly so called)—​is familiar in the philosophy of action.
For example, David Chan writes, “To make a reason for doing something the agent’s
reason for doing it, the reason must enter into a process of practical reasoning”

(1995, 140). Similarly, Peter Hacker writes, “Only a language-​using creature can
reason and deliberate, weigh the conflicting claims of the facts it knows in the light
of its desires, goals and values, and come to a decision to make a choice in the light
of reasons” (2007, 239).
But I  think deliberation is not foundational for action in this sense. Railton
(2009, 2014, 2015) and Arpaly and Schroeder (2012) have convincingly argued
for this view. They begin with the observation that agents often do things they have
reasons to do without having to recognize or consider those reasons in delibera-
tion. Railton argues that driving a car skillfully, producing just the right witty reply
in conversation, adjusting one’s posture to suit the mood of a group, shoplifting
competently, playing improvisational music, and shooting a basketball in the flow
of a game are all nondeliberative reason-​responsive actions. Arpaly and Schroeder
discuss examples like seeing at a glance that a reaching device can be used to grasp
a banana, multiplying numbers in one’s head, passing the salt when someone says,
“Pass the salt,” and knowing how and when to deliberate itself. These examples are
similar to those discussed throughout this book. Railton and Arpaly and Schroeder
discuss them in order to address a regress problem common to many theories of
action. For Railton, the problem is with theories that attempt to identify a special
act—​like choosing or endorsing one’s reasons for action—​that distinguishes either
action done for reasons or autonomous action from mere behavior. For Arpaly and
Schroeder, the problem is with theories that rationalize action exclusively in terms
of deliberation.3 The problem for both Railton and Arpaly and Schroeder is that an
action cannot be rationalized or shown to be done for reasons by another second-
ary act of choosing or endorsing or deliberating unless that secondary choosing,
endorsing, or deliberating itself is autonomous or done for reasons. For if the sec-
ondary choosing, endorsing, or deliberating is not itself autonomous or done for
reasons, it will not be capable of showing the resulting action to be autonomous
or done for reasons. But whatever makes the secondary choosing, endorsing, or
deliberating autonomous or done for reasons itself must be autonomous or done for
reasons . . . and so on.
This means that no act of past, present, or future deliberation can rationalize
deliberation itself. Arpaly and Schroeder (2012, 218–​219) elaborate on this, argu-
ing against what they call Previous Deliberation, Present Deliberation, and Possible
Deliberation. Previous Deliberation holds that deliberation is responsive to reasons
if it stands in an appropriate relation to a previous act of deliberation, such as ante-
cedently resolving to act in accord with a certain principle. The trouble with this way
of rationalizing deliberative action is that it succumbs to the regress problem. Each

3  Arpaly and Schroeder are more explicit than Railton about whose theories they have in mind. They
identify Barry (2007), Chan (1995), Hacker (2007), Korsgaard (1997, 2009), and Tiberius (2002) as
holders of the putatively mistaken view that deliberation is required for responding to reasons. They
identify David Velleman’s (2000) model of practical reason as one that may avoid this problem.

previous act of deliberation, so long as it is an act, must itself be rationalized in some
way. Present Deliberation holds that deliberation is responsive to reasons if, when
we reach a deliberative conclusion, we also simultaneously embrace the process by
which we reached that conclusion, perhaps via an act of endorsement or acceptance
(2012, 220). This avoids a temporal regress, but succumbs to another kind of
vicious regress. To deliberate at all, on this view, requires infinite acts of simultane-
ous deliberation. Finally, Possible Deliberation holds that deliberation is responsive
to reasons if it stands in an appropriate relation to a merely possible act of delibera-
tion, such as reasoning about what conclusion one would have reached, had one
deliberated (2012, 221). Here, too, the regress problem persists, this time in the
form of infinite counterfactuals that an agent would possibly have to deliberatively
consider in order to act for reasons. The agent’s counterfactual reasoning about each
of her deliberative acts would itself have to be counterfactually supported by other
possible acts of deliberation.
Railton argues that, to stop the regress, there must be “ways in which individuals
could come to embrace one reason over others autonomously, but not via a fur-
ther ‘full fledged’ act” (2009, 103). When one drives a car carefully, out of concern
for the safety of others, it must be the case that one acts for reasons (i.e., others’
safety) without necessarily choosing or endorsing those reasons. Likewise, as a
driver’s skill increases, nondeliberative habituation permits her to drive more flex-
ibly (i.e., to respond to a broader array of reasons in diverse situations) and more
self-​expressively (i.e., in a way that represents some aspect of who she is). Driving
in this way is a kind of nondeliberative competence, according to Railton, compris-
ing well-​honed affective, automatic, and nonconscious processes. It represents the
exercise of what he calls “fluent agency,” which is an expression of regress-​stopping
nondeliberative processes that enable individuals to respond to reasons without
that responsiveness depending upon rationalization by further actions. Akin to an
acquired habitus, fluent agency promotes autonomy, but not necessarily rational-
ity.4 In some cases, on Railton’s account, fluent agency is a competence that operates
independently of our rational faculties, for instance when we drive a car without
having to think about what we’re doing. (Fluency permits the driver to be a bet-
ter responder to driving-​related reasons, Railton argues, but not to be any more or
less rational.) In other cases, fluent agency is a competence that allows us to deploy
our rational faculties. When a person matches her means to her ends by picking
up a letter she wishes to read and think about, she is exercising a nondeliberative
competence—​matching means to ends—​which is necessary for the deployment of
her rational competence—​as a reader of letters (Railton, 2009, 115).
Arpaly and Schroeder similarly argue that the regress can be stopped if (in their
terms) there are fundamental processes by means of which agents can think and

4  See Chapter 3 for brief discussion of Bourdieu’s concept of “habitus.”

act for reasons that are not themselves full-​fledged acts of thinking and acting for
reasons. They are especially concerned to show that deliberation itself (a “mental
action”) is reason-​responsive on the basis of fundamental reason-​responsive nonde-
liberative processes. However, they extend these arguments to cases beyond men-
tal action, arguing that, for example, passing the salt when someone says, “Pass the
salt,” is a nondeliberative reason-​responsive action that need not stand in a rational-
izing relationship to other previous, present, or possible acts of deliberation. The
central idea is that the act of passing the salt is rationalized, not by any deliberative
act, but by a nondeliberative process that is nevertheless responsive to reasons. Such
nondeliberative reason-​responsive processes are regress-​stoppers.
The upshot of these arguments is that deliberation is not foundational for action.
As Arpaly and Schroeder put it, “Deliberation is not the foundation of our abil-
ity to think and act for reasons but a tactic we have for enhancing our preexisting,
and foundational, abilities to think and act for reasons: our ND [nondeliberative]
reason-​responding” (2012, 234). If this is correct, then there should be cases in
which agents’ spontaneous nondeliberative reactions get things right—​that is, seize
upon the agent’s best reasons for action—​in the absence of, or even in spite of, the
conclusions of their deliberative practical reasoning.

2. Huck and Friends
The examples I discussed at the beginning of Chapter 4 fit this description. Cecil
Duly seems clear that, in his view, Antoine Wells deserves no better treatment than
Duly had given him all along. Yet after he interacts with Wells in a new context,
Duly’s unreflective treatment of Wells changes, and for the better. One can easily
imagine asking Duly how he ought to treat Wells in the future and finding similar
discord between his practical reasoning and his behavior. He seems likely to con-
clude that he ought to continue to give Wells “tough love,” but continue to treat him
empathetically nevertheless. Similarly, when asked how they ought to treat Chinese
customers, LaPiere’s hoteliers/​restaurateurs give the wrong answer. Yet their unre-
flective behavior, it seems, was more humane and unprejudiced than the results of
their practical deliberation.
The story of Huckleberry Finn provides the most famous illustration of this
pattern. As I  said in Chapter  4, Huck fails to do what he understands to be his
duty, turning in his friend Jim, the escaped slave. Huck takes seriously the idea of
“property rights” and believes that Jim is Miss Watson’s property. But at the crucial
moment something in Huck prevents him from turning Jim in. One way to interpret
this scene is that, when Huck is faced with the moment of action, his spontaneous
reaction is comparatively more egalitarian than are the explicit attitudes that figure
into his deliberation. Indeed, Arpaly suggests that Huck is the inverse of an implic-
itly biased agent. “There are people,” she writes, “who sport liberal views but cross

the road when a person of a different race appears or feel profound disbelief when
that person says something intelligible. Huckleberry, from the beginning, appears
to be the mirror image of this sort of person: he is a deliberative racist and viscerally
more of an egalitarian” (2004, 76).
To interpret Huck this way requires two things. First, Huck’s explicit attitudes
conflict with his behavior. Indeed, what seems to elude Huck, as Arpaly argues, is
the belief that letting Jim go is the right thing to do. Much to the contrary, Huck
explicitly believes that, because he fails to turn Jim in, he’s a wicked boy and is going
to hell.5 Second, to interpret Huck this way requires that his action be a response to
some kind of character-​sustaining motivation. Huck’s action-​guiding attitude must
be his in such a way that it reflects well upon him. While of course Huck is fictional
and many interpretations of Twain’s text are possible, there is reason to think that
Huck’s motivation is praiseworthy in this sense.
Contrast this with Jonathan Bennett’s (1974) interpretation. He uses the Huck
Finn case to consider the relationship between what he calls “sympathy” and
morality. By sympathy, Bennett seems to mean what most people today would call
empathy, an emotional concern for others in virtue of their feelings and experi-
ences. Huck’s sympathy for Jim, on Bennett’s reading, is “just a feeling,” a “natural”
reaction to one’s friend, which Bennett says is irrational, given that all of Huck’s
reasons suggest that he ought to turn Jim in (1974, 126, 126–​127). Bennett con-
trasts actions based on sympathy to actions based on reasons (whether good or
bad). While I agree with Bennett’s broad view that Huck’s conscience gets it right
despite his deliberation, I think Arpaly (2004) is right in pointing out that Bennett’s
understanding of Huck’s conscience is too crude. For Bennett, Huck acts on the
basis of some “purely atavistic mechanism,” perhaps akin, Arpaly suggests, to what
Kant calls “mere inclination” (76). But Twain’s description of Huck suggests that he
undergoes something more like a “perceptual shift” in his view of Jim. Arpaly writes:

[The] discrepancy between Huckleberry’s conscious views and his uncon-
scious, unconsidered views and actions widens during the time he spends
with Jim. Talking to Jim about his hopes and fears and interacting with
him extensively, Huckleberry constantly perceived data (never deliberated
upon) that amount to the message that Jim is a person, just like him. Twain
makes it very easy for Huckleberry to perceive the similarity between him-
self and Jim:  the two are equally ignorant, share the same language and
superstitions, and all in all it does not take the genius of John Stuart Mill

5  Arpaly writes that Huck acts for good moral reasons “even though he does not know or believe that
these are the right reasons. The belief that what he does is moral need not even appear in Huckleberry’s
unconscious. (Contra Hursthouse 1999, my point is not simply that Huckleberry Finn does not have
the belief that his action is moral on his mind while he acts, but that he does not have the belief that
what he does is right anywhere in his head—​this moral insight is exactly what eludes him.)” (2004, 77).

to see that there is no particular reason to think of one of them as the infe-
rior to the other. While Huckleberry never reflects on these facts, they do
prompt him to act toward Jim, more and more, in the same way he would
have acted toward any other friend. That Huckleberry begins to perceive
Jim as a fellow human being becomes clear when Huckleberry finds him-
self, to his surprise, apologizing to Jim—​an action unthinkable in a society
that treats black men as something less than human. (2004, 76–​77)

If this is right, then it is appropriate to say that Huck acted ethically, in a nonac-
cidental way, despite what he believed he ought to do. Huck’s feelings for Jim are
not brute instincts, as Bennett suggests, but are instead attitudes that reflect upon
Huck’s experience and perception, his behavior, and how he incorporates feedback
from this behavior over time. In short, it isn’t a stretch to credit Huck’s egalitarian
act to his implicit attitudes toward Jim. Huck comes to care about Jim’s personhood,
it seems, and this is ethically praiseworthy, in the sense I argued in Chapters 4 and
5. His implicit attitudes don’t reflect upon how rational he is or upon how
consistently his conscience might be deployed in other contexts. They don’t necessarily
reflect on who Huck is in a deep sense, overall. But they do reflect upon him.
This suggests that in a case like Huck’s, deliberation is not necessary for genu-
inely ethical action (i.e., action that isn’t just accidentally good or is good in some
way that doesn’t reflect on the person). However, it remains open that this may
be tantamount to scant praise. It seems clear that one’s esteem for Huck would
rise had he both acted rightly and recognized the rightness of his action in delib-
eration. In other words, we can praise Huck despite his being conflicted, while
also recognizing that it would have been better had he not been conflicted. Huck
isn’t good at deliberating. He fails to take into consideration a number of crucial
premises, for example, that Jim is a person and that people are not property. Had
Huck been a better deliberator, we might imagine him going forward in life more
likely to act ethically, without having to overcome deep rending personal conflict.
Similarly, in the Duly and LaPiere cases, there is no reason to think that the agents
couldn’t have acted deliberatively, or wouldn’t have acted better had their delibera-
tive judgments and nondeliberative inclinations aligned. While one might think
that Duly’s increasingly kinder treatment of Wells reflects well upon him—​despite
what Duly appears to believe—​one might still hold that Duly would be better off
both treating Wells kindly and explicitly judging that Wells ought to be treated
kindly. Similarly, LaPiere’s hoteliers/​restaurateurs would have been better off, ethi-
cally speaking, both behaving in a nondiscriminatory way and avowing to act in a
nondiscriminatory way.
Cases like these leave it open that deliberation, while nonfoundational for
action, and perhaps not necessary in all cases for ethical action, is still always rec-
ommended. Deliberating (well) is always the best option, in other words. But other
cases challenge this idea too.

3. Costly Deliberation
D’Cruz’s vignette of Alfred and Belinda is meant to illustrate a tension between delib-
eration and spontaneity in practical ethics. The tension is this: if there is value in act-
ing spontaneously, it is a value that is always at risk of being undermined when one
deliberates about whether or not to act spontaneously. Belinda’s rejection of Alfred’s
decision represents the realization of this risk. Once Alfred has forced Belinda to
deliberate about whether to eat ice cream for dinner on a lark, the idea of eating ice
cream for dinner on a lark is no longer appealing. The appeal of eating ice cream for
dinner was precisely its spontaneity. D’Cruz labels this the “deliberation-​volatility”
of the particular class of reasons for action having to do with acting spontaneously.
Agents may have “deliberation-​volatile reasons” (DVRs) to act spontaneously from
time to time, but these degrade as reasons for action once the agent deliberates
upon them. These reasons are volatile upon deliberation.6
If cases like Huck’s represent an existence proof of agents who can act ethically
in spite of their practical deliberation, cases involving DVRs represent an existence
proof of agents who ought to act nondeliberatively. DVRs, D’Cruz argues, lurk in any
situation in which agents have reasons to act spontaneously. He focuses on “whim”
cases. In addition to Alfred and Belinda, D’Cruz imagines Moritz, who is riding the
train from Berlin to Dresden, but decides at the last second to skip his stop and
ride to Zittau, despite thinking that he probably should get off in order to stick to
his plans in Dresden. “As the train starts, [Moritz] feels an overwhelming euphoria
come over him,” D’Cruz writes, “a powerful sensation of freedom that, in retrospect,
easily outweighs his other reasons for going to Dresden” (2013, 33). Moritz has
good reasons for skipping his stop, assuming that feeling free and euphoric con-
tribute to Moritz’s desire to lead a good and happy life. But, D’Cruz asks, consider
the outcome had Moritz deliberated about this choice. He might have thought, “If
I continue to Zittau, I will feel an overwhelming sense of euphoria and freedom.”
Had Moritz contemplated this conditional, he would have rendered it false.7 This
6  See also Slingerland (2014) for extended discussion of what he calls the “paradox of wu wei”—
which is effectively the paradox of trying to be spontaneous—​in classical Chinese ethics.
7  Borderline cases are possible. Moritz might deliberate about continuing to Zittau, decide to do it,
and still feel some degree of euphoria and freedom. Rather than rendering the conditional “If I con-
tinue to Zittau, I will feel an overwhelming sense of euphoria and freedom” categorically false, perhaps
deliberating upon it only renders it much more likely to be false. Or perhaps deliberating upon the con-
ditional will diminish the strength of Moritz’s feelings of euphoria and freedom. Speculatively, it seems
to me that what matters is whether one makes a decision before one’s process of deliberation has con-
cluded, from one’s own perspective. If Moritz briefly considers the conditional, but then just decides
to head to Zittau before considering its implications or its weight in light of other considerations—​a
“what the hell” moment, in other words—​it seems likely that he will experience the euphoria and
freedom associated with acting spontaneously. If, however, he considers the conditional sufficiently
to feel confident or settled in making a deliberative decision, then I suspect he will not experience the
euphoria and freedom associated with acting spontaneously.

is because, presumably, Moritz’s euphoria was tied to the spontaneity of his deci-
sion. The more that Moritz engages in deliberative practical reasoning about what
to do, the less he has reason to continue to Zittau. Of course, Moritz doesn’t have
reasons to create a policy of skipping his stop regularly, just as Alfred and Belinda
don’t have reasons to eat ice cream for dinner every night. Indeed, doing so would
be another way of undermining the spontaneity of these decisions. But in these par-
ticular cases, agents have positive reasons to act nondeliberatively.
There is no reason Huck, Duly, or LaPiere’s hoteliers/​restaurateurs wouldn’t
have been better off, ethically, had they been better deliberators and acted on the
basis of their practical deliberation. But this isn’t the case with Alfred and Belinda,
or in any case involving DVRs. Alfred’s problem is not that he should be a better
deliberator.8 His problem—​the thing preventing him from acting spontaneously in
a way that he himself would value—​is his very need to deliberate. In other words,
just in virtue of deliberatively considering whether to make a spontaneous choice
to eat ice cream for dinner, Alfred no longer has reasons to spontaneously choose to
eat ice cream for dinner.
DVR cases show that deliberation can have costs. And sometimes these costs tip
the scales in favor of not deliberating. The fact that deliberation can have costs in this
sense suggests that, in some contexts at least, deliberation can undermine ethical
action. Moreover, the contexts in which deliberation can undermine ethical action
are not rare. The relevant context is not limited to cases of acting on a whim, as
Belinda and Moritz do.
In some cases, it simply isn’t feasible to deliberate, and thus one’s goals are instru-
mentally supported by acting nondeliberatively.9 One reason for this is that real time
interaction does not offer agents the time required to deliberate about what to say
or do, and trying to deliberate in such circumstances is counterproductive. This is
evident when you think of a witty comeback to an insult, but only once it’s too late.
Unfortunately, most of us don’t have Oscar Wilde’s spontaneous wit. (Fortunately,
most of us also don’t have the same doggedness that motivated George Costanza
to fly to Ohio to deliver a good comeback.) To be witty requires rapid and fluent
assessments of the right thing to say in the moment. In addition to time pressure,
social action inevitably relies upon nondeliberative reactions because explicit think-
ing exhausts us. We are simply not efficient enough users of cognitive resources to
reflectively check our spontaneous actions and reactions all the time.
Beyond the feasibility problem of acting deliberatively in cases like these, there
are many cases in which agents’ nondeliberative reactions and dispositions are par-
ticularly well suited to the situation. The research on expertise in athletics discussed
in Chapter 5 provides one set of examples. On the tennis court, it is not just difficult

8  See D’Cruz (2013, 35).

9  For this “feasibility argument,” I am indebted to Gendler, “Moral Psychology for People with Brains.”

to think deliberatively about which shots to play while playing. The best players
are those who seem to have the best spontaneous inclinations; the flexibility and
situation-​specific responsiveness of their impulses is a positive phenomenon.10 That
is, like Belinda, their spontaneous reactions are getting it right, and there is no rea-
son to think that they, like Huck, would be better off were these reactions supported
by deliberative reasoning. Athletes who reflect on the reasons for their actions while
playing sometimes end up like Alfred, destroying those very reasons. (Of course,
in a wider slice of time, athletes do analyze their play, when they practice. I’ll argue
later that this doesn’t undermine the claim that deliberation is costly in sports, nor
does it undermine the broader claim that examples like these show that deliberation
can undermine ethical action.)
The same can be said of particular elements of social skill. The minor social
actions and reactions explained by our implicit attitudes often promote positive
and prosocial outcomes when they are spontaneously and fluently executed. Hagop
Sarkissian describes some of the ways in which these seemingly minor gestures can
have major ethical payoffs:

For example, verbal tone can sometimes outstrip verbal content in affect-
ing how others interpret verbal expressions (Argyle et al. 1971); a slightly
negative tone of voice can significantly shift how others judge the friendli-
ness of one’s statements, even when the content of those statements are
judged as polite (Laplante & Ambady 2003). In game-​theoretic situations
with real financial stakes, smiling can positively affect levels of trust among
strangers, leading to increased cooperation (Scharlemann et  al. 2001).
Other subtle cues, such as winks and handshakes, can enable individuals
to trust one another and coordinate their efforts to maximize payoffs while
pursuing riskier strategies (Manzini et al. 2009). (2010b, 10)11

Here, too, nondeliberative action is a positive phenomenon. It is not just that since
deliberation is not feasible in real-​time action, we must make do with the second-​best
alternative of relying upon our nondeliberative faculties. Rather, in cases of social
fluency, like athletic expertise, agents’ nondeliberative inclinations generate praise-
worthy outcomes, outcomes that could be undermined by deliberation. A smile that
reads as calculated or intentional—​a so-​called Pan Am smile—​is less likely than a
genuine one—a Duchenne smile—to positively affect levels of trust among strangers.
Here it seems that it is the very spontaneity of the Duchenne smile—​precisely that
it is not a product of explicit thinking—​that we value. Supporting this, research sug-
gests that people value ethically positive actions more highly when those actions

10  But see Montero (2010) for discussion of particular sports and arts in which this is not the case.
11  While I endorse giving others kind smiles and meaningful handshakes, I think one should probably avoid winking at strangers.

are performed spontaneously rather than deliberatively (Critcher et  al., 2013).
(However, perhaps as in the case of athletic expertise, the role of deliberation in
social fluency is indexed to past practice, as discussed later.)
Well-​known thought experiments in philosophy can illustrate this idea too.
Bernard Williams’s suggestion that it is better to save one’s drowning spouse unhesi-
tatingly than to do so accompanied by the deliberative consideration that in situ-
ations like this saving one’s spouse is morally required illustrates the same point.
Williams’s point isn’t that deliberating interferes with right action by distracting or
delaying you. Rather, it’s that the deliberative consideration adds “one thought too
many” (1981, 18).12 Your spouse desires that you rescue him or her without your
needing to consult your moral values.13
Spontaneity has distinctive value in all of these cases, from conversational wit to
athletic expertise to social fluency to ethical actions that are necessarily nondelib-
erative. Deliberating in real time, when one is in the flow of decision-​making, threat-
ens to undermine one’s practical success in all of these cases. Conversational wit
is undermined by deliberation in the moment, as is athletic improvisation, social
fluency, and so on. To the extent that people have reasons to be witty, or to excel
in high-​speed sports, or to promote trust and cooperation through verbal tone and
smiling, they will undermine these very reasons by deliberating upon them.
Of course, in many cases, we contemplate and consider what to do in situations
like these ahead of time. Arguably the sharpest wits—​stand-​up comics—​spend a
lifetime contemplating those nuances of social life that ultimately make up their
professional material. Likewise, expert athletes train for thousands of hours, and
much of their training involves listening to and considering the advice of their
coaches. So it is not as if deliberation plays no role in these cases. Rather, it seems to
play a crucial role in the agents’ past, in shaping and honing their implicit attitudes.
This seems to suggest that deliberation is indeed recommended in promoting
and improving virtuous implicit attitudes. It is just that the role it plays is indexed
to the agent’s past. It is important to note that this is not inconsistent with Railton’s
and Arpaly and Schroeder’s critique of Previous Deliberation (§1). Their point is
that past instances of deliberation cannot solve the regress problem of rationalizing

12  Bernard Williams makes this point in arguing against consequentialist forms of ethics, but the
idea can be broadened.
13  Or consider the key scene in the film Force Majeure. What appears to be an avalanche is rushing
toward a family. The father keeps his cool . . . until he panics and flees the oncoming rush, grabbing his
cell phone on the way but abandoning his family. The avalanche turns out to be a mirage, but this takes
nothing away from what was his quite reasonable terror. Of course, what was not reasonable was his
abandonment of his wife and small children. Surely his problem wasn’t a failure to deliberate clearly
about what to do, but rather that he acted like a coward in a situation demanding spontaneity. The rest
of the film grapples with his inability to admit what he did, but also with what his cowardly spontane-
ous reaction says about him as a person. All of this makes sense, I think, only within the framework of
thinking that his nondeliberative reaction itself should have been better.

action. To reiterate, this is because one cannot claim that a present nondeliberative
action is an action as such just because the agent contemplated her reasons for it
at some point in the past, because that act of past deliberation itself stands in need
of rationalizing. The point of this argument is to show that agents can—​indeed,
must—​act for reasons nondeliberatively. But this claim about the nature of action
does not provide normative guidance about what agents ought to do, that is, when
they should or shouldn’t deliberate. As such, it is consistent with the argument
against Previous Deliberation that agents ought to cultivate their implicit attitudes
by reflecting on them in various ways ahead of time, in the ostensible manner of
comics and athletes and so on.
We should hesitate to accept the idea that deliberation is always normative even
if it is indexed to the past, however. One reason for this is that there are some cases
in which agents act excellently and nondeliberatively without having had any sort
of training or preparation. Heroes, for example, often “just react” to the situation,
and do so without having deliberated about what to do at any time in the present
or the past. Wesley Autrey (Chapter 1)—​the “Subway Hero” who saved the life of
an epileptic man who had fallen onto some subway tracks, by holding him down
while a train passed just inches overhead—​didn’t credit his decision to any delibera-
tive act or explicit forethought, but rather to his experience working construction
in confined spaces (such that he could snap-​judge that he would have enough room
for the train to pass overhead).14 Cases of social fluency often have this form too. As
I argued about Obama’s grin (Chapters 2 and 5), it is unlikely that any intentional
efforts in the past to practice reassuring gestures like this explain the fluency of his
act. Such efforts are usually woefully underspecific. They fail to distinguish those
with and without the relevant fluency (just as I said, in Chapter 4, that expert ath-
letes’ thousands of hours of intentional practice don’t distinguish them from inex-
pert athletes who have practiced for thousands of hours too).
There is also reason to question the role past deliberation plays in shaping one’s
implicit attitudes even in those cases in which agents do train for and practice what
they do. Expert athletes learn from their coaches and plausibly deliberate about
what their coaches teach them, but it’s not clear that what they learn is a product
of what they deliberate about or conceptualize. For example, in many ball sports,
like baseball, tennis, and cricket, athletes are taught to play by “watching the ball” or
“keeping their eye on the ball.” This instruction serves several purposes. Focusing
on the ball can help players to pick up nuances in the angle and spin of the incoming
serve or pitch; it can help players keep their head still, which is particularly impor-
tant in sports like tennis, where one has to meet the ball with one’s racquet at a
precise angle while one’s body is in full motion; and it can help players avoid dis-
tractions. One thing that attempting to “watch the ball” does not do, however, is

14  See http://en.wikipedia.org/wiki/Wesley_Autrey.

cause players to actually visually track the ball from the point of release (in baseball
or cricket), or opponent contact (in tennis), to the point of contact with one’s own
bat or racquet. In fact, it is well established that ball players at any level of skill make
anticipatory saccades to shift their gaze ahead of the ball one or more times during
the course of its flight toward them (Bahill and LaRitz, 1984; McLeod, 1987; Land
and McLeod, 2000). These saccades—​the shifting of the eye gaze in front of the
ball—​occur despite the fact that most players (at least explicitly) believe that they
are visually tracking the ball the whole time.15
Moreover, it seems clear that a batter who does have an accurate understand-
ing of what she does when she watches the ball, and attempts to put this under-
standing into action as a result of practical deliberation, won’t necessarily be any
better off than the ordinary batter who falsely understands what she does when she
intends to watch the ball. The batter with the accurate belief might even be worse
off. The point isn’t that batters can’t articulate their understanding of what they do
while playing. The point is that the best available interpretation of batters’ skill pos-
its well-​honed implicit attitudes to explain their ability, not true beliefs about what
to do (as I argued in Chapter 3). These implicit attitudes generate dispositions to
respond to myriad circumstances in spontaneous and skillful ways. Good coaches
are ingenious in identifying ways to inculcate these attitudes. Of course, this tute-
lage involves explicit instruction. But as in the case of watching the ball, explicit
instruction may achieve its goal not via the force of the meaning of the instruction
itself. In tennis, the coach might say, “Think about placing your serve in the outer-
most square foot of the service box,” but effectively what this might do is cause the
player to unconsciously rotate her shoulders outwardly while serving.
While these are very specific examples, broader theories of skill learning tend to
emphasize observation rather than instruction. Consider research on how children
learn to manipulate objects given their goals. Showing them various toys, Daphna
Buchsbaum and colleagues (2010) told four-​year-​olds, “This is my new toy. I know

15  In fact, David Mann et al. (2007) show that skilled cricketers suffer little to no loss in their bat-
ting abilities even when they are wearing misprescribed contact lenses that render the batters’ eyesight
on the border of legal blindness (although one must note the small sample size used in this study).
Research suggests that visual information in these contexts is processed in the largely unconscious
dorsal visual stream (see Chapter 2, footnote 13). For an extensive analysis of the watching-​the-​ball
example, see Brownstein and Michaelson (2016) and Papineau (2015). Recall (from Chapter 4) that
Papineau argues that athletes in these contexts must hold in mind an intention to perform what he calls
a “basic action.” Basic actions are, on his view, actions that you know how to do and can decide to do
without deciding anything else. For example, a batter might hold the intention in mind to “hit defen-
sively.” The batter need not—​and ought not—​focus on how to hit defensively. Focusing in this way on
the components of a basic action can cause what Papineau calls the “yips.” But failing to keep the right
intention in mind causes what Papineau calls “choking.” Distinguishing between choking and the yips
in this way is useful, but the concern I just discussed remains so long as there is the possibility of the
agent’s intentions to perform basic actions being confabulated.

it plays music, but I haven’t played with it yet, so I don’t know how to make it go.
I thought we could try some things to see if we can figure out what makes it play
music.” The experimenters then showed the children various sequences of actions,
like shaking the toy, knocking it, and rolling it. Following a pattern, sometimes
the toy played music and sometimes it didn’t. Sometimes this pattern included
all the actions the experimenter performed, but sometimes it didn’t. When given
their turn to make the toy play music, the children didn’t simply imitate what the
experimenter did. Rather, they made decisions about which parts of the sequence
to imitate. These decisions reflected statistical encoding of the effective patterns
for making the toy work. However, in a second experiment, the experimenter told
another group of children, “See this toy? This is my toy, and it plays music. I’m going
to show you how it works. I’ll show you some things that make it play music and
some things that don’t make it play music, so you can see how it works.” In this peda-
gogical (vs. comparatively nonpedagogical) context, the children “overimitated” the
experimenter. They did just what the experimenter did, including needless actions,
thus ignoring statistical information. As a result, they were less effective learners. As
Alison Gopnik (2016), one of the coauthors of the study put it, summarizing this
and other research, “It’s not just that young children don’t need to be taught in order
to learn. In fact, studies show that explicit instruction . . . can be limiting.”
As with my argument about deliberation writ large, my point here is not that
explicit instruction is always counterproductive to skill learning. Rather, my argu-
ment is that it is harder than it seems to explain away cases of nondeliberative skill
by indexing those skills to past acts of deliberative learning.

4. Deliberation, Bias, and Self-Improvement
The case for the necessity and warrant for deliberating in order to act both sponta-
neously and ethically is perhaps strongest when one turns to consider the vices of
implicit attitudes. For example, surely one must reflect on the wrongs of racism and
inequality in order to overcome one’s implicit biases. Deliberation can help us to
calibrate the goals that activate particular implicit attitudes in particular situations
to our long-​term reflective goals (Moskowitz and Li, 2011; see also Chapter  2).
Deliberation also seems crucial for recognizing when the statistical regularities in
the environment that reinforce our implicit attitudes are themselves reflective of
inequalities and unjust social arrangements (Railton, 2014; Huebner, 2016). And
discussing the wrongs of discrimination does seem to help promote implicit atti-
tude change (Devine et al., 2012).
Thinking carefully about our spontaneous inclinations seems crucial for mak-
ing them more virtuous in other contexts too. Understanding and contemplat-
ing the problems that stem from confirmation bias and attribution errors and so
on—​these and all the other problems associated with spontaneous and impulsive

thinking—​seem essential for avoiding them. Indeed, arguably the most widely
practiced and seemingly effective techniques for habit change and overcom-
ing unwanted impulses focus on critical self-​reflection. I have in mind versions of
cognitive-​behavioral therapy (CBT), which are standardly understood in terms
of encouraging agents to consciously and explicitly revise the ways in which they
make sense of their own habitual thoughts, feelings, and actions. CBT strategies
teach agents how to replace their unwanted or maladaptive cognitive, affective, and
behavioral patterns with patterns that match their reflectively endorsed goals. This
process of recognition and replacement is thoroughly deliberative, or so it is argued.
Indeed, it has been compared by its practitioners to an inwardly focused process of
Socratic questioning (Neenan and Dryden, 2005; Beck, 2011).
Nevertheless, I now want to point out important ways in which deliberation can
distract from and even undermine the fight against unwanted implicit attitudes.
One way in which deliberation can be distracting in combating unwanted
implicit attitudes is by simply wasting time. For example, cognitive therapies are
very common for treating phobias and anxiety disorders, in particular in conjunc-
tion with various exposure therapies. But reviews of the literature show that they
provide little, if any, added benefit to exposure therapies alone (Wolitzky-​Taylor
et al., 2008; Hood and Antony, 2012). Moreover, consider one of the most striking
demonstrations of effective treatment for spider phobia (Soeter and Kindt, 2015).
First, spider-​fearful participants were instructed to approach an open-​caged taran-
tula. They were led to believe that they were going to have to touch the tarantula
and, just before touching it, were asked to rate their level of fear. The point of this
was to reactivate the participants’ fearful memories of spiders. After they reported their
feelings, the cage was closed (i.e., they didn’t have to touch the spider). A treat-
ment group was then given 40 mg of the beta blocker propranolol. (One control
group was given a placebo; another was given the propranolol but did not have the
encounter with the tarantula.) Beta blockers are known to disrupt the reconsolida-
tion of fear memories while leaving expectancy learning unaffected. After taking
the beta blocker, the participants’ approach behavior toward spiders was measured
four times, from four days post-​treatment to one year post-​treatment. Treatment
“transformed avoidance behavior into approach behavior in a virtual binary fash-
ion,” the authors write, and the effect lasted the full year after treatment (Soeter and
Kindt, 2015, 880). All participants in the treatment group touched a tarantula four
days after treatment, while participants in the control groups were barely willing
to touch the tarantula’s cage. Moreover, and most relevant to the effectiveness of
deliberatively considering one’s unwanted thoughts and feelings, participants in the
treatment condition reported an unchanged degree of fear of spiders after treatment,
despite these dramatic changes in their behavior. Eventually, three months later,
their self-​reported fear “caught up” with the changes in their behavior. The authors
conclude, “Our findings are in sharp contrast with the current pharmacological and
cognitive behavioral treatments for anxiety and related disorders. The β-​adrenergic
blocker was only effective when the drug was administered upon memory reactiva-
tion, and a modification in cognitive representations was not necessary to observe a
change in fear behavior” (2015, 880).16
In a more general context, it is hard to tell when deliberative self-criticism and reflection are doing the work of changing one’s attitudes and behavior and when they merely appear to be. Cognitive therapies are a case in point. Despite its prac-
titioners’ understanding of CBT’s mechanisms for bringing about attitude and
behavioral change, there are reasons to think that CBT is effective for quite dif-
ferent reasons. One point of concern about theorists’ understanding of how CBT
works has to do with the accuracy of introspection. The theory of CBT assumes that
people can accurately introspect the maladaptive thoughts that drive their behav-
ior. This identification is putatively accomplished through a process of introspective
Socratic questioning (i.e., Socratic questioning of oneself, guided by the CBT practi-
tioner). But, of course, people are not always (or often) good at identifying through
introspection the thoughts that explain and cause their behavior. This is because
most of our thoughts are not encoded in long-​term, episodic memory. Instead, as
Garson Leder (2016) points out in a critique of the dominant interpretation of how
CBT works, usually people rely upon theory-​guided reconstructions when trying
to understand their own past experiences and thoughts (e.g., Nisbett and Wilson,
1977). Dominant theories of CBT require that people identify the actual past
thoughts that explain their current feelings and behavior, not just past thoughts that
could coherently make sense of their current feelings and behavior.17 But the real-
ity of how memory works—​as is in evidence in the large confabulation literature,
for example (as discussed later)—​suggests that this is simply not feasible most of
the time. Alternatively, it is possible that CBT works through associative processes.
Patients might simply be counterconditioning their associations between particular
thoughts and particular feelings and behavior by rehearsing desirable thoughts in
conjunction with desirable outcomes.
A similar pattern may be at work in some cases of interventions for implicit bias.
Some research suggests that persuasive arguments effectively shift implicit atti-
tudes.18 If this is so, then it seems to follow that deliberating about the contents of

16. Another reason this experiment is striking is that propranolol has also been used effectively to
diminish anti-​black bias on the black–​white race IAT (Terbeck et al., 2012). This is suggestive of pos-
sibilities for future research on interventions to combat implicit bias, whether involving beta blockers
or other fear-​reconditioning approaches. Such interventions may represent a “physical stance” of the
kind I describe in Chapter 7. They may also supplement the role of rote practice in implicit attitude
change, also described in Chapter 7.
17. Alternatively, patients are encouraged to “catch” and record snippets of their inner monologue in
particular and problematic situations. It is similarly unclear if doing this—​accurately recording parts of
one’s inner monologue—​is actually what’s responsible for the salutary effects of CBT.
18. See Richeson and Nussbaum (2004), Gregg et al. (2006), and Briñol et al. (2009). See also
Chapter 3.
one’s implicit attitudes is likely to be an effective way of changing them. The connec-
tion here is that deliberation is the usual means by which we evaluate and appreciate
the strength of an argument. In deliberation, we try to infer sound conclusions from
premises and so on. So evidence showing that persuasive arguments change implicit
biases would seem to be good evidence in turn that deliberation is effective in com-
bating unwanted implicit biases.
Pablo Briñol and colleagues (2009) compared the effects on an IAT of read-
ing strong versus weak arguments for the hiring of black professors at universities.
One group of research participants learned that hiring more black professors has
been shown to increase the quality of teaching on campus—​a strong argument—​
while the other group was told that hiring more black professors was trendy—​a
weak argument. Only participants in the strong argument condition had more posi-
tive implicit attitudes toward blacks after intervention. Mandelbaum (2015b) cites
this study as evidence supporting his propositional account of implicit attitudes
(Chapter 3). The reason he does so is that, on his interpretation, the implicit atti-
tudes of the participants in the strong-​argument condition processed information
in an inferentially sound way. If this is right, it suggests that people ought to deliber-
ate about reasons to be unbiased, with the expectation that doing so will ameliorate
their implicit biases.
But as in the case of CBT, it is unclear exactly why argument strength affects
implicit attitudes. As Levy (2014) points out, to show that implicit attitudes are
sensitive to inferential processing, it is not enough to measure implicit attitudes
toward φ, demonstrate that agents make a relevant set of inferences about φ, and
then again measure agents’ implicit attitudes toward φ. It could be, for instance,
that the persuasive arguments make the discrepancy between agents’ implicit and
explicit attitudes salient, that agents find this discrepancy aversive, and that their
implicit attitudes shift in response to these negative feelings (Levy, 2014, 12).19 In
this case, implicit attitudes wouldn’t be changing on the basis of the logical force of
good arguments. Rather, they would be changing because the agent feels bad. In
fact, Briñol and colleagues have explored the mechanisms underlying the effects of
persuasive arguments on implicit attitude change. They suggest that what drives the
effect is simply the net valence of positive versus negative thoughts that the study
participants have. It’s not the logical force of argument, in other words, but rather
that positive arguments have positive associations. They write, “The strong mes-
sage led to many favorable thoughts . . . the generation of each positive (negative)
thought provides people with the opportunity to rehearse a favorable (unfavorable)
evaluation of blacks, and it is the rehearsal of the evaluation allowed by the thoughts

19. Although there may be reasons to doubt this interpretation. As Madva points out (pers. com.),
the common finding in the literature is that making people conscious of their implicit biases—​or mak-
ing those attitudes salient—​tends to amplify rather than diminish bias on indirect measures.
(not the thoughts directly) that are responsible for the effects on the implicit measure”
(2009, 295, emphasis added).
This is consistent with what I suggested about the possible mechanisms driving
the effects of CBT, namely, that agents are counterconditioning their associations
between particular thoughts and particular feelings and behavior. If this is what’s
happening in Briñol and colleagues’ study, then the right conclusion might not be
that agents ought to deliberate about persuasive reasons to be unbiased. It might
be more effective to simply make people feel bad about being biased.20 (I’m not
arguing that we should make people feel bad about their implicit biases. In the next
chapter I canvass various nondeliberative tactics for improving the virtuousness of
implicit attitudes.)
Recent developments in multinomial modeling of performance on indirect mea-
sures of attitudes can help to uncover the particular processes underlying implicit
attitude change. The Quadruple Process Model (Quad Model; Conrey et al., 2005),
for instance, is a statistical technique for distinguishing the uncontrollable activa-
tion of implicit associations from effortful control over activated associations. If per-
suasive arguments really do effectuate changes within implicit attitudes themselves,
rather than by amplifying aversive feelings (or by some other means), the Quad
Model would (in principle) demonstrate this by showing changes in the activation
of associations themselves rather than heightened effortful control. Unfortunately,
this particular analysis has not been done.21 The Quad Model has been used, how-
ever, to analyze the processes underlying other implicit attitude change interven-
tions. Thus far, it appears that interventions that change the activation of implicit
associations themselves—​in the context of implicit racial biases—​include retrain-
ing approach and avoid tendencies toward members of stigmatized racial groups
(Kawakami et  al., 2007b, 2008), reconditioning the valence of race-​related stim-
uli (Olson and Fazio, 2006), and increasing individuals’ exposure to images, film
clips, or even mental imagery depicting members of stigmatized groups acting in
stereotype-​discordant ways (Blair et  al., 2001; Dasgupta and Greenwald, 2001;
Weisbuch et al., 2009). What’s notable about this list of interventions—​all of which
I discuss in more depth in the next chapter—​is its striking dissimilarity to reasoned
persuasion. These techniques seem to operate via retraining and countercondition-
ing, not deliberation.
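
To give a rough sense of how such modeling pulls these processes apart, here is a simplified sketch of the Quad Model’s processing-tree logic (the notation and the simplifications are mine; the published model in Conrey et al., 2005, is more detailed). Let AC be the probability that an association is activated, D the probability that the correct response can be detected, OB the probability that an activated association is overcome by control, and g the probability of a correct guess. For trials on which the activated association and the correct response agree (“compatible”) or conflict (“incompatible”):

\[
P(\text{correct} \mid \text{compatible}) = AC + (1 - AC)D + (1 - AC)(1 - D)g
\]
\[
P(\text{correct} \mid \text{incompatible}) = AC \cdot D \cdot OB + (1 - AC)D + (1 - AC)(1 - D)g
\]

Because the parameters enter the two equations differently, fitting them to error rates across many trial types yields separate estimates of each. That is what would allow an intervention that lowers AC (changing the association itself) to be distinguished, in principle, from one that merely raises OB (heightening effortful control).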
In other cases, conscious and seemingly deliberative efforts to be unbiased can
actually make implicit biases worse. The problem here—​as previously discussed—​
is that deliberation requires effortful attention, and self-​directed effortful attention
is cognitively depleting. The effect of cognitive depletion is then sometimes more
biased behavior (in addition to other unwanted behaviors, such as outbursts of

20. For detailed discussion of Briñol and colleagues’ research and its bearing on questions about the
nature of implicit attitudes, see Madva (2016b).
21. So far as I know; see Madva (2017).
anger).22 Evan Apfelbaum and colleagues (2008) show that agents’ tendencies to
avoid talking about race in interracial interactions predict negative nonverbal behav-
iors. People who are “strategically color-​blind,” in other words, tend to demonstrate
more negative race-​based micro-​behaviors. Moreover, Apfelbaum and colleagues
find that this relationship is mediated by a decreased capacity to exert self-​control.
This finding is consistent with a cognitive depletion explanation of this phenom-
enon. One’s resources for controlling one’s biases through effortful and conscious
control become temporarily spent, thus leading to more biased behavior.23
When we move beyond a narrow focus on implicit bias, we see not only that
deliberation sometimes has problematic consequences, but also that deliberation
itself can be subject to biases and shortcomings. The order in which we contem-
plate questions or premises, the vivacity with which a moral dilemma is posed, the
framing of a policy as an act or an omission—​all of these dramatically affect the
judgments we make when deliberating. Perhaps nowhere are the vulnerabilities
of deliberation clearer, though, than in the confabulation literature. Nisbett and Wilson’s (1977) classic paper, “Telling More than We Can Know: Verbal Reports on Mental Processes,” instigated a surge of studies demonstrating the frequent
gulf between the contents of our mental deliberations and the actual reasons for
our actions. Nisbett and Wilson’s findings pertained to the “position effect” on the
evaluation of consumer goods. When asked to rate the quality of four nightgowns
or four pairs of pantyhose, which were laid out in a line, participants routinely pre-
ferred the rightmost item. This was a dramatic effect; in the case of pantyhose, par-
ticipants preferred the pair on the right to the other pairs by a factor of 4 to 1. The
trick was that all the nightgowns and pantyhose were identical. Moreover, none of
the participants reported that they preferred the items on the right because they
were on the right. Instead, they confabulated reasons for their choices. Even more
dramatic effects have since been found. The research of Petter Johansson and col-
leagues (2005) on choice blindness demonstrated that people offer introspectively
derived reasons for choices that they didn’t even make. This was shown using a
double-​card ploy, in which participants are asked to state which of two faces shown
in photographs is more attractive and then to give reasons for their preference. The
ploy is that, unbeknownst to participants, the cards are switched after they state
their preference but before they explain their choice. Johansson and colleagues

22. For cognitive depletion and anger, see Stucke and Baumeister (2006) and Gal and Liu (2011).
For cognitive depletion and implicit bias, see Richeson and Shelton (2003) and Govorun and Payne
(2006), in addition to Apfelbaum et al. (2008), discussed here. A recent meta-​analysis of the ego deple-
tion literature (Hagger et al., 2016) may complicate these claims, but I am unaware of any failures to
replicate these specific studies. For detailed discussion of this meta-​analysis and of effect sizes and
replication efforts in the ego depletion literature, see http://​soccco.uni-​koeln.de/​cscm-​2016-​debate.
html. See also my brief discussion in the Appendix.
23. Although other factors are surely in play when agents attempt to “not see” race. A difficulty taking
the perspective of others is a likely factor, for example. Thanks to Lacey Davidson for this suggestion.
found that participants failed to detect the fact that they were giving reasons for pre-
ferring the face that they did not in fact prefer 74% of the time. This includes trials in
which participants were given as much time as they wanted to deliberate about their
choice as well as trials in which the face pairs were highly dissimilar.24
Studies like these make it hard to argue straightforwardly that thinking more
carefully and self-​critically about our unwanted implicit attitudes is always the opti-
mal solution for improving them. If there is often a gulf between the reasons for
which we act and the reasons upon which we deliberate, as studies like these sug-
gest, then deliberation upon our reasons for action is likely to change our behavior
or attitudes in only a limited way.
Further evidence still for this is found in studies that examine the judgments
and behavior of putative exemplars of deliberation. Consider, for example, research
on what Dan Kahan and colleagues (2017) call “motivated numeracy.” Numeracy
is a scale used to measure a person’s tendency to engage in quantitative reason-
ing and to do so reflectively and systematically in order to make valid inferences.
Kahan and colleagues told one set of participants about a study examining the
effects of a new cream developed for treating skin rashes. The participants were
given basic data from four conditions of the fictitious study (in a 2 × 2 contin-
gency table): how many patients were given the cream and improved; how many
were given the cream and got worse; how many were not given the cream and
improved; and how many were not given the cream and got worse. The partici-
pants were then asked whether the study supported the conclusion that people
who used the cream were more likely to get better or to get worse than those who
didn’t use the cream. This isn’t an easy question, as it involves comparing the ratios
of those who experienced different outcomes. Unsurprisingly, 59% of participants
(across conditions) gave the wrong answer. Also unsurprising, and confirming extensive previous research, was that the participants’ numeracy scores predicted
their success: 75% of those in the 90th percentile and above gave the right answer.
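
To see what makes the ratio question hard, consider a hypothetical filling of the 2 × 2 table (these numbers are my own illustration, not Kahan and colleagues’ actual stimuli):

\[
\begin{array}{lcc}
 & \text{Rash improved} & \text{Rash got worse} \\
\text{Used the cream} & 223 & 75 \\
\text{Did not use the cream} & 107 & 21
\end{array}
\]

The tempting shortcut is to compare raw counts of improvers (223 versus 107) and conclude that the cream helps. The correct comparison is between ratios: about 3 improvers for every patient who got worse among cream users (223/75 ≈ 2.97), versus about 5 among non-users (107/21 ≈ 5.10). On these numbers, patients who skipped the cream actually fared better.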
In a second condition, Kahan and colleagues presented the participants with the
same data, except that instead of presenting them as the results of a medical study
on skin cream, they presented the data as the results of a study on the effects of
gun-control laws on crime rates. The same 2 × 2 table was presented, this time
showing how many cities had enacted gun-​control measures and crime increased;
how many cities had enacted gun-​control measures and crime decreased; how
many cities had not enacted gun-​control measures and crime increased; and how
many cities had not enacted gun-​control measures and crime decreased. And

24. I note the general gendered creepiness of the research prompts used in these studies. One won-
ders why Nisbett and Wilson chose nightgowns and pantyhose, rather than, say, belts and aftershave.
Was it a presumption that women are more likely to confabulate than men? One also wonders why
Johansson and colleagues had participants rate the attractiveness of photographs of women only. Are
men not fitting subjects for evaluation as objects of attraction?
the participants were then asked to indicate whether “cities that enacted a ban on
carrying concealed handguns were more likely to have a decrease in crime . . . or an
increase in crime than cities without bans” (Kahan et al., 2017, 63). Here, too, Kahan
and colleagues confirmed mountains of previous research, except that in this con-
dition the unsurprising finding had to do with motivated reasoning. Participants’
political attitudes affected their evaluations of the implications of the data. In the
gun-​control condition, but not in the skin-​cream condition, liberals increasingly
identified the right answer when the right answer (given the fictitious data) was that
gun-​control measures decrease crime, and conservatives increasingly identified the
right answer when the right answer was that gun-​control measures increase crime.25
What’s striking about this study is a third finding, which builds upon these two
unsurprising effects (i.e., that higher numeracy predicts participants’ quantitative
reasoning and that people engage in motivated reasoning when they are evaluat-
ing something that holds political, social, or moral meaning for them). The third
finding was that higher numeracy predicted participants’ performance only in the
gun-​control condition when the correct response was congenial to their political
outlooks. Higher numeracy didn’t protect participants from motivated reasoning,
in other words. In fact—​and this is what’s really striking—​motivated reasoning
increased dramatically in more numerate participants. Kahan and colleagues report:

On average, the high-numeracy partisans whose political outlooks were affirmed by the data, when properly interpreted, was 45% more likely (±14%, LC = 0.95) to identify the conclusion actually supported by the gun ban experiment than were the high numeracy partisans whose political outlooks were affirmed by selecting the incorrect response. The average difference in the case of low numeracy partisans was 25% points (±10%)—a difference of 20% (±16%) (2017, 72).

This suggests that people who are better at quantitative reasoning are more likely to
engage in motivated reasoning than people who are worse at quantitative reason-
ing. Kahan and colleagues suggest that this is because more numerate individuals
have a cognitive ability that they use opportunistically to sustain identity-​protective
beliefs. They are better at reasoning toward false conclusions, in other words, in
order to confirm their antecedent beliefs.26

25. Within each condition, half of the participants were given data that supported the inference that the skin cream/gun-measure improves/increases the rash/crime, and the other half were given data that supported the inference that the skin cream/gun-measure worsens/decreases the rash/crime.
26. But it is notable that Kahan and colleagues do not take this finding to suggest that these participants are acting irrationally. They write, “We submit that a form of information processing cannot reliably be identified as ‘irrational’, ‘subrational’, ‘boundedly rational’, or the like independent of what an individuals’ aims are in making use of information (Baron, 2008, p. 61). It is perfectly rational, from an individual welfare perspective, for individuals to engage with decision-relevant science in a manner that promotes culturally or politically congenial beliefs. What any individual member of the public thinks about the reality of climate change, the hazards of nuclear waste disposal, or the efficacy of gun control is too inconsequential to influence the risk that that person or anyone he or she cares about faces. Nevertheless, given what positions on these issues signify about a person’s defining commitments, forming a belief at odds with the one that predominates on it within important affinity groups of which such a person is a member could expose him or her to an array of highly unpleasant consequences (Kahan, 2012). Forms of information processing that reliably promote the stakes that individuals have in conveying their commitment to identity defining groups can thus be viewed as manifesting what Anderson (1993) and others (Lessig, 1995; Akerlof and Kranton, 2000; Cohen, 2003; Hillman, 2010; Stanovich, 2013) have described as expressive rationality. If ideologically motivated reasoning is expressively rational, then we should expect those individuals who display the highest reasoning capacities to be the ones most powerfully impelled to engage in it (Kahan et al., 2012). This study now joins the rank of a growing list of others that fit this expectation and that thus supports the interpretation that ideologically motivated reasoning is not a form of bounded rationality, but instead a sign of how it becomes rational for otherwise intelligent people to use their critical faculties when they find themselves in the unenviable situation of having to choose between crediting the best available evidence or simply being who they are.” (2017, 77, emphasis in original).

Kahan and colleagues’ findings don’t appear to be specific to quantitative reasoning. Kahan (2013) finds similar patterns using the Cognitive Reflection Test (Toplak et al., 2011), a measure of dispositions to process information in deliberative, System 2ish ways. Likewise, Schwitzgebel and Cushman (2012) find that professional philosophers are no less susceptible to order effects and other reasoning biases than are non-philosophers, even when making judgments about classic philosophical problems, such as the doctrine of double effect and the principle of moral luck. All of this is consistent with research showing that cognitive sophistication does not attenuate the “blind spot bias” (West et al., 2012), which is the tendency to recognize biases in others more easily than in oneself.27 Research on ethical action, and on consistency between one’s values and one’s behavior, yields similar results. Schwitzgebel and Joshua Rust (2014) compared the ethical behavior of ethics professors with the behavior of other philosophy professors as well as professors outside of philosophy. They examined both how likely participants were to engage in seemingly ethical behavior—like voting, replying to students’ emails, and filling out questionnaires honestly—and how consistent participants’ behavior was with their stated values. What they found was that ethics professors’ stated values were more stringent on some issues, like vegetarianism and charitable donations, but on not one issue was the behavior of ethics professors unequivocally better than that of those in the comparison groups. Moreover, while the ethics professors’ attitude–behavior consistency was strongest on some issues—like voting—it was weakest on others—like making charitable donations.

27. I say that this finding is consistent with Schwitzgebel and Cushman’s research on the presumption that professional philosophers rate relatively high in cognitive sophistication. I do not know whether this is actually true. At the least, I suspect that it is true of common beliefs about professional philosophers.

5. Conclusion
These arguments are not at all tantamount to the radical view that deliberation has
no role, or even an unimportant role, in making our spontaneous reactions more
virtuous. Rather, they show that deliberation is neither necessary nor always rec-
ommended for ethical action. This leaves open the possibility that some forms of
deliberation are better than others.28 But it forecloses the possibility that the way to
ensure that our implicit attitudes are ethical is always to scrutinize them, Alfred-​like.
This we often ought to do, but it isn’t always necessary and it can be costly. The ques-
tion remains, then: What ought we to do in these situations? How can we effectively
cultivate better “instincts” for acting both spontaneously and ethically, Belinda-​like?

28. See Elizabeth Anderson (2005) for a compelling comment on the value of creating more effective “deliberative contexts” for combating the shortcomings of some moral heuristics.
7

The Habit Stance

In the preceding chapter, I argued that deliberation is neither necessary nor always
recommended for ethical action. One way to gloss this claim is to think in terms of
Dennett’s distinction between “stances” for predicting behavior. Dennett (1989)
popularized the terms “physical stance,” “design stance,” and “intentional stance.”
These terms refer to ways of making predictions about what entities in the world
will do. The physical stance is the basic orientation of the natural sciences. If you
want to predict what a rock will do when I drop it from my hand, you’ll use the
physical stance (or some lay approximation of it). That is, you’ll use what you know
about how gravity affects objects in order to make a prediction about what the rock
will do (namely, fall toward the earth). You won’t worry about what the rock has
been designed to do, unless you’re an extreme creationist or believe in an archaic
teleological version of physics. Nor will you worry about the rock’s wanting or
hoping or desiring to fall to the earth, since most people don’t think rocks have
intentional states like these. If you want to predict what a toaster will do when you
press the start button, however, you probably won’t focus on the underlying physics
of heat and electricity, nor again will you focus on what you imagine the toaster’s
hopes or fears to be. Rather, because the toaster is an artifact, you’ll focus on what it
was designed to do. You’ll do this because it’s effective and reliable to predict what
the toaster will do on the basis of your understanding of what it was designed to do.
Your understanding of its design might be wrong, but taking the design stance is
still probably your best bet. Finally, if you want to make a prediction about what a
person will do—​say, what Selma will do if she hates her job—​you are best off focus-
ing on familiar features of her mind. She is likely to show up late, complain about
her boss, maybe quit, and so on. Jerry Fodor illustrates the appeal of the inten-
tional stance: “If you want to know where my physical body will be next Thursday,
mechanics—​our best science of middle-​sized objects after all, and reputed to be
pretty good in its field—​is no use to you at all. Far the best way to find out (usually, in
practice, the only way to find out) is: ask me!” (1987, 6, emphasis in original).
Each stance can be taken toward one and the same entity, with variable degrees
of success. It won’t do much good to take the intentional stance toward rocks and

toasters, but a neuroscientist might make very successful predictions about human
behavior by taking the physical stance. (This doesn’t take a neuroscientist, of course.
Taking the physical stance, I can predict that if I feed you a lot of caffeine, you will
act energetically.) Similarly, evolutionary psychologists might make predictions
about human emotions by taking the design stance. For example, we have evolved
to feel disgusted by germs and pathogens that can harm us (Kelly, 2013). This leads
to a design-​stance prediction that you will feel disgusted if I offer you a handful of
rotting food.
In arguing that deliberation is neither necessary nor always recommended for
ethical action, one could say that I am arguing against taking the intentional stance
alone toward oneself. Roughly, by this I mean that, in endeavoring to improve the
virtuousness of our implicit attitudes, we should not treat them as rational states to
be reasoned with. Admittedly, this is a bit of a stretch of Dennett’s concept. Taking
the intentional stance doesn’t mean focusing on an agent’s deliberations per se or on
making predictions about the effects of various self-​regulation strategies. Dennett’s
stances are tools for making predictions about what entities will do, by treating those
entities as if their behavior were driven just by physical forces, functional design, or
intentional states.1 What is adaptable from Dennett, though, is the broad distinction
between predictions that focus on an entity’s explicit mental states—​those states
capable of figuring into deliberation and practical reasoning—​and predictions that
don’t. This adaptation is useful when we are considering what kinds of interven-
tions for improving our implicit attitudes are effective and for conceptualizing the
attitude we take toward ourselves when aspiring to self-​improvement.2
At the same time, the ethics of implicit attitudes is a matter neither of adopting
the physical stance alone toward oneself nor of adopting the design stance alone
toward oneself. Implicit attitudes are intentional states with cognitive and affective
components. The actions they underwrite reflect upon us; they are examples of
what we care about. These qualities of implicit attitudes recommend against regu-
lating them by treating them as if they were governed by the laws of physics alone.
They also recommend against treating them as if they were designed to perform
certain fixed functions.

1. The “as if” is important for allowing Dennett room to endorse physicalism. The point of these
stances is not to distinguish entities that are and aren’t made of, or governed by, physical forces. The
point is rather about making the most accurate and efficient predictions of behavior.
2. I am indebted to Louise Antony (2016) for the idea of characterizing various interventions for
changing implicit attitudes in terms of Dennett’s stances. Note that the relevant distinction here is not
between stances that maintain an engaged, first-​person perspective and stances that take up a disen-
gaged, third-​person perspective toward one’s implicit attitudes. In the way that I will use the concept of
“stances,” all stances will involve taking certain attitudes toward one’s implicit attitudes. These are ways
of treating our implicit attitudes when we try to change them. So even taking the intentional stance
toward one’s implicit attitudes involves adopting a third-​person perspective on them.
While improving our implicit attitudes is not reducible to taking the intentional,
physical, or design stance, it involves elements of each. Cultivating more virtu-
ous implicit attitudes is in part intentional, physical, and design-​focused, in other
words. In recognition of William James’s point that “our virtues are habits as much
as our vices” (1899, 64) (and in keeping with the tradition that any decent work of
philosophical psychology must quote James), I will call the most promising hybrid
of these stances the habit stance.3 The habit stance is in part intentional, because
improving our implicit attitudes ineliminably involves goal-​setting and adopting
particular motives. It is in part physical, because improving our implicit attitudes
necessarily involves rote repetition and practice, activities that operate on the basis
of brutely causal processes. And it is in part design, because improving our implicit
attitudes requires both appreciating what they are designed by evolution to do
and appreciating how to design our environments and situations to elicit the right
kinds of spontaneous responses.4 In this context, adopting the habit stance means
cultivating better implicit attitudes—​and hence better spontaneous decisions and
actions—​by treating them as if they were habits.
In this chapter, I describe what it means to take the habit stance in endeavoring
to improve the virtuousness of our implicit attitudes. I do so by appealing to the
most empirically well-​supported and promising interventions for implicit attitude
change. One virtue of this approach is that it skirts a worry about the “creepiness”
of implicit attitude change interventions. The creepiness worry is that these inter-
ventions will be widely rejected because they seem akin to a kind of brainwashing.
Some interventions involve rote repetition of associations, and some are even pre-
sented subliminally. On their own, these techniques seem to be ways of taking the
physical stance alone toward attitude change, which does have a Clockwork Orange
ring to it. Understanding these techniques in the broader context of the habit stance,
however, should alleviate the creepiness worry. Treating our implicit attitudes as if
they were habits means treating them as part of who we are, albeit a part distinct
from our ordinary, conscious beliefs and values.
Of course, research on implicit social cognition is relatively new, which means
that research on implicit attitude change is even newer. While it is clear that implicit
attitudes are malleable, there is much to learn about the most effective techniques
for changing them. First I’ll discuss three general approaches that increasingly
appear to be well supported in both lab-​based and field studies (§1). Each repre-
sents a different element of habit change, that is, a way of taking the habit stance
toward oneself. Respectively, I’ll outline rote practice (§1.1), pre-​commitment (§1.2),

3. Thanks to Alex Madva for suggesting this concept.
4. Understood this way, the idea of the habit stance marks a step beyond what others have rightly
noted about states like implicit attitudes, that they are reducible to neither “reasons” nor “causes”
(Gendler, 2012; Dreyfus, 2007a). This is true, and our efforts to improve them should similarly not be
reduced to targeting either of these alone.
and context regulation (§1.3). These are not the only effective strategies for the self-​
regulation of implicit attitudes, nor will they necessarily prove to be the most effec-
tive. Future research will tell. But what they illustrate is the value and importance
of the habit stance, and in particular how this stance is composed of elements of
the others (i.e., of the intentional, physical, and design stances). Then I’ll consider
objections to taking the habit stance in this way (§2). Some objections are primarily
empirical; they are about the broadness and durability of implicit attitude change
interventions. Another is not empirical. It is about the nature of praise, in particular
whether the reshaping of one’s attitudes and behavior in the ways I describe counts
as a genuine form of ethics.

1. Techniques
Research on self-​regulation is immensely broad, denoting more a set of assump-
tions about techniques for self-​ improvement than a particular target set of
unwanted behaviors. Two assumptions are particularly salient in the field. One is
that approaches to ethical self-​cultivation can and should be subjected to empirical
scrutiny. I embrace this assumption in what follows. The second is that the relation-
ship between self-​regulation and self-​control is extremely tight. I reject this assump-
tion. In doing so, I cast a wider net in considering the most effective techniques for
improving our implicit attitudes, wider, that is, than a focus on self-​control strate-
gies alone would allow.
“Self-​regulation” and “self-​control” are not settled terms in the research literature.
But roughly, self-​regulation refers to the general process of managing one’s thoughts,
feelings, and behavior in light of goals and standards (e.g., Carver and Scheier, 1982; Fujita, 2011). Self-control refers more narrowly to impulse inhibition (e.g.,
Mischel et al., 1988) or, as Angela Duckworth puts it, to “the voluntary regulation
of behavioral, emotional, and attentional impulses in the presence of momentarily
gratifying temptations or diversions.”5 Many researchers see self-​regulation and self-​
control, understood in roughly these terms, as very tightly linked. In their widely
cited meta-​analysis, for example, Denise T. D. de Ridder and colleagues say that,
despite important differences between competing theories, “researchers agree that
self-​control focuses on the efforts people exert to stimulate desirable responses and

5. Duckworth’s definition differs from Mischel’s in the sense that she does not emphasize the inhi-
bition of impulses in the service of resisting temptation. One can exhibit self-​control, on her defini-
tion, by avoiding situations in which one will feel unwanted impulses. See https://​sites.sas.upenn.
edu/​duckworth/​pages/​research. Note also that others challenge these definitions of self-​regulation
and self-​control. See, e.g., Jack Block (2002), who advances a roughly Aristotelian conception of self-​
control, as a capacity that admits of deficiencies as well as excesses. For an elaboration of these issues,
as well as of the examples I present in the text, see Brownstein (ms).
inhibit undesirable responses and that self-control thereby constitutes an important prerequisite for self-regulation” (2012, 77). Others take a stronger stance,
arguing that self-​control is not just a prerequisite for self-​regulation, but that it
inevitably promotes it. In what they call their “manual of the sanities”—​a positive
psychology handbook describing contemporary research on strengths of charac-
ter—​Christopher Peterson and Seligman argue that “there is no true disadvantage
of having too much self-​control” (2004, 515). Likewise, Baumeister and Jessica
Alquist argue that “trait self-​control . . . appears to have few or no downsides” (2009,
p. 115). The implied premise of these claims is that self-​control has no downsides
for self-​regulation, that is, for the agent’s ability to manage her thoughts, feelings,
and behavior in pursuit of her goals.
There are several reasons to reject the idea that self-​regulation and self-​control
are so tightly linked, however. One reason is that the benefits of self-​control appear
to accrue differently to people based on factors like race and socioeconomic sta-
tus (SES). Research suggests that the benefits of self-​control are mixed for low-​SES
youth, in particular low-​SES black youth. Gregory Miller and colleagues (2015),
for example, show that for low-​SES black teenagers, high trait self-​control predicts
academic success and psychosocial health, but at the expense of epigenetic aging
(i.e., a biomarker for disparities between biological and chronological age). “To the
extent that they had better self-​control,” the authors write, “low-​SES children went
on to experience greater cardiometabolic risk as young adults, as reflected on a com-
posite of obesity, blood pressure, and the stress hormones cortisol, epinephrine,
and norepinephrine” (2015, 10325). Gene Brody and colleagues (2013) arrive at
similar conclusions, finding that low-​SES black youth with high self-​control, while
achieving more academically than low-​SES black youth with low self-​control, also
experience more allostatic load (i.e., stress on the body, which in turn predicts the
development of chronic diseases and health disparities).
High self-​control may also be maladaptive for people with low SES. Sara Heller
and colleagues provide an illustrative vignette of the comparative benefits and con-
sequences of persistence (which is a central theme in research on self-​control) for
advantaged and disadvantaged people:

[I]magine a relative promises to pick up a teenager from school, but right now it is not working out—15 minutes after school lets out, the ride is still
not there. For a youth from an affluent background the adaptive response
is probably to wait (persist), because the adult will eventually show up and
the teen will be spared the cost of having to walk or bus home. In school,
if that youth is struggling to learn some difficult material the optimal
response is also to keep trying (persist); if needed, eventually a teacher
or parent or peer will help the youth master the material. Now imagine
a youth from a disadvantaged area where things can be more chaotic and
unpredictable. When the youth is waiting for a ride after school that is
running late, the adaptive response to things not working out may be to
start walking home (explore a more promising strategy). When the youth
is struggling with schoolwork the automatic response could be either to
just move on, or to persist, depending partly on whether they think of this
as the same situation as someone being late picking them up from school
(for example if they think “this is yet another situation where I can’t rely on
others”). (2015, 10–11)

There are also counterexamples that challenge the idea that self-​regulation and
self-​control are always tightly linked. Self-​regulation is sometimes attained without
suppressing occurrent impulses or resisting temptations, such as when one success-
fully manages competing thoughts and feelings while playing sports (Fujita, 2011).
Shooting a free throw in basketball, for instance, can involve suppressing impulsive
feelings of fear about missing the shot, but it need not. And it is likely that the best
shooters don’t suppress these feelings of fear; rather, they simply don’t feel them.
Sometimes one achieves self-​regulation by acting on the basis of one’s impulses.
Alfred and Belinda, from Chapter  6, offer a good example of this. Both, we can
presume, share the long-​term intention to be healthy and sensible, and both also
enjoy spontaneously eating ice cream for dinner from time to time. Only Belinda,
however, has good instincts for when and where to spontaneously eat ice cream for
dinner. In other words, only Belinda has a good feel for when to act on the basis
of her impulses and temptations. She seems better self-​regulated than Alfred, who
cannot let himself indulge without carefully scrutinizing his decision (in such a way,
as I discussed earlier, that he ruins the idea of spontaneously eating ice cream for
dinner). In other words, Alfred seems to exert more self-​control than Belinda, but
Belinda seems better self-​regulated than Alfred.
As I have said, avoiding the assumption that self-​regulation necessarily involves
effortful self-​control enables me to cast a wider net in considering the most effec-
tive techniques for improving our implicit attitudes. I begin with the simplest: rote
practice.

1.1  Rote Practice


Arguably the most ineliminable form of self-​regulation of implicit attitudes is also
the least glamorous. The fact is that simple, repetitive practice is effective for shap-
ing and reshaping these states. This is clear across a number of domains. K. Anders
Ericsson first made the argument that it takes about ten thousand hours to become
an expert in some domain (Ericsson et  al., 1993). Gladwell (2011) popularized
this so-​called ten-​thousand-​hour rule, although it isn’t at all clear that ten thou-
sand hours is the right number (as discussed later). Building on Ericsson and oth-
ers’ research on chess grandmasters, composers, and athletes, Gladwell argued
that almost no one succeeds at elite levels of skilled action on the basis of innate
talent alone. Rather, “achievement is talent plus preparation” (Gladwell, 2013). In particular, achievement in domains like these is talent plus thousands of hours of
preparation, give or take. Often, this preparation involves simple repetition. This
might seem unsurprising in the realm of sports. Andre Agassi (2010), for example,
describes his emptying basket after basket of tennis balls as a child, hitting serve
after serve after serve. Agassi’s tyrannical father presumed that anyone who hit a mil-
lion balls as a child would become a great player. This is false, of course, since very
few players, no matter how many balls they hit, could become as great as Agassi. But
the point is that even Agassi wouldn’t have been great without having put in endless
hours of rote practice.
Research suggests this is the case in more cerebral skilled activities too, like chess
and musical composition. John Hayes (1985), for example, found that almost all
famous classical composers created their best works only after they had been com-
posing for at least ten years. While this surely included much contemplation about
the nature of successful composition, careful consideration of successful and unsuc-
cessful scores, and so on, much of it also involved simply going through the process
of composing over and over again.
The ten-thousand-hour rule has received a lot of criticism. For one, there
appear to be counterexamples. Apparently a significant percentage of professional
basketball players in Australia reached expertise with far fewer than ten thousand
hours of practice  (Epstein, 2013). The quicker-​to-​expertise phenomenon also
appears to be common in high jumping, sprinting, and darts (Epstein, 2013).
Others have developed techniques for accelerated skill learning.6 Moreover, other
things seem to have a significant impact on success in sports, such as having grown
up in a small rather than large city and having spent time as a child engaged in
structured activities that promote general self-​regulatory skills, such as patience
and grit (rather than activities more narrowly focused on skill development; for
review see Berry et al., 2008). A recent meta-​analysis even finds that deliberate
practice accounts for only 18% of the variance in sports performance and, more
striking still, for only 1% of the variance of performance for elite-​level athletes
(Macnamara et al., 2016).
These critiques are legitimate, but they don’t undermine the idea that rote
practice is an essential element of skill development. Some of the apparent coun-
terexamples are of highly skilled but not elite-​level performers (e.g., professional
basketball players in Australia, not National Basketball Association players in the
United States). Others pertain to activities that are not particularly complex, such
as darts. In the case of measuring variance in performance due to deliberate prac-
tice, there is debate about the correct definition of “practice.” Ericsson suggests
that deliberate practice in sports involves “one-on-one instruction of an athlete by a coach, who assigns practice activities with explicit goals and effective practice activities with immediate feedback and opportunities for repetition” (2016, 352).

6. See, e.g., Ferriss (2015).
Brooke Macnamara and colleagues, in their meta-​analysis, employ a much broader
understanding of practice, including activities like weight training and watching
film. Neither of these approaches speaks to the role of rote practice in skill develop-
ment. Surely such repetitive action is insufficient for expertise. But it is consistent
with the evidence that it is nearly always necessary.7
This is not a new discovery in other areas of life. Confucius’s Analects place great
emphasis on the rote practice of ritual activities (Li). Practice is central to living in
accord with what Confucius calls “the Way” (Dao). For example, in Analects 8.2,
“The Master said, ‘If you are respectful but lack ritual you will become exasperating;
if you are careful but lack ritual you will become timid; if you are courageous but
lack ritual you will become unruly; if you are upright but lack ritual you will become
inflexible’ ” (Confucius, 2003, p. 78). Performance of ritual covers a variety of activi-
ties in the Analects. On the Confucian picture, these rituals and rites serve to form
moral character and habits, which in turn help to create social harmony (Li, 2006).
Sarkissian (2010a) has combined these insights with recent research in social
psychology and neuroscience in arguing that prolonged ritual practice facilitates
spontaneous yet ethical behavior through the creation of somatic neural markers.
In short, these are affective tags that mark particular behavioral routines as positive
or negative (Damasio, 1994). Sarkissian writes:

The accrual of these markers over time fine-​tunes and accelerates the
decision-​making process; at the limit, the correct course of action would
come to mind immediately, with compelling emotional valence . . . famil-
iarity with a broad range of emotions, facilitated through exposure to litera-
ture, art, and social rituals, will allow one to perceive values in a wide range
of scenarios, thus improving the likelihood of responding appropriately in
any particular situation. (2010a, 7)

Somatic marker theory has been combined with theories of feedback-​error learn-
ing, of the sort I discussed in Chapter 2 (Holroyd et al., 2003). This nicely connects
my account of the functional architecture of implicit attitudes with established and
plausible theories of habit and self-​regulation.
The most head-​on evidence attesting to the importance of rote practice for implicit
attitude change, however, comes from research on changing implicit biases. Kerry
Kawakami and colleagues have demonstrated that biased approach and avoidance
tendencies can be changed through bare-​bones rote practice. In their research, this
involves training participants to associate counterstereotypes with social groups.

7. See Gladwell (2013).
For example, in the training condition in Kawakami et al. (2007a), participants were
shown photographs of men and women, under which both gender-​stereotypic (e.g.,
“sensitive” for women) and gender-​counterstereotypic (e.g., “strong” for women)
information was written. Participants were asked to select the trait that was not cul-
turally associated with the social group represented in the picture. Participants prac-
ticed fairly extensively, repeating this counterstereotype identification a total of 480
times. The effect of practicing counterstereotypical thinking in this way was a sig-
nificant change in participants’ behavior. In the evaluation phase of the experiment,
participants made hypothetical hiring decisions about male and female candidates,
as well as rated the candidates in terms of stereotypical and counterstereotypical
traits. Kawakami and colleagues have varied the social groups and traits used as
prompts in other studies, as well as the behavioral outcome measures. For example,
using this procedure, Kawakami and colleagues (2000) asked participants to negate
black-​athletic stereotypes. They were able to completely reverse biased participants’
implicit attitudes. Moreover, variations in the training technique have also proved
effective. Kawakami et al. (2007b) used a procedure in which participants pulled a
joystick toward themselves when they were presented with black faces and pushed
the joystick away from themselves when presented with white faces. The notion
guiding this technique originates in research on embodied cognition. Gestures
representing physical avoidance—​pushing something away from one’s body—​can
signify mental avoidance—​effectively saying no to that thing. Likewise, gestures
representing physical approach—​pulling something toward oneself—​can signify
acceptance of that thing (Chen and Bargh, 1999).
Rote practice is at the heart of this paradigm, throughout its many variations. It
is crucial that participants in these experiments simply practice associating stigma-
tized social groups with counterstereotypes repeatedly. One way to think of this,
recommended by Madva (2017), is as the “contact hypothesis in a bottle.” In short,
the contact hypothesis—​originally proposed by Gordon Allport (1954) and exten-
sively researched since—​proposes that prejudiced attitudes can be changed by inter-
group contact. Put plainly, spending time and interacting with people from social
groups other than one’s own promotes egalitarian attitudes and mutual understand-
ing. Evidence for the effectiveness of contact in promoting these ends is substantive,
though only under particular conditions. Intergroup contact promotes egalitarian
explicit attitudes when both parties are of equal status and when they are engaged
in meaningful activities, for example (Pettigrew and Tropp, 2006). Evidence for the
effectiveness of intergroup contact in improving implicit attitudes is also substan-
tive, but also under particular conditions (Dasgupta and Rivera, 2008; Dasgupta,
2013). For example, Natalie Shook and Fazio (2008) found in a field study that
random assignment to a black roommate led white college students to have more
positive implicit attitudes toward black people.
Counterstereotype training mimics at least part of the mechanism that
drives the effects of intergroup contact:  repeated experiences that strengthen
counterstereotypical associations. In other words, the intervention reduces one
element of effective intergroup contact—​unlearning stereotypes—​down to a sim-
ple activity that psychologists can operationalize in the lab and agents can prac-
tice quickly over and over again. A perhaps unexpected yet apt analogy is found
in the advent of online poker. One of the central challenges aspiring poker experts
traditionally faced was the simple difficulty of seeing enough hands (i.e., logging
sufficient hours at the table to receive the incredible number of combinations of
cards and contexts that one faces when playing poker). At a live table, the game
goes slowly, so thousands of hours are required for extensive practice. Online poker
is much faster. It enables players to see at least ten times as many hands per hour.
Like counterstereotype training, online poker is not equivalent to live poker; there
are important differences, and live-​game pros and online pros have different sets of
skills. Similarly, pairing “women” with “math” isn’t the same as doing difficult math
problems in mixed-​gender groups. But the bottled versions of both of these activi-
ties are useful and effective nonetheless.
Kawakami and colleagues’ research nicely exemplifies how complex social atti-
tudes can be reshaped at least in part by the simplest of activities—​repeating the
associations and behaviors one wants. This is not reducible to an intentional stance
intervention. As I discussed in Chapter 2, the effects of something like rote prac-
tice on attitudes and behavior are indicative of the nature of one’s operative mental
states. Reflective states like belief paradigmatically change in the face of evidence
and reason; they do not paradigmatically change when one simply repeats words
or gestures. Indeed, why should repeatedly selecting traits not culturally associated
with a particular social group change whether one believes that that trait applies to
that group? That said, one must choose this kind of training, so it is not as if the
idea of the intentional stance is altogether irrelevant to rote practice interventions.8
Similarly, engaging in rote practice to improve one’s implicit attitudes involves tak-
ing the physical stance, but is not reducible to it. The repetition involved is in some
sense a way of treating oneself as if one were merely a physical thing, subject to
nothing but causal forces. One is simply pushing one’s implicit attitudes around.
And yet social contact, for instance, has salutary effects (in both its full and bot-
tled versions) only under conditions that mesh with other concerns of the agent,
such as being of equal status with others. Likewise, the emphasis on rote practice

8. But see Kawakami et al. (2007b) for a description of a subliminal counterstereotype training
technique, which does not require participants to choose to change their stereotypical associations.
Kawakami and colleagues find that participants sometimes resent counterstereotype training, and
thus they endeavored to design a less obtrusive intervention. It is noteworthy, though, that participants who resented counterstereotype training briefly showed ironic effects of the training (i.e., more stereotypic responding on subsequent tasks), but subsequently demonstrated less biased responding over time. It seems as if, once their resentment abated, they demonstrated the more
lasting effects of the intervention (i.e., less stereotypic responding).

in Confucian ethics is aimed at shaping the way that one perceives and acts upon
values. This is clearly irreducible to a physical stance ethics. And finally, while rote
practice certainly takes advantage of the ways in which we have evolved—​in par-
ticular the way in which our minds are akin to categorization machines that update
in the face of changing experiences (Gendler, 2011)—​rote practice is irreducible to
a design stance intervention too. The point of practicing a particular behavior is not
to fulfill some function we have acquired through evolution. Rather, it is to cultivate
better spontaneous responses to situations. Doing this is akin to cultivating better
habits, which involves intentional, physical, and design elements, but is reducible to
none of these.

1.2  Pre-​Commitment
In Chapter 5, I briefly mentioned Ulysses’ strategy of instructing his crew to bind
him to the mast of his ship so that he could resist the Sirens’ song. I called this a
form of pre-​commitment. Ulysses wanted to resist the song; he knew he wouldn’t
be able to resist it; so he made a plan that denied himself a choice when the moment
came. In one sense, this strategy clearly reflected Ulysses’ beliefs, values, and inten-
tions. If you wanted to predict that he would use this strategy, you would want to adopt
the intentional stance. In another sense, though, Ulysses took the physical stance
toward himself. He saw that he would be unable to resist the song on the basis of
his willpower. Instead, he relied on the physical force of the ropes and the mast to
keep him in place. If you wanted to predict whether Ulysses would in fact resist the
Sirens’ song, you would want to test the strength of the rope rather than the strength
of his resolve.
Ulysses’ technique was crude, literally relying on binding himself by force. More
subtle versions of self-​regulation via pre-​commitment have since been devised.
Principal among them—​and perhaps most intriguing for psychological reasons—​
are techniques that regulate one’s attention through pre-​commitment.
Consider implementation intentions. These are “if–​then” plans that appear to
be remarkably effective for redirecting attention in service of achieving a wide
range of goals. Implementation intentions specify a goal-​directed response that
an individual plans to perform when she encounters an anticipated cue. In the
usual experimental scenario, participants who hold a goal, “I want to X!” (e.g., “I
want to eat healthy!”) are asked to supplement their goal with a plan of the form,
“and if I  encounter opportunity Y, then I  will perform goal-​directed response
Z!” (e.g., “and if I get take-​out tonight, then I will order something with vegeta-
bles!”). Forming an implementation intention appears to improve self-​regulation
in (for example) dieting, exercising, recycling, restraining impulses, maintain-
ing focus (e.g., in sports), avoiding binge drinking, combating implicit biases,
and performing well on memory, arithmetic, and Stroop tasks (Gollwitzer and
Sheeran, 2006).
It is worth digging a little deeper into the pathways and processes that allow
if–​then planning to work, given how remarkably effective and remarkably simple
this form of self-​regulation appears to be.9 Two questions are salient in particular.
First, why think that implementation intentions are a form of self-​regulation via
pre-​commitment, in particular involving the redirecting of attention? Second, why
think that self-​regulation of this kind reflects the habit stance?
Peter Gollwitzer and Paschal Sheeran (2006) argue that adopting an imple-
mentation intention helps a person to become “perceptually ready” to encoun-
ter a situation that is critical to that person’s goals. Becoming perceptually ready
to encounter a critical cue is a way of regulating one’s attention. Perceptual readi-
ness has two principal features: it involves increasing the accessibility of cues rel-
evant to a particular goal and automatizing one’s behavioral response to those cues.
For example, performance on a shooter bias test—​where one’s goal is to “shoot”
all and only those individuals shown holding weapons in ambiguous pictures—​is
improved when one adopts the implementation intention “and if I see a gun, then
I will shoot!” (Mendoza et al., 2010). Here the relevant cue—“gun”—is made more
accessible, and the intended response—to shoot when one sees a gun—is made
more automatic. Gollwitzer and colleagues explain this cue–​response link in asso-
ciative terms:  “An implementation intention produces automaticity immediately
through the willful act of creating an association between the critical situation and
the goal-​directed response” (2008, 326). It is precisely this combination of willfully
acting automatically that makes implementation intentions fascinating.
I’ll say more about the automatization of behavioral response in a moment. It
seems that being “perceptually ready” is tied in particular to the relative accessi-
bility of goal-​relevant cues when one has adopted an implementation intention.
Researchers have demonstrated what they mean by cue accessibility using a num-
ber of tasks. Compared with controls, subjects who adopt if–​then plans have better
recall of specified cues (Achtziger et al., 2007); have faster reactions to cues (Aarts
et al., 1999); and are better at identifying and responding to cued words (Parks-​
Stamm et al., 2007). It appears that the critical cues specified and hence made more
accessible by adoption of implementation intentions can be either external (e.g., a
word, a place, or a time) or internal (e.g., a feeling of anxiety, disgust, or fear).
All of this suggests that adopting implementation intentions enhances the
salience of critical cues, whether they are internal or external to the subject. Cues,
in this context, become Features for agents. In some cases, salience is operation-
ally defined. In these cases, for a cue to be accessible just means for the subject to
respond to it quickly and/​or efficiently, regardless of whether the subject sees the
cue consciously or perceives it in any particular way. It is possible, though, that the

9. See Gollwitzer and Sheeran (2006) for review of average effect sizes of implementation intention interventions.

perceptual content of the cue changes for the subject, at the level of her percep-
tual experience. For example, if I am in a state—​due to having adopted an imple-
mentation intention—​where I am more likely to notice words that begin with the
letter “M,” I might literally see things differently from what I would have seen had
I merely adopted the goal of looking for words that begin with the letter “M.”10 In
both cases, implementation intentions appear to be a form of self-​regulation via the
pre-​planned redirecting of one’s attention.
Why think that this is a way of taking the habit stance? Without pre-​commitment
to the plan—​ which appears to represent an intentional stance ethics—​
implementation intentions can’t get off the ground. Indeed, implementation inten-
tions are tools to improve goal striving. Just as Ulysses consciously and intentionally
chose to bind himself to the mast, if–​then planners must consciously and intention-
ally choose to act in way Z when they encounter cue Y. But the analogy cuts both
ways. Ulysses had to follow through with literally binding himself. In an important
sense, he had to deny himself agency in the crucial moment. If–​then planning seems
to rely upon a related, but more subtle version of denying oneself choices and deci-
sions in the crucial moments of action. This is the automatization of behavioral
response. The idea is that if–​then planning makes it more likely that one just will
follow the plan, without evaluating, doubting, or (re)considering it. More generally,
this is the idea of pre-​commitment. It treats one’s future self as a thing to be man-
aged in the present, so that one has fewer choices later.11 One could imagine Alfred
benefiting from adopting a plan like “If I see ice cream bars at the grocery store, then
I’ll buy them!”12 Perhaps doing so would help him to avoid destroying his reasons
to be spontaneous by deliberating about those reasons.
We can find further reason to think that attention-​regulation via pre-​commit-
ment involves, but is not reducible to, the intentional stance by considering related
self-​regulation strategies that don’t work. Adding justificatory force to one’s imple-
mentation intentions appears to diminish their effectiveness. Frank Wieber and col-
leagues (2009) show that the behavioral response of subjects who were instructed
to adopt an if–​then plan that included a “why” component—​for example, “If
I am sitting in front of the TV, then I will eat fruit because I want to stay healthy!”
(emphasis added)—​returned to the level obtained in the goal-​intention-​only con-
dition. Being reminded of one’s reasons for striving for a goal seems to undermine
the effectiveness of adopting an implementation intention, in other words. Wieber

10. See Chapter 2 for discussion of different versions of this possibility, some of which involve cognitive penetration of perception and some of which don't.
11. See Bratman (1987) on how intentions can serve to foreclose deliberation.
12. As I discuss in §2, the dominant focus in research on self-regulation is on combating impulsivity. But there are those of us like Alfred who are not too myopic, but are "hyperopic" instead. We (I admit) struggle to indulge even when we have good reason to do so. For research on hyperopic decision-making, see Kivetz and Keinan (2006), Keinan and Kivetz (2011), and Brownstein (ms).

and colleagues found this using both an analytic reasoning study and a two-​week
weight-​loss study. It would be interesting to see if “if–​then–​why planning” has simi-
larly detrimental effects in other scenarios.
While implementation intentions clearly aren’t reducible to a physical stance
intervention, there is evidence that their effectiveness is tied to surprisingly low-​
level physical processes. Implementation intentions appear to have effects on the
very beginning stages of perceptual processing. This emerges from research on the
role of emotion in if–​then planning. Inge Schweiger Gallo and colleagues (2009)
have shown that implementation intentions can be used to control emotional reac-
tivity to spiders in phobic subjects. Schweiger Gallo and colleagues’ paper is the
only one I know of that offers a neurophysiological measure of emotion regulation
using implementation intentions. I mention this because they find that implemen-
tation intentions diminish negative emotional reactions to spiders very early in the
perceptual process, within 100 msec of the presentation of the cue. Compared with
controls and subjects in the goal-​only condition, subjects who adopted if–​then
plans that specified an "ignore" reaction to spiders showed lower activity in the P1
component, an early electrophysiological index of visual processing. This suggests that subjects are not merely controlling
their response to negative feelings; they are downregulating their fear itself, using
what James Gross (2002) calls an “antecedent-​based” self-​regulation strategy (ante-
cedent to more controlled processes).13
Adopting an implementation intention is an example of regulating one’s future
behavior by making it the case that one pays attention to one thing rather than
another or by changing how one sees things. While this involves both intentional
and physical stance elements, as I’ve argued, one could also see this kind of self-​
regulation strategy as reflective of the design stance. Pre-​commitment involves
“hacking” one’s future behavior. It takes advantage of the sort of creatures that we
are, in particular that we can forecast how we will and won’t behave in the future;

13. This example is unlike many better-known cases in which implementation intentions have
powerful effects on behavior, cases in which the self-​regulation of emotion does not appear to drive
the action. Cases in which if–​then plans promote the identification of words that start with the letter
“M,” for example, or performance on Stroop tasks or IATs, are not cases in which negative emotions
are explicitly restrained. In these cases, there is no obvious emotional content driving the subject’s
response to the cue. But as I discuss in Chapter 2, there is the possibility that low-​level affect plays a
pervasive role in visual processing. It is possible, then, that cues specified by implementation inten-
tions come to be relatively more imperatival for agents. In cases where one’s goal is to block emotional
reactions, as in spider phobia, the imperatival quality of goal-​relevant cues may be diminished. In other
cases, when one’s goal is to react to “M” words, for example, the imperatival quality of the goal-​relevant
cue may be increased. It is worth noting that leading implementation intention researchers Thomas
Webb and Paschal Sheeran (2008) make room for something like what I’m suggesting. They note that
it is an open question whether “forming an if–​then plan could influence how much positive affect is
attached to specified cues and responses, or to the cue–​response link, which in-​turn could enhance the
motivational impact of these variables” (2008, 389).

that we are “set up to be set off ” (see Chapter 4) by particular kinds of stimuli (e.g.,
sugar, threats, disgusting things); and that how likely we are to achieve our goals is
perhaps surprisingly susceptible to minor variations in the form of our plans (e.g.,
implementation intention researchers insist that including the exclamation point at
the end of if–​then plans matters—​that is, “if Y, then Z!” rather than “if Y, then Z”).14
As before, in the case of rote practice, the idea of the habit stance captures the
nature of these kinds of interventions better than any one of the intentional, physi-
cal, or design stances alone. Gollwitzer has described implementation intentions as
“instant habits,” and although the “instant” part is a bit tongue in cheek, it does not
seem far off. Habits are, ideally, expressions of our goals and values, but not just this.
They are useful precisely because they leave our goals and values behind at the cru-
cial moment, just as Ulysses left his autonomy behind by tying himself to the mast.
One cannot see why they work in this way without adopting the physical and design
stances too, without understanding, in the case of implementation intentions, how
early perceptual processing works and how we have evolved to respond to particular
cues in particular ways. For these reasons, self-​regulating our implicit attitudes via
pre-​commitment is best understood as an ethical technique born of treating our-
selves as if we were habit-​driven creatures.

1.3  Context Regulation
Melissa Bateson and colleagues (2006) show that a subtle cue suggesting that one
is being watched increases cooperative behavior.15 This “Watching Eyes Effect” is
striking. Simply placing a picture of a pair of eyes near an “honor box” used to col-
lect money for communal coffee in a university break room appears to cause people
to leave three times more money than they otherwise would. It is even possible
that these generous coffee drinkers are affected by the watching eyes without con-
sciously noticing them.
Behavioral economists have devised a number of analogous techniques for
increasing prosocial behavior—​tipping, honesty, charity, friendliness, and so on
(e.g., Ariely, 2011). These strategies are ways of changing small but apparently con-
sequential features of the environment. They have been expanded upon and even
promoted into public policy in the form of “nudge” technologies. New York City’s
proposed ban on large sodas offers a vivid example. This policy builds upon now classic psy-
chological research, such as the “Bottomless Bowl” experiment, which showed
that people consume far more soup when they are unknowingly eating from a self-​
refilling bowl (Wansink et al., 2005). The core idea of the soda ban is that, while
putting no limit on how much soda people can drink (since they are free to order as

14. T. Webb (pers. com.).
15. See also Haley and Fessler (2005) for similar results on a lab-based task.

many sodas as they want), the policy encourages more moderate consumption by
removing the cue triggering overconsumption (i.e., the large cup). Richard Thaler
and Cass Sunstein (2008) describe policies like these as efforts to engineer the
“choice architecture” of social life. Such policies, as well as default plans for things
like retirement saving and organ donation, nudge people toward better behavior
through particular arrangements of the ambient environment.
Nudge policies are politically controversial, since they raise questions about
autonomy and perfectionism, but they are psychologically fascinating.16 The partic-
ular variables that moderate their effectiveness—​or combinations of variables—​are
revealing. For example, David Neal and colleagues (2011) have shown that people
who habitually eat popcorn at the movies will eat stale unappetizing popcorn even
if they’re not hungry, but only when the right context cues obtain. People eat less
out of habit if they are not in the right place—​for example, in a meeting room rather
than at a cinema—​or if they cannot eat in their habitual way—​for example, if they
are forced to eat with their nondominant hand. Identifying the context cues rel-
evant to different domains of self-​regulation is crucial.
Ideas like these, of manipulating subtle elements of the context, have been
used to promote the self-​regulation of implicit attitudes. Gawronski and Cesario
(2013) argue that implicit attitudes are subject to “renewal effects,” which is a term
used in the animal learning literature to describe the recurrence of an original behav-
ioral response after the learning of a new response. As Gawronski and Cesario
explain, renewal effects usually occur in contexts other than the one in which the
new response was learned. For example, a rat with a conditioned fear response to
a sound may have learned to associate the sound with an electric shock in context
A (e.g., its cage). Imagine then that the fear response is counterconditioned in con-
text B (e.g., a different cage). An “ABA renewal” effect occurs if, after the rat is placed
back in A (its original cage), the fear response returns. Gawronski and Cesario argue
that implicit attitudes are subject to renewal effects like these.
For example, a person might learn biased associations while hanging out with her
friends (context A), effectively learn counterstereotypical associations while tak-
ing part in a psychology experiment (context B), then exhibit behaviors consistent
with biased associations when back with her friends. Gawronski and Cesario dis-
cuss several studies (in particular, Rydell and Gawronski, 2009; Gawronski et al.,
2010) that demonstrate in controlled laboratory settings renewal effects like these
in implicit attitudes. The basic method of these studies involves an impression for-
mation task in which participants are first presented with valenced information
about a target individual who is pictured against a background of a particular color.
This represents context A. Then, the same target individual is presented with oppo-
sitely valenced information against a background of a different color, representing

16. See Waldron (2014); Kumar (2016a).

context B. Participants’ evaluations of the target are then assessed using an affec-
tive priming task in which the target is presented against the color of context A. In
this ABA pattern, participants’ evaluations reflect what they learned in context
A. Gawronski and Cesario report similar renewal effects (i.e., evaluations consist­
ent with the valence of the information that was presented first) in the patterns
AAB (where the original information and new information are presented against
the same background color A, and evaluations are measured against a novel back-
ground B) and ABC (where original information, new information, and evaluation
all take place against different backgrounds).
While the literature Gawronski and Cesario discuss emphasizes the return of
undesirable, stereotype-​consistent attitudes in ABA, AAB, and ABC patterns, they
also discuss patterns of context change in which participants’ ultimate evaluations
of targets reflect the counterconditioning information they learned in the second
block of training. These are the ABB and AAA patterns. The practical upshot of
this, Gawronski and Cesario suggest, is that one ought to learn counterstereotyp-
ing information in the same context in which one aims to be unbiased (ABB and
AAA renewal).17 And what counts as the “same” context can be empirically speci-
fied. Renewal effects appear to be more responsive to the perceptual similarity
of contexts than they are to conceptual identity or equivalence. So it is better, for
example, to learn counterstereotyping information in contexts that look like one’s
familiar environs than it is to learn them in contexts that one recognizes to be con-
ceptually equivalent. It may matter less that a de-​biasing intervention aimed at class-
room interactions is administered in another “classroom,” for example, than that it
is administered in another room that is painted the same color as one’s usual class-
room. Finally, Gawronski and Cesario suggest that if it is not possible to learn coun-
terstereotyping information in the same or similar context in which one aims to be
unbiased, one ought to learn counterstereotyping information across a variety of
contexts. This is because both ABA and ABC renewal are weaker when counterattitu-
dinal information is presented across a variety of contexts rather than just one. The
reason for this, they speculate, is that fewer contextual cues are incorporated into
the agent’s representation of the counterattitudinal information when the B context
is varied. A greater variety of contexts signals to the agent that the counterattitudinal
information generalizes to novel contexts.

17. A common critique of implicit bias interventions is that they are too focused on changing
individuals’ minds, rather than on changing institutions, political arrangements, unjust economics,
etc. (cf. Huebner, 2009; Haslanger, 2015). Sometimes this critique presents social activism and self-​
reformation as mutually exclusive, when in fact they are often compatible. The idea of learning coun-
terstereotypical information in the same context in which one hopes to act in unbiased ways represents
a small, but very practical way of combining “individual” and “institutional” efforts for change. The
relationship between individuals and institutions is reciprocal. Changing the context affects what’s in
our minds; and what’s in our minds shapes our willingness to change the context.

These studies suggest that minor features of agents’ context—​like the back-
ground color against which an impression of a person is formed—​can influ-
ence the activation of implicit attitudes, even after those attitudes have been
“unlearned.” These are striking results, but they are consistent with a broad array
of findings about the influence of context and situation on the activation and
expression in behavior of implicit attitudes. Context is, of course, a broad notion.
Imagine taking an IAT. The background color against which the images of the
target subjects are presented is part of the context. Perhaps, before the subject takes
the IAT, the experimenter asks her to imagine herself as a manager at a
large company deciding whom to hire. Having imagined oneself in this powerful
social role will have effects on one’s implicit evaluations (as discussed later), and
these effects can be thought of as part of one’s context too. Similarly, perhaps one
had a fight with one’s friend before entering the lab and began the task feeling an
acute sense of disrespect, or was hungry and jittery from having drunk too much
caffeine, and so on.
As I use the term, “context” can refer to any stimulus that moderates the way
an agent evaluates or responds behaviorally to a separate conditioned stimulus.
Anything that acts, in other words, as what theorists of animal learning call an
“occasion setter” can count as an element of context.18 A standard example of an
occasion setter is, again, an animal’s cage. A rat may demonstrate a conditioned
fear response to a sound when in its cage, but not when in a novel environment.
Similarly, a person may feel or express biased attitudes toward members of a
social group only when in a particular physical setting, when playing a certain
social role, when feeling a certain way, or the like. In Chapter 3, I discussed physi-
cal, conceptual, and motivational context cues that shape the way implicit atti-
tudes unfold. These involved the relative lightness or darkness of the ambient
environment (Schaller et  al., 2003); varying the category membership of tar-
gets on tests of implicit attitudes (Mitchell et  al., 2003; Barden et  al., 2004);
and inducing salient emotions (Dasgupta et al., 2009). Each of these kinds of
findings presents a habit stance opportunity for the cultivation of better implicit
attitudes.
As I mentioned previously, research suggests that one should learn counterste-
reotypical information in a context that is physically similar to the one in which one
aims to be unbiased. Changing the physical context in this way is a way of treating
our implicit attitudes as if they were habits. It stems from our long-​term reflective
goals; it operates on us as physical entities, whose behaviors are strongly affected by
seemingly irrelevant features of our situation; and it is design-​focused, akin to the
“urban planning” of our minds.

18. I am indebted to Gawronski and Cesario (2013) for the idea that context acts as an occasion setter. On occasion setting and animal learning, see Schmajuk and Holland (1998).

Interventions that focus on conceptual elements of context are similar. For exam-
ple, research suggests that perceiving oneself as high-​status contributes to more
biased implicit attitudes. Ana Guinote and colleagues (2010) led participants in a
high-​power condition to believe that their opinions would have an impact on the
decisions of their school’s “Executive Committee,” and these participants showed
more racial bias on both the IAT and Affect Misattribution Procedure (AMP) than
those in a low-​power condition, who were led to believe that their opinions would
not affect the committee’s decisions. These data should not be surprising. It is not
hard to imagine a person who treats her colleagues fairly regardless of race, for exam-
ple, but (unwittingly) grades her students unfairly on the basis of race. Perhaps being
in the superordinate position of professor activates this person’s prejudices while
being in an equal-​status position with her colleagues does not. To combat these
tendencies, one could devise simple strategies to diminish the salience of superordi-
nate status. Perhaps the chair of the department could occupy a regular office rather
than the fancy corner suite. Or perhaps she could adopt a relevant implementation
intention, such as “When I feel powerful, I will think, ‘Be fair!’ ” These also look to
be habit stance interventions, irreducible to Dennett’s more familiar stances, but
instead a hybrid of all three.
So, too, with motivational elements of context. In addition to salient emotions
like disgust, “internal” versus “external” forms of motivation selectively affect
implicit attitudes. People who aim to be unbiased because it’s personally impor-
tant to them demonstrate lower levels of implicit bias than people who want to
appear unbiased because they are concerned to act in accord with social norms
(Plant and Devine, 1998; Devine et  al., 2002; for a similar measure see Glaser
and Knowles, 2008). In the broader context of self-​regulation, Marin Milyavskaya
and colleagues (2015) find that people with intrinsic (or “want-​to”) goals—​such
as the desire to eat healthy food in order to live a long life—​have lower levels of
spontaneous attraction to goal-​disruptive stimuli (such as images of unhealthy
food) than do people with external (or “have-​to”) goals—​such as the desire to
eat healthy food in order to appease one's spouse. Spontaneous attraction in this
research was measured with both the IAT and the AMP. Milyavskaya and colleagues
found that intrinsic, want-​to motivation is responsible for the effect. External or
have-​to motivation is unrelated to one’s automatic reactions to disruptive stimuli.
This is important because it is suggestive of the kinds of interventions for implicit
attitude change that are likely to be effective. It seems that we should try to adopt
internal kinds of motivation. While this is far from surprising to common sense,
it speaks to the nature of the habit stance. Internal forms of motivation seem hard
to simply adopt in the way that one can simply choose to adopt a goal. Instead,
internal motivation must itself be cultivated in the way of habit creation, much
as knowing how and when to deliberate is itself in part a nondeliberative skill
(Chapter 6).

2. Objections
One worry about context-​regulation strategies in particular is that they seem to be
very fine-​grained. It seems as if there are a million different situational cues that
would need to be arranged in such-​and-​such a way as to shape and activate the
implicit attitudes we want. Moreover, it’s possible that different combinations of
different situations will lead to unpredictable outcomes. I think the only thing to do
in the face of these worries is await further data. Much more research is needed in
order to home in on the most powerful contextual motivators, effective combina-
tions of situational cues, and so on. This point applies in general to the other habit
stance interventions I’ve recommended, all of which are provisionally supported by
the extant data. I don’t think this implies waiting to employ these strategies, though,
since the extant data are the best data we’ve got. Rather, it simply means keeping our
confidence appropriately modest.
There are broader worries about the habit stance techniques I’ve discussed, how-
ever. Some of these are also empirical (§2.1). Are these techniques broadly applica-
ble, such that what works in the lab works across a range of real-​life situations? And
are the effects of these techniques durable? Will they last, or will agents’ unwanted
implicit attitudes return once they are bombarded with the images, temptations,
and solicitations of their familiar world? Another worry is not empirical (§2.2). It is
that these self-​regulation techniques do not produce genuinely ethical, praisewor-
thy action. Perhaps they are instead merely tricks for acting in ways that agents with
good ethical values already want to act. Are they perhaps a kind of cheating, akin to
taking performance-​enhancing drugs for ethical action?

2.1  Durability and Generality
Some have argued that techniques like those I’ve described are unlikely to be effec-
tive outside of psychology laboratories (e.g., Alcoff, 2010; Anderson, 2012; Devine
et al., 2012; Huebner, 2016; Mandelbaum, 2015b; Mendoza et al., 2010; Stewart
and Payne, 2008; Olson and Fazio, 2006; Wennekers, 2013).19 There are several
reasons for this. One is a “bombardment” worry: once agents leave the lab, they will
be bombarded by stereotypical images and temptations, and these will eventually
overwhelm or reverse any changes in their implicit attitudes. This amounts to a con-
cern with the durability of the effects of habit stance techniques. Perhaps because of
bombardment, but perhaps also for other reasons, the effects of these interventions
will simply be too weak to make a meaningful difference in the real world. Another
reason critics have given has to do with generality: perhaps implicit attitudes toward
members of particular groups can be retrained, the worry goes, but there are many

19. See Madva (2017) for discussion.

socially stigmatized groups in the world, and the goal of any substantive ethics of
implicit attitudes should be to create broad dispositional ethical competencies. The
idea here is that these techniques seem to have too narrow a scope.
Much of this criticism can be adjudicated only empirically. How long-​lasting the
effects of the kinds of attitude-​change techniques I’ve discussed can be is an open
question. There is some reason for optimism. Reinout Wiers and colleagues (2011;
replicated by Eberl et al., 2012) significantly diminished heavy drinkers’ approach
bias for alcohol, as measured by the IAT, one year after lab-​based avoidance train-
ing using a joystick-​pulling task akin to those described earlier in Kawakami and
colleagues’ (2007b) research (§1.1). Dasgupta and Asgari (2004) found that the
implicit gender biases of undergraduate students at an all-​women’s college were
reduced over the course of one year and that this reduction in implicit bias was
mediated by the number of courses taken from female (i.e., counterstereotypical)
professors (for a related study, see Beaman et  al., 2009). Moreover, adopting an
implementation intention has been shown to affect indirect measures of attitudes
three months past intervention (Webb et al., 2010). And, finally, Patricia Devine
and colleagues (2012) demonstrated dramatically reduced IAT scores in subjects
in a twelve-​week longitudinal study using a combination of the aforementioned
interventions.20
Despite these promising results, however, none of these studies have investigated
the durability of interventions as an isolated independent variable in an extended
longitudinal study.21 Doing so is the next step in testing the durability of interven-
tions like these. A helpful distinction to take into account for future research may be
between those interventions designed to change the associations underlying one’s
implicit attitudes and those interventions designed to heighten agents’ control over
the activation of those associations.22 Originally, many prejudice-​reduction interven-
tions were association-​based (i.e., aimed at changing the underlying associations).

20. However, see Forscher et al. (2016) for a comparatively sobering meta-analysis of the stability of
change in implicit attitudes after interventions. See the Appendix for brief discussion.
21. The exception is Webb et al. (2010). See also Calvin Lai et al. (2016), who found that nine rapid
interventions that produced immediate changes in IAT scores had no significant lasting effects on IAT
scores several hours to several days later. This finding suggests that a magic bullet—​i.e., a rapid and
simple intervention—​for long-​term prejudice reduction is unlikely to exist. But see later in the text for
discussion about more complex multipronged approaches to implicit attitude change, which I believe
are more likely to create lasting change, particularly if they involve repeated practice of interventions
within daily life settings.
22. I draw this distinction from Lai et al. (2013), although I note that the distinction is in some
respects blurry. Identifying changes in underlying association is tricky because agents’ control over
activated associations influences indirect measures like the IAT. Process-​dissociation techniques like
the Quad Model (Conrey et al., 2005) distinguish the influence of activated associations and control
over them, but these techniques are based on statistical assumptions that are relative rather than abso-
lute. Nevertheless, the distinction is useful for roughly categorizing kinds of interventions based on
their presumed mechanisms for affecting behavior.

Counterstereotype training, for example, is designed to change implicit attitudes
themselves. So, too, are evaluative conditioning (Olson and Fazio, 2006) and increas-
ing individuals’ exposure to images, film clips, and mental imagery depicting mem-
bers of stigmatized groups acting in stereotype-​discordant ways (Dasgupta and
Greenwald, 2001; Wittenbrink et al., 2001; Blair et al., 2002). Control-​based tech-
niques are different in that they are alleged to prevent the expression of implicit atti-
tudes without necessarily altering the underlying associations. As Saaid Mendoza
and colleagues (2010, 513) put it: “Given recent research demonstrating that goal-​
driven responses can be engaged spontaneously with little deliberation (Amodio
and Devine, 2010), it appears fruitful to also consider interventions that target the
engagement of control, in addition to focusing on changing automatic associations.”
Control-​based strategies have been buoyed by research showing that volitional con-
trol can be automatic, for example, through chronic goal-​setting (Moskowitz et al.,
1999)  and nondeliberative goal pursuit (Amodio et  al., 2008). Implementation
intentions are thought to be control-​based in this sense. Process dissociation analy-
sis—​a statistical technique for pulling apart the various psychological contributions
to performance on measures like the IAT23—​suggests that implementation inten-
tions enhance controlled processing but do not affect automatic stereotyping itself
(Mendoza et  al., 2010). Similarly, according to the Self-​Regulation of Prejudice
(SRP) model (Monteith, 1993; Monteith et al., 2002), individuals who are moti-
vated to control their prejudiced responses can create “cues for control” so that they
learn to notice their prejudiced responses and the affective discomfort caused by
the inconsistency of those responses with their egalitarian goals. The SRP is by defi-
nition a control-​based strategy; it requires either “top-​down” deliberative conflict
monitoring or “bottom-​up” automated conflict monitoring between intact preju-
diced associations and egalitarian intentions.
This distinction between association-based and control-based interventions may be
relevant given that one or the other of these kinds of techniques may prove more
durable. One possibility is that, in absolute terms (i.e., regardless of attitude or behav-
ioral measure), control-​based strategies will have longer-​lasting effects on indirect
measures of attitudes as well as on behavior, compared with association-​based inter-
ventions. Control-​based strategies offer individuals ways to implement the goals that
many people possess already, many of which are inherently long-​term (e.g., getting
tenure, eating healthy) and are made more accessible through intervention (Webb
and Sheeran, 2008). This prediction is in keeping with extant data on the durabil-
ity of interventions based on implementation intentions (e.g., Webb et  al., 2010).
By contrast, association-​based strategies may be overridden by renewed exposure
to stereotype-​consistent images in daily life. A  rival hypothesis, however, is that
control-​based strategies will be less durable in absolute terms than association-​based

23. See the discussion of the Quad Model (Conrey et al., 2005) in Chapter 6.

strategies because control-​based strategies rely on individuals being motivated to
implement them (not to mention being in conditions favorable to self-​control, such
as being fed and well-​slept). For example, Devine and colleagues (2012) found that
individuals who report being more concerned about discrimination are more likely
to use intervention strategies and show greater reduction in implicit bias over time.
This finding suggests that the durability of control-​based strategies may be limited,
because individuals’ motivation to be unbiased can be temporary or inconsistent
(e.g., because the goal to act unbiased can become deactivated after individuals sat-
isfy it; Monin and Miller, 2001). Association-​based interventions might therefore
be more durable than control-​based interventions if the effectiveness of the former
does not depend on the sustained activation of individuals' goals.
While the evidence that habit stance techniques can have lasting effects on implicit attitudes is nascent but promising, it is not yet clear how broad these
effects can be. Implementation intentions, for example, seem to be most effective
when they are very specific. One must commit oneself to noticing a particular cue—​
for example, “If I see a gun!”—​and enacting a specific behavioral response—​“then
I will shoot!” But, of course, there are myriad other situations in which a relevant
association—​in this case, a black–​danger association—​can manifest. One wants
not just to perform better on a shooter bias test or to shoot more accurately if one
were a police officer. If on a jury, one wants to deliberate without being influenced
by the Afrocentricity of the defendant’s facial features (Eberhardt et  al., 2006);
while walking down the street, one wants to display friendly, not aggressive or
defensive, micro-​behavior toward black people (Dovidio et al., 2002); when inter-
acting with children, one wants to avoid perceiving black boys as older and more
prone to criminality than white boys (Goff, 2014). An “If I see a gun, then I will
shoot!” plan is unlikely to improve one’s behavior in all (or any) of these situations.
As Stewart and Payne (2008, 1334) write, “The social import of stereotypes is that
they draw sweeping generalizations based on broad social categories. The most use-
ful antistereotyping strategy would be one whose remedy is equally sweeping, and
applies to the entire category.”
As with the worry about durability, it is an empirical question how broad the
effects of habit stance interventions can be. In some cases, testing this is straight-
forward. In the case of implementation intentions, for example, one can look to see
just how broad a formulation might work (e.g., “If she is talking, then I won’t!”).24 It
is also important to be reminded of how new this research is. Researchers initially
believed that implicit attitudes were relatively static and difficult to change (Bargh,
1999; Devine, 1989; Wilson et al., 2000), and it has been only since around the
year 2000—​with some exceptions—​that significant research effort has been put
into investigating interventions. So it is reasonable to hope that big breakthroughs

24. Thanks to Louise Antony for this example.

are still to come. Moreover, there is already some evidence that the effects of some
habit stance techniques may be broader than one might initially expect. For exam-
ple, Gawronski and colleagues (2008) found that counterstereotype race training
(specifically, training participants to affirm counterstereotypes—​e.g., “intelligent”
and “wealthy”—​about black people) reduced their implicit racial prejudice. That
is, learning to affirm specific counterstereotypes led participants to implicitly like
black people more. Liking is far broader than stereotyping. When we like or dislike
a group, we act in many specific ways consistent with this attitude. What this sug-
gests is that targeting specific implicit attitudes in interventions may affect the suite
of attitudes to which they are related.
We can find additional evidence for the potential broadness of these kinds of
techniques by looking again at the difference between association-based and control-
based interventions. If a person is generally less likely to automatically stereotype
people on the basis of race, this should have more widespread effects on behavior
across a range of contexts, even when the person lacks the motivation or oppor-
tunity to control it. As such, it is possible that association-based interventions will have
greater effects on a wider range of behaviors than control-based inter-
ventions (Kawakami et al., 2000). On the other hand, control-​based interventions
may have larger effects on “niche-​specific” behavior.
Ultimately, durable and broadly effective strategies for changing and regulat-
ing implicit attitudes will likely take a multipronged approach. This is especially
likely as circumstances change or as one moves into new environments, where the
demands of the situation require recalibrating one’s associations. When slower
court surfaces were introduced in professional tennis, for example, players presum-
ably had to exert top-​down control at first in order to begin the process of learn-
ing new habits. But this was also presumably accompanied by association-​based
retraining, such as the rote repetition of backcourt shot sequences (rather than
net-​attacking sequences). Indeed, Devine and colleagues (2012) label their multi-
pronged approach to long-​term implicit attitude change a “prejudice habit break-
ing intervention.” The next step is to test the extent to which specific interventions
work and then to test how they might be combined. Combining various interven-
tions might be mutually reinforcing in a way that makes their combination stronger
than the sum of the parts, perhaps by “getting the ball rolling” on self-​regulation.
Relatedly, future research may (and should) consider whether different forms of
presentation of interventions affect participants’ use of these interventions outside
the laboratory. Does it matter, for instance, if an intervention is described to partici-
pants as a form of control, change, self-​control, self-​improvement, or the like? Will
people be less motivated to use self-​regulatory tools that are described as “chang-
ing attitudes” because of privacy concerns or fears about mental manipulation, or
simply because of perceptions of the difficulty of changing their attitudes? Will
interventions that appeal to one’s sense of autonomy be more successful? These
open questions point to the possibility of fruitful discoveries to come, some of
which will hopefully alleviate current reasonable worries about habit stance tech-
niques for implicit attitude change.

2.2 Cheating
A different kind of broadly “Kantian” worry focuses on the nature of “moral credit”
and virtue.25 There are (at least) two different (albeit related) ways to understand this
worry. I have discussed versions of both worries earlier, in particular in Chapters 4
and 5. The discussion here focuses on how these worries apply to my account of the
habit stance.
One version of the worry is that actions based on “mere” inclinations can be
disastrous because they aren’t guided by moral sensibilities. As a result, an approach
to the self-​regulation of implicit attitudes focused on habit change, without a spe-
cific focus on improving moral sensibilities, would be limited and perhaps even
counterproductive. Barbara Herman (1981, 364–​365) explicates the Kantian con-
cern about “sympathy” unguided by morality in this way. She imagines a person
full of sympathy for others who sees someone struggling, late at night, with a heavy
package at the back door of the Museum of Fine Arts. If such a person acts only on
the basis of an immediate inclination to help others out, then she might end up abet-
ting an art thief. Herman’s point is that even “good” inclinations can lead to terrible
outcomes if they’re not guided by a genuinely moral sense. (This way of putting
the worry might be too consequentialist for some Kantians.) She puts the point
strongly: “We need not pursue the example to see its point: the class of actions that
follow from the inclination to help others is not a subset of the class of right or duti-
ful actions” (1981, 365).
The second way to understand this worry focuses on what makes having a moral
sense praiseworthy. Some argue that mere inclinations cannot, by definition, be
morally praiseworthy, because they are unguided by moral reasons. The key distinc-
tion is sometimes drawn in terms of the difference between acting “from virtue”
and acting “in accord with virtue.” A person who acts from virtue acts for the right
(moral) reasons. Contemporary theorists draw related distinctions. For example,
Levy (2014) argues that acting from virtue requires one to be aware of the mor-
ally relevant features of one’s action (see Chapter 5). Acting from virtue, in other
words, can be thought of as acting for moral reasons that one recognizes to be moral
reasons.
Both versions of what I’m calling the broadly “Kantian” worry seem to apply to
the idea of the habit stance. Perhaps even with the most ethical implicit attitudes, a
person will be no better off than Herman’s sympathetic person. After all, even the

25. I place "Kantian" in scare quotes in order to emphasize that the following ideas may or may not
reflect the actual ideas of Immanuel Kant.

most ethical implicit attitudes will give rise to spontaneous reactions, and these
reactions seem tied to the contexts in which they were learned. Their domain of
skillful and appropriate deployment might be worryingly narrow, then. Moreover,
one might worry that adopting the habit stance does not require one to act from
virtue. There is little reason to think that rote practice, pre-​commitment, and con-
text regulation put one in a position of acting for moral reasons that one recognizes
to be moral. Nor do these strategies seem poised to boost one’s awareness of any
particular moral reasons for action.
Regarding the first worry, I think things are considerably more complex than
Herman’s example ostensibly illustrating the pitfalls of misguided inclination
suggests. Her example of the sympathetic person who stupidly abets an art thief
is a caricature. In reality, most people’s sympathetic inclinations are not blunt,
one-​size-​fits-​all responses to the world. Rather, most people’s “mere” inclinations
themselves can home in on key cues that determine whether this is a situation
requiring giving the person in need a hand or giving the police a call. Indeed, if
anything, it strikes me that the greatest difficulty is not usually in deciding, on the
basis of moral sense, whether to help or hinder what another person is trying to
do, although, of course, situations like this sometimes arise. What’s far more dif-
ficult is simply taking action when it’s morally required. Adopting the habit stance
in order to cultivate better spontaneous responses is directly suited to solving this
problem.
Of course, this does not mean that our implicit attitudes always get it right (as
I’ve been at pains to show throughout this book). As Railton puts it:

Statistical learning and empathy are not magic—​like perception, they can
afford only prima facie, defeasible epistemic justification. And—​again,
like perception itself—​they are subject to capacity limitations and can
be only as informative as the native sensitivities, experiential history, and
acquired categories or concepts they can bring to bear. If these are impov-
erished, unrepresentative, or biased, so will be our statistical and empathic
responses. Discrepancy-​reduction learning is good at reducing the effects
of initial biases through experience (“washing out priors”), but not if the
experiences themselves are biased in the same ways—​as, arguably, is often
the case in social prejudice. It therefore is of the first importance to episte-
mology and morality that we are beings who can critically scrutinize our
intuitive responses. (2014, 846)

But, like me, Railton defends a view of inclination according to which our spon-
taneous responses are not “dumb” one-​size-​fits-​all reflexes. His view is elabo-
rated in terms of the epistemic worth of intuition and focuses on identifying the
circumstances in which our unreasoned intuitions are and are not likely to be
reliable.

My defense of the practical intelligence that can be embodied in intuition is
not meant to encourage uncritical acceptance of “intuitions.” Rather, it seeks
to identify actual processes of learning and representation that would enable
us to see more clearly where our intuitions might be expected to be more reli-
able, but also where they might be expected to be less. (2014, 846)

One way to understand the aim of this chapter is in terms of identifying some key con-
ditions under which our implicit attitudes can be expected to be reliable in this sense.
A related response to the first version of the worry draws inspiration from the
recent debate over “situationism” and virtue ethics. In short, virtue ethicists argue that
morality stems from the cultivation of virtuous and broad-​based character traits (e.g.,
Hursthouse, 1999). But critics have argued that virtuous and broad-​based character
traits do not exist (Harman, 1999; Doris, 2002). The reason for this, the critics argue,
is that human behavior is pervasively susceptible to the effects of surprising features of
the situations in which we find ourselves. The situationist critics argue that exposure
to seemingly trivial things—​like whether one happens to luckily find a dime (Isen and
Levin, 1972) or whether one happens to be in a hurry (Darley and Batson, 1973)—​
is much better at predicting a person’s behavior than are her putative character traits.
Supposedly mean people will act nicely if they find a lucky dime, and supposedly nice
people will act meanly if they’re in a rush. In other words, people don’t typically have
virtuous and broad-​based character traits. Rather, their behavior is largely a reflection
of the situation in which they find themselves. Of course, defenders of virtue ethics
have responded in a number of ways (e.g., Kamtekar, 2004; Annas, 2008).
The similarity between the situationist debate and the first “Kantian” worry
about habit stance techniques for implicit attitude change is this: in both cases, crit-
ics worry that there is no “there” there. In the case of situationism, critics argue that
the broad-​based, multitrack character traits at the heart of virtue ethics don’t really
exist. If these don’t exist, then human behavior seems to be like grass blowing in the
wind, lacking the core thing that virtue ethicists purport guides right action. The
worry about mere inclination is analogous. Without online guidance of behavior
by some overarching, reflective, moral sense, we will similarly be left rudderless,
again like grass blowing in the wind. This is what we are to take away from Herman’s
example. The sympathetic person has good motives but is essentially rudderless,
bluntly doling out empathy everywhere, even where it’s inappropriate.
In the case of situationism, the fact is, though, that character traits do exist. And
this is instructive about implicit attitudes. There is a well-​established science of
character. It identifies five central traits, which appear to be culturally universal and
relatively stable across the life span (John and Srivastava, 1999): openness, consci-
entiousness, extroversion, agreeableness, and neuroticism.26 The core of the debate
about situationism and virtue ethics is not, therefore, whether broad-​based traits

26. For a review of the current science of character and personality, see Fleeson et al. (2015).

exist. Rather, the core of the debate is (or should be) about whether the broad-​
based traits that do exist belong on any list of virtues (Prinz, 2009). The Big Five are
not virtues per se. They are dispositions to take risks, be outgoing, feel anxious, and
so on. Crucially, they can cut both ways when it comes to ethics. Anxiety can lead
one to be extremely cowardly or extremely considerate. Even “conscientiousness,”
which sounds virtuous, is really just a disposition to be self-​disciplined, organized,
and tidy. And certainly some of history’s most vicious people have been conscien-
tious in this sense. That said, the Big Five are not just irrelevant to virtue. Virtues
may be built out of specific combinations of character traits, deployed under the
right circumstances. Research may find matrices of traits that line up closely with
particular virtues. Similarly, what some call a moral sense might be constructed out
of the small shifts in attitudes engendered by habit stance techniques. For example,
practicing approach-​oriented behavior toward people stereotyped as violent, using
Kawakami-​style training, is not, just as such, virtuous. Herman might imagine a
white person approaching a black man who clearly wants to be left alone; that is,
an approach intuition unguided by moral sense. But practicing approach-​oriented
behavior in the right way, over time, such that the learning mechanisms embed-
ded in one’s implicit attitudes can incorporate feedback stemming from successes
and failures, is not likely to generate this result. Rather, it is more likely to result in
a spontaneous social sensibility for when and how to approach just those people
whom it is appropriate to approach.

26 For a review of the current science of character and personality, see Fleeson et al. (2015).
Regarding the second version of the worry—​focused on the importance of act-
ing from virtue—​it is important to remember that we typically have very little grasp
of the competencies embedded in our implicit attitudes. The view that our implicit attitudes are praiseworthy only when we are aware of the morally relevant features of our actions therefore sets an extremely high bar. It is also worth recalling the
claim of Chapter  4, that implicit attitudes are reflective of agents’ character. The
ethical quality of these attitudes, then, reflects on us too. Perhaps this can go some
way toward assuaging the worry that we don’t act for explicit moral reasons when
we reshape and act upon our implicit attitudes.
More broadly, I think it is misguided to worry that acting well is the kind of thing
that is susceptible to cheating, at least in the context of considering moral credit
and the habit stance. It strikes me that regulating one’s implicit attitudes is more like
striving for happiness than it is like pursuing full-​blown morality in this sense. And
happiness, it seems to me, is less susceptible to intuitions about cheating than is
morality. As Arpaly (2015) put it in a discussion of some people’s hesitance to take
psychoactive drugs to regulate mood disorders like depression, out of a sense that
doing so is tantamount to cheating:

You might feel like taking medication would be "cheating," a shortcut to eudaimonia that doesn't go through virtue. Wrong! Happiness is not like an award, gained fairly or unfairly. Happiness is more like American citizenship: some people have it because they were born with it, some perhaps get it through marriage, some have to work for it. It's not cheating to find a non-torturous way to get an advantage that other people have from birth.

Regulating our implicit attitudes strikes me as being like American citizenship—and thus like happiness—in this way. Our implicit attitudes are often victims or
beneficiaries of moral luck, and, all other things being equal, any tactic for improv-
ing them seems worthwhile. (Consider also the example of quitting smoking. Are
those who quit using nicotine patches rather than going cold turkey any less admira-
ble?) There is, of course, the fact that people often experience greater rewards when
they feel as if they worked hard for something. Hiking to the top of Mt. Washington
is more rewarding to many people than driving to the top. In this sense, happiness
seems susceptible to (perceptions about) cheating. But this is a psychological phe-
nomenon, not a moral one. If data show that implicit attitudes improve only when
people feel as if they have worked hard to change them—​in some earnest moral
way—​then so be it. The idea of the habit stance may need to be amended to incor-
porate this hypothetical finding. But this is not a reason to think that one doesn’t
deserve moral credit for working to improve one’s implicit attitudes using the habit
stance.
Of course, I don’t expect these ideas to change the mind of anyone who thinks
that adopting the habit stance isn’t genuinely morally worthwhile. This isn’t because
Kantians (or “Kantians”) about moral worth are stubborn. I  simply haven’t pro-
vided any real argument to obviate the distinction between acting in accord with
virtue and acting from virtue. That task is too tall to take on here. But I hope to
have at least raised concerns about its applicability in the case of implicit attitudes.
Improving these through use of the habit stance is clearly worthwhile, and the kind
of worth involved strikes me, at least, as relatively self-​standing.

3. Conclusion
I’ve argued for what I call a habit stance approach to the ethical improvement of our
implicit attitudes. I have proposed this approach in contrast to the idea, discussed in
the preceding chapter, that deliberation is necessary and always recommended for
cultivating better implicit attitudes. The habit stance approach largely falls out from
the current state of empirical research on what sorts of techniques are effective for
implicit attitude change. It also should seem consistent with the account of implicit
attitudes I have given, as sui generis states, different in kind from our more familiar
beliefs and values, yet reflective nevertheless upon some meaningful part of who we
are as agents. It is by treating our implicit attitudes as if they were habits—​packages
of thought, feeling, and behavior—​that we can make them better.
8

Conclusion

The central contention of this book has been that understanding the two faces of
spontaneity—​its virtues and vices—​requires understanding what I have called the
implicit mind. And to do this I’ve considered three sets of issues.
In Chapters 2 and 3, I focused on the nature of the mental states that make up
the implicit mind. I argued that in many moments of our lives we are not moved
to act by brute forces, yet neither do we move ourselves to act by reasoning on the
basis of our beliefs and desires. In banal cases of orienting ourselves around objects
and other people, in inspiring cases of acting spontaneously and creatively, and in
maddening and morally worrying cases where biases and prejudice affect what we
do, we enact complex attitudes involving rapid and interwoven thoughts, feelings,
motor impulses, and feedback learning. I’ve called these complex states implicit
attitudes and have distinguished them from reflexes, mere associations, beliefs, and
dispositions, as well as from their closest theoretical cousins, aliefs.
In Chapters  4 and 5, I  considered the relationship between implicit attitudes
and the self. These states give rise to actions that seem neither unowned nor fully
owned. Museumgoers and Huck and police officers affected by shooter bias reveal
meaningful things about themselves in their spontaneous actions, I’ve argued.
What is revealed may be relatively minor. How the museumgoer makes her way
around the large painting tells us comparatively little about her, in contrast to what
her opinion as to whether the painting is a masterpiece or a fraud might tell us. Nor
do these spontaneous reactions license holding her responsible for her action, in
the sense that she has met or failed to meet some obligation or that she is, even in
principle, capable of answering for her action. But when even this minor action is
patterned over time, involves feelings that reflect upon what she cares about, and
displays the right kinds of connections to other elements of her mind, it tells us
something meaningful about her. It gives us information about what kind of a char-
acter she has, perhaps that in this sort of context she is patient or impatient, an opti-
mizer or satisficer. These aren’t deep truths about her in the sense that they give the
final verdict on who she is. Character is a mosaic concept. And the ways in which
we are spontaneous are part of this mosaic, even when our spontaneity evades or
conflicts with our reasons or our awareness of our reasons for action.


Finally, in Chapters 6 and 7, I examined how ethics and spontaneity interact. How
can we move beyond Vitruvius’s unhelpful advice to “trust your instincts . . . unless
your instincts are terrible”? I’ve argued that becoming more virtuous in our spontane-
ity involves planning and deliberation, but that thinking carefully about how and when
to be spontaneous is not always necessary, and sometimes it can be distracting and even
costly. Retraining our implicit attitudes, sometimes in ways that flatly push against our
self-​conception as deliberative agents, is possible and promising. While there are long-​
standing traditions in ethics focused on this, the relevant science of self-​regulation is
new. So far, I’ve argued, it tells us that the ethics of spontaneity requires a blend of foci,
including what I’ve called intentional, physical, and design elements, and none of these
alone. The full package amounts to cultivating better implicit attitudes as if they were
habits.
All of this is pluralist and incomplete, as I foreshadowed in the Introduction. It is
pluralist about the kinds of mental states that lead us to act and that can lead us to act
well; about how and what kinds of actions might reflect upon me as an agent; about
how to cultivate better implicit attitudes in order to act more virtuously and spontane-
ously. Incompleteness intrudes at each of these junctions. I have not given necessary
and sufficient conditions for a state to be an implicit attitude. Rather, I have sketched
the paradigmatic features of implicit attitudes—​their FTBA components—​and argued
for their uniqueness in cognitive taxonomy. I have not said exactly when agents ought
to be praised or blamed for their spontaneous inclinations. Rather, I  have laid the
groundwork for the idea that these inclinations shed light on our character, even if they
fall short of being the sorts of actions for which we must answer or be held accountable.
And I have not incorporated the ineliminable importance of practical deliberation into
my recommendations for implicit attitude change, even though a deeply important ele-
ment of such change is believing it to be necessary and worthwhile. What I have done,
I think, is push back against the historical emphasis (in the Western philosophical
tradition, at least) on the overriding virtues of deliberation. And I have outlined data-​
supported tactics for cultivating spontaneous inclinations that match what I assume are
common moral aims.
While alternative interpretations of each of the cases I’ve discussed are available,
the shared features of these cases illuminate a pervasive feature of our lives: actions
the psychology of which is neither reflexive nor reasoned; actions that reflect
upon us as agents but not upon what we know or who we take ourselves to be; and
actions the ethical cultivation of which demands not just planning and deliberating
but also, centrally, pre-​committing to plans, attending to our contexts, and, as Bruce
Gemmell—the coach of arguably the best swimmer of all time, Katie Ledecky—
said, just doing the damn work.1

1 Dave Sheinin writes, "Gemmell reacts with a combination of bemusement and annoyance when people try to divine the 'secret' to Ledecky's success, as if it's some sort of mathematical formula that can be solved by anyone with a Speedo and a pair of goggles. A few weeks ago, a website called SwimmingScience.net promoted a Twitter story called '40 Must Do Katie Ledecky Training Secrets.' Gemmell couldn't let that go by unchallenged. 'Tip #41,' he tweeted in reply. 'Just do the damn work.' " https://www.washingtonpost.com/sports/olympics/how-katie-ledecky-became-better-at-swimming-than-anyone-is-at-anything/2016/06/23/01933534-2f31-11e6-9b37-42985f6a265c_story.html.

Many open questions remain, not only at these junctures and in tightening the
characteristics of the family resemblance I’ve depicted, but also in connecting this
account of the implicit mind to ongoing research that bears upon, and sometimes
rivals, the claims I have made along the way. My arguments for the sui generis struc-
ture of implicit attitudes draw upon affective theories of perception, for example,
but it is still unclear whether all forms of perception are affective in the way some
theorists suggest. Similarly, new theories of practical inference with relatively mini-
malist demands—​compared with the more classical accounts of inference from
which I  draw—​may show that implicit attitudes are capable of figuring into cer-
tain kinds of rationalizing explanation.2 This bears upon perhaps the largest open
question facing Chapters 2 and 3: Under what conditions do implicit attitudes form
and change? I have given arguments based on the extant data, but the entire stream
of research is relatively young. Future studies will hopefully clarify, for example,
whether implicit attitudes change when agents are presented with persuasive argu-
ments, and if they do, why they do.
Future theorizing about implicit attitudes and responsibility will likely move
beyond the focus I’ve had on individuals’ minds, to consider various possible forms
of collective responsibility for unintended harms and how these interface with con-
cepts like attributability and accountability. One avenue may take up a different tar-
get: not individuals but collectives themselves. Are corporations, neighborhoods,
or even nations responsible for the spontaneous actions of their members in a non-
additive way (i.e., in a way that is more than the sum of the individuals’ own respon-
sibility)? In some contexts this will be a question for legal philosophy. Can implicit
bias, for example, count as evidence of disparate impact in the absence of discrimi-
natory intent? In other contexts, this may be a question for evolutionary psychol-
ogy. As some have theorized about the functional role of disgust and other moral
emotions, have implicit attitudes evolved to enforce moral norms? If so, does this
mean that these attitudes are in any way morally informative or that groups are more
or less responsible for them? In other contexts still, this may be a question about the
explanatory power and assumptions embedded in various kinds of social science.
Critics of research on implicit bias have focused on  the ostensible limits of psy-
chological explanations of discrimination. For instance, while acknowledging that
there is space for attention to implicit bias in social critique, Sally Haslanger argues
that it is “only a small space,” because “the focus on individuals (and their attitudes)
occludes the injustices that pervade the structural and cultural context and the ways

2 See, e.g., Buckner (2017).

that the context both constrains and enables our action. It reinforces fictional con-
ceptions of autonomy and self-​determination that prevents us from taking responsi-
bility for our social milieu (social meanings, social relations, and social structures)”
(2015, 10). I think my account of implicit attitudes provides means for a less fic-
tional account of autonomy than those Haslanger might have in mind, but vindicat-
ing this would take much work.
An intriguing set of questions facing the practical ethics of spontaneity focuses
on the concept of self-​trust. Whether and when we trust others has moral stakes, as
moral psychologists like Patricia Hawley (2012) and Meena Krishnamurthy (2015)
have shown. Do these stakes apply equivalently in the case of self-​trust? What are
the general conditions under which we ought to trust ourselves, as some theorists
suggest is the hallmark of expertise? Self-​trust of the sort I imagine here is in tension
with what I take to be one of the imperatives of practical deliberation, which is that
it represents a check against the vices of our spontaneous inclinations. Being delib-
erative is, in some ways, an expression of humility, of recognizing that we are all sub-
ject to bias, impulsivity, and so on. But are there forms of deliberation that perhaps
skirt this tension? I have emphasized the nondeliberative elements of coaching in
skill learning, but perhaps other features are more conducive to a reflective or quasi-​
intellectualist orientation. Historians of philosophy may find some elements in an
Aristotelian conception of motivation, for example, which some say distinguishes
skill from virtue; perhaps fresher approaches may be found in non-​Western ethical
traditions.
I ended the Introduction by suggesting that I would be satisfied by convincing
the reader that a fruitful way to understand both the virtues and vices of spontaneity
is by understanding the implicit mind, even if some of the features of the implicit
mind as I  describe it are vague, confused, or flat-​out wrong. Another way to put
this is that in identifying the phenomenon of interest and offering an account of it,
I incline more toward lumping than splitting. Crudely—​as a lumper like me would
say—​lumping involves answering questions by placing phenomena into categories
that illuminate broad issues. Lumpers focus more on similarities than differences.
Splitters answer questions by identifying key differences between phenomena, with
the aim of refining definitions and categories. Lumping and splitting are both neces-
sary for making intellectual progress, I believe. If this work inspires splitters more
skilled than me to refine the concept of the implicit mind, in the service of better
understanding what we do when we act spontaneously, I’ll be deeply satisfied.
Appendix

ME A SURES, METHODS,
AND PSYCHOLOGICAL SCIENCE

I briefly described how implicit attitudes are measured in psychological science in the Introduction. In my opinion, the ability to measure implicit attitudes represents
one of the major advancements in the sciences of the mind in the twentieth century.
But controversies surround both implicit attitude research and the sciences of the
mind writ large.1 In this appendix, I explain how the Implicit Association Test (IAT)
works in more detail and compare it with other indirect measures of attitudes (§1).
Then, I describe and discuss critiques of some of the psychometric properties of the
IAT, which is the most widely used measure of implicit attitudes, and identify addi-
tional current open questions facing IAT research (§2). This discussion leads into
a broader one about the status of psychological science, in particular the reproduc-
ibility of experimental results in social psychology (§3). My aim in discussing the
replication crisis is not to adjudicate its significance or to make novel recommenda-
tions to researchers, but rather to explain the principles I have used in writing this
book, principles underwriting the confidence I have in the experimental research
I’ve presented.

1.  The Measurement of Implicit Attitudes


When discussing research on implicit attitudes (as they are usually understood in
psychology), I have relied mostly on IAT research. The IAT is a reaction time meas­
ure. It assesses participants’ ability to sort words, pictures, or names into categories
as fast as possible, while making as few errors as possible. In the following examples,

1 In this appendix, I do not explicitly discuss all of the areas of research presented in this book (e.g.,
I do not discuss methodological issues facing research on expertise and skill acquisition). While I mean
the broad discussion of replicability and methodological principles to cover these areas, I  focus on
psychometric issues pertaining to research on implicit social cognition.


the correct way to sort the name “Michelle” would be left (Figure A.1) and right
(Figure A.2), and the correct way to sort the word “Business” would be left (Figure
A.3) and right (Figure A.4).
One computes an IAT score by comparing speed and error rates on the
“blocks” (or sets of trials) in which the pairing of concepts is consistent with
common stereotypes (Figures A.1 and A.3) to the speed and error rates on the
blocks in which the pairing of the concepts is inconsistent with common ste-
reotypes (Figures A.2 and A.4).2 Most people are faster and make fewer errors
on stereotype-​consistent trials than on stereotype-​inconsistent trials. While
this “gender–​career” IAT pairs concepts (male and career), other IATs, such
as the black–​w hite race evaluation IAT, pair a concept with an evaluation (e.g.,
black and “bad”).3 Other IATs test implicit attitudes toward body size, age,
sexual orientation, and so on (including relatively more banal targets, like brand
preferences). More than 16 million unique participants had taken an IAT as of
2014 (B. A. Nosek, pers. com.). One review (Nosek et al., 2007), which exam-
ined the results of over 700,000 subjects' scores on the black-white race evalua-
tion IAT, found that more than 70% of white participants more easily associated
black faces with negative words (e.g., war, bad) and white faces with positive
words (e.g., peace, good) than the inverse (i.e., black faces with positive words
and white faces with negative words). The researchers consider this an implicit
preference for white faces over black faces. In this study, black participants on
average showed a very slight preference for black faces over white faces. This
is roughly consistent with other studies showing that approximately 40% of
black participants demonstrate an implicit in-​group preference for black faces
over white faces, 20% show no preference, and 40% demonstrate an implicit
out-​group preference for white faces over black faces (Nosek et al., 2002, 2007;
Ashburn-​Nardo et al., 2003; Dasgupta, 2004; Xu et al., 2014).
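To make the scoring procedure concrete, here is a minimal sketch, in Python, of how a simplified D score might be computed from two blocks of latencies. It follows the logic just described only loosely: the function name, the trial-filtering cutoffs, and the data are invented for illustration, and the error-trial penalties that are part of the standard scoring algorithm are omitted for brevity.

```python
import statistics

def iat_d_score(consistent_rts, inconsistent_rts):
    """Simplified IAT D score from two blocks of reaction times (ms).

    consistent_rts: latencies from the stereotype-consistent block.
    inconsistent_rts: latencies from the stereotype-inconsistent block.
    Positive scores mean faster responding on consistent pairings.
    """
    # Drop implausibly fast or slow trials (a common preprocessing step;
    # the 300 ms / 10,000 ms cutoffs here are illustrative).
    consistent = [rt for rt in consistent_rts if 300 <= rt <= 10_000]
    inconsistent = [rt for rt in inconsistent_rts if 300 <= rt <= 10_000]

    # D divides the mean latency difference by the pooled standard
    # deviation of all trials, yielding an effect-size-like quantity.
    pooled_sd = statistics.stdev(consistent + inconsistent)
    return (statistics.mean(inconsistent) - statistics.mean(consistent)) / pooled_sd

# Slower responses on the stereotype-inconsistent block yield D > 0:
print(iat_d_score([650, 700, 720, 680], [850, 900, 870, 910]))
```

Dividing by the pooled standard deviation is what makes D an effect-size-like quantity, comparable across participants who differ in overall response speed.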
Although the IAT remains the most popular indirect measure of attitudes, it is
far from the only one.4 A  number of variations of priming tasks are widely used

2 This is how an IAT "D score" is calculated. There are other ways of scoring the IAT too.

3 See the discussion in Chapter 2 of the view that these two types of measures assess two separate constructs: implicit stereotypes and implicit evaluations (or prejudices).


4 Throughout, I have referred to the IAT and other measures of implicit attitudes as "indirect." In
Chapter 1, I said that these tests are indirect in the sense that people are not asked about their feelings or
thoughts directly. Sometimes the term “implicit measure” is used in the literature, rather than “indirect
measure.” I follow the recommendation of Jan De Houwer and colleagues (2009) that the terms “direct”
and “indirect” be used to describe characteristics of measurement techniques and the terms “implicit”
and “explicit” to describe characteristics of the psychological constructs assessed by those techniques.
Note, though, that “direct” and “indirect” can also refer to different kinds of explicit measures. For
example, a survey that asks, “What do you think of black people?” is explicit and direct, while one that
asks, “What do you think about Darnel?” is explicit and indirect (because the judgment is explicit, but
the content of what is being judged [i.e., race] is inferred). The distinction between direct and indirect measures is also relative rather than absolute. Even in some direct measures, such as personality inventories, subjects may not be completely aware of what is being studied. It is important to note, finally, that in indirect tests subjects may be aware of what is being measured. One ought not to conflate the idea of assessing a construct with a measure that does not presuppose introspective availability with the idea of the assessed construct being introspectively unavailable (Payne and Gawronski, 2010).
[Figure A.1. Sample IAT sorting screen: left response key labeled "Female or Family," right key labeled "Male or Career"; stimulus to sort: "Michelle."]

[Figure A.2. Left key "Male or Family," right key "Female or Career"; stimulus: "Michelle."]

[Figure A.3. Left key "Male or Career," right key "Female or Family"; stimulus: "Business."]

[Figure A.4. Left key "Male or Family," right key "Female or Career"; stimulus: "Business."]

(e.g., evaluative priming (Fazio et al., 1995), semantic priming (Banaji and Hardin,
1996), and the Affect Misattribution Procedure (AMP; Payne et al., 2005)). A “sec-
ond generation” of categorization-​based measures has also been developed in order
to improve psychometric validity. For example, the Go/​No-​go Association Task
(GNAT; Nosek and Banaji, 2001) presents subjects with one target object rather
than two in order to determine whether preferences or aversions are primarily
responsible for scores on the standard IAT (e.g., on a measure of racial attitudes,
whether one has an implicit preference for whites or an implicit aversion to blacks;
Brewer, 1999).
I have relied mostly on the IAT, but also, to a lesser degree, on the AMP, not only
because these are among the most commonly used measures of implicit attitudes,
but also because reviews show them to be the most reliable indirect measures of
attitudes (Bar-​Anan and Nosek, 2014; Payne and Lundberg, 2014).5 In assessing
seven indirect measures, Bar-​Anan and Nosek (2014) used the following evaluation
criteria:

• Internal consistency (i.e., the consistency of response across items within the test)
• Test-​retest reliability (i.e., the stability of test results within subjects over time)
• Sensitivity to group differences (i.e., whether test results reflect expected dif-
ferences in response between groups, such as how black and white participants
score on the black–​white race evaluation IAT)
• Correlation with other indirect and direct measures of the same topics
• Measurement of single-​category evaluation (i.e., how well the measure can dis-
criminate evaluations of distinct categories, such as liking white people vs. dislik-
ing black people)
• Sensitivity to non-​extreme attitudes (i.e., how sensitive the measure is to mean-
ingful differences in response when extreme responses have been removed from
the data set)
• Effects of data exclusion (i.e., how likely participants are to follow the instruc-
tions and complete the measure appropriately)

I discuss some of these evaluation criteria in the following section.
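To illustrate the first criterion, internal consistency is commonly quantified with a statistic such as Cronbach's alpha, which compares the variance of individual items to the variance of participants' total scores. The sketch below is illustrative only; the function name and the item scores are invented for the example.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, where each inner list holds
    one item's scores, aligned across participants."""
    k = len(item_scores)
    sum_item_var = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]  # per-participant totals
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# Three hypothetical test items answered by five participants:
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # ~0.87: high internal consistency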

5 Bar-Anan and Nosek (2014) are less sanguine about the reliability of the AMP once outliers with
strong explicit attitudes toward targets are removed from the sample. Payne and Kristjen Lundberg
(2014) present several plausible replies to this critique, however.

2. Psychometrics
There are a number of challenges facing IAT research. I focus on those that I believe
to be the most significant and that have arisen in the philosophical literature. There
is, I believe, significant room for improvement in the design of the IAT. One of my
hopes is that the account of implicit attitudes I have presented will instigate such
improvements. As I discuss later (§2.3), IAT measures can be improved by target-
ing more specific attitudes in more specific contexts.

2.1  Correlation between Indirect Measures


In Chapter  3, I  discussed the concern that indirect measures of attitudes do not
correlate strongly with each other. If a set of measures are valid representations of
the same construct, then they should, ceteris paribus, correlate with one another.
However, I argued in Chapter 3 that weak correlations between some indirect atti-
tude measures appear to reflect differences in the content of what is being measured
(e.g., self-​esteem, race, or political evaluations), with the weakest correlations found
when the most complex concepts are targeted.
A second possibility is that some indirect measures are more valid than others (Bar-
Anan and Nosek, 2014; Payne and Lundberg, 2014). One should not expect valid
measures to correlate with measures with weaker psychometric properties. One
finding that seems to speak against this possibility, however, is that the most well-​
validated measures—​variations of the IAT and the AMP—​don’t always strongly
correlate. This suggests that low correlations between indirect measures aren’t only
due to some measures having more validity than others. However, it is not clear that
comparisons of the IAT and the AMP have controlled for the content of what was
measured, so even this point must be taken cautiously.
A third option for explaining weak correlations between some indirect mea-
sures, as Bar-​Anan and Nosek (2014) and Machery (2016) suggest, is that dif-
ferent measures, or subsets of measures, reflect different components of implicit
cognition. The AMP, for example, works on the premise of a three-​step process
(Loersch and Payne, 2016). A prime (e.g., a black face) stimulates the accessibil-
ity of a general concept (e.g., danger). That concept is then misattributed by
the subject to the perception of a target (e.g., a degraded image of either a gun
or a pair of pliers). The misattributed content of the prime is then used by the subject
to answer the question asked by the measure (e.g., What is that, a gun or a pair
of pliers?). On the face of it, what this procedure entails is significantly different
from what happens in a categorization task, like the IAT. Many instantiations of
the AMP also use primes that are highly affectively arousing for targeted subjects,
such as images of men kissing for heterosexual subjects. All of this is suggestive
of the proposal that different measures or subsets of measures reflect different
components of implicit cognition. This isn’t devastating for implicit attitude

research, of course. One possibility is that it favors a dispositional view of implicit attitudes like Machery's (2016; see the discussion in Chapter 3). Another pos-
sibility is that models of implicit attitudes that take them to be mental states that
occur can come to make principled predictions about the conditions under which
indirect measures will diverge. I speculate that this may be one virtue to come of
the FTBA formula of implicit attitudes I have proposed. For example, I argued in
Chapter 3 that the affective intensity of implicit attitudes—​the salience of a felt
tension—​is a function of content-​specific, person-​specific, and context-​specific
variables. Controlling for these variables across comparisons of indirect measures
might reveal predictable correlations. For example, the correlation between an
AMP using highly affective images of same-​sex couples kissing and an attitude
IAT might become stronger, while the correlation between this AMP and a ste-
reotyping IAT (the affective activation of which is lower than that of the attitude
IAT) may become weaker.

2.2  Test-​Retest Reliability


I also discussed in Chapter 3 the concern that indirect measures typically have low
to moderate test-​retest reliability. This suggests that these measures may not reflect
stable features of agents. Perhaps what indirect measures reveal are on-​the-​fly con-
structions that predominantly reflect situational variables.
However, I  also mentioned in Chapter  3 that research by Gschwendner and
colleagues has demonstrated one way in which IAT scores may become more
stable over time within individuals. Gschwendner and colleagues (2008) assessed
German participants’ implicit evaluations of German versus Turkish faces on an
IAT and varied the background context during each block of the test (i.e., they
manipulated the blank space on the computer screen immediately below the tar-
get images and attribute words). Participants in the experimental condition saw a
picture of a mosque, while participants in the control condition saw a picture of
a garden. Gschwendner and colleagues then compared the stability of participant
scores over a two-​week period. The intra-​individual test-​retest reliability of the IAT
tends to be low to moderate (.51 in one meta-​analysis; Hofmann et al., 2005), but
the addition of the background picture in Gschwendner and colleagues’ study had
a dramatic influence on this. Participants in the control condition had a low stabil-
ity coefficient—​.29—​but participants in the experimental conditions had a high
stability coefficient—​.72.
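For readers unfamiliar with the statistic, a stability coefficient of this kind is simply the correlation between the same participants' scores at two testing sessions. Here is a minimal sketch; the function name and the D scores are invented for illustration and are not drawn from Gschwendner and colleagues' data.

```python
import statistics

def test_retest_reliability(scores_t1, scores_t2):
    """Pearson correlation between the same participants' scores at two
    sessions: the usual test-retest stability coefficient."""
    n = len(scores_t1)
    m1, m2 = statistics.mean(scores_t1), statistics.mean(scores_t2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(scores_t1, scores_t2)) / (n - 1)
    return cov / (statistics.stdev(scores_t1) * statistics.stdev(scores_t2))

# Hypothetical IAT D scores for five participants, two weeks apart:
week0 = [0.42, 0.15, 0.61, 0.05, 0.33]
week2 = [0.38, 0.22, 0.55, 0.12, 0.30]
print(round(test_retest_reliability(week0, week2), 2))  # ~0.99 for these invented scores
```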
This finding is methodologically significant for its ability to generate temporally
stable IAT results. It should, I believe, dampen the fear that the IAT reflects
nothing more than on-​the-​fly constructions. Moreover, this research reflects some
of the core principles of my account of implicit attitudes. In embedding images
into the background of the IAT materials, Gschwendner and colleagues effectively
brought the testing context into closer alignment with the content of the test. They

incorporated the very elements into the measure that my account of implicit atti-
tudes posits as components of them (e.g., context-​and content-​specific variables).6
It is important to note that Gschwendner and colleagues also found an effect of the chronic accessibility of the relevant concept in the experimental condition, but not in the control condition (i.e., participants for whom the concept Turkish is
chronically accessible had greater test-​retest stability when the image of the mosque
rather than the image of the garden was embedded in the IAT). This, too, speaks to
the durability of test scores by improving features of the measure. And the very fea-
tures being improved are those that reflect person-​specific variables—​such as the
chronic availability of the concept Turkish—​that I’ve emphasized.
The broad point here is not simply to defend the status quo. The work of
Gschwendner and colleagues (2008) is suggestive of how IAT research can be
improved. Targeting person-​specific and context-​specific features of attitudes can
help to improve psychometric validity, for example, test-​retest reliability. This
improvement may come at the expense of domain generality. That is, the increased
ability to predict behavior given contextual constraints may entail that the measure
won’t necessarily be predictive absent those constraints. I take this to represent a
trade-​off facing experimental psychology generally, not a decrease in explanatory
power as such. Measures that apply across a variety of contexts are less likely to have
strong correlation coefficients, while measures that apply to more specific circum-
stances are more likely to be predictive, but only under those conditions.7

2.3  Predictive Validity


The widest concern about indirect measures focuses on their ability to predict
real-​world behavior. Critics have argued that the IAT in particular predicts behav-
ior poorly, so much so that “the IAT provides little insight into who will discrimi-
nate against whom, and provides no more insight than explicit measures of bias”
(Oswald et  al., 2013, 18). This conclusion emerges from Frederick Oswald and
colleagues’ (2013) meta-​analysis of forty-​six published and unpublished reports,
the overall finding of which was a correlational effect size of .148. This contrasts
with Greenwald and colleagues’ (2003, 2009) meta-​analyses, which suggested that
racially discriminatory behavior is better predicted by IAT measures than by self-​
report measures. Understanding the differences between these meta-​analyses is

6 Another way to put this is that the embedded images reduce noise in the measure by holding fixed the meaning or value of the situation for participants. Thanks to Jonathan Phillips for this idea.
7 This is a version of the principles of correspondence (Ajzen and Fishbein, 1970, 1977) and
compatibility (Ajzen, 1998; Ajzen and Fishbein, 2005). The core idea of these principles is that the
predictive power of attitudes increases when attitudes and behavior are measured at the same level
of specificity. For example, attitudes about birth control pills are more predictive of the use of birth
control pills than are attitudes about birth control (Davidson and Jaccard, 1979).

important. Once they are understood, I do not believe that Oswald and colleagues’
normative claim—​that the IAT is a poor tool that provides no insight into discrimi-
natory behavior—​is warranted.
First, predicting behavior is difficult. Research on implicit social cognition
partly arose out of the recognition that self-​report measures of attitudes don’t pre-
dict behavior very well. This is particularly the case when the attitudes in question
are about socially sensitive topics, such as race. Greenwald and colleagues (2009)
report an average attitude–​behavior correlation of .118 for explicit measures of
black–​white racial attitudes.8 It is important to establish this baseline for compari-
son. It may be a mark of progress for attitude research even if indirect measures of
attitudes have relatively small correlations with behavior.
More important, the fact that even self-​report measures of attitudes don’t predict
behavior very well is not cause to abandon such measures. Rather, since the 1970s,
researchers have recognized that the key question is not whether self-​reported atti-
tudes predict behavior, but rather when self-​reported attitudes predict behavior.
One important lesson from this research is that attitudes better predict behavior
when there is correspondence between the attitude object and the behavior in ques-
tion (Ajzen and Fishbein, 1977). For example, while generic attitudes toward the
environment do not predict recycling behavior very well, specific attitudes toward
recycling do (Oskamp et al., 1991).9 A wealth of theoretical models of attitude–​
behavior relations elaborate on issues like this and make principled predictions
about when attitudes do and do not predict behavior. According to dual-​process
models, the predictive relations of self-​reported and indirectly measured attitudes
to behavior should depend on (a) the type of behavior; (b) the conditions under
which the behavior is performed; and (c) the characteristics of the person who is
performing the behavior.10
From the perspective of these theories, meta-​analyses that ignore theoretically
derived moderators are likely to find poor predictive relations between attitudes
(whether self-​reported or indirectly measured) and behavior. Oswald et al. (2013)
make exactly this mistake. They included any study in which a race IAT and a behav-
ioral outcome measure were used, but they did not differentiate between behavioral
outcomes that should or should not be predicted on theoretical grounds.
For example, recall from Chapter 2 Amodio and Devine’s (2006) finding that
the standard race IAT predicted how much white participants liked a black student,
but it did not predict how white participants expected a black student to perform
on a sports trivia task. They also found that a stereotyping IAT, which measured

8 Other reviews have estimated relatively higher attitude–behavior correlations for explicit preju-
dice measures. Kraus (1995) finds a correlation of .24 and Talaska et al. (2008) find a correlation of .26.
9 See also footnote 7.
10 See, e.g., the "Reflective–Impulsive Model" (RIM; Strack and Deutsch, 2004) and "Motivation and
Opportunity as Determinants” (MODE; Fazio, 1990).

associations between black and white faces and words associated with athleticism
and intelligence, predicted how white participants would expect a black student to
perform on a sports trivia task, but failed to predict white students’ likability rat-
ings of a black student. Amodio and Devine predicted these results on the basis of a
theoretical model distinguishing between “implicit stereotypes” and “implicit prej-
udice.” If Amodio and Devine had reported the average correlation between their
IATs and behavior, they would have found the same weak relationship reported by
Oswald et al. (2013). However, this average correlation would conceal the insight
that predictive relations should be high only for theoretically “matching” types of
behavior.
Because they included any study that arguably measured some form of dis-
crimination, regardless of whether there was a theoretical basis for a correlation,
Oswald and colleagues included all of Amodio and Devine’s (2006) IAT findings
in their analysis. That is, they included the failure of the evaluative race IAT to pre-
dict how participants described the black essay writer, as well as how participants
predicted another black student would perform on an SAT task compared with a
sports trivia task. Likewise, Oswald and colleagues included in their analysis the
failure of Amodio and Devine’s stereotyping IAT to predict seating-​distance deci-
sions and likability ratings of the black essay writer. In contrast, Greenwald et al.
(2009) did not include these findings, given Amodio and Devine’s theoretical basis
for expecting them. Amodio and Devine predicted what the two different kinds of
IAT wouldn’t predict, in other words. Oswald and colleagues include this as a failure
of the IAT to predict a discriminatory outcome. In contrast, Greenwald and col-
leagues count this as a theoretically motivated refinement of the kinds of behavior
IATs do and don’t predict.11

11 Oswald and colleagues (2015) contest Greenwald et al.'s (2014) analysis. They argue that their
approach to inclusion criteria is more standard than Greenwald et al.’s (2009) approach. At issue is
whether IAT researchers’ theoretically motivated predictions of results should be taken into account.
For example, Oswald et al. (2015) contrast the findings of Rudman and Ashmore (2007) with those
of Amodio and Devine (2006). They write, “[Rudman and Ashmore (2007)] predicted that an IAT
measuring an attitude would predict stereotyping, and reported a statistically significant correlation
in support of this prediction. In contrast, Amodio and Devine (2006) predicted that a racial attitude
IAT would not predict stereotyping, and reported a set of nonsignificant correlations to support their
prediction. Had we followed an inclusion rule based on the ‘author-​provided rationale’ for each of these
studies, then all of the statistically significant effects would have been coded as supporting the IAT—​
even though some yielded inconsistent or opposing results—​and all of the statistically nonsignificant
results would have been excluded. This was how the Greenwald et al. (2009) meta-​analysis handled
inconsistencies” (Oswald et al., 2015, 563). This is, however, not a fully convincing reply. Consider the
studies. Rudman and Ashmore (2007) predicted, and confirmed, that the standard racial attitude IAT
would predict self-​reported verbal forms of discrimination and scores on an explicit measure of racial
attitudes (the Modern Racism Scale; McConahay, 1986), whereas an IAT measure of black–​violent
associations would better predict self-​reported behavioral forms of social exclusion (see Chapter  5,
footnote 5). This is plausible given the content of the specific stereotypes in question. Black–violent associations are coherently related to threat perception and social exclusion. Amodio and Devine (2006) predicted, and confirmed, that the standard racial attitudes IAT would be uncorrelated with the use of stereotypes about intelligence and athletic ability, but that a specific "mental–physical" IAT would be. This, too, is plausible, given the match between the specific stereotypes captured by the measure and the behavioral outcomes being assessed (e.g., picking a teammate for a sports trivia task). More broadly, Rudman and Ashmore's (2007) and Amodio and Devine's (2006) use of different implicit stereotyping measures makes Oswald et al.'s (2015) direct comparison of them problematic. In other words, Oswald and colleagues' claim that these are equivalent measures of implicit stereotypes with inconsistent findings is too blunt. Note again, however, that relations between putative implicit stereotypes and implicit prejudice are more complex, in my view, than these analyses suggest. Madva and Brownstein (2016) propose an "evaluative stereotyping" IAT that might more successfully predict behavioral manifestations of stereotypes about race, intelligence, and physicality.
Oswald and colleagues (2015) double down on their claims by noting that Oswald et al. (2013) also included failures of black–white racial attitude IATs to predict discrimination toward white people. They draw upon the true claim that in-group favoritism is a significant feature of implicit bias, arguably as important as out-group derogation (or even more so according to some; e.g., Greenwald and Pettigrew, 2014). But this point does not justify Oswald et al.'s (2013) inclusion policy given typical outcome measures, such as shooting and hiring decisions, and nonverbal insults. It is hardly a failure of IATs that they do not predict discriminatory behavior originating from white subjects toward white targets in these domains.

This point is not specific to the theoretically tendentious distinction between implicit stereotypes and implicit prejudice.12 Consider Justin Levinson,
Robert Smith, and Danielle Young (2014), who developed a novel IAT that found
a tendency for respondents to associate white faces with words like “merit” and
“value” and black faces with words like “expendable” and “worthless.” This meas­
ure predicted, among other things, that mock jurors with stronger “white–​value/​
black–​worthless” associations were comparatively more likely to sentence a black
defendant to death rather than life in prison. Prima facie, this correlation suggests
that the race–​value IAT is tracking, at least to some extent, something like a disposi-
tion to devalue black lives. This suggestion is supported by the fact that another IAT
that measured associations between white and black faces and words like “lazy” and
“unemployed” did not predict death sentencing. These measures capture different
implicit associations and should predict different behavior.
Moreover, this point fits the framework for understanding the nature of implicit
attitudes for which I’ve argued. Consider Rooth and colleagues’ finding that
implicit work-​performance stereotypes predicted real-​world hiring discrimination
against both Arab Muslims (Rooth, 2010) and obese individuals (Agerström and
Rooth, 2011) in Sweden. Employers who associated these social groups with lazi-
ness and incompetence were less likely to contact job applicants from these groups
for an interview. In both cases, the indirect measure significantly predicted hiring

12 See Madva and Brownstein (2016) for detailed discussion and critique of the two-type view of
implicit cognition.

discrimination over and above explicit measures of attitudes and stereotypes, which
were uncorrelated or were very weakly correlated with the IATs. The predictive
power of the obesity IAT was particularly striking, because explicit measures of
anti-​obesity bias did not predict hiring discrimination at all—​even though a full
58% of participants openly admitted a preference for hiring normal-​weight over
obese individuals.13 This study is significant not just because the indirect measure
outperformed the self-​report measure, but arguably because it did so by homing in
on specific affectively laden stereotypes (lazy and incompetent) that are relevant
to a particular behavioral response (hiring) in a particular context (hiring Arabs or
people perceived as obese). This is in keeping with the points I made in §§2.1 and 2.2 about
improving the psychometric properties of indirect measures by targeting the co-​
activation of specific combinations of FTBA relata.
The virtue of Oswald and colleagues’ approach is that it avoids cherry-​picking
findings that support a specific conclusion. It also confirms that attitudes are rel-
atively poor predictors if person-​, context-​, and behavior-​specific variables are
ignored. But the main point is that this is to be expected. In fact, it would be bizarre
to think otherwise, that is, to think that a generic measure of racial attitudes like
the IAT would predict a wide range of race-​related behavior irrespective of whether
such a relation can be expected on theoretical grounds.
When the variables specified by theoretical models of implicit attitudes are con-
sidered, it is clear that indirect measures of attitudes are scientifically worthwhile
instruments. For example, Cameron, Brown-​Iannuzzi, and Payne (2012) analyzed
167 studies that used sequential priming measures of implicit attitudes. They
found a small average correlation between sequential priming tasks and behavior
(r = .28). Yet correlations were substantially higher under theoretically expected
conditions and lower under conditions where no relation would be expected.
Cameron and colleagues identified their moderators from the fundaments of three
influential dual-​process models of social cognition.14 While these models differ
in important ways, they converge in predicting that indirect measures will cor-
respond more strongly with behavior when agents have low motivation or little
opportunity to engage in deliberation or when implicit associations and delibera-
tively considered propositions are consistent with each other. Moreover, in con-
trast to Greenwald et al. (2009), Cameron and colleagues did not simply take the
stated expectations of the authors of the included studies for granted in coding
moderators. Rather, the dual-​process moderators were derived a priori from the
theoretical literature.

13 For additional research demonstrating how implicit and explicit attitudes explain unique aspects
of behavior, see Dempsey and Mitchell (2010), Dovidio et al. (2002), Fazio et al. (1995), Galdi et al.
(2008), and Green et al. (2007).
14 Specifically, from MODE (Fazio, 1990), APE ("Associative-Propositional Evaluation"; Gawronski
and Bodenhausen, 2006), and MCM (“Meta-​Cognitive Model”; Petty, Briñol, and DeMarree, 2007).

Finally, in considering the claim that the IAT provides little insight into who will
discriminate against whom, it is crucial to consider how tests like the IAT might
be used to predict behavior. IAT proponents tend to recommend against using the
IAT as a diagnostic tool of individual attitudes. At present, the measure is far from
powerful enough, on its own, to classify people as likely to engage in discrimination.
Indeed, I have been at pains to emphasize the complexity of implicit attitudes. But
this does not necessarily mean that the IAT is a useless tool. Greenwald and col-
leagues (2014) identify two conditions under which a tool that measures statisti-
cally small effects can track behavioral patterns with large social significance. One is
when the effects apply to many people, and the other is when the effects are repeat-
edly applied to the same person. Following Samuel Messick (1995), Greenwald and
colleagues refer to this as the “consequential validity” of a measure. They provide
the following example to show how small effects that apply to many people can be
significant for predicting discrimination:

As a hypothetical example, assume that a race IAT measure has been adminis-
tered to the officers in a large city police department, and that this IAT meas­
ure is found to correlate with a measure of issuing citations more frequently to
Black than to White drivers or pedestrians (profiling). To estimate the mag-
nitude of variation in profiling explained by that correlation, it is necessary
to have an estimate of variability in police profiling behavior. The estimate
of variability used in this analysis came from a published study of profiling
in New  York City (Office of the Attorney General, 1999), which reported
that, across 76 precincts, police stopped an average of 38.2% (SD = 38.4%)
more of each precinct’s Black population than of its White population. Using
[Oswald and colleagues’ (2013)] r = .148 value as the IAT–​profiling correla-
tion generates the expectation that, if all police officers were at 1 SD below the
IAT mean, the city-​wide Black–​W hite difference in stops would be reduced
by 9,976 per year (5.7% of total number of stops) relative to the situation if
all police officers were at 1 SD above the mean. Use of [Greenwald and col-
leagues’ (2009)] larger estimate of r = .236 increases this estimate to 15,908
(9.1% of city-​wide total stops). (Greenwald et al., 2015)15

15 See the critical discussion of this example in Oswald et al. (2015), where it is argued that infer-
ences about police officers cannot be drawn given that the distribution of IAT scores for police offi-
cers is unknown. This strikes me as unmoving, given both that Greenwald and colleagues present the
example explicitly as hypothetical and that there is little reason to think that police officers would dem-
onstrate less anti-​black bias on the IAT compared with the average IAT population pool. Moreover,
Greenwald and colleagues’ general point about small effect sizes having significant consequences has
been made elsewhere, irrespective of the details of this particular example. Robert Rosenthal, for exam-
ple, (1991; Rosenthal and Rubin, 1982) shows that an r of .32 for a cancer treatment, compared with a
placebo, which accounts for only 10% of variance, translates into a survival rate of 66% in the treatment
group compared with 34% in the placebo group.
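The arithmetic behind this hypothetical can be sketched as follows. On a simple linear (regression) reading of the correlation, moving all officers from one standard deviation above the IAT mean to one standard deviation below it shifts the expected outcome by 2 × r × SD(outcome). The snippet below reproduces that step with the figures quoted above; the function name is my own, and converting the resulting percentage-point shift into the absolute stop counts Greenwald and colleagues report (9,976 or 15,908) additionally requires the precinct-level stop and population data from the Attorney General's report, which are not reproduced here.

```python
def two_sd_shift(r, sd_outcome):
    # Expected change in the outcome when the predictor moves from
    # +1 SD to -1 SD, under a simple linear model: 2 * r * SD(outcome).
    return 2 * r * sd_outcome

# SD of precinct-level profiling is 38.4 percentage points; the two
# competing estimates of the IAT-profiling correlation are .148 and .236:
print(two_sd_shift(0.148, 38.4))  # ~11.4 percentage points (Oswald et al.)
print(two_sd_shift(0.236, 38.4))  # ~18.1 percentage points (Greenwald et al.)
```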

This analysis could be applied to self-​report measures too, of course. But the effect
sizes for socially sensitive topics are even smaller with these. This suggests that a
measure with a correlational effect size of .236 (or even .148) can contribute to
our understanding of patterns of discriminatory behavior. The same lesson applies when discriminatory impact accumulates over time by repeatedly affecting the same person (e.g., in hiring, testing, healthcare experiences, and law enforcement).
With repetition, even a tiny impact increases the chances of significantly undesir-
able outcomes. Greenwald and colleagues draw an analogy to a large clinical trial of
the effect of aspirin on the prevention of heart attacks:

The trial was terminated early because data analysis had revealed an unex-
pected effect for which the correlational effect size was the sub-​small value
of r = .035. This was “a significant (P < 0.00001) reduction [from 2.16% to
1.27%] in the risk of total myocardial infarction [heart attack] among those
in the aspirin group” (Steering Committee of the Physicians’ Health Study
Research Group, 1989). Applying the study’s estimated risk reduction of 44%
to the 2010 U.S. Census estimate of about 46 million male U.S. residents 50
or older, regular small doses of aspirin should prevent approximately 420,000
heart attacks during a 5-​year period. (2015, p. 558)

The effect of taking aspirin on the likelihood of having a heart attack for any par-
ticular person is tiny, but the sub-​small value of the effect was significant enough to
terminate data analysis in order to advance the research for use in public policy.16
The comparatively larger effect sizes associated with measures of implicit attitudes
similarly justify taking immediate action.
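The back-of-the-envelope arithmetic behind the "approximately 420,000" figure can be checked directly. The inputs below are the numbers quoted from the study; the two estimates differ only in whether one uses the absolute risk difference or the study's 44% relative risk reduction, and both land near 420,000.

```python
baseline_risk = 0.0216   # 5-year heart-attack risk in the placebo group
treated_risk = 0.0127    # 5-year risk in the aspirin group
population = 46_000_000  # male U.S. residents 50 or older (2010 Census estimate)

# Via the absolute risk difference:
print((baseline_risk - treated_risk) * population)  # ~409,000 prevented
# Via the study's 44% relative risk reduction:
print(baseline_risk * 0.44 * population)            # ~437,000 prevented
```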

2.4  Other Open Questions


Significant challenges remain, of course, for research on implicit bias. One of the
most daunting is the difficulty of designing interventions with durable effects. Calvin
Lai and colleagues (2016) found that nine interventions that immediately reduced
implicit preferences as measured by the IAT had no significant effects after a delay of
several hours to several days. The interventions tested in this study, however, were all
relatively constrained in time and scope. That is, the interventions themselves were
not long-​lasting and no interventions took a multifaceted approach. For example, in
one intervention participants viewed twenty black faces paired with positive words
and twenty white faces paired with negative words, and in another intervention par-
ticipants played a simulated dodgeball game with interracial teammates. These are
relatively simple and quick activities. And in no condition did participants do both

16 For further discussion, see Valian (1998, 2005).

of these activities. This isn’t a critique of Lai and colleagues’ analysis. Their interven-
tions were quick and simple by design, in order to make them easy to administer and in
order to facilitate comparisons between interventions. Their review helps to show
what probably won’t work and moreover indicates that creating lasting implicit atti-
tude change likely demands relatively robust interventions.
It is also important to note that implicit attitude research is itself relatively new and
that the effort to find interventions with lasting effects is even newer. For example,
Patrick Forscher and colleagues (2016) reviewed 426 studies involving procedures for
changing implicit bias, but only 22 of the studies in their sample collected longitudi-
nal data. It may be an unreasonable expectation that feasible implicit attitude change
interventions with durable effects could be discovered more or less in the fifteen years
researchers have been focused on this. Some patience is appropriate.
As before, in consideration of the psychometric properties of indirect measures, my
hope is that my account of implicit attitudes could inspire new directions in research on
effective interventions for implicit bias. I applaud Devine and colleagues’ (2012) habit-​
based approach, but perhaps it didn’t go far enough. It did not include, for example,
the kind of rote practice of egalitarian responding found in Kawakami’s research (see
Chapter 7). I suspect that more effective interventions will have to be more robust,
incorporating repeated practice over time like this. This is akin to what athletes do
when they practice. Just as a core feature of skill acquisition is simply doing the work—​
that is, putting in the effort over and over—​acquiring more egalitarian habits may be
less the result of divining some hitherto unknown formula than, at least in large part,
just putting in the time and (repeated) effort.17

3.  Replication in the Psychological Sciences


In contrast to some promising results (e.g., Devine et al., 2012), the difficulty of
creating reliably lasting changes in implicit attitudes speaks to a broader concern
about ongoing research in the sciences of the mind. The so-​called replication crisis
has destabilized many researchers’ confidence in what were previously thought to
be well-​established beliefs in experimental psychology. While concerns about rep-
licability have been percolating in psychology (and elsewhere) for some time (e.g.,
Meehl, 1990; Cohen, 1990), they exploded onto the public scene in light of concur-
rent events. These included discoveries of outright fraud in some high-​profile labs,
the identification of potentially widespread questionable research practices (e.g., “p-​
hacking”), growing concerns about the difficulty of publishing null results and the
related “file drawer” problem, and failures to replicate seminal discoveries (e.g., in

17. See Chapter 8, footnote 1.

research on priming, such as in Bargh et al., 1996).18 These events led researchers to
attempt large-scale, well-organized efforts to replicate major psychological findings.
The findings—​for example, of the Open Science Collaboration’s (OSC) (2015)
“Estimating the Reproducibility of Psychological Science”—​were not encouraging.
The OSC attempted to replicate one hundred studies from three top journals but
found that the magnitude of the mean effect size of replications was half that of the
original effects. Moreover, while 97% of the original studies had significant results,
only 36% of the replications did.
These and other findings are deeply worrying. They raise serious concerns about
the accuracy of extant research. They also point to problems for future research that
may be very difficult to solve. The solutions to problems embedded in significance
testing, causal inference, and so on are not yet clear.19 One thing that is important to
note is that these are not problems for psychology alone. Replication efforts of land-
mark studies in cell biology, for example, were successful only 11% and 25% of the time,
as noted in two reports (Begley and Ellis, 2012; Prinz et al., 2011). Indeed, what
is noteworthy about psychology is probably not that it faces a replication crisis,
but rather how it has started to respond to this crisis, with rigorous internal debate
about methodology, reform of publication practices, and the like. The founding
of the Society for the Improvement of Psychological Science, for example, is very
promising.20 Michael Inzlicht writes ominously:

I lost faith in psychology. Once I saw how our standard operating procedures
could allow impossible effects to seem possible and real, once I understood
how honest researchers with the highest of integrity could inadvertently and
unconsciously bias their data to reach erroneous conclusions at disturbing
rates, once I  appreciated how ignoring and burying null results can warp
entire fields of inquiry, I finally grasped how broken the prevailing paradigm
in psychology (if not most of science) had become. Once I  saw all these
things, I could no longer un-​see them.

But, he continues:

I feel optimistic about psychology’s future. There are just so many good things
afoot right now, I can’t help but feel things are looking up: Replications are
increasingly seen as standard practice, samples are getting larger and larger,

18. For a timeline of events in the replication crisis, see http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/.
19. An excellent encapsulation of these and other concerns is found in Sanjay Srivastava's (2016) fake, but also deadly serious "Everything Is Fucked: The Syllabus."
20. See the society's mission statement at https://mfr.osf.io/render?url=https://osf.io/jm27w/?action=download%26direct%26mode=render.

pre-registration is something that is becoming more common, faculty applicants
are increasingly asked what they are personally doing to improve their
science, scientists are increasingly posting their data and analysis code, old
methods are being challenged and new ones recommended, and there now
exists an entire society devoted to improving our field.21

Of course, while these efforts are likely to improve future psychological science,
they don’t indicate how much confidence we ought to have in extant studies. My
view is that one can be cautiously confident in presenting psychological research in
a context like this by adhering to a few relatively basic principles (none of which are
the least bit novel to researchers in psychology or to philosophers working on these
topics). As with most principles, though, things are complicated. And cautious con-
fidence by no means guarantees the right outcomes.
For example, studies with a large number of participants are, ceteris paribus, more
valuable than studies with a small number of participants. And so I have tried to
avoid presenting the results of underpowered studies. Indeed, this is a core strength
of IAT research, much of which is done with online samples. As a result, many of the
studies I discuss (when focused on research on implicit attitudes) have tremendous
power to detect significant effects, given sample sizes in the thousands and even
sometimes in the hundreds of thousands. As a methodological principle, however,
there are complications stemming from the demand that all psychological research
require large samples. One complication is that large samples are sometimes diffi-
cult or impossible to obtain. This may be due to research methodology (e.g., neural
imaging research is often prohibitively expensive to do with large samples) or to
research topics (e.g., some populations, such as patients with rare diseases, are dif-
ficult to find). While there have been Herculean efforts to locate difficult to find
populations (e.g., Kristina Olson’s research in the Transyouth Project),22 in some
cases the problem may be insurmountable. A related complication stems from other
kinds of costs one pays in order to obtain large samples. There are downsides, for
example, to studies using online participants. People in these participant pools are
self-​selecting and may not be representative of the population at large.23 Their test-
ing environment is also not well controlled. The tension between the importance
of obtaining large samples and the costs associated with online samples is one that
methodologists will continue to work through.
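
To see concretely why sample size matters so much for effects of this magnitude, consider a brief simulation of the statistical power to detect a correlation as small as the aspirin-sized r = .035 discussed above. This is a minimal sketch; the sample sizes and the number of simulated experiments are illustrative assumptions.

```python
# A minimal power simulation, assuming a true population correlation of r = .035.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_r = 0.035

for n in (100, 1_000, 10_000):
    hits = 0
    for _ in range(1_000):
        x = rng.standard_normal(n)
        # Construct y so that the population correlation between x and y is true_r.
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.standard_normal(n)
        if stats.pearsonr(x, y)[1] < 0.05:  # two-sided p-value
            hits += 1
    print(f"n = {n:>6}: power = {hits / 1_000:.2f}")
```

With one hundred participants, an effect of this size is almost never detected; with ten thousand, it almost always is. Hence the value of the very large online samples typical of IAT research.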
Another seemingly simple, but ultimately not uncomplicated, principle is focus-
ing on sets of studies rather than on any one individual study. It is generally bad

21. See http://michaelinzlicht.com/getting-better/2016/8/22/the-unschooling-of-a-faithful-mind.
22. See http://depts.washington.edu/transyp/.
23. Of course, this may be a problem facing psychology as a whole. See Henrich et al. (2010). For additional worries about attrition rates in online samples, see Zhou and Fishbach (2016).

practice to take any one study as definitive of anything. Meta-analyses tend to be
more reliable. But complications and concerns abound here too. Sometimes, as dis-
cussed in §2.3, multiple meta-​analyses have conflicting results. This can be the result
of legitimate differences in analytic principles (e.g., principles guiding inclusion
criteria). Comparing meta-​analyses or sets of replication efforts across subareas of
research is also complicated, given potentially legitimate differences between com-
mon research practices. For example, the OSC found that social psychology fared
worse than other subdisciplines, such as cognitive psychology. By the OSC’s met-
ric, only fifteen of fifty-​five social psychology effects could be replicated, whereas
twenty-​one of forty-​two effects in cognitive psychology could. This may be due
in part to differences in experimental design common within these fields. As the
authors note, within-​subjects designs are more common in cognitive psychology,
and these often have greater power to detect significant effects with equivalent num-
bers of participants. This leaves open the question whether social psychologists
ought to alter their research designs or whether there are features of social life—​
the object of study of social psychology—​that require between-​subjects designs or
other, comparatively riskier practices.
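
The design point can be illustrated with a small simulation. The effect size, noise levels, and sample size below are illustrative assumptions, not estimates from the OSC data; the sketch shows only that a within-subjects (paired) design, by canceling stable individual differences, detects the same true effect far more often than a between-subjects design with the same number of participants.

```python
# Comparing the power of within- vs. between-subjects designs (illustrative values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, true_d, n_sims = 40, 0.4, 2_000
within_hits = between_hits = 0

for _ in range(n_sims):
    # Within-subjects: the same people measured in both conditions.
    person = rng.standard_normal(n)                   # stable individual differences
    cond_a = person + 0.5 * rng.standard_normal(n)
    cond_b = person + true_d + 0.5 * rng.standard_normal(n)
    if stats.ttest_rel(cond_a, cond_b).pvalue < 0.05:
        within_hits += 1
    # Between-subjects: different people in each condition.
    group_1 = rng.standard_normal(n) + 0.5 * rng.standard_normal(n)
    group_2 = rng.standard_normal(n) + true_d + 0.5 * rng.standard_normal(n)
    if stats.ttest_ind(group_1, group_2).pvalue < 0.05:
        between_hits += 1

print(f"within-subjects power:  {within_hits / n_sims:.2f}")   # much higher
print(f"between-subjects power: {between_hits / n_sims:.2f}")  # much lower
```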
While some of the complications facing meta-​analyses have to do with open
questions like these, other concerns are more unambiguous. A meta-​analysis of
a set of unreliable studies will itself yield unreliable results. This is the so-​called
garbage in, garbage out problem (Inzlicht et al., 2015). The file drawer prob-
lem—​that is, the problem of including only studies that find effects and leaving
aside studies that don’t—​can also render meta-​analyses unreliable. Sometimes
this problem is obvious, such as when a meta-​analysis includes only published
studies. Psychology journals typically do not publish null results, so including
only published studies can serve as a proxy for the file drawer problem. Even
when unpublished studies are included, as Inzlicht points out, another indi-
cation that the file drawer problem has affected a meta-​analysis is that there
is an association between the number of participants in the included studies
and the effect sizes of those studies. Unbiased samples should show no such
association.24
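
Inzlicht's diagnostic can be made vivid with simulated data. In the sketch below every number is an illustrative assumption: across all the studies run, sample size and observed effect size are unrelated, but among the "published" (statistically significant) subset, the telltale negative association emerges.

```python
# Simulating the file drawer signature: n and effect size become correlated
# once only significant results are "published." All values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.2                              # one common true effect across the literature
k = 2_000                                 # number of studies attempted
ns = rng.integers(20, 400, size=k)        # per-group sample sizes
se = np.sqrt(2 / ns)                      # approximate standard error of Cohen's d
ds = rng.normal(true_d, se)               # observed effect sizes
published = np.abs(ds / se) > 1.96        # file drawer: only significant results appear

r_all = stats.pearsonr(ns, ds)[0]                          # ~0: unbiased literature
r_pub = stats.pearsonr(ns[published], ds[published])[0]    # negative: bias signature
print(f"all studies run:  r(n, d) = {r_all:+.2f}")
print(f"'published' only: r(n, d) = {r_pub:+.2f}")
```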
The file drawer problem should not be confused, however, with the idea that multiple
studies that fail to demonstrate a statistically significant effect confirm the absence
of an effect. A set of studies free from major biases can indicate a
significant effect when pooled properly, even if each study on its own finds no
significant effect. This points to the enduring importance of meta-​analyses. As
Steven Goodman and colleagues point out, discussing a meta-analysis of the effect of
tamoxifen on breast cancer survival (Early Breast Cancer Trialists' Collaborative
Group, 1988):

24. See http://soccco.uni-koeln.de/cscm-2016-debate.html.

In this pooled analysis, 25 of 26 individual studies of tamoxifen's effect were
not statistically significant. Naïvely, these nonsignificant findings could be
described as having been replicated 25 times. Yet, when properly pooled, they
cumulatively added up to a definitive rejection of the null hypothesis with
a highly statistically significant 20% reduction in mortality. So the proper
approach to interpreting the evidential meaning of independent studies is not
to assess whether or not statistical significance has been observed in each,
but rather to assess their cumulative evidential weight. (Goodman et  al.,
2016, p. 3)

Thus while relying on meta-​analyses will not by any stretch solve the problems fac-
ing psychological science and while meta-​analyses can have their own problems,
they continue to have a central role to play in adjudicating the credence one ought
to have in a body of research. They can play this role best when researchers evaluate
the design and structure of meta-​analyses themselves (as they are now doing).
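
A minimal sketch of inverse-variance pooling makes Goodman and colleagues' point concrete. The effect estimates and standard errors below are invented for illustration; they are not the tamoxifen data.

```python
# Individually nonsignificant studies pooling to a decisive result (invented numbers).
import numpy as np
from scipy import stats

effects = np.full(25, -0.10)   # 25 studies, each estimating the same modest benefit
ses = np.full(25, 0.08)        # each study too noisy to reach p < .05 on its own

z_single = effects[0] / ses[0]                    # z = -1.25, p ~ .21: "nonsignificant"
weights = 1 / ses**2                              # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))          # shrinks as studies accumulate
z_pooled = pooled / pooled_se                     # z = -6.25: decisively significant

print(f"any single study: p = {2 * stats.norm.sf(abs(z_single)):.2f}")
print(f"pooled estimate:  p = {2 * stats.norm.sf(abs(z_pooled)):.1e}")
```

The cumulative evidential weight, not the tally of individually significant studies, is what does the work.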
A third principle to which I have tried to hew closely, one particularly ger-
mane in the case of implicit attitude research, is to identify reasons to expect fea-
tures of the context to affect the outcomes of studies. This follows from my account
of implicit attitudes as states with FTBA components. It figures into my recommen-
dations for creating IATs with greater predictive validity. And it also speaks to broad
issues in the replication crisis. It is clear from decades of research that social attitudes
and behavior are persistently affected by the conceptual, physical, and motivational
elements of agents’ contexts (see Chapters 3 and 7 and, for a review, see Gawronski
and Cesario, 2013). In one sense, this suggests that we should expect many efforts
to replicate psychological experiments to fail, given the myriad cultural, historical,
and contextual features that could influence the outcome. Of course, replication
under specified circumstances is a core tenet of science in general. In psychological
science, the ideal is to identify outcome-​affecting variables that are psychologically
meaningful for principled reasons.
For example, consider the finding that the ambient light of the room in which one
sits affects the activation of implicit racial biases (Schaller et al., 2003). A principled
explanation of the specific moderating contextual factors is available here, I believe.
It stems from the connection between an agent’s cares and the selective activation
of her implicit attitudes (see Chapters 3 and 4). Schaller and colleagues’ findings
obtained only for agents with chronic beliefs in a dangerous world. The dangerous-
ness of the world is something the agent cares about (as chronic beliefs are cer-
tainly sufficient to fix cares). This care is connected to the activation of the agent’s
implicit biases, given that ambient darkness is known to amplify threat perception
(e.g., Grillon et al., 1997) and that black faces are typically seen by white people
as more threatening than white faces (e.g., Hugenberg and Bodenhausen, 2003).
Likewise, predictions could be made based on other factors relevant to agents' cares.
One might expect that the possibilities for action available to an agent, based on

her physical surroundings, also affect the activation of threat-​related implicit biases.
Indeed, precisely this has been found.
Consider a second example, regarding related (and much criticized) research on
priming. Cesario and Jonas (2014) address the replicability of priming research in
their “Resource Computation Model” by identifying conditions under which primes
will and won’t affect behavior. They model social behaviors following priming as a
result of computational processing of information that determines a set of behav-
ioral possibilities for agents. They describe three sources of such information: (1)
social resources (who is around me?); (2)  bodily resources (what is the state of
my physiology and bodily position?); and (3) structural resources (where am I and
what objects are around me?). I described one of Cesario and Jonas’s experiments
in Chapter 2. They documented the effects of priming white participants with the
concept “young black male,” contingent upon the participants’ physical surround-
ings and how threatening they perceived black males to be. The participants were
seated either in an open field, which would allow a flight response to a perceived
threat, or in a closed booth, which would restrict a flight response. Their outcome
measure was the activation of flight versus fight words. Cesario and Jonas found that
fight-​related words were activated for participants who were seated in the booth and
who associated black men with danger, whereas flight-​related words were activated
for participants who were seated in the field and who associated black men with
danger.25
The line demarcating principled predictions about outcome-​affecting variables
from ad hoc reference to “hidden moderators” is not crystal clear, however. I take
the articulation of this line to be an important job for philosophers of science. It
is clear enough that one should not attempt to explain away a failure to replicate a
given study by searching after the fact for differences between the first and second
experimental contexts.26 But I also note that this sort of post hoc rationalization can
point the way to fruitful future research, which could help to confirm or discon-
firm the proposal. What starts as post hoc rationalization for a failed replication can
become an important theoretical refinement of previous theory, given additional
research. (Or not! So the research needs to be done!)

4. Conclusion

The aforementioned ambiguity over what constitutes a principled explanation of an
effect points toward a broader set of issues regarding replication, the psychological

25. My point is not that all priming research can be saved through this approach. I do not know if there are principled explanations of moderators of many forms of priming.
26. For a prescient, helpful, and related critique of post hoc hypothesizing about data, see Kerr (1998).

sciences, and philosophy. There is much work to do at this juncture for philosophers
of science. Aside from questions about replication, there are persistently difficult
tensions to resolve in research methodology and interpretation. The comparative
value of field studies versus laboratory experiments is particularly fraught, I believe. Each offers
benefits, costs, and seemingly inevitable trade-​offs.27 With respect to replication,
there are trade-​offs between conceptual and direct replications, and moreover it
isn’t even entirely clear what distinguishes a successful from an unsuccessful repli-
cation. Even more broadly, it is not clear how a successful replication indicates what
is and isn’t true. The OSC (2015) writes, for example, “After this intensive effort to
reproduce a sample of published psychological findings, how many of the effects
have we established are true? Zero. And how many of the effects have we estab-
lished are false? Zero. Is this a limitation of the project design? No.” The OSC rec-
ommends a roughly Bayesian framework, according to which all studies—​originals
and replications alike—​simply generate cumulative evidence for shifting one’s prior
credence in an idea:28

The original studies examined here offered tentative evidence; the replica-
tions we conducted offered additional, confirmatory evidence. In some cases,
the replications increase confidence in the reliability of the original results;
in other cases, the replications suggest that more investigation is needed to
establish the validity of the original findings. Scientific progress is a cumula-
tive process of uncertainty reduction that can only succeed if science itself
remains the greatest skeptic of its explanatory claims.
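
One simple way to render this cumulative, roughly Bayesian picture precise is to update the odds on a hypothesis by a Bayes factor for each new study. The prior and the Bayes factors below are purely illustrative assumptions, not estimates from the OSC's data.

```python
# A toy Bayesian updating sketch: each study multiplies the prior odds by its
# Bayes factor. All numbers are illustrative assumptions.

def update(prior_odds: float, bayes_factor: float) -> float:
    """Posterior odds in favor of the effect after one study."""
    return prior_odds * bayes_factor

prior_odds = 0.25                                 # skeptical prior: 1-to-4 the effect is real
after_original = update(prior_odds, 3.0)          # original study: modest evidence (BF = 3)
after_replication = update(after_original, 5.0)   # successful replication (BF = 5)

for label, odds in [("prior", prior_odds),
                    ("after original study", after_original),
                    ("after replication", after_replication)]:
    print(f"{label}: credence = {odds / (1 + odds):.2f}")
```

On this framing, neither the original study nor the replication establishes anything on its own; each merely shifts one's credence.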

These assertions bear upon long-standing debates in the philosophy of science.
Some philosophers of science have begun considering these very questions.29 As
with most philosophical questions, clear answers probably won’t be found. But
my cautiously optimistic sense, given all that I have discussed, is that the epistemic
value of the sciences of the mind, and of the majority of the findings I’ve presented
in this book, is secure. I second the OSC’s sense that science is its own greatest skep-
tic.30 Psychological science in particular may use flawed means for understanding
the mind, but these means strike me as comparatively less flawed than other, nonsci-
entific means alone. In this way, one might echo what Winston Churchill putatively
said about democracy: that it was the worst form of government—except for all
the others. Psychological science may be the worst way to understand the mind—​
except for all the others.

27. See, e.g., Berkowitz and Donnerstein (1982) and Falk and Heckman (2009).
28. See Goodman et al. (2016) on Bayesian approaches to understanding reproducibility.
29. See, e.g., Mayo (2011, 2012).
30. For another statement of cautious optimism, see Simine Vazire's (2016) blog post "It's the End of the World as We Know It . . . and I Feel Fine."
REFERENCES

Aarts, H., Dijksterhuis, A., & Midden, C. 1999. To plan or not to plan? Goal achievement or inter-
rupting the performance of mundane behaviors. European Journal of Social Psychology, 29,
971–​979.
Achtziger, A., Bayer, U., & Gollwitzer, P. 2007. Implementation intentions: Increasing the accessibil-
ity of cues for action. Unpublished paper, University of Konstanz.
Agassi, A. 2010. Open: An autobiography. New York: Vintage.
Agerström, J., & Rooth, D. O. 2011. The role of automatic obesity stereotypes in real hiring discrimi-
nation. Journal of Applied Psychology, 96(4), 790–​805.
Ajzen, I. 1988. Attitudes, personality, and behavior. Homewood, IL: Dorsey Press.
Ajzen, I., & Fishbein, M. 1970. The prediction of behavior from attitudinal and normative variables.
Journal of Experimental Social Psychology, 6, 466–​487. doi:10.1016/​0022-​1031(70)90057-​0.
Ajzen, I., & Fishbein, M. 1977. Attitude–​behavior relations: A theoretical analysis and review of
empirical research. Psychological Bulletin, 84, 888–​918. doi:10.1037/​0033-​2909.84.5.888.
Ajzen, I., & Fishbein, M. 2005. The influence of attitudes on behavior. In D. Albarracín, B. T.
Johnson, & M. P. Zanna (Eds.), The handbook of attitudes, 173–​221. Mahwah, NJ: Erlbaum.
Akerlof, G. A., & Kranton, R. E. 2000. Economics and identity. Quarterly Journal of Economics, 115,
715–​753.
Alcoff, L. M. 2010. Epistemic identities. Episteme, 7(2), 128–​137.
Allport, G. 1954. The nature of prejudice. Cambridge, MA: Addison-​Wesley.
Amodio, D. M., & Devine, P. G. 2006. Stereotyping and evaluation in implicit race bias: Evidence
for independent constructs and unique effects on behavior. Journal of Personality and Social
Psychology, 91(4), 652.
Amodio, D. M., & Devine, P. G. 2009. On the interpersonal functions of implicit stereotyping
and evaluative race bias: Insights from social neuroscience. In R. E. Petty, R. H. Fazio, & P.
Briñol (Eds.), Attitudes:  Insights from the new wave of implicit measures, 193–​226. Hillsdale,
NJ: Erlbaum.
Amodio, D. M., & Devine, P. G. 2010. Regulating behavior in the social world: Control in the con-
text of intergroup bias. In R. R. Hassin, K. N. Ochsner, & Y. Trope (Eds.), Self control in society,
mind, and brain, 49–​75. New York: Oxford University Press.
Amodio, D. M., Devine, P. G., & Harmon-​Jones, E. 2008. Individual differences in the regulation
of intergroup bias: The role of conflict monitoring and neural signals for control. Journal of
Personality and Social Psychology, 94, 60–​74.
Amodio, D. M., & Hamilton, H. K. 2012. Intergroup anxiety effects on implicit racial evaluation and
stereotyping. Emotion, 12, 1273–​1280.
Amodio, D. M., & Ratner, K. 2011. A memory systems model of implicit social cognition. Current
Directions in Psychological Science, 20(3), 143–​148.


Anderson, E. 1993. Value in ethics and economics. Cambridge, MA: Harvard University Press.


Anderson, E. 2005. Moral heuristics: Rigid rules or flexible inputs in moral deliberation? Behavioral
and Brain Sciences, 28(4), 544–​545.
Anderson, E. 2012. Epistemic justice as a virtue of social institutions. Social Epistemology, 26(2),
163–​173.
Andrews, K. 2014, September 30. Naïve normativity. http://philosophyofbrains.com/2014/09/30/naive-normativity.aspx.
Annas, J. 2008. The phenomenology of virtue. Phenomenology and the Cognitive Sciences, 7, 21–​34.
Antony, L. 2016. Bias:  Friend or foe? Reflections on Saulish skepticism. In M. Brownstein & J.
Saul (Eds.), Implicit bias and philosophy: Vol. 1, Metaphysics and epistemology, 157–​190.
Oxford: Oxford University Press.
Apfelbaum, E. P., Sommers, S. R., & Norton, M. I. 2008. Seeing race and seeming racist? Evaluating
strategic colorblindness in social interaction. Journal of Personality and Social Psychology, 95,
918–​932.
Argyle, M., Alkema, F., & Gilmour, R. 1971. The communication of friendly and hostile attitudes by
verbal and non-​verbal signals. European Journal of Social Psychology, 1(3), 385–​402.
Ariely, D. 2011. The upside of irrationality. New York: Harper Perennial.
Aristotle. 2000. Nicomachean ethics (R. Crisp, Ed. & Trans.). Cambridge: Cambridge University Press.
Arnold, M. 1960. Emotion and personality. New York: Columbia University Press.
Arpaly, N. 2004. Unprincipled virtue: An inquiry into moral agency. Oxford: Oxford University Press.
Arpaly, N. 2015. Book review of Consciousness and moral responsibility. Australasian Journal of
Philosophy. doi: 10.1080/​00048402.2014.996579.
Arpaly, N. 2015. Philosophy and depression. Web log comment. http://​dailynous.com/​2015/​02/​
23/​philosophy-​and-​depression/​.
Arpaly, N., & Schroeder, T. 1999. Praise, blame and the whole self. Philosophical Studies, 93(2),
161–​188.
Arpaly, N., & Schroeder, T. 2012. Deliberation and acting for reasons. Philosophical Review, 121(2),
209–​239.
Ashburn-​Nardo, L., Knowles, M. L., & Monteith, M. J. 2003. Black Americans’ implicit racial asso-
ciations and their implications for intergroup judgment. Social Cognition, 21(1), 61–​87.
Ashburn-​Nardo, L., Voils, C. I., & Monteith, M. J. 2001. Implicit associations as the seeds of
intergroup bias:  How easily do they take root? Journal of Personality and Social Psychology,
81(5), 789.
Bahill, A., & LaRitz, T. 1984. Why can’t batters keep their eyes on the ball? A laboratory study of bat-
ters tracking a fastball shows the limitations of some hoary baseball axioms. American Scientist,
72(3), 249–​253.
Banaji, M. R. 2013. Our bounded rationality. In J. Brockman (Ed.), This explains everything: Deep,
beautiful, and elegant theories of how the world works, 94–​95. New York: Harper Perennial.
Banaji, M. R., & Greenwald, A. G. 2013. Blindspot:  Hidden biases of good people. New  York: 
Delacorte Press.
Banaji, M., & Hardin, C. 1996. Automatic stereotyping. Psychological Science, 7, 136–​141.
Bar-​Anan, Y., & Nosek, B. 2014. A comparative investigation of seven implicit attitude measures.
Behavioral Research, 46, 668–​688.
Barden, J., Maddux, W., Petty, R., & Brewer, M. 2004. Contextual moderation of racial bias: The
impact of social roles on controlled and automatically activated attitudes. Journal of Personality
and Social Psychology, 87(1), 5–​22.
Bargh, J. A. 1999. The cognitive monster: The case against the controllability of automatic stereo-
type effects. In S. Chaiken & Y. Trope (Eds.), Dual-​process theories in social psychology, 361–​
382. New York: Guilford Press.
Bargh, J. A., Chen, M., & Burrows, L. 1996. Automaticity of social behavior: Direct effects of trait
construct and stereotype-​activation on action. Journal of Personality and Social Psychology, 71,
230–​244.

Bargh, J. A., Gollwitzer, P. M., Lee-​Chai, A., Barndollar, K., & Trotschel, R. 2001. The automated
will: Unconscious activation and pursuit of behavioral goals. Journal of Personality and Social
Psychology, 81, 1004–​1027.
Baron, J. 2008. Thinking and deciding. New York: Cambridge University Press.
Barrett, L. F., & Bar, M. 2009. See it with feeling: Affective predictions during object perception.
Philosophical Transactions of the Royal Society, 364, 1325–​1334.
Barrett, L. F., Oschner, K. N., & Gross, J. J. 2007. On the automaticity of emotion. In J. A. Bargh
(Ed.), Social psychology and the unconscious: The automaticity of higher mental processes, 173–​
218. Philadelphia: Psychology Press.
Barry, M. 2007. Realism, rational action, and the Humean theory of motivation. Ethical Theory and
Moral Practice, 10, 231–​42.
Bartsch, K., & Wright, J. C. 2005. Toward an intuitionist account of moral development. Behavioral
and Brain Sciences, 28(4), 546–​547.
Bateson, M., Nettle, D., & Roberts, G. 2006. Cues of being watched enhance cooperation in a real-​
world setting. Biology Letters, 2(3), 412–414.
Baumeister, R., & Alquist, J. 2009. Is there a downside to good self-​control? Self and Identity, 8(2–​3),
115–​130.
Baumeister, R., Bratslavsky, E., Muraven, M., & Tice, D. 1998. Ego depletion: Is the active self a
limited resource? Journal of Personality and Social Psychology, 74(5), 1252.
Bealer, G. 1993. The incoherence of empiricism. In S. Wagner & R. Warner (Eds.), Naturalism: A
critical appraisal, 99–​183. Notre Dame, IN: University of Notre Dame Press.
Beaman, L., Chattopadhyay, R., Duflo, E., Pande, R., & Topalova, P. 2009. Powerful women: Does
exposure reduce bias? Quarterly Journal of Economics, 124(4), 1497–​1540.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. 1997. Deciding advantageously before knowing
the advantageous strategy. Science, 275, 1293–​1295.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. 2005. The Iowa Gambling Task and the
Somatic Marker Hypothesis: Some questions and answers. Trends in Cognitive Science, 9(4),
159–​162.
Beck, J. S. 2011. Cognitive behavioral therapy, second edition:  Basics and beyond. New  York:
Guilford Press.
Begley, C., & Ellis, L. 2012. Drug development:  Raise standards for preclinical cancer research.
Nature, 483, 531–​533. doi: 10.1038/​483531a.
Beilock, S. 2010. Choke: What the secrets of the brain reveal about getting it right when you have to.
New York: Free Press.
Beilock, S. L., Wierenga, S. A., & Carr, T. H. 2002. Expertise, attention, and memory in sensorimotor
skill execution: Impact of novel task constraints on dual-task performance and episodic memory.
The Quarterly Journal of Experimental Psychology: Section A, 55(4), 1211–1240.
Bennett, J. 1974. The conscience of Huckleberry Finn. Philosophy, 49, 123–​134.
Berger, J. 2012. Do we conceptualize every color we consciously discriminate? Consciousness and
Cognition, 21(2), 632–​635.
Berkowitz, L., & Donnerstein, E. 1982. External validity is more than skin deep:  Some answers
to criticisms of laboratory experiments. American Psychologist, 37(3), 245–​257. doi: http://​
dx.doi.org/​10.1037/​0003-​066X.37.3.245.
Berridge, K., & Winkielman, P. 2003. What is an unconscious emotion? The case for unconscious
“liking.” Cognition and Emotion, 17(2), 181–​211.
Berry, J., Abernethy, B., & Côté, J. 2008. The contribution of structured activity and deliberate
play to the development of expert perceptual and decision-​making skill. Journal of Sport and
Exercise Psychology, 30(6), 685–​708.
Bhalla, M., & Proffitt, D. R. 1999. Visual–​motor recalibration in geographical slant perception.
Journal of Experimental Psychology: Human Perception and Performance, 25(4), 1076.
Blair, I. 2002. The malleability of automatic stereotypes and prejudice. Personality and Social
Psychology Review, 6, 242–​261.

Blair, I., Ma, J., & Lenton, A. 2001. Imagining stereotypes away: The moderation of implicit stereo-
types through mental imagery. Journal of Personality and Social Psychology, 81, 828.
Block, J. 2002. Personality as an affect-​processing system:  Toward an integrative theory. Hillsdale,
NJ: Erlbaum.
Bourdieu, P. 1990. In other words:  Essays towards a reflexive sociology. Stanford, CA:  Stanford
University Press.
Bratman, M. 1987. Intentions, plans, and practical reason. Stanford, CA:  Center for the Study of
Language and Information.
Bratman, M. 2000. Reflection, planning, and temporally extended agency. Philosophical Review,
109(1), 38.
Brewer, M. B. 1999. The psychology of prejudice: Ingroup love and outgroup hate? Journal of Social
Issues, 55(3), 429–​444.
Briñol, P., Petty, R., & McCaslin, M. 2009. Changing attitudes on implicit versus explicit mea-
sures: What is the difference? In R. Petty, R. Fazio, and P. Briñol (Eds.), Attitudes: Insights from
the new implicit measures, 285–​326. New York: Psychology Press.
Brody, G. H., Yu, T., Chen, E., Miller, G. E., Kogan, S. M., & Beach, S. R. H. 2013. Is resilience only
skin deep? Rural African Americans’ socioeconomic status—​related risk and competence
in preadolescence and psychological adjustment and allostatic load at age 19. Psychological
Science, 24(7), 1285–​1293.
Brownstein, M. 2014. Rationalizing flow: Agency in skilled unreflective action. Philosophical Studies,
168(2), 545–​568. doi: 10.1007/​s11098-​013-​0143-​5.
Brownstein, M. 2015, February. Implicit bias. Stanford encyclopedia of philosophy, Spring 2015 ed.
(E. Zalta, Ed.). http://​plato.stanford.edu/​entries/​implicit-​bias/​.
Brownstein, M. 2016. Implicit bias, context, and character. In M. Brownstein & J. Saul (Eds.),
Implicit bias and philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics, 215–​234.
Oxford: Oxford University Press.
Brownstein, M. 2017. Implicit attitudes, social learning, and moral credibility. In J. Kiverstein (Ed.),
The Routledge handbook on philosophy of the social mind, 298–​319. New York: Routledge.
Brownstein, M. Self-​control and overcontrol: Conceptual, ethical, and ideological issues in positive
psychology. Unpublished paper.
Brownstein, M., & Madva, A. 2012a. Ethical automaticity. Philosophy of the Social Sciences,
42(1), 67–​97.
Brownstein, M., & Madva, A. 2012b. The normativity of automaticity. Mind and Language, 27(4),
410–​434.
Brownstein, M., & Michaelson, E. 2016. Doing without believing: Intellectualism, knowing-how,
and belief attribution. Synthese, 193(9), 2815–​2836.
Brownstein, M., & Saul, J. (Eds.). 2016. Implicit bias and philosophy: Vol. 2, Moral responsibility, struc-
tural injustice, and ethics. Oxford: Oxford University Press.
Bryan, W. L., & Harter, N. 1899. Studies on the telegraphic language: The acquisition of a hierarchy
of habits. Psychological Review, 6(4), 345–​375.
Buchsbaum, D., Gopnik A., & Griffiths, T. L. 2010. Children’s imitation of action sequences is
influenced by statistical evidence and inferred causal structure. Proceedings of the 32nd Annual
Conference of the Cognitive Science Society.
Buckner, C. 2011. Two approaches to the distinction between cognition and “mere association.”
International Journal of Comparative Psychology, 24(4), 314–​348.
Buckner, C. 2017. Rational inference: The lowest bounds. Philosophy and Phenomenological
Research. doi: 10.1111/phpr.12455.
Buon, M., Jacob, P., Loissel, E., & Dupoux, E. 2013. A non-​mentalistic cause-​based heu-
ristic in human social evaluations. Cognition, 126(2), 149–​ 155. doi:  10.1016/​
j.cognition.2012.09.006.
Byrne, A. 2009. Experience and content. Philosophical Quarterly, 59(236), 429–​451. doi: 10.1111/​
j.1467-​9213.2009.614.x.

Cameron, C. D., Brown-Iannuzzi, J. L., & Payne, B. K. 2012. Sequential priming measures of
implicit social cognition: A meta-​analysis of associations with behavior and explicit attitudes.
Personality and Social Psychology Review, 16, 330–​350.
Cameron, C., Payne, B., & J. Knobe. 2010. Do theories of implicit race bias change moral judg-
ments? Social Justice Research, 23, 272–​289.
Carey, S. 2009. The origin of concepts. Oxford: Oxford University Press.
Carlsson, R., & Björklund, F. 2010. Implicit stereotype content: Mixed stereotypes can be measured
with the implicit association test. Social Psychology, 41(4), 213–​222.
Carruthers, P. 2009. How we know our own minds:  The relationship between mindreading and
metacognition. Behavioral and Brain Sciences, 32, 121–​138.
Carruthers, P. 2013. On knowing your own beliefs: A representationalist account. In N. Nottelmann
(Ed.), New essays on belief: Constitution, content and structure, 145–​165. Basingstoke: Palgrave
MacMillan.
Carver, C. S., & Harmon-​Jones, E. 2009. Anger is an approach-​related affect: Evidence and implica-
tions. Psychological Bulletin, 135, 183–​204.
Carver, C. S., & Scheier, M. F. 1982. Control theory: A useful conceptual framework for personality—​
social, clinical, and health psychology. Psychological Bulletin, 92, 111–​135.
Cath, Y. 2011. Knowing how without knowing that. In J. Bengson & M. Moffett (Eds.), Knowing-​
how: Essays on knowledge, mind, and action, 113–​135. New York: Oxford University Press.
Cesario, J., & Jonas, K. 2014. Replicability and models of priming: What a resource computation
framework can tell us about expectations of replicability. Social Cognition, 32, 124–​136.
Cesario, J., Plaks, J. E., & Higgins, E. T. 2006. Automatic social behavior as motivated preparation to
interact. Journal of Personality and Social Psychology, 90, 893–​910.
Chan, D. 1995. Non-​intentional actions. American Philosophical Quarterly, 32, 139–​151.
Chemero, A. 2009. Radical embodied cognitive science. Cambridge, MA: MIT Press.
Chen, M., & Bargh, J. A. 1999. Consequences of automatic evaluation: Immediate behavioral pre-
dispositions to approach or avoid the stimulus. Personality and Social Psychology Bulletin, 25,
215–​224.
Clark, A. 1997. Being there. Cambridge, MA: MIT Press.
Clark, A. 1999. An embodied cognitive science? Trends in Cognitive Science, 3(9), 345–​351.
Clark, A. 2007. Soft selves and ecological control. In D. Spurrett, D. Ross, H. Kincaid, & L.
Stephens (Eds.), Distributed cognition and the will. Cambridge, MA: MIT Press.
Clark, A. 2015. Surfing uncertainty:  Prediction, action, and the embodied mind. Oxford:  Oxford
University Press.
Cohen, G. L. 2003. Party over policy: The dominating impact of group influence on political beliefs.
Journal of Personality and Social Psychology, 85, 808–​822.
Cohen, J. 1990. Things I have learned so far. American Psychologist, 45(12), 1304–​1312.
Colombetti, G. 2007. Enactive appraisal. Phenomenology and the Cognitive Sciences, 6, 527–​546.
Confucius. 2003. Analects. (E. Slingerland, Trans.). Indianapolis: Hackett.
Conrey, F., Sherman, J., Gawronski, B., Hugenberg, K., & Groom, C. 2005. Separating multiple pro-
cesses in implicit social cognition: The Quad-​Model of implicit task performance. Journal of
Personality and Social Psychology, 89, 469–​487.
Corbetta, M., and Shulman, G. L. 2002. Control of goal-​directed and stimulus-​driven attention in
the brain. Nature Reviews: Neuroscience, 3, 201–​215. doi: 10.1038/​nrn755.
Correll, J., Park, B., Judd, C., & Wittenbrink, B. 2002. The police officer’s dilemma: Using race to
disambiguate potentially threatening individuals. Journal of Personality and Social Psychology,
83, 1314–​1329.
Correll, J., Park, B., Judd, C. M., & Wittenbrink, B. 2007. The influence of stereotypes on decisions
to shoot. European Journal of Social Psychology, 37(6), 1102–​1117.
Correll, J., Wittenbrink, B., Crawford, M. T., & Sadler, M. S. 2015. Stereotypic vision:  How ste-
reotypes disambiguate visual stimuli. Journal of Personality and Social Psychology, 108(2),
219–​233.

Cosmides, L., & Tooby, J. 2000. Evolutionary psychology and the emotions. In M. Lewis & J. M.
Haviland-​Jones (Eds.), Handbook of emotions, 2d ed., 91–​115. New York: Guilford Press.
Critcher, C., Inbar, Y., & Pizarro, D. 2013. How quick decisions illuminate moral character. Social
Psychological and Personality Science, 4(3), 308–​315.
Crockett, M. 2013. Models of morality. Trends in Cognitive Sciences, 17(8), 363–​366.
Csikszentmihalyi, M. 1990. Flow: The psychology of optimal experience. New York: Harper & Row.
Currie, G., & Ichino, A. 2012. Aliefs don’t exist, though some of their relatives do. Analysis, 72(4),
788–​798.
Cushman, F. 2013. Action, outcome, and value: A dual-​system framework for morality. Personality
and Social Psychology Review, 17(3), 273–​292.
Cushman, F., Gray, K., Gaffey, A., & Mendes, W. B. 2012. Simulating murder: The aversion to harm-
ful action. Emotion, 12(1), 2.
Damasio, A. 1994. Descartes’ error: Emotion, reason, and the human brain. New York: Putnam’s.
Danziger, S., Levav, J., & Avnaim-​Pesso, L. 2011. Extraneous factors in judicial decisions. Proceedings
of the National Academy of Sciences, 108(17), 6889–​6892.
Dardenne, B., Dumont, M., & Bollier, T. 2007. Insidious dangers of benevolent sexism: Consequences
for women’s performance. Journal of Personality and Social Psychology, 93(5), 764.
Darley, J., and Batson, C. 1973. From Jerusalem to Jericho: A study of situational and dispositional
variables in helping behavior. Journal of Personality and Social Psychology, 27, 100–​108.
Dasgupta, N. 2004. Implicit ingroup favoritism, outgroup favoritism, and their behavioral manifes-
tations. Social Justice Research, 17(2), 143–​168.
Dasgupta, N. 2013. Implicit attitudes and beliefs adapt to situations: A decade of research on the
malleability of implicit prejudice, stereotypes, and the self-​concept. Advances in Experimental
Social Psychology, 47, 233–​279.
Dasgupta, N., & Asgari, S. 2004. Seeing is believing: Exposure to counterstereotypic women leaders
and its effect on automatic gender stereotyping. Journal of Experimental Social Psychology, 40,
642–​658.
Dasgupta, N., DeSteno, D., Williams, L. A., & Hunsinger, M. 2009. Fanning the flames of preju-
dice: The influence of specific incidental emotions on implicit prejudice. Emotion, 9(4), 585.
Dasgupta, N., & Greenwald, A. 2001. On the malleability of automatic attitudes: Combating auto-
matic prejudice with images of admired and disliked individuals. Journal of Personality and
Social Psychology, 81, 800–​814.
Dasgupta, N., & Rivera, L. 2008. When social context matters:  The influence of long-​term con-
tact and short-​term exposure to admired group members on implicit attitudes and behavioral
intentions. Social Cognition, 26, 112–​123.
Davidson, A. R., & Jaccard, J. J. 1979. Variables that moderate the attitude–behavior rela-
tion: Results of a longitudinal survey. Journal of Personality and Social Psychology, 37,
1364–1376. doi: 10.1037/0022-3514.37.8.1364.
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. 2011. Model-​based influences on
humans’ choices and striatal prediction errors. Neuron, 69, 1204–​1215.
D’Cruz, J. 2013. Volatile reasons. Australasian Journal of Philosophy, 91(1), 31–​40.
De Becker, G. 1998. The gift of fear. New York: Dell.
De Coster, L., Verschuere, B., Goubert, L., Tsakiris, M., & Brass, M. 2013. I suffer more from your
pain when you act like me: Being imitated enhances affective responses to seeing someone
else in pain. Cognitive, Affective, & Behavioral Neuroscience, 13(3), 519–​532.
De Houwer, J. 2014. A propositional model of implicit evaluation. Social Psychology and Personality
Compass, 8(7), 342–​353.
De Houwer, J., Teige-​Mocigemba, S., Spruyt, A., & Moors, A. 2009. Implicit measures: A normative
analysis and review. Psychology Bulletin, 135(3), 347–​368. doi: 10.1037/​a0014211.
Dempsey, M. A., & Mitchell, A. A. 2010. The influence of implicit attitudes on consumer
choice when confronted with conflicting product attribute information. Journal of Consumer
Research, 37(4), 614–​625.

Dennett, D. C. 1989. The intentional stance. Cambridge, MA: MIT Press.


Deutsch, R., Gawronski, B., & Strack, F. 2006. At the boundaries of automaticity: Negation as reflec-
tive operation. Journal of Personality and Social Psychology, 91, 385–​405.
Devine, P. 1989. Stereotypes and prejudice: Their automatic and controlled components. Journal of
Personality and Social Psychology, 56, 5–​18.
Devine, P., Forscher, P., Austin, A., & Cox, W. 2012. Long-​term reduction in implicit race bias: A
prejudice habit-​ breaking intervention. Journal of Experimental Social Psychology, 48(6),
1267–​1278.
Devine, P. G., Plant, E. A., Amodio, D. M., Harmon-​Jones, E., & Vance, S. L. 2002. The regulation of
explicit and implicit race bias: The role of motivations to respond without prejudice. Journal of
Personality and Social Psychology, 82, 835–​848.
de Waal, F. 2016. Are we smart enough to know how smart animals are? New York: W. W. Norton.
Deweese-​Boyd, I. 2006/​2008. Self-​deception. Stanford encyclopedia of philosophy. (E. N. Zalta, Ed.).
http://​plato.stanford.edu/​entries/​self-​deception/​.
Doggett, T. 2012. Some questions for Tamar Szabó Gendler. Analysis, 72(4), 764–​774.
Donders, N. C., Correll, J., & Wittenbrink, B. 2008. Danger stereotypes predict racially biased atten-
tional allocation. Journal of Experimental Social Psychology, 44(5), 1328–​1333.
Doris, J. 2002. Lack of character:  Personality and moral behavior. Cambridge:  Cambridge
University Press.
Doris. J. 2015. Talking to ourselves. Oxford: Oxford University Press.
Dovidio, J. F., & Gaertner, S. L (Eds.). 1986. Prejudice, discrimination, and racism: Historical trends
and contemporary approaches. New York: Academic Press.
Dovidio, J., Kawakami, K., & Gaertner, S. 2002. Implicit and explicit prejudice and interracial inter-
action. Journal of Personality and Social Psychology, 82, 62–​68.
Dretske, F. 1981. Knowledge and the flow of information. Cambridge, MA: MIT Press.
Dretske, F. 1986. Misrepresentation. In R. Bogdan (Ed.), Belief: Form, content and function, 17–​36.
Oxford: Oxford University Press.
Dreyfus, H. 2002a. Intelligence without representation: The relevance of phenomenology to scien-
tific explanation. Phenomenology and the Cognitive Sciences, 1(4), 367–​383.
Dreyfus, H. 2002b. Refocusing the question: Can there be skillful coping without propositional rep-
resentations or brain representations? Phenomenology and the Cognitive Sciences, 1, 413–​425.
Dreyfus, H. 2005. Overcoming the myth of the mental: How philosophers can profit from the phe-
nomenology of everyday experience. Proceedings and Addresses of the American Philosophical
Association, 79(2), 43–​49.
Dreyfus, H. 2007a. Detachment, involvement, and rationality: Are we essentially rational animals?
Human Affairs, 17, 101–​109.
Dreyfus, H. 2007b. The return of the myth of the mental. Inquiry, 50(4), 352–​365.
Dreyfus, H. & Kelly, S. 2007. Heterophenomenology: Heavy-​handed sleight-​of-​hand. Phenomenology
and the Cognitive Sciences, 6, 45–​55.
Duncan, S., & Barrett, L. F. 2007. Affect is a form of cognition: A neurobiological analysis. Cognition
and Emotion, 21(6), 1184–​1211.
Early Breast Cancer Trialists’ Collaborative Group. 1988. Effects of adjuvant tamoxifen and of
cytotoxic therapy on mortality in early breast cancer. New England Journal of Medicine, 319,
1681–​1692.
Eberhardt, J. L., Davies, P. G., Purdie-​Vaughns, V. J., & Johnson, S. L. 2006. Looking deathwor-
thy perceived stereotypicality of black defendants predicts capital-​sentencing outcomes.
Psychological Science, 17(5), 383–​386.
Eberhardt, J., Goff, P., Purdie, V., & Davies, P. 2004. Seeing black: Race, crime, and visual process-
ing,” Journal of Personality and Social Psychology, 87(6), 876.
Eberl, C., Wiers, R. W., Pawelczack, S., Rinck, M., Becker, E. S., & Lindenmeyer, J. 2012. Approach
bias modification in alcohol dependence: Do clinical effects replicate and for whom does it
work best? Developmental Cognitive Neuroscience, 4, 38–​51.

Egan, A. 2008. Seeing and believing: Perception, belief formation and the divided mind. Philosophical
Studies, 140(1), 47–​63.
Egan, A. 2011. Comments on Gendler’s “The Epistemic Costs of Implicit Bias.” Philosophical Studies,
156, 65–​79.
Elster, J. 2000. Ulysses unbound: Studies in rationality, precommitment, and constraints. Cambridge: 
Cambridge University Press.
Epstein, D. 2013. The sports gene. New York: Penguin.
Ericsson, K. A. 2016. Summing up hours of any type of practice versus identifying optimal practice
activities:  Commentary on Macnamara, Moreau, & Hambrick. Perspectives on Psychological
Science, 11(3), 351–​354.
Ericsson, K. A., Krampe, R., & Tesch-​Römer, C. 1993. The role of deliberate practice in the acquisi-
tion of expert performance. Psychological Review, 100(3), 363–​406.
Evans, G. 1982. The varieties of reference. Oxford: Oxford University Press.
Evans, J. S.  B., & Frankish, K. E. 2009. In two minds:  Dual processes and beyond. Oxford: Oxford
University Press.
Evans, J. S. B., & Stanovich, K. E. 2013. Dual-​process theories of higher cognition: Advancing the
debate. Perspectives on Psychological Science, 8(3), 223–​241. doi: 10.1177/​1745691612460685.
Falk, A., & Heckman, J. J. 2009. Lab experiments are a major source of knowledge in the social sci-
ences. Science, 326(5952), 535–​538. doi: 10.1126/​science.1168244.
Faraci, D., & Shoemaker, D. 2014. Huck vs. Jojo. In T. Lombrozo & J. Knobe (Eds.), Oxford studies
in experimental philosophy, Vol. 1, 7–​26. Oxford: Oxford University Press.
Faucher, L. 2016. Revisionism and moral responsibility. In M. Brownstein & J. Saul (Eds.), Implicit
bias and philosophy: Vol. 2, Moral Responsibility, Structural Injustice, and Ethics, 115–​146.
Oxford: Oxford University Press.
Fazio, R. H. (1990). Multiple processes by which attitudes guide behavior: The MODE model as an
integrative framework. Advances in Experimental Social Psychology, 23, 75–​109.
Fazio, R. H., Jackson, J. R., Dunton, B. C., & Williams, C. J. 1995. Variability in Automatic Activation
as an unobtrusive measure of racial attitudes: A bona fide pipeline? Journal of Personality and
Social Psychology, 69(6), 1013–​1027.
Ferriss, T. 2015, 27 April. Tim Ferriss on accelerated learning, peak performance and living the
good life. Podcast. http://​thepsychologypodcast.com/​tim-​ferriss-​on-​accelerated-​learning-​
peak-​performance-​and-​living-​the-​good-​life/​.
Festinger, L. 1956. A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Firestone, C. 2013. How “paternalistic” is spatial perception? Why wearing a heavy backpack
doesn’t—​and couldn’t—​make hills look steeper. Perspectives on Psychological Science, 8(4),
455–​473.
Firestone, C., & Scholl, B. J. 2014. “Top-​down” effects where none should be found: The El Greco
fallacy in perception research. Psychological Science, 25(1), 38–​46.
Firestone, C., & Scholl, B. J. 2015. Cognition does not affect perception: Evaluating the evidence for
“top-​down” effects. Behavioral and Brain Sciences, 1, 1–​77.
Fischer, J. M., & Ravizza, M. 1998. Responsibility and control: A theory of moral responsibility.
Cambridge: Cambridge University Press.
Flannigan, N., Miles, L. K., Quadflieg, S., & Macrae, C. N. 2013. Seeing the unex-
pected: Counterstereotypes are implicitly bad. Social Cognition, 31(6), 712–​720.
Flavell, J. H., Miller, P. H., & Miller, S. A. 1985. Cognitive development. Englewood Cliffs,
NJ: Prentice-​Hall.
Fleeson, W. R., Furr, M., Jayawickream, E., Helzer, E. R., Hartley, A. G., & Meindel, P. 2015.
Personality science and the foundations of character. In C. R. Miller, M. Furr, A. Knobel, &
W. Fleeson (Eds.), Character: New directions from philosophy, psychology, and theology, 41–​74.
New York: Oxford University Press.
Fodor, J. A. 1983. The modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.

Fodor, J. A. 1987. Psychosemantics: The problem of meaning in the philosophy of mind. Cambridge,
MA: MIT Press.
Forscher, P. S., Lai, C. K., Axt, J. R., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. 2016.
Meta-​analysis of change in implicit bias. Open Science Network. https://​osf.io/​b5m97/​.
Förster, J., Liberman, N., & Friedman, R. S. 2007. Seven principles of goal activation: A systematic
approach to distinguishing goal priming from priming of non-​goal constructs. Personality and
Social Psychology Review, 11, 211–​233.
Frankfurt, H. 1988. The importance of what we care about. Cambridge: Cambridge University Press.
Frankish, K., & Evans, J. 2009. The duality of mind:  An historical perspective. In J. Evans & K.
Frankish (Eds.), In two minds: Dual process and beyond, 1–​29. Oxford: Oxford University Press.
Fridland, E. 2015. Automatically minded. Synthese, 1–​27. doi:10.1007/​s11229-​014-​0617-​9.
Frijda, N. 1986. The emotions. Cambridge: Cambridge University Press.
Fujita, K. 2011. On conceptualizing self-​control as more than the effortful inhibition of impulses.
Personality and Social Psychology Review, 15(4), 352–​366.
Gal, D., & Liu, W. 2011. Grapes of wrath:  The angry effects of self-​control. Journal of Consumer
Research, 38(3), 445–​458.
Galdi, S., Arcuri, L., & Gawronski, B. 2008. Automatic mental associations predict future choices of
undecided decision-​makers. Science, 321(5892), 1100–​1102.
Gawronski, B., & Bodenhausen, G. V. 2006. Associative and propositional processes in evaluation: An
integrative review of implicit and explicit attitude change. Psychology Bulletin, 132(5), 692–​731.
Gawronski, B., & Bodenhausen, G. V. 2011. The associative-​propositional evaluation model: Theory,
evidence, and open questions. Advances in Experimental Social Psychology, 44, 59–​127.
Gawronski, B., Brannon, S., and Bodenhausen, G. 2017. The associative-​propositional duality in
the representation, formation, and expression of attitudes. In R. Deutsch, B. Gawronski,
& W. Hofmann (Eds.), Reflective and impulsive determinants of human behavior, 103–​118.
New York: Psychology Press.
Gawronski, B., & Cesario, J. 2013. Of mice and men: What animal research can tell us about context
effects on automatic response in humans. Personality and Social Psychology Review, 17(2), 187–​215.
Gawronski, B., Deutsch, R., Mbirkou, S., Seibt, B., & Strack, F. 2008. When “just say no” is not
enough: Affirmation versus negation training and the reduction of automatic stereotype acti-
vation. Journal of Experimental Social Psychology, 44(2), 370–​377.
Gawronski, B., Hofmann, W., & Wilbur, C. 2006. Are “implicit” attitudes unconscious? Consciousness
and Cognition, 15, 485–​499.
Gawronski, B., Rydell, R. J., Vervliet, B., & De Houwer, J. 2010. Generalization versus contextualiza-
tion in automatic evaluation. Journal of Experimental Psychology, 139(4), 683–​701.
Gawronski, B., Rydell, R. J., De Houwer, J., Brannon, S. M., Ye, Y., Vervliet, B., & Hu, X. 2017.
Contextualized attitude change. Advances in Experimental Social Psychology. doi: 10.1016/bs.aesp.2017.06.001.
Gawronski, B., & Sritharan, R. 2010. Formation, change, and contextualization of mental associa-
tions: Determinants and principles of variations in implicit measures. In B. Gawronski & B.
K. Payne (Eds.), Handbook of implicit social cognition: Measurement, theory, and applications,
216–​240. New York: Guilford Press.
Gendler, T. S. 2008a. Alief and belief. Journal of Philosophy, 105(10), 634–​663.
Gendler, T. S. 2008b. Alief in action (and reaction). Mind and Language, 23(5), 552–​585.
Gendler, T. S. 2011. On the epistemic costs of implicit bias. Philosophical Studies, 156, 33–​63.
Gendler, T. S. 2012. Between reason and reflex:  Response to commentators. Analysis, 72(4),
799–​811.
Gertler, B. 2011. Self-​knowledge and the transparency of belief. In A. Hatzimoysis (Ed.), Self-​
knowledge, 125–​145. Oxford: Oxford University Press.
Gibran, K. 1923. The prophet. New York, NY: Knopf.
Gibson, J. J. 1979. The ecological approach to visual perception. Boston: Houghton Mifflin.
Gilbert, D. T. 1991. How mental systems believe. American Psychologist, 46, 107–​119.

Gino, F., Schweitzer, M. E., Mead, N. L., & Ariely, D. 2011. Unable to resist temptation: How self-​
control depletion promotes unethical behavior. Organizational Behavior and Human Decision
Processes, 115(2), 191–​203.
Ginsborg, H. 2011. Primitive normativity and skepticism about rules. Journal of Philosophy, 108(5),
227–​254.
Gladwell, M. 2007. Blink. New York: Back Bay Books.
Gladwell, M. 2011. Outliers. New York: Back Bay Books.
Gladwell, M. 2013, 21 August. Complexity and the ten thousand hour rule. New Yorker. http://​
www.newyorker.com/​news/​sporting-​scene/​complexity-​and-​the-​ten-​thousand-​hour-​rule/​.
Glaser, J. C. 1999. The relation between stereotyping and prejudice: Measures of newly formed automatic
associations. Doctoral dissertation, Harvard University.
Glaser, J., & Knowles, E. 2008. Implicit motivation to control prejudice. Journal of Experimental
Social Psychology, 44, 164–​172.
Glasgow, J. 2016. Alienation and responsibility. In M. Brownstein & J. Saul (Eds.), Implicit bias and
philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics, 37–​61. Oxford: Oxford
University Press.
Goff, P. A., Jackson, M. C., Di Leone, B. A. L., Culotta, C. M., & DiTomasso, N. A. 2014. The essence
of innocence: Consequences of dehumanizing Black children. Journal of Personality and Social
Psychology, 106(4), 526.
Gollwitzer, P., Parks-​Stamm, E., Jaudas, A., & Sheeran, P. 2008. Flexible tenacity in goal pur-
suit. In J. Shah & W. Gardner (Eds.), Handbook of motivation science, 325–​341. New York:
Guilford Press.
Gollwitzer, P. M., & Sheeran, P. 2006. Implementation intentions and goal achievement: A meta-​
analysis of effects and processes. In M. P. Zanna (Ed.), Advances in experimental social psychol-
ogy, 69–​119. New York: Academic Press.
Goodman, S., Fanelli, D., & Loannidis, J. P.  A. 2016. What does research reproducibility mean?
Science, 8(341), 341–​353.
Gopnik, A. 2016, 31 July. What babies know about physics and foreign languages. New York Times.
http://​www.nytimes.com/​2016/​07/​31/​opinion/​sunday/​what-​babies-​know-​about-​physics-​
and-​foreign-​languages.html/​.
Govorun, O., & Payne, B. K. 2006. Ego-​depletion and prejudice: Separating automatic and con-
trolled components. Social Cognition, 24(2), 111–​136.
Green, A. R., Carney, D. R., Pallin, D. J., Ngo, L. H., Raymond, K. L., Iezzoni, L. I., & Banaji, M. R.
2007. Implicit bias among physicians and its prediction of thrombolysis decisions for Black
and White patients. Journal of General Internal Medicine, 22, 1231–1238.
Greene, J. D. 2013. Moral tribes:  Emotion, reason, and the gap between us and them. New  York: 
Penguin.
Greene, J., & Haidt, J. 2002. How (and where) does moral judgment work? Trends in Cognitive
Sciences, 6(12), 517–​523.
Greenwald, A. G., Banaji, M. R., Rudman, L. A., Farnham, S. D., Nosek, B. A., & Mellott, D. S. 2002.
A unified theory of implicit attitudes, stereotypes, self-​esteem, and self-​concept. Psychological
Review, 109(1), 3.
Greenwald, A. G., Banaji, M. R., & Nosek, B. A. 2014. Statistically small effects of the implicit asso-
ciation test can have societally large effects. Journal of Personality and Social Psychology, 108,
553–​561.
Greenwald, A., McGhee, D., & Schwartz, J. 1998. Measuring individual differences in implicit
cognition:  The implicit association test. Journal of Personality and Social Psychology, 74,
1464–​1480.
Greenwald, A. G., Nosek, B. A., & Banaji, M. R. 2003. Understanding and using the implicit associa-
tion test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85,
197–​216.
Greenwald, A. G., & Pettigrew, T. F. 2014. With malice toward none and charity for some: Ingroup
favoritism enables discrimination. American Psychologist, 69, 669–​684.
Greenwald, A., Poehlman, T., Uhlmann, E., & Banaji, M. 2009. Understanding and using the implicit association test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1), 17–41.
Gregg, A., Seibt, B., & Banaji, M. 2006. Easier done than undone: Asymmetry in the malleability of implicit preferences. Journal of Personality and Social Psychology, 90, 1–20.
Grillon, C., Pellowski, M., Merikangas, K. R., & Davis, M. 1997. Darkness facilitates acoustic startle
reflex in humans. Biological Psychiatry, 42, 453–​460.
Gross, J. 2002. Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology,
39, 281–​291.
Gschwendner, T., Hofmann, W., & Schmitt, M. 2008. Convergent and predictive validity of implicit and explicit anxiety measures as a function of specificity similarity and content similarity. European Journal of Psychological Assessment, 24(4), 254–262.
Guinote, A., Willis, G. B., & Martellotta, C. 2010. Social power increases implicit prejudice. Journal of Experimental Social Psychology, 46, 299–307.
Hacker, P. 2007. Human nature: The categorical framework. Oxford: Blackwell.
Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., Brand, R., et al. 2016. A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11(4), 546–573.
Haley, K., & Fessler, D. M. T. 2005. Nobody’s watching? Subtle cues affect generosity in an anony-
mous economic game. Evolution and Human Behavior, 26, 245–​256.
Harman, G. 1999. Moral philosophy meets social psychology: Virtue ethics and the fundamental
attribution error. Proceedings of the Aristotelian Society, 99, 315–​331.
Haslanger, S. 2015. Social structure, narrative, and explanation. Canadian Journal of Philosophy,
45(1), 1–​15. doi: 10.1080/​00455091.2015.1019176.
Hasson, U., & Glucksberg, S. 2006. Does negation entail affirmation? The case of negated meta-
phors. Journal of Pragmatics, 38, 1015–​1032.
Hawley, K. 2012. Trust, distrust, and commitment. Noûs, 48(1), 1–​20.
Hayes, J. R. 1985. Three problems in teaching general skills. Thinking and Learning Skills, 2, 391–​406.
Heller, S. B., Shah, A. K., Guryan, J., Ludwig, J., Mullainathan, S., & Pollack, H. A. 2015. Thinking,
fast and slow? Some field experiments to reduce crime and dropout in Chicago (No.
w21178). National Bureau of Economic Research.
Helm, B. W. 1994. The significance of emotions. American Philosophical Quarterly, 31(4), 319–​331.
Helm, B. W. 2007. Emotional reason: Deliberation, motivation, and the nature of value. Cambridge:
Cambridge University Press.
Helm, B. W. 2009. Emotions as evaluative feelings. Emotion Review, 1(3), 248–​255.
Henrich, J., Heine, S. J., & Norenzayan, A. 2010. The weirdest people in the world? Behavioral and
Brain Sciences, 33(2–​3), 61–​83.
Herman, B. 1981. On the value of acting from the motive of duty. Philosophical Review, 90(3),
359–​382.
Hermans, D., De Houwer, J., & Eelen, P. 2001. A time course analysis of the affective priming effect. Cognition and Emotion, 15, 143–165.
Hertz, S. G., & Krettenauer, T. 2016. Does moral identity effectively predict moral behavior? A meta-​
analysis. Review of General Psychology, 20(2), 129–​140.
Hieronymi, P. 2008. Responsibility for believing. Synthese, 161, 357–​373.
Hillman, A. L. 2010. Expressive behavior in economics and politics. European Journal of Political
Economy, 26, 403–​418.
Hofmann, W., Gawronski, B., Gschwendner, T., Le, H., & Schmitt, M. 2005. A meta-​analysis on the
correlation between the implicit association test and explicit self-​report measures. Personality
and Social Psychology Bulletin, 31(10), 1369–​1385.
Holroyd, C. B., Nieuwenhuis, S., Yeung, N., & Cohen, J. D. 2003. Errors in reward prediction are
reflected in the event-​related brain potential. Neuroreport, 14(18), 2481–​2484.
Holroyd, J. 2012. Responsibility for implicit bias. Journal of Social Philosophy, Special
issue: Philosophical Methodology and Implicit Bias, 43(3), 274–​306.
Holroyd, J., & Sweetman, J. 2016. The heterogeneity of implicit biases. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy, Vol. 1: Metaphysics and epistemology, 80–103. Oxford: Oxford University Press.
Hood, H. K., & Antony, M. M. 2012. Evidence-​based assessment and treatment of specific phobias
in adults. In T. E. Davis et al. (Eds.), Intensive one-​session treatment of specific phobias, 19–​42.
Berlin: Springer Science and Business Media.
Hu, X., Gawronski, B., & Balas, R. 2017. Propositional versus dual-​process accounts of evalua-
tive conditioning: I. The effects of co-​occurrence and relational information on implicit and
explicit evaluations. Personality and Social Psychology Bulletin, 43(1), 17–​32.
Huang, J. Y., & Bargh, J. A. 2014. The selfish goal: Autonomously operating motivational structures
as the proximate cause of human judgment and behavior. Behavioral and Brain Sciences, 37(2),
121–​135.
Huddleston, A. 2012. Naughty beliefs. Philosophical Studies, 160(2), 209–​222.
Huebner, B. 2009. Trouble with stereotypes for Spinozan minds. Philosophy of the Social Sciences,
39, 63–​92.
Huebner, B. 2015. Do emotions play a constitutive role in moral cognition? Topoi, 34(2), 1–​14.
Huebner, B. 2016. Implicit bias, reinforcement learning, and scaffolded moral cognition. In M.
Brownstein & J. Saul (Eds.), Implicit bias and philosophy, Vol. 1: Metaphysics and epistemology,
47–​79. Oxford: Oxford University Press.
Hufendiek, R. 2015. Embodied emotions:  A naturalist approach to a normative phenomenon.
New York: Routledge.
Hugenberg, K., & Bodenhausen, G. V. 2003. Facing prejudice: Implicit prejudice and the perception
of facial threat. Psychological Science, 14(6), 640–​643.
Hume, D. 1738/​1975. A treatise of human nature. (L. A. Selby-​Bigge, Ed.; 2d ed. revised by P. H.
Nidditch). Oxford: Clarendon Press.
Hunter, D. 2011. Alienated belief. Dialectica, 65(2), 221–​240.
Hursthouse, R. 1999. On virtue ethics. Oxford: Oxford University Press.
Inbar, Y., Pizarro, D., Iyer, R., & Haidt, J. 2012. Disgust sensitivity, political conservatism, and voting.
Social Psychological and Personality Science, 3(5), 537–​544.
Inzlicht, M. 2016, 22 August. The unschooling of a faithful mind. http://​michaelinzlicht.com/​
getting-​better/​2016/​8/​22/​the-​unschooling-​of-​a-​faithful-​mind/​.
Inzlicht, M., Gervais, W., & Berkman, E. 2015. Bias-​correction techniques alone cannot determine
whether ego depletion is different from zero:  Commentary on Carter, Kofler, Forster, &
McCullough. Social Science Research Network. doi: http://​dx.doi.org/​10.2139/​ssrn.2659409.
Isen, A., & Levin, P. 1972. Effect of feeling good on helping:  Cookies and kindness. Journal of
Personality and Social Psychology, 21(3), 384–​388.
Ito, T. A., & Urland, G. R. 2005. The influence of processing objectives on the perception of faces: An
ERP study of race and gender perception. Cognitive, Affective, and Behavioral Neuroscience,
5, 21–​36.
James, W. 1884. What is an emotion? Mind, 9, 188–​205.
James, W. 1890. The principles of psychology. New York: Henry Holt and Co.
James, W. 1899. The laws of habit. In Talks to teachers on psychology—​and to students on some of life’s
ideals, 64–​78. New York: Metropolitan Books/​Henry Holt and Co. doi: http://​dx.doi.org/​
10.1037/​10814-​008.
Jarvis, W. B. G., & Petty, R. E. 1996. The need to evaluate. Journal of Personality and Social Psychology,
70(1), 172–​194.
Jaworska, A. 1999. Respecting the margins of agency: Alzheimer’s patients and the capacity to value.
Philosophy & Public Affairs, 28(2), 105–​138.
Jaworska, A. 2007a. Caring and internality. Philosophy and Phenomenological Research, 74(3),
529–​568.
Jaworska, A. 2007b. Caring and full moral standing. Ethics, 117(3), 460–​497.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. 2005. Failure to detect mismatches between inten-
tion and outcome in a simple decision task. Science, 310, 116–​119.
John, O. P., & Srivastava, S. 1999. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research, 102–138. New York: Guilford Press.
Johnson, M. K., Kim, J. K., & Risse, G. 1985. Do alcoholic Korsakoff’s syndrome patients acquire affective reactions? Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(1), 22–36.
Jordan, C. H., Whitfield, M., & Zeigler-​Hill, V. 2007. Intuition and the correspondence
between implicit and explicit self-​esteem. Journal of Personality and Social Psychology, 93,
1067–​1079.
Kahan, D. 2012. Why we are poles apart on climate change. Nature, 488, 255.
Kahan, D. M. 2013. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision
Making, 8, 407–​424.
Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. 2017. Motivated numeracy and enlightened self-government. Behavioural Public Policy, 1(1), 54–86.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D. & Mandel, G. 2012. The
polarizing impact of science literacy and numeracy on perceived climate change risks. Nature
Climate Change, 2, 732–​735.
Kahneman, D. 2003. A perspective on judgment and choice:  Mapping bounded rationality.
American Psychologist, 58(9), 697.
Kahneman, D. 2011. Thinking, fast and slow. New York: Macmillan.
Kamtekar, R. 2004. Situationism and virtue ethics: On the content of our character. Ethics, 114(3),
458–​491.
Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S. & Russin, A. 2000. Just say no (to stereotyp-
ing): Effects of training in the negation of stereotypic associations on stereotype activation.
Journal of Personality and Social Psychology, 78, 871–​888.
Kawakami, K., Dovidio, J. F., & van Kamp, S. 2007a. The impact of counterstereotypic training and
related correction processes on the application of stereotypes. Group Processes and Intergroup
Relations, 10(2), 139–​156.
Kawakami, K., Phills, C., Steele, J., & Dovidio, J. 2007b. (Close) distance makes the heart grow fonder: Improving implicit racial attitudes and interracial interactions through approach behaviors. Journal of Personality and Social Psychology, 92(6), 957–971.
Kawakami, K., Steele, J. R., Cifa, C., Phills, C. E., & Dovidio, J. F. 2008. Approaching math increases
math = me, math = pleasant. Journal of Experimental Social Psychology, 44, 818–​825.
Keinan, A., & Kivetz, R. 2011. Productivity orientation and the consumption of collectable experi-
ences. Journal of Consumer Research, 37(6), 935–​950.
Kelly, D. 2013. Yuck! The nature and moral significance of disgust. Cambridge, MA: MIT Press.
Kelly, S. 2005. Seeing things in Merleau-​Ponty. In T. Carman (Ed.), The Cambridge companion to
Merleau-​Ponty, 74–​110. Cambridge: Cambridge University Press.
Kelly, S. 2010. The normative nature of perceptual experience. In B. Nanay (Ed.), Perceiving the
world, 146–​159. Oxford: Oxford University Press.
Kerr, N. L. 1998. HARKing:  Hypothesizing after the results are known. Personality and Social
Psychology Review, 2(3), 196–​217.
Kidd, C., Palmeri, H., & Aslin, R. N. 2013. Rational snacking: Young children’s decision-​making
on the marshmallow task is moderated by beliefs about environmental reliability. Cognition,
126(1), 109–​114.
King, M., & Carruthers, P. 2012. Moral responsibility and consciousness. Journal of Moral Philosophy,
9(2), 200–​228.
Kishida, K., Saez, I., Lohrenz, T., Witcher, M., Laxton, A., Tatter, S., White, J., et al. 2015. Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.1513619112.
Kivetz, R., & Keinan, A. 2006. Repenting hyperopia: An analysis of self-​control regrets. Journal of
Consumer Research, 33, 273–​282.
Klaassen, P., Rietveld, E., & Topal, J. 2010. Inviting complementary perspectives on situated normativity in everyday life. Phenomenology and the Cognitive Sciences, 9, 53–73.
Knobe, J. 2003. Intentional action and side effects in ordinary language. Analysis, 63(279), 190–​194.
Knowles, M., Lucas, G., Baumeister, R., & Gardner, W. 2015. Choking under social pressure: Social
monitoring among the lonely. Personality and Social Psychology Bulletin, 41(6), 805–​821.
Koch, C., & Crick, F. 2001. The zombie within. Nature, 411(6840), 893.
Kolling, N., Behrens, T. E. J., Mars, R. B., & Rushworth, M. F. S. 2012. Neural mechanisms of foraging. Science, 336, 95–98.
Korsgaard, C. 1997. The normativity of instrumental reason. In G. Cullity & B. Gaut (Eds.), Ethics
and practical reason, 215–​254. Oxford: Oxford University Press.
Korsgaard, C. 2009. The activity of reason. Proceedings and Addresses of the American Philosophical
Association, 83, 27–​47.
Kraus, S. J. 1995. Attitudes and the prediction of behavior: A meta-analysis of the empirical literature. Personality and Social Psychology Bulletin, 21, 58–75. doi: 10.1177/0146167295211007.
Kriegel, U. 2012. Moral motivation, moral phenomenology, and the alief/​belief distinction.
Australasian Journal of Philosophy, 90(3), 469–​486.
Krishnamurthy, M. 2015. (White) tyranny and the democratic value of distrust. Monist, 98(4), 391–406.
Kubota, J., & Ito, T. 2014. The role of expression and race in weapons identification. Emotion, 14(6),
1115–​1124.
Kumar, V. 2016a. Nudges and bumps. Georgetown Journal of Law and Public Policy, 14, 861–876.
Kumar, V. 2016b. Moral vindications. Cognition, 176, 124–134.
Lai, C. K., Hoffman, K. M., & Nosek, B. A. 2013. Reducing implicit prejudice. Social and Personality
Psychology Compass, 7, 315–​330.
Lai, C. K., Skinner, A. L., Cooley, E., Murrar, S., Brauer, M., Devos, T., Calanchini, J., et al. 2016.
Reducing implicit racial preferences:  II. Intervention effectiveness across time. Journal of
Experimental Psychology: General, 145, 1001–​1016.
Land, M. F., & McLeod, P. 2000. From eye movements to actions: How batsmen hit the ball. Nature
Neuroscience, 3(12), 1340–​1345.
Lange, C. G. 1885. Om sindsbevægelser: Et psyko-fysiologisk studie. Copenhagen: Jacob Lunds. Reprinted in The emotions (C. G. Lange & W. James, Eds.; I. A. Haupt, Trans.). Baltimore: Williams & Wilkins, 1922.
LaPiere, R. T. 1934. Attitudes vs. actions. Social Forces, 13(2), 230–​237.
Laplante, D., & Ambady, N. 2003. On how things are said:  Voice tone, voice intensity, verbal
content, and perceptions of politeness. Journal of Language and Social Psychology, 22(4),
434–​441.
Layden, T. 2010, 8 November. The art of the pass. Sports Illustrated. http://​www.si.com/​vault/​
2010/​11/​08/​106003792/​the-​art-​of-​the-​pass#.
Lazarus, R. S. 1991. Emotion and adaptation. New York: Oxford University Press.
Lebrecht, S., & Tarr, M. 2012. Can neural signals for visual preference predict real-​world choices?
BioScience, 62(11), 937–​938.
Leder, G. 2016. Know thyself? Questioning the theoretical foundations of cognitive behavioral therapy. Review of Philosophy and Psychology. doi: 10.1007/s13164-016-0308-1.
Lessig, L. 1995. The regulation of social meaning. University of Chicago Law Review, 62(3),
943–​1045.
Levine, L. 1988. Bird: The making of an American sports legend. New York: McGraw-​Hill.
Levinson, J. D., Smith, R. J., & Young, D. M. 2014. Devaluing death: An empirical study of implicit racial bias on jury-eligible citizens in six death penalty states. New York University Law Review, 89, 513–581.
Levy, N. 2011a. Expressing who we are:  Moral responsibility and awareness of our reasons for
action. Analytic Philosophy, 52(4), 243–​261.
Levy, N. 2011b. Hard luck: How luck undermines free will and moral responsibility. Oxford: Oxford
University Press.
Levy, N. 2012. Consciousness, implicit attitudes, and moral responsibility. Noûs, 48, 21–​40.
Levy, N. 2014. Neither fish nor fowl: Implicit attitudes as patchy endorsements. Noûs, 49(4), 800–823. doi: 10.1111/nous.12074.
Levy, N. 2017. Implicit bias and moral responsibility:  Probing the data. Philosophy and
Phenomenological Research. doi: 10.1111/​phpr.12352.
Lewicki, P., Czyzewska, M., & Hoffman, H. 1987. Unconscious acquisition of complex procedural
knowledge. Journal of Experimental Psychology, 13, 523–​530.
Lewis, D. 1973. Causation. Journal of Philosophy, 70(17), 556–​567.
Li, C. 2006. The Confucian ideal of harmony. Philosophy East and West, 56(4), 583–​603.
Limb, C., & Braun, A. 2008. Neural substrates of spontaneous musical performance: An fMRI study
of jazz improvisation. PLoS ONE, 3(2), e1679.
Lin, D., & Lee, R. (Producers), & Lord, P., & Miller, C. (Directors). 2014. The lego movie (motion
picture). Warner Brothers Pictures.
Livingston, R. W., & Drwecki, B. B. 2007. Why are some individuals not racially biased? Susceptibility to affective conditioning predicts nonprejudice toward Blacks. Psychological Science, 18(9), 816–823.
Loersch, C., & Payne, B. K. 2016. Demystifying priming. Current Opinion in Psychology, 12, 32–​36.
doi:10.1016/​j.copsyc.2016.04.020.
Machery, E. 2016. De-​Freuding implicit attitudes. In M. Brownstein & J. Saul (Eds.), Implicit
bias and philosophy, Vol. 1:  Metaphysics and epistemology, 104–​ 129. Oxford:  Oxford
University Press.
Macnamara, B. N., Moreau, D., & Hambrick, D. Z. 2016. The relationship between deliberate practice and performance in sports: A meta-analysis. Perspectives on Psychological Science, 11(3), 333–350.
Macpherson, F. 2012. Cognitive penetration of colour experience: Rethinking the issue in light of
an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–​62.
Macpherson, F. 2016. The relationship between cognitive penetration and predictive cod-
ing. Consciousness and Cognition. http://​www.sciencedirect.com/​science/​article/​pii/​
S1053810016300496/​.
Madva, A. 2012. The hidden mechanisms of prejudice: Implicit bias and interpersonal fluency. Doctoral
dissertation, Columbia University.
Madva, A. 2016a. Virtue, social knowledge, and implicit bias. In M. Brownstein & J. Saul (Eds.),
Implicit bias and philosophy, Vol. 1: Metaphysics and epistemology, 191–​215. Oxford: Oxford
University Press.
Madva, A. 2016b. Why implicit attitudes are (probably) not beliefs. Synthese, 193(8), 2659–​2684.
Madva, A. 2017. Biased against de-​biasing:  On the role of (institutionally sponsored) self-​trans-
formation in the struggle against prejudice.  Ergo, 4(6). doi: http://dx.doi.org/10.3998/
ergo.12405314.0004.006.
Madva, A., & Brownstein, M. 2016. Stereotypes, prejudice, and the taxonomy of the implicit social
mind. Noûs. doi: 10.1111/​nous.12182.
Mallon, R. 2016. Stereotype threat and persons. In M. Brownstein & J. Saul (Eds.), Implicit bias and
philosophy, Vol. 1: Metaphysics and epistemology, 130–​156. Oxford: Oxford University Press.
Mandelbaum, E. 2011. The architecture of belief: An essay on the unbearable automaticity of believing.
Doctoral dissertation, University of North Carolina.
Mandelbaum, E. 2013. Against alief. Philosophical Studies, 165, 197–​211.
Mandelbaum, E. 2014. Thinking is believing. Inquiry, 57(1), 55–​96.
Mandelbaum, E. 2015a. Associationist theories of thought. In E. Zalta (Ed.), Stanford encyclopedia
of philosophy. http://​plato.stanford.edu/​entries/​associationist-​thought/​.
Mandelbaum, E. 2015b. Attitude, association, and inference:  On the propositional structure of
implicit bias. Noûs, 50(3), 629–​658. doi: 10.1111/​nous.12089.
Mann, D. T. Y., Williams, A. M., Ward, P., & Janelle, C. M. 2007. Perceptual-​cognitive expertise in
sport: A meta-​analysis. Journal of Sport & Exercise Psychology, 29, 457–​478.
Manzini, P., Sadrieh, A., & Vriend, N. 2009. On smiles, winks and handshakes as coordination
devices. Economic Journal, 119(537), 826–​854.
Martin, J., & Cushman, F. 2016. The adaptive logic of moral luck. In J. Sytsma & W. Buckwalter
(Eds.), The Blackwell companion to experimental philosophy, 190–​202. Hoboken, NJ: Wiley.
Martin, L. L., & Tesser, A. 2009. Five markers of motivated behavior. In G. B. Moskowitz & H. Grant
(Eds.), The psychology of goals, 257–​276. New York: Guilford Press.
Mayo, D. G. 2011. Statistical science and philosophy of science: Where do/should they meet in 2011 (and beyond)? Rationality, Markets, and Morals (RMM), Special Topic: Statistical Science and Philosophy of Science, 2, 79–102.
Mayo, D. G. 2012. Statistical science meets philosophy of science, Part  2:  Shallow versus deep
explorations. Rationality, Markets, and Morals (RMM), Special Topic:  Statistical Science and
Philosophy of Science, 3, 71–​107.
McConahay, J. B. 1986. Modern racism, ambivalence, and the modern racism scale. In J. F.
Dovidio & S. L. Gaertner (Eds.), Prejudice, discrimination, and racism, 91–​125. San Diego,
CA: Academic Press.
McLeod, P. 1987. Visual reaction time and high-​speed ball games. Perception, 16(1), 49–​59.
Meehl, P. E. 1990. Why summaries of research on psychological theories are often uninterpretable.
Psychological Reports, 66, 195–​244.
Mekawi, Y., & Bresin, K. 2015. Is the evidence from racial bias shooting task studies a smoking gun?
Results from a meta-​analysis. Journal of Experimental Social Psychology, 61, 120–​130.
Mendoza, S. A., Gollwitzer, P. M., & Amodio, D. M. 2010. Reducing the expression of implicit
stereotypes:  Reflexive control through implementation intentions. Personality and Social
Psychology Bulletin, 36(4), 512–​523.
Merleau-Ponty, M. 1962/2002. The phenomenology of perception (C. Smith, Trans.). New York: Routledge.
Messick, S. 1995. Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
Miller, G. A. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
Miller, G. E., Yu, T., Chen, E., & Brody, G. H. 2015. Self-​control forecasts better psychosocial out-
comes but faster epigenetic aging in low-​SES youth. Proceedings of the National Academy of
Sciences, 112(33), 10325–​10330. doi: 10.1073/​pnas.1505063112.
Millikan, R. 1995. Pushmi-​pullyu representations. Philosophical Perspectives, 9, 185–​200.
Millikan, R. 2006. Styles of rationality. In S. Hurley & M. Nudds (Eds.), Rational animals?, 117–​126.
Oxford: Oxford University Press.
Milner, A. D., & Goodale, M. A. 1995. The visual brain in action. Oxford: Oxford University Press.
Milner, A. D., & Goodale, M. A. 2008. Two visual systems re-​viewed. Neuropsychologia, 46, 774–​785.
Milyavskaya, M., Inzlicht, M., Hope, N., & Koestner, R. 2015. Saying “no” to temptation: Want-​
to motivation improves self-​regulation by reducing temptation rather than by increasing
self-​control. Journal of Personality and Social Psychology, 109(4), 677–​693. doi.org/​10.1037/​
pspp0000045.
Mischel, W., Shoda, Y., & Peake, P. 1988. The nature of adolescent competencies predicted by pre-
school delay of gratification. Journal of Personality and Social Psychology, 54(4), 687.
Mitchell, C. J., De Houwer, J., & Lovibond, P. F. 2009. The propositional nature of human associa-
tive learning. Behavioral and Brain Sciences, 32(2), 183–​198.
Mitchell, J. P., Nosek, B. A., & Banaji, M. R. 2003. Contextual variations in implicit evaluation.
Journal of Experimental Psychology: General, 132, 455–​469.
Monin, B., & Miller, D. T. 2001. Moral credentials and the expression of prejudice. Journal of
Personality and Social Psychology, 81(1), 33.
Monteith, M. 1993. Self-​regulation of prejudiced responses: Implications for progress in prejudice-​
reduction efforts. Journal of Personality and Social Psychology, 65(3), 469–​485.
Monteith, M., Ashburn-Nardo, L., Voils, C., & Czopp, A. 2002. Putting the brakes on prejudice: On the development and operation of cues for control. Journal of Personality and Social Psychology, 83(5), 1029–1050.
Montero, B. 2010. Does bodily awareness interfere with highly skilled movement? Inquiry, 53(2),
105–​122.
Moran, T., & Bar-​Anan, Y. 2013. The effect of object–​valence relations on automatic evaluation.
Cognition and Emotion, 27(4), 743–​752.
Moskowitz, G. B. 2002. Preconscious effects of temporary goals on attention. Journal of Experimental
Social Psychology, 38(4), 397–​404.
Moskowitz, G. B., & Balcetis, E. 2014. The conscious roots of selfless, unconscious goals. Behavioral
and Brain Sciences, 37(2), 151.
Moskowitz, G. B., & Li, P. 2011. Egalitarian goals trigger stereotype inhibition: A proactive form of
stereotype control. Journal of Experimental Social Psychology, 47(1), 103–​116.
Moskowitz, G., Gollwitzer, P., Wasel, W., & Schaal, B. 1999. Preconscious control of stereotype
activation through chronic egalitarian goals. Journal of Personality and Social Psychology, 77,
167–​184.
Moskowitz, G. B., Li, P., Ignarri, C., & Stone, J. 2011. Compensatory cognition associated with egali-
tarian goals. Journal of Experimental Social Psychology, 47(2), 365–​70.
Muller, H., & Bashour, B. 2011. Why alief is not a legitimate psychological category. Journal of Philosophical Research, 36, 371–389.
Muraven, M., Collins, R. L., & Neinhaus, K. 2002. Self-​control and alcohol restraint:  An initial
application of the self-​control strength model. Psychology of Addictive Behaviors, 16(2), 113.
Nagel, J. 2012. Gendler on alief. Analysis, 72(4), 774–​788.
Nanay, B. 2011. Do we see apples as edible? Pacific Philosophical Quarterly, 92, 305–​322.
Nanay, B. 2013. Between perception and action. Oxford: Oxford University Press.
Neal, D. T., Wood, W., Wu, M., & Kurlander, D. 2011. The pull of the past: When do habits persist
despite conflict with motives? Personality and Social Psychology Bulletin, 1–​10. doi: 10.1177/​
0146167211419863.
Neenan, M., & Dryden, W. 2005. Cognitive therapy. Los Angeles: Sage.
Nesse, R., & Ellsworth, P. 2009. Evolution, emotion, and emotional disorders. American Psychologist,
64(2), 129–​139.
Newen, A. 2016. Defending the liberal-​content view of perceptual experience: Direct social percep-
tion of emotions and person impressions. Synthese. doi:10.1007/​s11229-​016-​1030-​3.
Newman, G., Bloom, P., & Knobe, J. 2014. Value judgments and the true self. Personality and Social
Psychology Bulletin, 40(2), 203–​216.
Newman, G., De Freitas, J., & Knobe, J. 2015. Beliefs about the true self explain asymmetries based
on moral judgment. Cognitive Science, 39(1), 96–​125.
Nieuwenstein, M. R., Wierenga, T., Morey, R. D., Wicherts, J. M., Blom, T. N., Wagenmakers, E. J.,
& van Rijn, H. 2015. On making the right choice: A meta-​analysis and large-​scale replication
attempt of the unconscious thought advantage. Judgment and Decision Making, 10, 1–​17.
Nisbett, R. E., & Wilson, T. D. 1977. Telling more than we can know: Verbal reports on mental
processes. Psychological Review, 84(3), 231.
Noë, A. 2006. Perception in action. Cambridge: Cambridge University Press.
Nosek, B. A., & Banaji, M. R. 2001. The go/​no-​go association task. Social Cognition, 19(6), 161–​176.
Nosek, B. A., Banaji, M. R., & Greenwald, A. G. 2002. Harvesting intergroup implicit attitudes and
beliefs from a demonstration website. Group Dynamics, 6, 101–​115.
Nosek, B., Greenwald, A., & Banaji, M. 2007. The implicit association test at age 7: A methodological and conceptual review. In J. A. Bargh (Ed.), Automatic processes in social thinking and behavior, 265–292. Philadelphia: Psychology Press.
Nussbaum, M. 2001. Upheavals of thought:  The intelligence of emotion. Cambridge:  Cambridge
University Press.
Office of the Attorney General. 1999. The New York City Police Department’s “stop & frisk” practices: A report to the people of the State of New York. Retrieved from www.oag.state.ny.us/bureaus/civil_rights/pdfs/stp_frsk.pdf.
Olson, M., & Fazio, R. 2006. Reducing automatically activated racial prejudice through implicit evaluative conditioning. Personality and Social Psychology Bulletin, 32, 421–433.
Open Science Collaboration. 2015. Estimating the reproducibility of psychological science. Science, 349(6251). doi: 10.1126/science.aac4716.
Orlandi, N. 2014. The innocent eye. Oxford: Oxford University Press.
Ortony, A., Clore, G. L., & Collins, A. 1988. The cognitive structure of emotions. New York: Cambridge University Press.
Oskamp, S., Harrington, M. J., Edwards, T. C., Sherwood, D. L., Okuda, S. M., & Swanson, D. C. 1991. Factors influencing household recycling behavior. Environment and Behavior, 23(4), 494–519.
Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. 2013. Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105, 171–192.
Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. 2015. Using the IAT to predict ethnic and racial discrimination: Small effects of unknown societal importance. Journal of Personality and Social Psychology, 108, 562–571.
Otten, S., & Moskowitz, G. B. 2000. Evidence for implicit evaluative in-​group bias: Affect-​biased
spontaneous trait inference in a minimal group paradigm. Journal of Experimental Social
Psychology, 36(1), 77–​89.
Papineau, D. 2015. Choking and the yips. Phenomenology and the Cognitive Sciences, 14(2), 295–​308.
Parks-​Stamm, E., Gollwitzer, P., & Oettingen, G. 2007. Action control by implementation inten-
tions: Effective cue detection and efficient response initiation. Social Cognition, 25, 248–​266.
Pautz, A. 2010. Why explain visual experience in terms of content? In B. Nanay (Ed.), Perceiving the
world, 254–​310. Oxford: Oxford University Press.
Payne, B. K., & Cameron, C. D. 2010. Divided minds, divided morals:  How implicit social cog-
nition underpins and undermines our sense of justice. In B. Gawronski & B. K. Payne
(Eds.), Handbook of implicit social cognition: Measurement, theory, and applications, 445–​461.
New York: Guilford Press.
Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. 2005. An inkblot for attitudes: Affect
misattribution as implicit measurement. Journal of Personality and Social Psychology, 89(3),
277–​293.
Payne, B. K., & Gawronski, B. 2010. A history of implicit social cognition: Where is it coming from? Where is it now? Where is it going? In B. Gawronski & B. K. Payne (Eds.), Handbook of implicit social cognition: Measurement, theory, and applications, 1–17. New York: Guilford Press.
Payne, K., & Lundberg, K. 2014. The affect misattribution procedure: Ten years of evidence on reli-
ability, validity, and mechanisms. Social and Personality Psychology Compass, 8(12), 672–​686.
Peacocke, C. 1999. Being known. Oxford: Oxford University Press.
Peacocke, C. 2004. The realm of reason. Oxford: Oxford University Press.
Pessoa, L. 2008. On the relationship between emotion and cognition. Nature Reviews: Neuroscience,
9, 148–​158.
Peters, K. R., & Gawronski, B. 2011. Are we puppets on a string? Comparing the impact of contingency and validity on implicit and explicit evaluations. Personality and Social Psychology Bulletin, 37(4), 557–569.
Peterson, C., & Seligman, M. E. P. 2004. Character strengths and virtues: A classification and handbook.
New York: Oxford University Press/​Washington, DC: American Psychological Association.
Peterson, M. A., & Gibson, B. S. 1991. Directing spatial attention within an object: Altering the functional equivalence of shape descriptions. Journal of Experimental Psychology: Human Perception and Performance, 17, 170–182.
Pettigrew, T., & Tropp, L. 2006. A meta-​analytic test of intergroup contact theory. Journal of
Personality and Social Psychology, 90, 751–​783.
Pezzulo, G., Rigoli, F., & Friston, K. 2015. Active inference, homeostatic regulation and adaptive
behavioural control. Progress in Neurobiology, 134, 17–​35.
Plant, E., & Devine, P. 1998. Internal and external motivation to respond without prejudice. Journal
of Personality and Social Psychology, 75(3), 811.
Plato. 380 BCE/1992. Republic, 443de.
Poldrack, R., Sabb, F. W., Foerde, K., Tom, S. M., Asarnow, R. F., Bookheimer, S. Y., & Knowlton,
B. J. 2005. The neural correlates of motor skill automaticity. Journal of Neuroscience, 25(22),
5356–​5364.
Polanyi, M. 1966/2009. The tacit dimension. Chicago: University of Chicago Press.
Port, R. F., & Van Gelder, T. 1995. Mind as motion: Explorations in the dynamics of cognition. Cambridge, MA: MIT Press.
Preuschoff, K., Bossaerts, P., & Quartz, S. R. 2006. Neural differentiation of expected reward and risk in human subcortical structures. Neuron, 51(3), 381–390.
Prinz, F., Schlange, T., & Asadullah, K. 2011. Believe it or not: How much can we rely on published
data on potential drug targets? Nature Reviews: Drug Discovery, 10, 712–​713. doi: 10.1038/​
nrd3439-​c1.
Prinz, J. 2004. Gut reactions: A perceptual theory of emotion. New York: Oxford University Press.
Prinz, J. 2009. The normativity challenge: Cultural psychology provides the real threat to virtue eth-
ics. Journal of Ethics, 13(2–​3), 117–​144.
Railton, P. 2009. Practical competence and fluent agency. In D. Sobel & S. Wall (Eds.), Reasons for
action, 81–​115. Cambridge: Cambridge University Press.
Railton, P. 2011. Two cheers for virtue, or, might virtue be habit forming? In M. Timmons (Ed.),
Oxford studies in normative ethics, Vol. 1, 295–​330. Oxford: Oxford University Press.
Railton, P. 2014. The affective dog and its rational tale: Intuition and attunement. Ethics 124(4),
813–​859.
Railton, P. 2015. Dual-​process models of the mind and the “Identifiable Victim Effect.” In I. Cohen,
N. Daniels, & N. Eyal (Eds.), Identified versus statistical lives: An interdisciplinary perspective,
24–​41. Oxford: Oxford University Press.
Ramsey, F. P. 1990. Philosophical papers (D. H. Mellor, Ed.). Cambridge: Cambridge University Press.
Rand, D. G., Greene, J. D., & Nowak, M. A. 2012. Spontaneous giving and calculated greed. Nature,
489(7416), 427–​430.
Ranganath, K. A., Smith, C. T., & Nosek, B. A. 2008. Distinguishing automatic and controlled com-
ponents of attitudes from direct and indirect measurement methods. Journal of Experimental
Social Psychology, 44(2), 386–​396.
Reber, A. S. 1967. Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal
Behavior, 6(6), 855–​863.
Reber, A. S. 1989. Implicit learning and tacit knowledge. Journal of Experimental Psychology: General,
118(3), 219–​235.
Redford, L. & Ratliff, K. 2016. Perceived moral responsibility for attitude-​based discrimination.
British Journal of Social Psychology, 55, 279–​296.
Reed, B. 2013, 26 July. The view from in here. This American Life. Podcast. http://www.thisamericanlife.org/radio-archives/episode/501/the-view-from-in-here/.
Reynolds, S. M., & Berridge, K. C. 2008. Emotional environments retune the valence of appetitive
versus fearful functions in nucleus accumbens. Nature Neuroscience, 11, 423–​425.
Richeson, J., & Nussbaum, R. 2004. The impact of multiculturalism versus color-​blindness on racial
bias. Journal of Experimental Social Psychology, 40(3), 417–​423.
Richeson, J., & Shelton, J. 2003. When prejudice does not pay: Effects of interracial contact on exec-
utive function. Psychological Science, 14(3), 287–​290.
Richetin, J., Perugini, M., Adjali, I., & Hurling, R. 2007. The moderator role of intuitive versus delib-
erative decision making for the predictive validity of implicit and explicit measures. European
Journal of Personality, 21, 529–​546.
Rietveld, E. 2008. Situated normativity: The normative aspect of embodied cognition in unreflec-
tive action. Mind, 117, 973–​1001.
Rietveld, E., & Paglieri, F. 2012. Bodily intentionality and social affordances in context. In F. Paglieri
(Ed.), Consciousness in interaction: The role of the natural and social context in shaping conscious-
ness, 207–​226. Philadelphia: John Benjamins.
Rooth, D. O. 2010. Automatic associations and discrimination in hiring:  Real world evidence.
Labour Economics, 17(3), 523–​534.
Rosenthal, R. 1991. Meta-​analytic procedures for social research (Rev. ed.). Newbury Park,
CA: Sage.
Rosenthal, R., & Rubin, D.B. 1982. A simple, general purpose display of magnitude of experimental
effect. Journal of Educational Psychology, 74, 166–​169.
Rowbottom, D. 2007. “In-​between believing” and degrees of belief. Teorema, 26, 131–​137.
Rudman, L. A., & Ashmore, R. D. 2007. Discrimination and the implicit association test. Group Processes & Intergroup Relations, 10, 359–372.
Rydell, R. J., & Gawronski, B. 2009. I like you, I  like you not:  Understanding the formation of
context-​dependent automatic attitudes. Cognition and Emotion, 23, 1118–​1152.
Ryle, G. 1949. The concept of mind. London: Hutchinson.
Salzman, C. D., & Fusi, S. 2010. Emotion, cognition, and mental state representation in amygdala
and prefrontal cortex. Annual Review of Neuroscience, 33, 173.
Sarkissian, H. 2010a. Confucius and the effortless life of virtue. History of Philosophy Quarterly,
27(1), 1–​16.
Sarkissian, H. 2010b. Minor tweaks, major payoffs: The problems and promise of situationalism in
moral philosophy. Philosopher’s Imprint, 10(9), 1–​15.
Sartre, J. P. 1956/​1993. Being and nothingness. New York: Washington Square Press.
Scanlon, T. 1998. What we owe to each other. Cambridge, MA: Harvard University Press.
Schaller, M., Park, J. J., & Mueller, A. 2003. Fear of the dark:  Interactive effects of beliefs about
danger and ambient darkness on ethnic stereotypes. Personality and Social Psychology Bulletin,
29, 637–​649.
Schapiro, T. 2009. The nature of inclination. Ethics, 119(2), 229–​256.
Scharlemann, J., Eckel, C., Kacelnik, A., & Wilson, R. 2001. The value of a smile: Game theory with
a human face. Journal of Economic Psychology, 22, 617–​640.
Schechtman, M. 1996. The constitution of selves. Ithaca, NY: Cornell University Press.
Schmajuk, N. A., & Holland, P. C. 1998. Occasion setting: Associative learning and cognition in ani-
mals. Washington, DC: American Psychological Association.
Schneider, W., & Shiffrin, R. 1977. Controlled and automatic human information processing. I: Detection, search and attention. Psychological Review, 84, 1–66.
Schwarz, N. 2002. Situated cognition and the wisdom of feelings: Cognitive tuning. In L. F. Barrett & P. Salovey (Eds.), The wisdom in feeling, 144–166. New York: Guilford Press.
Schwarz, N., & Clore, G. L. 2003. Mood as information: 20 years later. Psychological Inquiry, 14, 296–303.
Schweiger Gallo, I., Keil, A., McCulloch, K., Rockstroh, B., & Gollwitzer, P. 2009. Strategic automation of emotion regulation. Journal of Personality and Social Psychology, 96(1), 11–31.
Schwitzgebel, E. 2002. A phenomenal, dispositional account of belief. Noûs, 36, 249–​275.
Schwitzgebel, E. 2006/​2010. Belief. In E. Zalta (Ed.), Stanford encyclopedia of philosophy. http://​
plato.stanford.edu/​entries/​belief/​.
Schwitzgebel, E. 2010. Acting contrary to our professed beliefs, or the gulf between occurrent judg-
ment and dispositional belief. Pacific Philosophical Quarterly, 91, 531–​553.
Schwitzgebel, E. 2013. A dispositional approach to attitudes: Thinking outside of the belief box.
In N. Nottelmann (Ed.), New essays on belief:  Constitution, content and structure, 75–​99.
Basingstoke: Palgrave MacMillan.
Schwitzgebel, E., & Cushman, F. 2012. Expertise in moral reasoning? Order effects on moral judg-
ment in professional philosophers and non‐philosophers. Mind & Language, 27(2), 135–​153.
Schwitzgebel, E., & Rust, J. 2014. The moral behavior of ethics professors: Relationships among self-​
reported behavior, expressed normative attitude, and directly observed behavior. Philosophical
Psychology, 27(3), 293–​327.
Seligman, M., Railton, P., Baumeister, R., & Sripada, C. 2013. Navigating into the future or driven by
the past. Perspectives on Psychological Science, 8(2), 119–​141.
Shepherd, J. 2014. The contours of control. Philosophical Studies, 170(3), 395–​411.
Sher, G. 2009. Who knew? Responsibility without awareness. Oxford: Oxford University Press.
Sherman, S. J., Rose, J. S., Koch, K., Presson, C. C., & Chassin, L. 2003. Implicit and explicit attitudes
toward cigarette smoking: The effects of context and motivation. Journal of Social and Clinical
Psychology, 22(1), 13–​39.
Shiffrin, R., & Schneider, W. 1977. Controlled and automatic human information processing.
II: Perceptual learning, automatic attending, and a general theory. Psychological Review, 84,
127–​190.
Shoemaker, D. 2003. Caring, identification, and agency. Ethics, 114, 88–118.
Shoemaker, D. 2011. Attributability, answerability, and accountability: Towards a wider theory of
moral responsibility. Ethics, 121, 602–​632.
Shoemaker, D. 2012. Responsibility without identity. Harvard Review of Philosophy, 18(1), 109–​132.
Shoemaker, D. 2015. Remnants of character. In J. D’Arms & D. Jacobson (Eds.), Moral psychology
and human agency, 84–​107. Oxford: Oxford University Press.
Shook, N., & Fazio, R. 2008. Interracial roommate relationships: An experimental field test of the contact hypothesis. Psychological Science, 19(7), 717–723.
Siegel, S. 2010. The contents of visual experience. Oxford: Oxford University Press.
Siegel, S. 2014. Affordances and the contents of perception. In B. Brogaard (Ed.), Does perception have content?, 39–76. Oxford: Oxford University Press.
Siegel, S., & Byrne, A. 2016. Rich or thin? In B. Nanay (Ed.), Current controversies in philosophy of perception. New York: Routledge.
Slingerland, E. 2014. Trying not to try: The art and science of spontaneity. New York: Random House.
Sloman, S. A. 1996. The empirical case for two systems of reasoning. Psychological Bulletin,
119(1), 3–​22.
Smart, J. J. C. 1961. Free-​will, praise and blame. Mind, 70(279), 291–​306.
Smith, A. 2005. Responsibility for attitudes: Activity and passivity in mental life. Ethics, 115(2),
236–​271.
Smith, A. 2008. Control, responsibility, and moral assessment. Philosophical Studies, 138, 367–​392.
Smith, A. 2012. Attributability, answerability, and accountability: In defense of a unified account.
Ethics, 122(3), 575–​589.
Smith, C. T., & Nosek, B. A. 2011. Affective focus increases the concordance between implicit and
explicit attitudes. Social Psychology, 42, 300–​313.
Smith, E. 1996. What do connectionism and social psychology offer each other? Journal of Personality
and Social Psychology, 70(5), 893–​912.
Smith, E. R., & Lerner, M. 1986. Development of automatism of social judgments. Journal of
Personality and Social Psychology, 50(2), 246–​259.
Smith, H. 2011. Non-​tracing cases of culpable ignorance. Criminal Law and Philosophy, 5, 115–​146.
Smith, H. 2015. Dual-​process theory and moral responsibility. In R. Clarke, M. McKenna, & A.
Smith (Eds.), The nature of moral responsibility:  New essays, 175–​210. New  York:  Oxford
University Press.
Soeter, M., & Kindt, M. 2015. An abrupt transformation of phobic behavior after a post-retrieval amnesic agent. Biological Psychiatry, 78(12), 880–886.
Solomon, R. 1976. The passions. New York: Doubleday.
Sripada, C. 2015. Self-​expression: A deep self theory of moral responsibility. Philosophical Studies,
173(5), 1203–​1232.
Srivastava, S. 2016, 11 August. Everything is fucked: The syllabus. https://​hardsci.wordpress.com/​
2016/​08/​11/​everything-​is-​fucked-​the-​syllabus/​.
Steering Committee of the Physicians’ Health Study Research Group. 1989. Final report on the aspirin component of the ongoing Physicians’ Health Study. New England Journal of Medicine, 321, 129–135.
Stefanucci, J. K., & Proffitt, D. R. 2009. The roles of altitude and fear in the perception of height.
Journal of Experimental Psychology: Human Perception and Performance, 35(2), 424.
Sterelny, K. 1995. Basic minds. Philosophical Perspectives, 9, 251–​270.
Sterelny, K. 2001. Explaining culture: A naturalistic approach. Mind, 110(439), 845–​854.
Stewart, B., & Payne, B. 2008. Bringing automatic stereotyping under control:  Implementation
intentions as efficient means of thought control. Personality and Social Psychology Bulletin, 34,
1332–​1345.
Stich, S. 1978. Belief and subdoxastic states. Philosophy of Science, 45(4), 499–​518.
Stoffregen, T. 2003. Affordances as properties of the animal–​environment system. Ecological
Psychology, 15, 149–​180.
Strack, F., & Deutsch, R. 2004. Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8, 220–247.
Strack, F., Martin, L. L., & Stepper, S. 1988. Inhibiting and facilitating conditions of the human
smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social
Psychology, 54(5), 768.
Strawson, P. 1962. Freedom and resentment. Proceedings of the British Academy, 48, 187–​211.
Strohminger, N., & Nichols, S. 2014. The essential moral self. Cognition, 131(1), 159–​171.
Stucke, T., & Baumeister, R. 2006. Ego depletion and aggressive behavior:  Is the inhibition of
aggression a limited resource? European Journal of Social Psychology, 36(1), 1–​13.
Sunstein, C. R. 2005. Moral heuristics. Behavioral and Brain Sciences, 28(4), 531–​541.
Sutton, J. 2007. Batting, habit and memory: The embodied mind and the nature of skill. Sport in
Society, 10(5), 763–​786.
Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine Learning, 3(1), 9–44.
Talaska, C. A., Fiske, S. T., & Chaiken, S. 2008. Legitimating racial discrimination: A meta-analysis of the racial attitude–behavior literature shows that emotions, not beliefs, best predict discrimination. Social Justice Research, 21, 263–296. doi: 10.1007/s11211-008-0071-2.
Terbeck, S., Kahane, G., McTavish, S., Savulescu, J., Cowen, P. J., & Hewstone, M. 2012. Propranolol
reduces implicit negative racial bias. Psychopharmacology, 222(3), 419–​424.
Thaler, R., & Sunstein, C. 2008. Nudge: Improving decisions about health, wealth, and happiness. New York: Penguin Books.
Tiberius, V. 2002. Practical reason and the stability standard. Ethical Theory and Moral Practice, 5, 339–353.
Tierney, H., Howard, C., Kumar, V., Kvaran, T., & Nichols, S. 2014. How many of us are
there? In J. Sytsma (Ed.), Advances in experimental philosophy of mind, 181–​202.
New York: Bloomsbury.
Tietz, J. 2004, 29 November. Fine disturbances. New Yorker.
Tooby, J., & Cosmides, L. 1990. The past explains the present: Emotional adaptations and the struc-
ture of ancestral environments. Ethology and Sociobiology, 11, 375–​424.
Toplak, M., West, R., & Stanovich, K. 2011. The cognitive reflection test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition, 39(7), 1275–1289.
Toppino, T. C. 2003. Reversible-​figure perception: Mechanisms of intentional control. Perception &
Psychophysics, 65, 1285–​1295.
Toribio, J. 2015. Visual experience:  Rich but impenetrable. Synthese, 1–​ 18. doi:10.1007/​
s11229-​015-​0889-​8.
Turvey, M. T., Shaw, R., Reed, E., & Mace, W. 1981. Ecological laws of perceiving and acting: In reply to Fodor and Pylyshyn. Cognition, 9, 237–304.
Valian, V. 1998. Why so slow? The advancement of women. Cambridge, MA: MIT Press.
Valian, V. 2005. Beyond gender schemas: Improving the advancement of women in academia. Hypatia, 20, 198–213.
Varela, F. J., & Depraz, N. 2005. At the source of time: Valence and the constitutional dynamics of
affect. Journal of Consciousness Studies, 12, 61–​81.
Vargas, M. 2005. The revisionist’s guide to responsibility. Philosophical Studies, 125(3), 399–​429.
Vazire, S. 2016. It’s the end of the world as we know it . . . and I feel fine. http://sometimesimwrong.typepad.com/wrong/2016/02/end-of-the-world.html/.
Velleman, J. D. 2000. On the aim of belief. In J. D. Velleman (Ed.), The possibility of practical reason,
244–​282. Oxford: Oxford University Press.
Velleman, J. D. 2015. Morality here and there. Whitehead Lecture at Harvard University.
Vohs, K. D., & Heatherton, T. F. 2000. Self-​regulatory failure:  A resource-​depletion approach.
Psychological Science, 11(3), 249–​254.
Vohs, K. D., & Baumeister, R. (Eds.). 2011. Handbook of self-​regulation: Research, theory, and applica-
tions. New York: Guilford Press.
Wagenmakers, E. J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Albohn, D. N., et al. 2016. Registered replication report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928.
Waldron, J. 2014, 9 October. It’s all for your own good [review of Nudge]. New York Review of Books. http://​
www.nybooks.com/​articles/​archives/​2014/​oct/​09/​cass-​sunstein-​its-​all-​your-​own-​good/​.
Wallace, D. F. 1999. Brief interviews with hideous men. New York: Back Bay Books.
Wallace, D. F. 2006. How Tracy Austin broke my heart. In Consider the lobster. New York: Little, Brown.
Wallace, D. F. 2011. The pale king. New York: Back Bay Books.
Wansink, B., Painter, J. E., & North, J. 2005. Bottomless bowls: Why visual cues of portion size may
influence intake. Obesity Research, 13(1), 93–​100.
Washington, N., & Kelly, D. 2016. Who’s responsible for this? Implicit bias and the knowledge condition. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics, 11–36. Oxford: Oxford University Press.
Watson, G. 1975. Free agency. Journal of Philosophy, 72(8), 205–​220.
Watson, G. 1996. Two faces of responsibility. Philosophical Topics, 24(2), 227–​248.
Webb, T., & Sheeran, P. 2008. Mechanisms of implementation intention effects: The role of goal
intentions, self-​ efficacy, and accessibility of plan components. British Journal of Social
Psychology, 47, 373–​379.
Webb, T., Sheeran, P., & Pepper, A. 2010. Gaining control over responses to implicit attitude
tests:  Implementation intentions engender fast responses on attitude-​incongruent trials.
British Journal of Social Psychology, 51(1), 13–​32.
Wegner, D. 1984. Innuendo and damage to reputation. Advances in Consumer Research, 11, 694–​696.
Wegner, D., Schneider, D., Carter, S., & White, T. 1987. Paradoxical effects of thought suppression.
Journal of Personality and Social Psychology, 53(1), 5–​13.
Weisbuch, M., Pauker, K., & Ambady, N. 2009. The subtle transmission of race bias via televised
nonverbal behavior. Science, 326(5960), 1711–​1714.
Wennekers, A. M. 2013. Embodiment of prejudice: The role of the environment and bodily states.
Doctoral dissertation, Radboud University.
West, R. F., Meserve, R., & Stanovich, K. 2012. Cognitive sophistication does not attenuate the bias
blind spot. Journal of Personality and Social Psychology, 103(3), 506–​519.
Wheatley, T., & Haidt, J. 2005. Hypnotic disgust makes moral judgments more severe. Psychological
Science, 16(10), 780–​784.
Whitehead, A. N. 1911. An introduction to mathematics. New York: Holt.
Wicklund, R. A., & Gollwitzer, P. M. 1982. Symbolic self-completion. New York: Routledge.
Wieber, F., Gollwitzer, P., Gawrilow, C., Odenthal, G., & Oettingen, G. 2009. Matching principles in
action control. Unpublished manuscript, University of Konstanz.
Wiers, R. W., Eberl, C., Rinck, M., Becker, E. S., & Lindenmeyer, J. 2011. Retraining automatic
action tendencies changes alcoholic patients’ approach bias for alcohol and improves treat-
ment outcome. Psychological Science, 22(4), 490–​497.
Williams, B. 1981. Moral luck:  Philosophical papers, 1973–​ 1980. Cambridge:  Cambridge
University Press.
Wilson, T. D., Lindsey, S., & Schooler, T. Y. 2000. A model of dual attitudes. Psychological Review,
107, 101–​126.
Winkielman, P., Berridge, K. C., & Wilbarger, J. L. 2005. Unconscious affective reactions to masked
happy versus angry faces influence consumption behavior and judgments of value. Personality
and Social Psychology Bulletin, 31(1), 121–​135.
Witt, J. K., & Proffitt, D. R. 2008. Action-​specific influences on distance perception:  A role for
motor simulation. Journal of Experimental Psychology: Human Perception and Performance, 34,
1479–​1492.
Witt, J. K., Proffitt, D. R., & Epstein, W. 2004. Perceiving distance:  A role of effort and intent.
Perception, 33, 570–​590.
Wittenbrink, B., Judd, C. M., & Park, B. 2001. Spontaneous prejudice in context: Variability in auto-
matically activated attitudes. Journal of Personality and Social Psychology, 81(5), 815.
Wittgenstein, L. 1966. Lectures and conversations on aesthetics, psychology and religious belief.
Oxford: Blackwell.
Wolitzky-​Taylor, K. B., Horowitz, J. D., Powers, M. B., & Telch, M. J. 2008. Psychological approaches
in the treatment of specific phobias:  A meta-​ analysis. Clinical Psychology Review, 28,
1021–​1037.
Woodward, J., & Allman, J. 2007. Moral intuition: Its neural substrates and normative significance.
Journal of Physiology–​Paris, 101(4), 179–​202.
Wright, C. 2014. On epistemic entitlement. II: Welfare state epistemology. In E. Zardini & D. Dodd
(Eds.), Scepticism and perceptual justification, 213–​247. New York: Oxford University Press.
Wu, W. 2014a. Against division:  Consciousness, information, and the visual streams. Mind &
Language, 29(4), 383–​406.
Wu, W. 2014b. Attention. New York: Routledge.
Wu, W. 2015. Experts and deviants: The story of agentive control. Philosophy and Phenomenological
Research, 93(1), 101–​126.
Wylie, C. D. 2015. “The artist’s piece is already in the stone”: Constructing creativity in paleontology
laboratories. Social Studies of Science, 45(1), 31–​55.
Xu, K., Nosek, B., & Greenwald, A. G. 2014. Psychology data from the race implicit association test on the Project Implicit demo website. Journal of Open Psychology Data, 2(1), e3. doi: http://doi.org/10.5334/jopd.ac.
Yancy, G. 2008. Black bodies, white gazes: The continuing significance of race. Lanham, MD: Rowman
& Littlefield.
Yarrow, K., Brown, P., & Krakauer, J. 2009. Inside the brain of an elite athlete: The neural processes
that support high achievement in sports. Nature Reviews: Neuroscience, 10, 585–​596.
Zajonc, R. B. 1984. On the primacy of affect. American Psychologist, 39, 117–​123.
Zeigarnik, B. 1927/​1934. On finished and unfinished tasks. In W. D. Ellis & K. Koffka (Eds.), A
source book of Gestalt psychology, 300–​314. Gouldsboro, ME: Gestalt Journal Press.
Zheng, R. 2016. Attributability, accountability and implicit attitudes. In M. Brownstein & J. Saul
(Eds.), Implicit bias and philosophy: Vol. 2, Moral responsibility, structural injustice, and ethics,
62–​89. Oxford: Oxford University Press.
Zhou, H., & Fishbach, A. 2016. The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111(4), 493–504.
Zimmerman, A. 2007. The nature of belief. Journal of Consciousness Studies, 14(11), 61–​82.
INDEX

accidents, responsibility for, 13–14
affordances, 36–42, 46–48, 115n23
Agassi, Andre, 183
agency, 103, 116, 141–145, 189
  fluent, 157
alief, 85–97
  RAB content of, 85–91
Allman, John, 43n17, 47n28
allostatic load, 181
Allport, Gordon, 185
Alquist, Jessica, 181
This American Life, 101
Amodio, David, 6, 219–221
amygdala, 111
Apfelbaum, Evan, 172
approach/avoidance, 93, 168, 184–185, 197
aretaic appraisals, 22, 103–104, 120–122, 132–140
Arpaly, Nomy, 10, 102–103, 144, 154–160, 164, 204–205
Asgari, Shaki, 149, 197
associations
  activation of, 68–71, 90–94, 169–171, 194–199, 217–230
  content-driven change in, 70–72
associative principles, 69–72
  and counterconditioning, 70–71
  and extinction, 70–71
attention regulation, 187–191
attitude-behavior consistency, 175–176, 219
attributability, 103–122, 127–150
  care-based theory of, 106–122
  judgment-based accounts of, 111–112, 145–149
  voluntaristic accounts of, 111–112
attribution errors, 167
Augustine, St., 16
autonomy, 157, 191–200, 209
awareness conditions, 43–44, 65–66, 103–112, 123, 131, 139–146
  and Levy’s Consciousness Thesis, 140–144
Balcetis, Emily, 49
Bar, Moshe, 44–46
Bar-Anan, Yoav, 78–84, 215–217
Barden, Jamie, 68
Bargh, John, 23–24, 49
Barrett, Lisa Feldman, 43–46, 111
Barrington, Keith, 143
basic actions, 143, 166n15
Bateson, Melissa, 191
Baumeister, Roy, 11, 181
belief attribution, 73–82
belief, 72–81
  action-guiding account of, 73–81
  context-relative, 73–81
  contradictory, 73–81
  identity-protective, 174–176
  indeterminacy of, 73–81
  truth-taking account of, 73–81
belief-discordant behavior, 13, 86–90, 117
Benes, Barton, 12–13, 33–42, 48, 67, 72–73
benevolent sexism, 71
Bennett, Jonathan, 159–160
Bird, Larry, 142
Bjorklund, Fredrik, 71
blind spot bias, 127n6, 175
Bottomless Bowl experiment, 191–192
bounded rationality, 14, 174n26
Bourdieu, Pierre, 65n2
Bratman, Michael, 109, 116
Briñol, Pablo, 170–171
Brody, Gene, 181
Bryan, William, 17–18
Buchsbaum, Daphna, 166–167
Cameron, Daryl, 10, 131–137, 222
caring, affective account of, 110–116
  and the rational-relations view, 145–149
Carlsson, Rickard, 71

Carruthers, Peter, 82
Cesario, Joseph, 61–62, 84, 192–194, 230
Chan, David, 155–156
Chemero, Anthony, 37
choice blindness, 172
chronic accessibility of concepts, 218
Churchill, Winston, 231
Clark, Andy, 33, 51
Clinton, Hillary, 22–23
A Clockwork Orange, 179
closure, 77
co-activation, 20–21, 31, 61–64, 85–94, 120
cognitive behavioral therapy, 71–72, 168
  and exposure therapy, 71–72
cognitive depletion, 171–172
cognitive dissonance, 72, 141
  balance effects of, 72
cognitive penetration, 40–44
Cognitive Reflection Test, 175
compensation effect, 71
confabulation, 17, 172–176
confirmation bias, 167–168
Confucius, 10, 184
consciousness, global workspace theory of, 141
consequential validity, 223
consistency, individual, 77, 103, 175, 198
contact hypothesis, 185
context regulation, 191–195, 198–205
context sensitivity, 35–36, 84, 180–201
contiguity, 69–72
core relational themes, 114–120
Correll, Joshua, 60–61, 120–127
Costanza, George, 162
counterstereotypes, 45, 70, 149, 184–200
the creepiness worry, 179
Currie, Greg, 88, 91
Cushman, Fiery, 13–14, 32, 51–55, 175

D'Cruz, Jason, 23, 153–176
Dasgupta, Nilanjana, 68, 149, 197
de Ridder, Denise T. D., 180–181
decision-making, evaluative, 49–58
  model-based, 53–56
  model-free, 56–58
  and decision trees, 50–54
deep self, 133–137
  alternatively, the whole self, 108n16, 134n18
  mosaic conception of the, 135–138, 206
  unity of, 133–137
deliberation, 110–116, 153–176, 205–209
Dennett, Daniel, 177–195
  stances, 177–209
Descartes, 16, 110
Devine, Patricia, 197–200, 219–225
dispositional accounts of attitudes, 4–6, 81–84
distance-standing, 6–7
dorsal visual stream, 166n15
Dretske, Fred, 114–115
Dreyfus, Hubert, 18, 38–39
dual-system psychology, 16, 32, 86–87
  Associative-Propositional Evaluation, 80, 222
  explicit vs. implicit mental states, 15–16
  Meta-Cognitive Model, 222
  Motivation and Opportunity as Determinants, 219n10, 222
  Reflective Impulse Model, 219n10
dual-task experiments, 143
Duckworth, Angela, 180
dynamic feedback, 49, 68, 122

ecological control, 149n35
Egan, Andy, 76–78
ego depletion, 11
embodied cognition, 185
emotion, 29–32, 42–46, 70–71, 73–75, 107–122, 130, 140, 178, 180–184, 190–196
  primary vs. secondary, 110–116
  somatic vs. cognitive theories of, 112–116
epigenetic aging, 181
epistemic/moral intersection, 62
Ericsson, K. Anders, 182–184
evaluative conditioning, 70–71, 149, 198
evaluative stance, 35, 134
Exhaust (in theories of visual phenomenology), 38–42, 58
explicit monitoring theory, 143
expressive rationality, 174n26

Fazio, Russell, 19, 185
fight vs. flight words, 61–62, 230
Finn, Huckleberry, 102, 137, 154, 158–159
  and sympathy, 159
Firestone, Chaz, 41
Flannigan, Natasha, 45
flow, 7–8
Force Majeure, 164n13
Forscher, Patrick, 225
fragmented mind, 77–78
FTBA relata, 20–21, 31–65, 91–97, 116–117
  specifically, alleviation, 48–63
  specifically, behavior, 46–78
  specifically, feature, 32–42
  specifically, tension, 42–46
  strength of, 43
  valence of, 42–43

Gallo, Inge Schweiger, 190
Gawronski, Bertram, 61, 80, 192–194, 200
Gemmel, Bruce, 207
Gendler, Tamar Szabó, 12–13, 64–65, 73–74, 85–94
Gibran, Kahlil, 10
Gibson, Bradley, 41
Gibson, J. J., 36–37
  and ecological psychology, 40
The Gift of Fear, 1–5
Gilbert, Daniel, 51
Ginsborg, Hannah, 94–97
Gladwell, Malcolm, 10, 182–183
Glaser, Jack, 61
Gollwitzer, Peter, 188, 191
good true self theory, 136–137
Goodale, Melvyn, 34, 40n13
Goodman, Steven, 228–229
Gopnik, Allison, 167
Guinote, Ana, 195

habit, 9, 17–18, 23, 64–65, 88, 139, 157, 168, 225
  and the habit stance, 179–209
  and habitudes, 64–65
  vs. habitus, 65n2, 157
Hacker, Peter, 157
halo effect, 71
Hamilton, Holly, 61
Harter, Noble, 17–18
Haslanger, Sally, 208–209
Hawley, Patricia, 209
Hayes, John, 183
Herman, Barbara, 201–204
Hu, Xiaoqing, 79
Huang, Julie, 23–24, 49
Huebner, Bryce, 53–54, 58, 73–74
Hume, David, 12, 22, 69, 87

Ichino, Anna, 88–89
identification theories, 106
if-then plans. See implementation intentions
imperatival perception, 20, 32–41, 46, 58–60, 190n13
implementation intentions, 80, 187–191, 198–199
implicit attitudes
  associative vs. propositional debate about, 80–81
  consciousness of, 103, 131–133
  control over, 103, 131–133
  dispositional view of, 217
  form-sensitivity of, 79–81
  malleability of, 179–196
  personal vs. subpersonal, 121–122
implicit bias, 2–4, 14, 21–23, 46, 58–63, 66–67, 76–82, 95–97, 103, 119–120, 131–136, 147–148, 167–176, 184–191, 195–199, 208–209, 224–225, 229–230
implicit stereotypes vs. implicit evaluations, 45, 60–61, 93, 220–222
implicit vs. indirect measures, 212n4
implicit, senses of, 15–19
impulsivity, 10–12, 209
in principle sensitivity, 146–149
indirect psychological measures
  Affect Misattribution Procedure, 195, 215
  evaluative priming, 83, 215
  and eye trackers, 60
  Go/No-go Association Task, 215
  Implicit Association Test, 2, 36, 60–62, 68, 79, 83–84, 93, 126n5, 169n16, 170, 194–201, 211–231
  and predictive validity, 218–224
  and process-dissociation techniques, 197–198
  and the Quadruple Process Model, 171, 197n22
  sequential priming, 19, 222
  shooter bias, 2–3, 60, 80, 119–120, 126, 188, 199
  and test-retest reliability, 217–218
inferential impoverishment, 78–81, 94
inferential promiscuity, 78–81, 91
innate architecture, 64, 86, 95–96
institutional change, 193n17
insular cortex, 47n28
internal disharmony, 13
interpersonal fluency, 8–9, 53, 133, 138–139
interracial interactions, 172, 224–225
intervention, 119–120, 169–176, 177–180
  context regulation as, 191–195
  durability of, 196–201
  generality of, 196–201
  pre-commitment as, 187–191
intuition, 1–14, 65n5, 202–205
inverse akrasia, 10, 102
Inzlicht, Michael, 226–228
Iowa Gambling Task, 88–89
Ito, Tiffany, 70–71

James, William, 179
Jaworska, Agnieszka, 107–116, 128–131
jazz, 88, 144
Johansson, Petter, 172–173
Jonas, Kai, 61–62, 230
Jordan, Michael, 68
Joseph Harp Correctional Center, 101–104

Kant, Immanuel, 33, 159, 201–205
Kawakami, Kerry, 184–187, 197, 204, 225
Kim, Kimberly, 141–147
knowledge-how, 17
Knowles, Megan, 9n19
Korsakoff's syndrome, 45
Kriegel, Uriah, 87
Krishnamurthy, Meena, 209
Kubota, Jennifer, 70–71

Lai, Calvin, 224–225
Lazarus, Richard, 114
learning
  associative reward, 44
  feedback-error, 184
  implicit, 18–19
  Pavlovian mechanisms for, 54n41
  prediction-error models of, 49–56
  temporal difference reinforcement, 54
Lebrecht, Sophie, 45
Ledecky, Katie, 207
Leder, Garson, 169
The Lego Movie, 5
Leibniz, Gottfried Wilhelm, 16
Levy, Neil, 65n5, 70, 78–80, 117n27, 121n35, 123–126, 134, 140–144, 147, 170, 201
loaded gun case, 105–106, 140

Machery, Edouard, 83–84, 216–217
Macnamara, Brooke, 184
Madva, Alex, 73, 79–81, 185
Mallon, Ron, 121–122
Mandelbaum, Eric, 69–72, 80–81, 170
Mann, David, 166n15
marshmallow experiments, 10–11
Martin, Justin, 14
Martin, Trayvon, 3
memory, 111, 116, 169
  episodic, 169
  working, 77, 140–141
Mendoza, Saaid, 198
mental habits, 64
Merleau-Ponty, Maurice, 38
Messick, Samuel, 223
micro-expression, 1–3, 55
micro-valence, 45
Mill, John Stuart, 159–160
Miller, George, 18
Miller, Gregory, 181
Millikan, Ruth, 33
Milner, David, 33–34, 40n13
Milyavskaya, Marina, 195
minimal group paradigm, 96n47
Mischel, Walter, 10–11
Mitchell, Jason, 68
Modern Racism Scale, 220n11
moral credit, 201–205
moral luck, 13–14, 137, 175, 205
moral myopia, 13–14
moral sentimentalism, 155
Moran, Tal, 76–80
MOSAIC Threat Assessment System, 2
Moskowitz, Gordon, 48–49
motivated numeracy, 173
motivated reasoning, 174–176
motivation for debiasing, 195
Motivational Support Account, 123n3
Müller-Lyer illusion, 75–76
multinomial modeling, 71n16, 171

Nagel, Jennifer, 88–91
Neal, David, 192
Necker cube, 41
neo-Lockean approach to identity, 109, 116
New York City Subway Hero, 9–10, 165
Newman, Barnett, 5–6
Newman, George, 136–137
Nike organisms, 24
Nisbett, Richard, 17, 172–173
nudge, 191–193

Obama, Barack, 3, 8–9, 54–55, 138–139, 143–147
occasion setter, 118, 194
Olson, Kristina, 227
omission, patterns of, 125–127
Open Science Collaboration, 226
orbitofrontal cortex, 44
Oswald, Frederick, 218–223

Papineau, David, 143, 166n15
Payne, Keith, 10, 46, 199, 222
Payton, Walter, 7, 21, 141
perception
  action-first, 33–34
  rich properties of, 37–38
  spectator-first, 33–34
  theories of, 3–42, 208
perceptual unquiet, 20, 29–30, 42
Peterson, Christopher, 181
Peterson, Mary, 41
phronesis, 9, 21, 155
Plato, 10, 13, 16
Polanyi, Michael, 17
practical salience, 35–36
practice, 9, 139, 144, 163–166, 225
  rote, 182–187, 191
pre-commitment, 148–149, 187–191
prefrontal cortex, 110–112
Pre-Incident Indicators, 1–4
primitive normativity, 94–97
  vs. naïve normativity, 95n46
Prinz, Jesse, 112–120
Proffitt, Dennis, 40–41
propranolol, 168–169
prosocial behavior, 8, 191
psychological explanations of discrimination,
  limits of, 208

Railton, Peter, 7, 42–43, 51n34, 77n28, 154–157, 164–165, 202–203
Rand, David, 8
rational relations, 145–149
Ratliff, Kate, 132
reasons responsiveness, 117n27
Reber, Arthur, 18
Redford, Liz, 132
Reed, Brian, 101–102
regress problems, 23, 156–158, 164–165
renewal effects, 192–193
replication crisis, 225–230
  p-hacking, 225
Resource Computation Model, 230
responsibility, 123–152
  and accountability, 129–131
  and answerability, 129, 145–146
  and being vs. holding responsible, 127–131
Rickey, Branch, 143
Rietveld, Erik, 47n29
Rivers, Phillip, 142
Robinson, Jackie, 143
Rooth, Dan-Olof, 61, 221
Rust, Johnathan, 175–176
Ryle, Gilbert, 17

Sarkissian, Hagop, 163, 184
Sartre, Jean-Paul, 57
scaffolding, 54
Scanlon, Thomas, 145
Schaller, Mark, 68, 229
Schapiro, Tamar, 34, 64
Schneider, Walter, 18–19
Scholl, Brian, 41
seemings, 75, 148
self-control, 11, 180–200
selfish goals, 23–24
self-regulation, 11, 180–200
  Ulysses-type strategies for, 148–149, 187–191
Self-Regulation of Prejudice model, 198
self-trust, 209
Seligman, Martin, 55–56, 181
Sheeran, Paschal, 188, 190n13
Shiffrin, Richard, 18–19
Shift (in theories of visual phenomenology), 40–41
Shoemaker, David, 108–115, 128–137, 147–148
Shook, Natalie, 185
shooter bias, 2–3, 60, 80, 119–120, 126, 188, 199, 206
Skywalk cases, 12–13, 46, 57, 67–78, 86, 89–90, 94–95, 103
Slingerland, Edward, 10
smile, 163–164
  Duchenne, 163–164
  Pan Am, 163–164
Smith, Angela, 111, 145–149
Smith, Holly, 133–134
Society for the Improvement of Psychological Science, 226
somatic neural markers, 184
soul, tripartite account of the, 10
Spinozan Belief Fixation, 80–81
spontaneity, 153–176
  deliberation-volatility of, 161–167
Sripada, Chandra, 124–125, 135, 146
Sterelny, Kim, 23, 33
Stewart, Brandon, 46, 199
Stich, Stephen, 78
stimulus-response reflexes, 66–69
subdoxastic state, 78
Sunstein, Cass, 193

tacit knowledge, 17–18
Tennenbaum, Richie, 59
ten-thousand hour rule, 182–183
Thaler, Richard, 193
The Big Five personality traits, 203–204
thought, affect-ladenness of, 111–116
Tietz, Jeff, 29–30, 42
Toppino, Thomas, 41
trait-based accounts. See dispositional accounts
Transyouth Project, 227

utilization behavior, 35

Vazire, Simine, 231n30
virtue ethics, 9–10, 203–205
Vohs, Kathleen, 11
von Economo neurons, 43

Wallace, David Foster, 7, 57
"watch the ball" heuristic, 165–166
Watching Eyes Effect, 191
Webb, Thomas, 190n13
Whitehead, Alfred North, 18
Wieber, Frank, 189–190
Wiers, Reinout, 197
Wilde, Oscar, 162
Williams, Bernard, 164
Wilson, Timothy, 17, 172
Witt, Jessica, 40–41
Woodward, James, 43n17
wu-wei, 10, 161n6

Yarrow, Kielan, 53

Zeigarnik Effect, 57
Zimmerman, Aaron, 56, 73
