
BRIEF COMMUNICATIONS

Predicting visual stimuli on the basis of activity in auditory cortices

Kaspar Meyer, Jonas T Kaplan, Ryan Essex, Cecelia Webber, Hanna Damasio & Antonio Damasio

Using multivariate pattern analysis of functional magnetic resonance imaging data, we found that the subjective experience of sound, in the absence of auditory stimulation, was associated with content-specific activity in early auditory cortices in humans. As subjects viewed sound-implying, but silent, visual stimuli, activity in auditory cortex differentiated among sounds related to various animals, musical instruments and objects. These results support the idea that early sensory cortex activity reflects perceptual experience, rather than sensory stimulation alone.

In everyday life, we often hear a sound in the mind's ear when no sound is delivered to the ear itself. This may occur when we deliberately recall the auditory aspects of a memory or, more automatically, when we perceive stimuli that imply sound through a nonauditory modality. We asked whether sounds perceived in the mind's ear are associated with stimulus-specific representations in early auditory cortices. If this is the case, multivariate pattern analysis (MVPA) of activity in early auditory cortices should allow prediction of which sound-implying, but entirely silent, video clips subjects see.

While lying in the scanner, our subjects (n = 8) watched muted 5-s video clips depicting events that implied sound (Supplementary Videos 1–3). A sparse-sampling scanning procedure allowed us to present the clips during scanner silence (Supplementary Fig. 1). We used nine different stimuli that were grouped into three categories: animals (a howling dog, a mooing cow and a crowing rooster), musical instruments (a violin being played, a bass being played and a piano key being struck) and objects (a chainsaw cutting into a tree trunk, a glass vase shattering on the ground and a handful of coins being dropped into a glass). The clips in the more loosely defined object category were chosen because we expected them to evoke sounds that were very different from those evoked by the animal and the instrument clips.

The region of interest consisted of a restrictive bilateral mask on the supratemporal plane that only contained areas that are reliably considered to be unimodal auditory cortices (Supplementary Fig. 2).

We first used MVPA to perform all possible pair-wise discriminations among the nine stimuli (n = 36). For each discrimination task, a classifier algorithm was trained on seven of eight functional runs (21 trials per stimulus) and tested on the remaining run (three trials per stimulus; Supplementary Methods). This procedure was repeated eight times, using each run as the test run once. Averaged across all subjects, the classifier performed above the chance level of 0.5 for all 36 discriminations (Fig. 1a); in 26 discriminations, the result reached statistical significance (P < 0.05, two-tailed t test, n = 8; Fig. 1b).
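For readers who want a concrete picture of this leave-one-run-out scheme, the sketch below implements a single pair-wise discrimination in scikit-learn. The linear support vector machine, the array names and the voxel and trial counts are illustrative assumptions, not the exact pipeline of the Supplementary Methods; the original classifier and preprocessing may differ.

```python
# Minimal sketch of leave-one-run-out pair-wise MVPA on synthetic data.
# Assumptions: linear SVM, 8 runs, 3 trials per stimulus per run, 200 masked voxels.
import numpy as np
from sklearn.svm import SVC

n_runs, trials_per_run, n_voxels = 8, 3, 200
rng = np.random.default_rng(0)

# patterns[stimulus][run] -> (trials_per_run, n_voxels) array of masked voxel responses
patterns = {stim: [rng.normal(size=(trials_per_run, n_voxels)) for _ in range(n_runs)]
            for stim in ("dog", "violin")}       # one example pair out of the 36

accuracies = []
for test_run in range(n_runs):                   # each run serves as the test run once
    train_X, train_y, test_X, test_y = [], [], [], []
    for label, stim in enumerate(patterns):
        for run in range(n_runs):
            X, y = patterns[stim][run], [label] * trials_per_run
            (test_X if run == test_run else train_X).append(X)
            (test_y if run == test_run else train_y).extend(y)
    clf = SVC(kernel="linear").fit(np.vstack(train_X), train_y)
    accuracies.append(clf.score(np.vstack(test_X), test_y))

print(f"mean accuracy: {np.mean(accuracies):.3f} (chance = 0.5)")
```

With random data the mean accuracy hovers around the 0.5 chance level; informative voxel patterns push it above chance, which is what the reported discriminations show.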

Notably, video clips that subjects rated as being more evocative (in terms of how much sound they implied; Supplementary Fig. 3) were discriminated more reliably by the classifier. This trend was present in all subjects except for Subject 6 (who attributed equal ratings to all video clips, thus precluding the analysis) and yielded a significant correlation at the group level (P = 0.012; Fig. 1c). The classifier's performance was indistinguishable from chance when it was trained and tested on the mean activity that the clips induced in the mask, indicating that prediction performance did not result from overall differences in signal level (Supplementary Results and Discussion). Control experiments showed that prediction performance in the auditory target region was lower than in primary visual cortex (which was to be expected given the visual nature of the stimuli), but higher than in several control regions (Supplementary Results, Discussion and Supplementary Fig. 4).

Figure 1  Classifier performance on pair-wise discriminations among individual stimuli. (a) Prediction performance of the classifier, averaged across all subjects, for all pair-wise discriminations (n = 36) among the nine video clips. Stimulus key: 1, rooster; 2, cow; 3, dog; 4, violin; 5, piano; 6, bass; 7, vase; 8, chainsaw; 9, coins. Chance level is 0.5. (b) Levels of significance for pair-wise discriminations among all nine stimuli (two-tailed t tests, n = 8). (c) Scatter plot of the subjective rating of a video clip (averaged across subjects) on the x axis and averaged classifier performance on pair-wise discriminations involving the same clip on the y axis. There was a significant positive correlation between the two parameters (r² = 0.614, P = 0.012).
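As a rough illustration of the group-level analysis in Figure 1c, the sketch below computes the squared Pearson correlation between each clip's mean subjective rating and the mean classifier performance on the pair-wise discriminations involving that clip. The numeric arrays are placeholders, not the published data.

```python
# Sketch of the rating-versus-performance correlation (cf. Fig. 1c); values are hypothetical.
import numpy as np
from scipy import stats

# One entry per video clip: mean subjective rating and mean pair-wise accuracy (placeholder numbers)
mean_rating = np.array([5.1, 5.4, 5.8, 6.0, 6.2, 6.3, 6.5, 6.7, 6.9])
mean_accuracy = np.array([0.52, 0.55, 0.56, 0.58, 0.60, 0.59, 0.63, 0.65, 0.66])

r, p = stats.pearsonr(mean_rating, mean_accuracy)
print(f"r^2 = {r**2:.3f}, P = {p:.3f}")   # the paper reports r^2 = 0.614, P = 0.012
```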

Using the same cross-validation procedure, we next asked whether the classifier would discriminate animal from instrument stimuli, animal from object stimuli and instrument from object stimuli when trained and tested on all three stimuli of a category (Fig. 2).
Figure 2  Classifier performance on pair-wise discriminations among categories. Bars represent classifier performance for each of the three pair-wise categorical discrimination tasks (animals versus instruments, animals versus objects and instruments versus objects) for each individual subject. S1 indicates Subject 1, S2 indicates Subject 2, etc.
Prediction accuracies, averaged across all subjects, were 0.620 (P = 9.0 × 10⁻³) for animals versus instruments, 0.730 (P = 6.4 × 10⁻⁵) for animals versus objects and 0.630 (P = 2.3 × 10⁻⁵) for instruments versus objects (Fig. 3). Further analyses corroborated the finding of categorical discrimination (Supplementary Results, Discussion, and Supplementary Figs. 5 and 6).
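These group-level P values are consistent with two-tailed one-sample t tests of the eight subjects' accuracies against the 0.5 chance level, the same test reported for the pair-wise discriminations above. The sketch below shows such a test with hypothetical per-subject accuracies; it is not the published data or necessarily the exact statistical routine used.

```python
# Sketch of a group-level test of classification accuracy against chance (two-tailed, n = 8).
# The per-subject accuracies below are hypothetical placeholders.
import numpy as np
from scipy import stats

chance = 0.5
per_subject_accuracy = np.array([0.58, 0.63, 0.71, 0.55, 0.66, 0.52, 0.69, 0.62])  # assumed values

t, p = stats.ttest_1samp(per_subject_accuracy, popmean=chance)
print(f"mean = {per_subject_accuracy.mean():.3f}, t(7) = {t:.2f}, P = {p:.4f}")
```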


Four subjects participated in an additional experiment in which they were exposed to auditory stimuli that matched the sounds implied by the video clips. The classifier performed better on auditory than on visual data, both when discriminating among categories (Fig. 3) and among individual stimuli (Supplementary Fig. 7). Our results are comparable with those of previous studies applying MVPA to the auditory cortices1,2. We also asked whether the classifier, after being trained on video trials, would be able to successfully discriminate the corresponding audio trials and vice versa. If this were the case, we could conclude that the auditory activity pattern induced by a video clip implying a certain sound is similar to the pattern induced by the sound itself. Our results did not permit this conclusion, although there was a hint at successful cross-modal classification between animal and object stimuli (results were obtained by training the algorithm on video trials and testing it on audio trials; the reverse procedure yielded comparable results; see Supplementary Results, Discussion and Supplementary Fig. 8 for additional analyses).
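The cross-modal analysis amounts to fitting the classifier on one modality and scoring it on the other. The sketch below shows the train-on-video, test-on-audio arrangement with synthetic arrays; the shapes, labels and linear SVM are illustrative assumptions rather than the exact published pipeline.

```python
# Sketch of cross-modal classification: train on video trials, test on audio trials.
# Synthetic data; array shapes and the linear SVM are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_train, n_test, n_voxels = 48, 24, 200          # assumed trial counts

X_video = rng.normal(size=(n_train, n_voxels))   # masked auditory-cortex patterns, video trials
y_video = rng.integers(0, 2, size=n_train)       # 0 = animal, 1 = object
X_audio = rng.normal(size=(n_test, n_voxels))    # patterns from the matching sound trials
y_audio = rng.integers(0, 2, size=n_test)

clf = SVC(kernel="linear").fit(X_video, y_video)
print(f"cross-modal accuracy: {clf.score(X_audio, y_audio):.3f}")  # ~0.5 for random data

# The reverse direction (train on audio, test on video) is obtained by swapping the two sets.
```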
Our data indicate that hearing sounds in the mind's ear is associated with content-specific activity in very early auditory cortices. The degree of specificity of the auditory representations is suggested by our ability to discriminate not only among categories of video clips, but among individual clips as well. The relationship between the subjective experience of sound and activity in early auditory cortices is underscored by the positive correlation between the subjects' ratings of the video clips and classifier performance; when a subject reported that a clip evoked more sound in his/her mind, that clip was classified more reliably in that subject, suggesting that the neural activity pattern it had induced in early auditory cortices was more distinct. Our results further indicate that the representations of sounds in early auditory cortices respect categorical boundaries. Our findings do not allow the conclusion that the activity patterns induced in early auditory cortices by analogous real and implied sounds are similar. It should be kept in mind, however, that hearing a sound in the mind's ear is by no means the same as actually hearing a comparable sound. Given the difference between these two mental events, it is not surprising that their neural counterparts may be different as well.

There is growing evidence for an involvement of early sensory cortices in the conscious experience of sight and touch. For example, in perceptual illusions, activity in primary visual and somatosensory cortices has been shown to correspond more closely to the subjects' visual or haptic experience than to the physical properties of the stimuli presented3–6. Furthermore, when subjects imagine visual objects in the complete absence of perceptual input, primary visual cortices are activated and appear to specifically represent the contents of the subjects' visual experience7–9. Activity in primary visual cortices has also been shown to correlate with stimuli that are kept active in working memory10,11. Although previous studies have established that early auditory cortices can be activated during auditory imagery12,13, auditory hallucinations14 and the perception of implied sound15, the content specificity of such activations has not yet been demonstrated. Our findings suggest that, just as in the visual and somatosensory modalities, activity at the earliest stages of cortical auditory processing correlates specifically with the experience of sound reported by the subjects, rather than with the actual auditory environment alone, as the latter was entirely silent during the presentation of the video clips.

Figure 3  Classifier performance on categorical discriminations for visual, auditory and cross-modal classification. Bars represent average prediction performance for each of the three pair-wise categorical discriminations for visual (black bars; trained and tested on visual data), auditory (dark gray bars; trained and tested on auditory data) and cross-modal (light gray bars; trained on visual data and tested on auditory data) classification.

Note: Supplementary information is available on the Nature Neuroscience website.

ACKNOWLEDGMENTS
H.D. and A.D. are supported by the Mathers Foundation and the US National Institutes of Health (5P50NS019632-25).

AUTHOR CONTRIBUTIONS
K.M. conceived and designed the study with input from the other authors, conducted functional magnetic resonance imaging (fMRI), participated in data analysis and wrote the manuscript together with A.D. J.T.K. advised on study design, conducted fMRI, performed the data analysis and wrote part of the supplementary information. R.E. prepared stimuli, wrote the stimulus presentation script, conducted fMRI, traced anatomical masks, participated in data analysis and conducted literature review. C.W. prepared stimuli, traced anatomical masks and conducted literature review. H.D. supervised the design of the anatomical masks and provided conceptual advice. A.D. supervised the project and advised K.M. on the preparation of the manuscript. All authors discussed the results and their implications and commented on the study and manuscript preparation at all stages.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

Brain and Creativity Institute, University of Southern California, Los Angeles, California, USA. Correspondence should be addressed to A.D. (damasio@usc.edu).

Received 26 January; accepted 18 March; published online 2 May 2010; doi:10.1038/nn.2533

Published online at http://www.nature.com/natureneuroscience/. Reprints and permissions information is available online at http://www.nature.com/reprintsandpermissions/.

1. Formisano, E., De Martino, F., Bonte, M. & Goebel, R. Science 322, 970–973 (2008).
2. Staeren, N., Renvall, H., De Martino, F., Goebel, R. & Formisano, E. Curr. Biol. 19, 498–502 (2009).
3. Macknik, S.L. & Haglund, M.M. Proc. Natl. Acad. Sci. USA 96, 15208–15210 (1999).
4. Watkins, S., Shams, L., Tanaka, S., Haynes, J.-D. & Rees, G. Neuroimage 31, 1247–1256 (2006).
5. Chen, L.M., Friedman, R.M. & Roe, A.W. Science 302, 881–885 (2003).
6. Blankenburg, F., Ruff, C.C., Deichmann, R., Rees, G. & Driver, J. PLoS Biol. 4, e69 (2006).
7. Kosslyn, S.M., Thompson, W.L., Kim, I.J. & Alpert, N.M. Nature 378, 496–498 (1995).
8. Slotnick, S.D., Thompson, W.L. & Kosslyn, S.M. Cereb. Cortex 15, 1570–1583 (2005).
9. Thirion, B. et al. Neuroimage 33, 1104–1116 (2006).
10. Serences, J.T., Ester, E.F., Vogel, E.K. & Awh, E. Psychol. Sci. 20, 207–214 (2009).
11. Harrison, S.A. & Tong, F. Nature 458, 632–635 (2009).
12. Zatorre, R.J. & Halpern, A.R. Neuron 47, 9–12 (2005).
13. Kraemer, D.J.M., Macrae, C.N., Green, A.E. & Kelley, W.M. Nature 434, 158 (2005).
14. Dierks, T. et al. Neuron 22, 615–621 (1999).
15. Calvert, G.A. et al. Science 276, 593–596 (1997).


