Predicting visual stimuli on the basis of activity in auditory cortices

Kaspar Meyer, Jonas T Kaplan, Ryan Essex, Cecelia Webber, Hanna Damasio & Antonio Damasio

© 2010 Nature America, Inc. All rights reserved.

Using multivariate pattern analysis of functional magnetic resonance imaging data, we found that the subjective experience of sound, in the absence of auditory stimulation, was associated with content-specific activity in early auditory cortices in humans. As subjects viewed sound-implying, but silent, visual stimuli, activity in auditory cortex differentiated among sounds related to various animals, musical instruments and objects. These results support the idea that early sensory cortex activity reflects perceptual experience, rather than sensory stimulation alone.

In everyday life, we often hear a sound in the mind's ear when no sound is delivered to the ear itself. This may occur when we deliberately recall the auditory aspects of a memory or, more automatically, when we perceive stimuli that imply sound through a nonauditory modality. We asked whether sounds perceived in the mind's ear are associated with stimulus-specific representations in early auditory cortices. If this is the case, multivariate pattern analysis (MVPA) of activity in early auditory cortices should allow prediction of which sound-implying, but entirely silent, video clips subjects see.

While lying in the scanner, our subjects (n = 8) watched muted 5-s video clips depicting events that implied sound (Supplementary Fig. 1): animals (a rooster, a cow and a dog), instruments (a violin being played, a bass being played and a piano key being struck) and objects (a chainsaw cutting into a tree trunk, a glass vase shattering on the ground and a handful of coins being dropped into a glass). The clips in the more loosely defined object category were chosen because we expected them to evoke sounds that were very different from those evoked by the animal and the instrument clips. The region of interest consisted of a restrictive bilateral mask on the supratemporal plane that only contained areas that are reliably considered to be unimodal auditory cortices (Supplementary Fig. 2).

We first used MVPA to perform all possible pair-wise discriminations among the nine stimuli (n = 36). For each discrimination task, a classifier algorithm was trained on seven of eight functional runs (21 trials per stimulus) and tested on the remaining run (three trials per stimulus; Supplementary Methods). This procedure was repeated eight times, using each run as the test run once. Averaged across all subjects, the classifier performed above the chance level of 0.5 for all 36 discriminations (Fig. 1a); in 26 discriminations, the result reached statistical significance (P < 0.05, two-tailed t test, n = 8; Fig. 1b). Notably, video clips that subjects rated as being more evocative (in terms of how much sound they implied; Supplementary Fig. 3) were discriminated more reliably by the classifier. This trend was present in all subjects except for Subject 6 (who attributed equal ratings to all video clips, thus precluding the analysis) and yielded a significant correlation at the group level (P = 0.012; Fig. 1c). The classifier's performance was indistinguishable from chance when it was trained and tested on the mean activity that the clips induced in the mask, indicating that prediction performance did not result from overall differences in signal level (Supplementary Results and Discussion). Control experiments showed that prediction performance in the auditory target region was lower than in primary visual cortex (which was to be expected given the visual nature of the stimuli), but higher than in several control regions (Supplementary Results, Discussion and Supplementary Fig. 4).

Using the same cross-validation procedure, we next asked whether the classifier would discriminate animal from instrument stimuli, animal from object stimuli and instrument from object stimuli when trained and tested on all three stimuli of a category (Fig. 2).

Figure 1 Classifier performance on pair-wise discriminations among individual stimuli. (a) Prediction performance of the classifier, averaged across all subjects, for all pair-wise discriminations (n = 36) among the nine video clips. Stimulus key: 1, rooster; 2, cow; 3, dog; 4, violin; 5, piano; 6, bass; 7, vase; 8, chainsaw; 9, coins. Chance level is 0.5. (b) Levels of significance for pair-wise discriminations among all nine stimuli (two-tailed t tests, n = 8). (c) Scatter plot of the subjective rating of a video clip (averaged across subjects) on the x axis and averaged classifier performance on pair-wise discriminations involving the same clip on the y axis. There was a significant positive correlation between the two parameters (r² = 0.614, P = 0.012).
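As a schematic illustration of the leave-one-run-out scheme described above, the sketch below runs a minimal nearest-mean classifier on synthetic data (eight runs, three trials per stimulus per run). The voxel count, noise model, variable names and choice of classifier are illustrative assumptions, not the authors' actual pipeline, which is described in the Supplementary Methods.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RUNS, TRIALS, VOXELS = 8, 3, 50            # 8 functional runs, 3 trials per stimulus per run
patterns = {"violin": rng.normal(0.5, 1, VOXELS),   # synthetic "true" voxel patterns
            "dog":    rng.normal(-0.5, 1, VOXELS)}

# data[stim] has shape (runs, trials, voxels): true pattern plus trial-by-trial noise
data = {s: p + rng.normal(0, 1, (N_RUNS, TRIALS, VOXELS))
        for s, p in patterns.items()}

def loro_accuracy(data):
    """Leave-one-run-out cross-validation with a nearest-mean classifier."""
    stims = list(data)
    correct = total = 0
    for test_run in range(N_RUNS):
        # mean training pattern per stimulus over the 7 remaining runs (21 trials)
        train = {s: np.delete(data[s], test_run, axis=0).reshape(-1, VOXELS).mean(0)
                 for s in stims}
        for s in stims:
            for trial in data[s][test_run]:   # classify each held-out trial
                pred = min(stims, key=lambda c: np.linalg.norm(trial - train[c]))
                correct += pred == s
                total += 1
    return correct / total

print(round(loro_accuracy(data), 3))  # well above the 0.5 chance level for this synthetic data
```

With well-separated synthetic patterns the held-out trials are classified almost perfectly; in real fMRI data, accuracies modestly above chance (as in Fig. 1a) are the expected outcome.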
Brain and Creativity Institute, University of Southern California, Los Angeles, California, USA. Correspondence should be addressed to A.D. (damasio@usc.edu).
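The group-level significance test used for the pair-wise discriminations (a one-sample, two-tailed t test of per-subject accuracies against the 0.5 chance level, n = 8) can be sketched as follows; the accuracy values here are invented for illustration, not taken from Fig. 1.

```python
import numpy as np

# Hypothetical per-subject accuracies (n = 8) for one pair-wise discrimination
acc = np.array([0.58, 0.62, 0.55, 0.60, 0.53, 0.57, 0.64, 0.59])

chance = 0.5
n = len(acc)
t = (acc.mean() - chance) / (acc.std(ddof=1) / np.sqrt(n))  # one-sample t statistic

# Two-tailed critical value for df = 7 at alpha = 0.05 is about 2.365
print(round(t, 2), t > 2.365)  # significant above chance for this made-up sample
```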
Figure 2 Classifier performance on pair-wise discriminations among categories. Bars represent classifier performance for each of the three pair-wise categorical discrimination tasks for each individual subject. S1 indicates Subject 1, S2 indicates Subject 2, etc.

Prediction accuracies, averaged across all subjects, were 0.620 (P = 9.0 × 10⁻³) for animals versus instruments, 0.730 (P = 6.4 × 10⁻⁵) for animals versus objects and 0.630 (P = 2.3 × 10⁻⁵) for instruments versus objects (Fig. 3). Further analyses corroborated the finding of categorical discrimination (Supplementary Results and Discussion). Finally, we asked whether a classifier trained on trials in which subjects viewed the silent video clips would also classify trials in which they heard the corresponding real sounds. If this were the case, we could conclude that the auditory activity pattern induced by a video clip implying a certain sound is similar to the pattern induced by the sound itself. Our results did not permit this conclusion, although there was a hint at successful cross-modal classification between animal and object stimuli (results were obtained by training the algorithm on video trials and testing it on audio trials; the reverse procedure yielded comparable results; see Supplementary Results, Discussion and Supplementary Fig. 8 for additional analyses).

Our data indicate that hearing sounds in the mind's ear is associated with content-specific activity in very early auditory cortices. The degree of specificity of the auditory representations is suggested by our ability to discriminate not only among categories of video clips, but among individual clips as well. The relationship between the subjective experience of sound and activity in early auditory cortices is underscored by the positive correlation between the subjects' ratings of the video clips and classifier performance; when a subject reported that a clip evoked more sound in his or her mind, that clip was classified more reliably in that subject, suggesting that the neural activity pattern it had induced in early auditory cortices was more distinct. Our results further indicate that the representations of sounds in early auditory cortices respect categorical boundaries. Our findings do not allow the conclusion that the activity patterns induced in early auditory cortices by analogous real and implied sounds are similar. It should be kept in mind, however, that hearing a sound in the mind's ear is by no means the same as actually hearing a comparable sound. Given the difference between these two mental events, it is not surprising that their neural counterparts may be different as well.

There is growing evidence for an involvement of early sensory cortices in the conscious experience of sight and touch. For example, in perceptual illusions, activity in primary visual and somatosensory cortices has been shown to correspond more closely to the subjects' visual or haptic experience than to the actual sensory stimulation. In the auditory modality, however, such content-specific activity had not previously been demonstrated. Our findings suggest that, just as in the visual and somatosensory modalities, activity at the earliest stages of cortical auditory processing correlates specifically with the experience of sound reported by the subjects, rather than with the actual auditory environment alone, as the latter was entirely silent during the presentation of the video clips.

Figure 3 Classifier performance on categorical discriminations for visual, auditory and cross-modal classification. Bars represent average prediction performance for each of the three pair-wise categorical discriminations for visual (black bars), auditory (dark gray bars) and cross-modal (light gray bars) classification.

Note: Supplementary information is available on the Nature Neuroscience website.

ACKNOWLEDGMENTS
H.D. and A.D. are supported by the Mathers Foundation and the US National Institutes of Health (5P50NS019632-25).

AUTHOR CONTRIBUTIONS
K.M. conceived and designed the study with input from the other authors, conducted functional magnetic resonance imaging (fMRI), participated in data analysis and wrote the manuscript together with A.D. J.T.K. advised on study design, conducted fMRI, performed the data analysis and wrote part of the supplementary information. R.E. prepared stimuli, wrote the stimulus presentation script, conducted fMRI, traced anatomical masks, participated in data analysis and conducted literature review. C.W. prepared stimuli, traced anatomical masks and conducted literature review. H.D. supervised the design of the anatomical masks and provided conceptual advice. A.D. supervised the project and advised K.M. on the preparation of the manuscript. All authors discussed the results and their implications and commented on the study and manuscript preparation at all stages.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

Published online at http://www.nature.com/natureneuroscience/. Reprints and permissions information is available online at http://www.nature.com/reprintsandpermissions/.

1. Formisano, E., De Martino, F., Bonte, M. & Goebel, R. Science 322, 970–973 (2008).
2. Staeren, N., Renvall, H., De Martino, F., Goebel, R. & Formisano, E. Curr. Biol. 19, 498–502 (2009).
3. Macknik, S.L. & Haglund, M.M. Proc. Natl. Acad. Sci. USA 96, 15208–15210 (1999).
4. Watkins, S., Shams, L., Tanaka, S., Haynes, J.-D. & Rees, G. Neuroimage 31, 1247–1256 (2006).
5. Chen, L.M., Friedman, R.M. & Roe, A.W. Science 302, 881–885 (2003).
6. Blankenburg, F., Ruff, C.C., Deichmann, R., Rees, G. & Driver, J. PLoS Biol. 4, e69 (2006).
7. Kosslyn, S.M., Thompson, W.L., Kim, I.J. & Alpert, N.M. Nature 378, 496–498 (1995).
8. Slotnick, S.D., Thompson, W.L. & Kosslyn, S.M. Cereb. Cortex 15, 1570–1583 (2005).
9. Thirion, B. et al. Neuroimage 33, 1104–1116 (2006).
10. Serences, J.T., Ester, E.F., Vogel, E.K. & Awh, E. Psychol. Sci. 20, 207–214 (2009).
11. Harrison, S.A. & Tong, F. Nature 458, 632–635 (2009).
12. Zatorre, R.J. & Halpern, A.R. Neuron 47, 9–12 (2005).
13. Kraemer, D.J.M., Macrae, C.N., Green, A.E. & Kelley, W.M. Nature 434, 158 (2005).
14. Dierks, T. et al. Neuron 22, 615–621 (1999).
15. Calvert, G.A. et al. Science 276, 593–596 (1997).