Cognitive Neuropsychology, 2001, 18 (5), 411–437

DISSOCIATIONS AMONG FUNCTIONAL SUBSYSTEMS GOVERNING MELODY RECOGNITION AFTER RIGHT-HEMISPHERE DAMAGE
Willi R. Steinke and Lola L. Cuddy
Queen's University at Kingston, Canada

Lorna S. Jakobson
University of Manitoba, Winnipeg, Manitoba, Canada

This study describes an amateur musician, KB, who became amusic following a right-hemisphere stroke. A series of assessments conducted post-stroke revealed that KB functioned in the normal range for most verbal skills. However, compared with controls matched in age and music training, KB showed severe loss of pitch and rhythmic processing abilities. His ability to recognise and identify familiar instrumental melodies was also lost. Despite these deficits, KB performed remarkably well when asked to recognise and identify familiar song melodies presented without accompanying lyrics. This dissociation between the ability to recognise/identify song vs. instrumental melodies was replicated across different sets of musical materials, including newly learned melodies. Analyses of the acoustical and musical features of song and instrumental melodies discounted an explanation of the dissociation based on these features alone. Rather, the results suggest a functional dissociation resulting from a focal brain lesion. We propose that, in the case of song melodies, there remains sufficient activation in KB's melody analysis system to coactivate an intact representation of both associative information and the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist; thus recognition and identification do not occur.

INTRODUCTION
This study is concerned with the recognition of familiar melodies, both song melodies (melodies learned with lyrics) and instrumental melodies (melodies learned without lyrics). We report what we believe to be the first recorded instance of a dissociation between recognition of song versus instrumental melodies secondary to brain injury.

We present our observations on a 64-year-old amateur musician, KB, who suffered a right-hemisphere stroke, and on controls matched for age and level of music training. Despite evidence of impaired performance on a variety of musical tasks, KB displayed remarkably spared recognition of familiar song melodies even when these melodies were presented without accompanying lyrics.

Requests for reprints should be addressed to Lola L. Cuddy, Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada (Email: cuddyl@psyc.queensu.ca). We thank KB and his wife for their invaluable cooperation. We thank Dr I. Peretz for thoughtful and constructive comments on an earlier draft of this paper. Dr Sylvie Hébert and Dr Peretz provided ideas and insights that inspired the present discussion. We acknowledge detailed and helpful advice from two anonymous referees. Dr Jill Irwin and members of the Acoustical Laboratory at Queen's University provided support and assistance. The research was supported by a Medical Research Council of Canada Studentship to the first author, and by research grants from the Natural Sciences and Engineering Research Council of Canada to the second and third authors. © 2001 Psychology Press Ltd http://www.tandf.co.uk/journals/pp/02643294.html

DOI:10.1080/02643290042000198

The results address two interrelated issues in music neuropsychology. The first issue concerns the processes and functions governing melody recognition. Consideration of this issue leads to a second: the nature of the integration of melody and speech in the acquisition, retention, and recognition of songs. To address these issues, we draw upon research results from neurologically intact populations (for reviews, see Dowling & Harwood, 1986; Krumhansl, 1990, 1991) and reports of selective loss and sparing following brain damage (for reviews, see Marin & Perry, 1999; Patel & Peretz, 1997; Peretz, 1993).

Melody recognition
Peretz (1993, 1994) and others (e.g., Zatorre, 1984) have argued against a simple division of music and language each assigned to separate cerebral hemispheres. Rather, both music and language are themselves viewed as divisible into components which may or may not be shared. Division of music in the Western tonal system would include melody, with which this paper is primarily concerned, and also dynamics, grouping, harmony, timbre, and so forth. Within the present concern with melody, there are at least two further components. Musical melodies can be characterised by a particular set of pitch relations and temporal relations among their notes. It has been suggested that the processing of pitch relations and of temporal relations contributes separately to melody recognition (Peretz & Kolinsky, 1993), a suggestion supported by research with neurologically intact (e.g., Monahan & Carterette, 1985; Palmer & Krumhansl, 1987a, b) and compromised participants (e.g., Peretz & Kolinsky, 1993). Listeners may recognise melodies given pitch or temporal relations alone, although they are able to recognise far more melodies from pitch than from temporal relations (Hébert & Peretz, 1997; White, 1960). There may also be differences in the neural substrates for processing pitch and temporal relations, with the right hemisphere more strongly implicated for pitch, but the evidence to date is not conclusive (Marin & Perry, 1999).

The issue is complicated by indications that each source of information may influence the processing of the other (see, for example, Bigand, 1993; Boltz, 1989; Boltz & Jones, 1986). As pointed out by Kolinsky (1969), the same pattern of pitches can be perceived as two different melodies if the temporal pattern is changed. (Consider, for example, that the first seven pitches of Rock of Ages and Rudolph, the Red-Nosed Reindeer are identical.) Thus, the extent to which pitch and temporal effects are additive or interact with one another remains in question. What does appear to be increasingly clear is that the pitch processing involved in melody recognition is separable from verbal abilities (Peretz, 1993; Peretz, Belleville, & Fontaine, 1997; Polk & Kertesz, 1993) and other nonmusic cognitive abilities (Koh, Cuddy, & Jakobson, in press; Steinke, Cuddy, & Holden, 1997). The relation between pitch processing ability and music training is modest and does not account for the dissociation from cognitive abilities (Koh et al., in press; Steinke et al., 1997).

Song
The case of song processing poses a somewhat different problem, one that necessitates consideration of text in addition to pitch and temporal components. A song, by definition, consists of integrated melody and speech, or text. Yet how or where this integration is achieved is not known. Processing of song and memory for song have received relatively little attention in the neuropsychological literature.
Song is a universal form of auditory expression in which music and speech are intrinsically related. Song thus represents an ideal case for assessing the separability or commonality of music and language. ... In song memory, the musical and the linguistic component may be represented in some combined or common code. (Patel & Peretz, 1997, p. 206)

According to Peretz (1993), melody recognition involves the activation of a stored melody in a melody lexicon. The lexicon may be activated by either or both of two separate kinds of information, resulting from the analysis of pitch and temporal features, respectively.

In addition, lyrics and song titles may provide a third route through which song melodies (but not instrumental melodies) are recognised. Others have questioned whether the words and melodies of songs are processed and stored independently and have reported an integration effect (Crowder, Serafine, & Repp, 1990; Samson & Zatorre, 1991; Serafine, Crowder, & Repp, 1984). The experiments in these studies typically employed a forced-choice recognition memory paradigm. Listeners were asked to study and then to recognise melody, lyrics, or both under conditions where melody and lyrics either matched or did not match. Matches between melody and lyrics resulted in higher recognition rates than mismatches. Recognition of melody and lyrics was facilitated for the original song pairing the two components as opposed to a new song mixing equally familiar components. Similar results, using the same paradigm, have been obtained by Morrongiello and Roes (1990) and by Serafine, Davidson, Crowder, and Repp (1986). Peretz et al. (1994, p. 1298; see also Patel & Peretz, 1997) have pointed out that the use of two-alternative forced-choice paradigms may limit the generality of these findings. They suggest that listeners may have formed integrated representations of melody and lyrics in memory to facilitate recognition of novel songs when unfamiliar, interfering melodies and lyrics appeared among the response choices. In contrast, for the representation of well-known songs, a strategy of encoding melody and text separately may be more effective since, in everyday (i.e., nonexperimental) situations, a given melodic line will typically carry different lyrics or verses. Peretz et al. (1994) conclude that for familiar songs "encoding melody and lyrics independently would be parsimonious and efficient" (p. 1298). In related research, Besson, Faïta, Peretz, Bonnel, and Requin (1998) asked professional musicians to listen to well-known opera excerpts ending with semantically congruous or incongruous words sung either in or out of key. Event-related potentials recorded for the co-occurrence of semantic and harmonic (musical) violations were well predicted by an additive combination of the results recorded for each violation alone.

The authors argue for the view that semantic and harmonic violations are processed independently. On the other hand, Patel (1998) has reported event-related potential data supporting the sharing of linguistic and musical resources where syntactic relations are violated. These recent findings illustrate the complexity of the systems engaged in the acquisition and representation of song.
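This route-based account lends itself to a toy illustration. The short Python sketch below is purely our illustration, not a computational model from Peretz (1993) or from the present authors: the activation values and the recognition threshold are invented assumptions, chosen only to show how degraded pitch and temporal analysis could leave song melodies recognisable through a spared lyric route while instrumental melodies fail to reach threshold.

```python
# Toy sketch of the three-route melody lexicon account described above.
# All numbers are illustrative assumptions, not fitted parameters.
from dataclasses import dataclass

@dataclass
class Melody:
    title: str
    has_lyrics: bool  # learned as a song (True) or as an instrumental (False)

def lexicon_activation(melody: Melody, pitch: float,
                       temporal: float, lyric: float) -> float:
    """Summed activation reaching the stored melody's lexicon entry."""
    total = pitch + temporal
    if melody.has_lyrics:
        total += lyric  # third route, available only for song melodies
    return total

THRESHOLD = 1.0  # assumed recognition threshold

# Hypothetical route strengths: intact listener vs. lesioned pitch/temporal
# analysis with a spared lyric route (the pattern proposed for KB).
listeners = {"control": (0.8, 0.6, 0.7), "KB-like": (0.2, 0.1, 0.7)}
for name, (p, t, l) in listeners.items():
    for mel in (Melody("song melody", True), Melody("instrumental melody", False)):
        act = lexicon_activation(mel, p, t, l)
        verdict = "recognised" if act >= THRESHOLD else "not recognised"
        print(f"{name}: {mel.title} -> activation {act:.1f} ({verdict})")
```

On these assumed values the song melody, but not the instrumental melody, reaches threshold for the lesioned listener, which is the qualitative pattern the present case report documents.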

Purpose of the present report


In the present report, we describe the case of KB, an amateur musician who demonstrates a highly unusual type of amusia. This case provides an opportunity to address questions about the organisation of melody recognition and the relationship of verbal processing to nonverbal processing in song recognition. Following the case report of KB and a description of general procedures, we report a series of test findings in three experiments. In Experiment 1, we describe the results of tests of pitch perception, rhythm perception, rhythm production, and melody recognition abilities for KB and for a group of neurologically intact controls. Through this testing, we uncovered a remarkable and intriguing dissociation in KB. Although he showed profound deficits in the music tests, he was nonetheless able to recognise the melodies of well-known songs played without lyrics. The prerequisite for successful recognition was that KB had learned the melodies with lyrics premorbidly. Instrumental melodies readily recognised by controls could not, in contrast, be reliably recognised by KB. In Experiment 2, we report the results of additional tests designed to examine the sparing of song memory in greater detail. We replicated the difference in recognition of song versus instrumental melodies for KB. Analyses of the acoustical and musical features of both types of melody led us to discount an explanation of the difference on the basis of these features alone. Experiment 3 addresses the question of whether KB has retained the ability to learn new melodies. The results of this experiment indicated some residual ability to recognise novel melodies learned with lyrics.
CASE REPORT: KB
KB is a right-handed male with 13 years of formal education, including completion of Grade 10, Police College, and a number of additional training courses. He worked as a policeman, prison guard, and process server (of official court documents) until the time of his retirement at age 55. KB reported a moderately extensive background in music. He played trumpet and drums for approximately 3 years in high school. After high school, he sang both in a barbershop quartet and in a number of amateur operettas for about 10 years in total. KB reported limited ability to read music notation and stated that he typically learned his parts by ear. His spouse reported that KB often sang at home and had a collection of classical and popular records that he played regularly. She stated that he had a fine singing voice. Unfortunately, no recordings of KB's singing exist.

Neurological history

KB was admitted to hospital in July 1994, at age 64, complaining of left-sided paralysis and speech production difficulties. His speech impairment resolved after a few days but his left-sided weakness has persisted. KB underwent a series of CT scans, the first shortly after admission, and the others 6, 8, and 12 months later. These scans revealed evidence of focal damage in the right fronto-parietal area (see Figure 1), and to a lesser extent in the right cerebellum (both in the posterior-inferior aspect, and in the superior cerebellar peduncle-pontine region). A lacunar infarct was also noted in the right lenticular nucleus (G. Bearalot, personal communication, February, 2000). In addition to focal damage, diffuse brain atrophy consistent with age was noted, although it is highly unlikely that these latter changes could account for KB's amusia (B. Pearse, personal communication, April, 1996). KB's previous neurological history included a self-report of a mild stroke suffered in 1968 when he was 38 years of age. According to the patient and his wife, this event resulted in an isolated, transient memory impairment from which KB recovered completely. No objective diagnostic examinations were undertaken at the time of this event, and recent CT scans did not reveal the presence of an old infarct.

Initial neuropsychological assessment


During the seventh through tenth weeks after his stroke in 1994, KB's intellectual and emotional functioning was assessed by a staff psychologist with a standard neuropsychological test battery (see Table 1). Test results from the Wechsler Adult Intelligence Scale-Revised (WAIS-R) (Wechsler, 1981) revealed a Full Scale IQ in the normal range, but a significantly lowered Performance IQ score. In order to obtain an estimate of premorbid IQ, the North American Adult Reading Test (NAART) (Blair & Spreen, 1989) was administered. The consistency between estimated premorbid Verbal IQ and postmorbid Verbal IQ scores suggests that KB's stroke left his verbal abilities intact. Nonverbal abilities, in contrast, were impaired, as suggested by scores on the Performance subscale of the WAIS-R. KB's low score on Raven's Coloured Progressive Matrices (Raven, 1965), a nonverbal test used to measure general intellectual functioning, also indicated a decline in nonverbal abilities.
Table 1. Demographic data and initial neuropsychological assessment

Age (years) | 64
Sex | Male
Education (years) | 13
Wechsler Adult Intelligence Scale (Revised) | Full Scale IQ = 92; Verbal IQ = 103; Performance IQ = 80
Wechsler Memory Scale-Form 1 | Memory quotient = 109
Trail Making Test | <25th percentile
Raven's Coloured Progressive Matrices Test | <25th percentile
Rey-Osterrieth Complex Figure Test | <25th percentile
House-Tree-Person Test | <25th percentile
Wisconsin Card Sort | Indicative of RH stroke
Boston Diagnostic Aphasia Examination | No evidence of aphasia
North American Adult Reading Test (NAART) (estimated premorbid WAIS-R IQ) | Full Scale IQ = 107; Verbal IQ = 107; Performance IQ = 108
Identification of musical instruments | 13/17 correctly identified
Identification of nonspeech sounds | 28/29 correctly identified





Figure 1. Transverse CT scans obtained from KB 12 months post-stroke. Note that the right side of the brain is shown on the left in each image. The four images, labelled a–d, proceed inferiorly in 10 mm increments. The scans show focal damage in the right fronto-parietal lobe (see text for details).

On the Wechsler Memory Scale-Form 1 (Stone, Girdner, & Albrecht, 1946) KB also scored in the normal range, supporting clinical observation and KB's self-report that memory was unimpaired following the stroke. However, on a number of other tests considered sensitive to brain damage, described below, KB's performance was poor (below the 25th percentile). His low score on the Trail Making Test indicated difficulties in sequencing of symbols, and alternating between alphabetic and numeric symbols.

The Wisconsin Card Sort (Heaton, 1981) requires abstraction ability to determine correct sorting principles as well as mental flexibility to accommodate to shifting criterion sets; poor performance is associated with frontal lobe damage. Low scores on the Rey-Osterrieth Complex Figure Test (Rey, 1941) and the House-Tree-Person Test are frequently seen after right posterior damage. KB's performance on the House-Tree-Person Test, in particular, was marked by adoption of a piecemeal approach, perseverative tendencies, left-sided neglect, working over, and much detail. Signs of emotional lability and impulsivity were also noted. KB reported no history of hearing loss and no changes in auditory acuity post-stroke.
He showed intact ability to identify nonspeech, environmental sounds (compact disc recording Dureco 1150582; see Table 1). In addition, when tested with a recorded set of solo instrumental pieces he was able to identify 13 of 17 musical instruments correctly (including piano, harp, trumpet, French horn, tuba, piccolo, flute, oboe, clarinet, double bass, cello, viola, and violin). His misidentifications included labelling both a bassoon and a saxophone as an oboe, mistaking a guitar for a piano, and being unable to name a harpsichord. The results of the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass & Kaplan, 1983) revealed that KB did not meet the criteria for a diagnosis of aphasia. However, the BDAE did reveal impairments in speech prosody, singing, and rhythm reproduction. KB was judged to have an "obvious lack of melody of speech" in his spontaneous speech (which both the patient and his wife confirmed was not evident prior to his stroke). Deficiencies in singing and rhythm production were also noted in a subtest of the BDAE that required KB to sing a familiar melody of his own choosing and repeat a set of four rhythms tapped by the examiner. During the period of KB's 10-week, in-patient rehabilitation programme, all stroke patients on the rehabilitation unit of the hospital were being screened by the first author on tests designed to assess sensitivity to tonality in music. The initial set of tests carried out with KB revealed the presence of amusia. Prior to testing, KB had not complained of any musical deficits; nor did he voice any complaints after hearing and rating the melodies in the tests of tonal sensitivity. It was only after singing aloud that KB described himself as sounding "flat". Subsequently, however, KB reported that music no longer had meaning for him, and it was noted that he no longer listened to his record collection and chose not to attend scheduled music activities in his residential care facility.


GENERAL PROCEDURES

Controls

Ten males and 10 females, all right-handed, with no reported history of neurological disease or major hearing loss participated in Experiments 1 and 2. Participants' ages and musical backgrounds as singers approximated those of KB. They were recruited from local choirs and vocal ensembles. The ages of the control participants ranged from 59 to 71 years, with an average of 66 years. All were presently participating in singing activities, and most reported singing or playing musical instruments for much of their lives. With the exception of one participant who had been giving piano lessons for approximately 10 years, none derived any income from music activities. Because none of the participants made a living from playing or singing music, all were considered to be amateur musicians. Controls had a mean of 15 years of formal education (range 10–20 years). The Shipley Institute of Living Scale (Shipley, 1953) was used to obtain an estimate of overall intelligence. Using a conversion factor, the mean Shipley score for controls was estimated to equal a WAIS-R Full Scale IQ score of 119.9 (range 105–133).

Materials and methods


All music perception tests, with the exception of those in the University of Montreal Musical Test Battery (provided on cassette tape by I. Peretz), were constructed and recorded in the Acoustical Laboratory of the Department of Psychology, Queen's University. Musical tones were synthesised timbres, created by a Yamaha TX81Z synthesiser controlled by a Packard Bell computer running Finale music-processing software (Anderson, 1996). Two exceptions were the Probe-tone Melody test (Experiment 1) for which the synthesiser was controlled by a Zenith Z-248 computer running DX-Score software (Gross, 1981), and the Chord Progressions test (Experiment 1), for which chords were created on a Yamaha TX802 synthesiser controlled by an Atari 1040ST computer running Notator music-processing software (Lengeling, Adam, & Schupp, 1990). To provide variety for participants, different musical timbres were used for different tests. The names of all music tests, and the timbres used, are given in Table 2.


Table 2. Music tests used in Experiments 1, 2, and 3: Name and timbre

Name of test | Timbre
University of Montreal Musical Test Battery
Contour Discrimination test | Synthesised Piano
Scale Discrimination test | Synthesised Piano
Interval Discrimination test | Synthesised Piano
Rhythm Contour test | Synthesised Piano
Metric test | Synthesised Piano
Familiarity Decision test | Synthesised Piano
Incidental Memory test | Synthesised Piano
Tests of basic pitch perception and memory
Pitch Discrimination test | Grand piano (A1)^a
Pitch Memory test | New electro (A12)^a
Tests of tonality
Probe-tone Melody test | Wood piano (A15)^a
Familiar Melodies test | Pan floot (B12)^a
Novel Melodies test | Pan floot (B12)^a
Chord Progressions test | Sine-wave components^b
Tests of rhythm perception and production
Rhythm Same/Different test | Electro piano (A11)^a
Metronome Matching test^c | Metronome
Tests of melody recognition and learning
Incremental Melodies test | Electric grand (A5)^a
Famous Melodies test | New electro (A12)^a
Television Themes test | Grand piano (A1)^a
Mismatched Songs test | Voice (W. Steinke)
Learning test^c | Electro piano (A11)^a

a Name and factory preset designation of Yamaha TX81Z synthesizer.
b Produced by Yamaha TX802 synthesizer.
c Test not administered to control participants.

Stimuli were recorded onto Maxell XL II-S cassette tapes using a Panasonic RX-DT707 portable tape player. A Dixon DM1178 microphone was used for recording rhythmic reproductions (Experiment 1) and sung materials (Experiment 3). Test stimuli were reproduced by the tape player in free field at a comfortable loudness, as determined by each participant (about 55 to 70 dB SPL). KB was tested in several different locations: an interview room at the stroke rehabilitation unit of a local hospital, his bedroom at a skilled nursing facility, and his family residence. Test conditions were uniformly quiet across test locations, with no obtrusive background noises present. Test sessions lasted approximately 30 minutes each and were conducted over a period of 16 months, beginning in August 1994 when KB was 64 years old. KB completed the tests in Experiment 1 first, followed by the tests in Experiment 2 and Experiment 3.

All test instructions were read to KB, and all responses were recorded by the experimenter. KB was given as much time as needed to ask questions following practice trials (where applicable), to clarify instructions whenever necessary, and to complete tasks. It may be noted that KB was a cooperative and well-motivated participant throughout testing. KB displayed a good-natured willingness to continue listening and responding to test trials even when it was apparent to both KB and the experimenter that his sensitivity to test stimuli was severely compromised and that he had resorted to guessing. Controls were tested in a quiet room in the Psychology Department at Queen's University. They were most often tested individually, but occasionally in groups of two to three, depending on the particular experiment and scheduling demands. Test sessions lasted 2 to 3 hours each and were conducted over 6 months. Unless otherwise indicated, all 20 controls provided data for each of the tests described below. The order of music tests presented in Experiments 1 and 2 was counterbalanced across controls, with three restrictions. The first restriction was that the seven tests in the University of Montreal Musical Test Battery were always presented in the same order and during the same test session with no other tests intervening. This set of tests was considered as a single unit for purposes of counterbalancing. The second restriction concerned time constraints. When insufficient time remained in a test session to complete the next scheduled test, a shorter test was substituted. The third restriction concerned the number of controls present. When two or three controls were present during the same session, tests which required individual recording or taping of responses were not administered and other tests were substituted. Prior to testing, written consent was obtained from each control. Demographic data were collected first, followed by the Shipley tests (Shipley, 1953) and the music tests. Published test protocols were followed for administration of the Shipley tests.
Participants were given written instructions for the music tests, and were given as much time as needed to read the instructions, to become familiar with the response sheets (where applicable), to ask questions after completion of practice trials (where applicable), and to complete the tasks. No feedback was given following test trials, but instructions were clarified whenever necessary. Participants were verbally debriefed at the conclusion of the testing. In lieu of individual compensation, a donation was made to the church or organisation from which participants were recruited.
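For concreteness, the first counterbalancing restriction described above (the seven University of Montreal tests treated as a single unit) can be sketched in a few lines of Python. This is our reconstruction under stated assumptions, not the authors' scheduling procedure: orders are generated by simple random shuffling of whole units, and the time-constraint and group-size restrictions are not modelled.

```python
# Sketch: counterbalance test order across controls while keeping the
# University of Montreal (UM) battery together as one unmovable unit.
# Test names follow Table 2; the unit-shuffling scheme is an assumption.
import random

UM_BATTERY = ["Contour Discrimination", "Scale Discrimination",
              "Interval Discrimination", "Rhythm Contour", "Metric",
              "Familiarity Decision", "Incidental Memory"]
OTHER_TESTS = ["Pitch Discrimination", "Pitch Memory", "Probe-tone Melody",
               "Familiar Melodies", "Novel Melodies", "Chord Progressions",
               "Rhythm Same/Different", "Incremental Melodies",
               "Famous Melodies", "Television Themes"]

def test_order(seed: int) -> list[str]:
    rng = random.Random(seed)
    units = OTHER_TESTS + [UM_BATTERY]  # UM battery counted as a single unit
    rng.shuffle(units)
    order: list[str] = []
    for unit in units:
        if isinstance(unit, list):
            order.extend(unit)  # battery stays internally ordered
        else:
            order.append(unit)
    return order

# One independently shuffled order per control participant.
orders = [test_order(seed=p) for p in range(20)]
print(orders[0])
```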



EXPERIMENT 1: TESTS OF BASIC MUSICAL ABILITIES


Experiment 1 administered basic tests of auditory and musical skills to KB and controls. Pitch perception was assessed in different ways, reflecting different subcomponents of musical pitch processing. Pitch tests measured simple pitch discrimination, pitch memory, sensitivity to changes in pitch contour of a novel melody (changes to the sequences of ups and downs in pitch), and sensitivity to scales, intervals, and tonality, part of the grammar of music. Tests of rhythm perception and production assessed sensitivity to discrimination of temporal alterations to auditory sequences, sensitivity to meter (or periodic accenting) of a sequence, and motor control of tempo, or rate of events. Finally, participants' ability to recognise well-known melodies, and their capacity for incidental learning of novel melodies, was assessed.
Tests

The 17 tests administered are listed in Table 3 with the source and a brief description of each test. Detailed descriptions of 12 tests can be found in two published reports (Liégeois-Chauvel, Peretz, Babaï, Laguitton, & Chauvel, 1998; Steinke et al., 1997). Detailed procedures for five tests developed for the present experiment may be found in Appendix A.

Table 3. Title, source, and description of tests used in Experiment 1

Title | Source | Description
Tests of pitch discrimination
Pitch Discrimination | New^a | Discriminate pitch height of two tones
Pitch Memory | S,C,&H^b | Rate a test tone as present/not present in a preceding sequence of tones
Contour Discrimination | UM^c | Detect within-scale contour alteration
Scale Discrimination | UM | Detect outside-scale interval alteration
Interval Discrimination | UM | Detect within-scale interval alteration
Tests of tonality
Probe-tone Melody | S,C,&H | Rate 12 chromatic test tones on degree of fit with preceding tonal melody
Familiar Melodies | S,C,&H | Rate endings of familiar song melodies varying in level of tonality
Novel Melodies | S,C,&H | Rate endings of novel melodies varying in level of tonality
Chord Progressions | S,C,&H | Rate degree of expectancy in chord progressions varying in level of tonality
Tests of rhythm perception
Rhythm Same-Different | New | Discriminate standard from same or altered comparison rhythm
Rhythm Contour | UM | Detect alterations to duration of a single note in a novel melody
Metric | UM | Classify novel melodies as march or waltz
Tests of rhythm reproduction
Metronome Matching | New | Tap in time with beats produced by metronome
Rhythm Tap-Back | New | Repeat rhythms tapped by examiner
Tests of melody recognition
Familiarity Decision | UM | Classify melodies as novel or familiar
Incremental Melodies | New | Identify well-known melodies after presentation of 2 notes, 3 notes, 4 notes, etc.
Incidental Memory | UM | Recognition of novel melodies previously heard in UM test battery

a Test constructed for present study.
b Test from Steinke, Cuddy, and Holden (1997).
c Test from University of Montreal Musical Test Battery (Liégeois-Chauvel et al., 1998).

Results
Results of the tests for KB and the present controls are summarised in Table 4. Where available, data from controls from two previous studies are included. Controls from Steinke et al. (1997) were 22 participants who, though younger than KB (age range 20–40), were also amateur musicians. Controls from Liégeois-Chauvel et al. (1998) were 24 participants (mean age 32 years) for whom the age range (14–72 years) included the age of KB and the present controls. These participants, however, had less musical experience; only two participants reported a few years of music training. Despite these differences, the controls from the present and previous studies yielded similar test results, with remarkably similar within-group ranks for mean accuracy as a function of test type.


Table 4 reveals that KB performed poorly on or failed to complete most tests of musical pitch perception. The Pitch Discrimination test was the only such test where KB was moderately successful, although on average his score fell below the range of scores obtained by the controls. Further inspection of the data revealed that, whereas controls did equally well across the octave range tested, KB improved across the octave range. For KB, accuracy in the range E1 to E3 was 37.5%, but if one tone of the pair was above E3, and the other below, accuracy rose to 60%. If both tones were above E3, accuracy was 82.3%, a score well above chance and within the range of scores obtained by the controls.1

Table 4. Results of tests of pitch discrimination and memory, tonality, rhythm perception and reproduction, and melody recognition (Experiment 1)

Score^a
Test | KB | Controls | Other controls
Tests of discrimination and memory
Pitch Discrimination | 67.3 | 95.6 (75.0–100) | Not administered
Pitch Memory | x | 70.8 (63.9–81.9) | 76.0 (56.9–84.7)^b
Contour Discrimination | 43.3^d | 84.8 (73.3–100) | 91.0 (70.0–100)^c
Scale Discrimination | 50.0^d | 85.3 (70.0–100) | 94.6 (86.7–100)^c
Interval Discrimination | 50.0^d | 82.3 (70.0–100) | 90.6 (70.0–100)^c
Tests of tonality
Probe-tone Melody | x | .66 (.13–.93) | .74 (.25–.90)^b
Familiar Melodies | x | .74 (.55–.94) | .78 (.36–.88)^b
Novel Melodies | x | .79 (.20–.91) | .81 (.69–.92)^b
Chord Progressions | x | .58 (.23–.81) | .64 (.11–.77)^b
Tests of rhythm perception
Rhythm Same/Different | 73.3 | 96.5 (86.7–100) | Not administered
Rhythm Contour | 43.3^d | 94.2 (80.0–100) | 92.2 (76.7–100)^c
Metric | 63.3^d | 85.6 (53.3–100) | 82.2 (66.7–96.7)^c
Tests of rhythm reproduction
Metronome Matching | 0.0 | Not administered | Not administered
Rhythm Tap-Back | 16.6 | 93.3 (83–100) | Not administered
Tests of melody recognition
Familiarity Decision test | 80.0 | 96.0 (80–100) | 98.0 (90–100)^c
Incremental Melodies test | 4 (2–5) | 4 (2–9) | Not administered
Incidental Memory test | 53.3^d | 83.3 (56.6–100) | 88.3 (73.3–100)^c

a All scores are percentage correct (KB) and mean percentage correct (controls), except scores on tests of tonality, which are mean Spearman rank-order correlations of participant ratings with music-theoretic predicted levels of tonality, and scores on the Incremental Melodies test, which are median number of notes needed for identification. Ranges are presented in parentheses. x means KB admitted to guessing and failed to complete test.
b Data from Steinke, Cuddy, and Holden (1997).
c Data from Liégeois-Chauvel et al. (1998).
d Score is not significantly different from chance.
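The tonality entries in Table 4 are Spearman rank-order correlations between a participant's ratings and the music-theoretic predicted levels of tonality (footnote a). A minimal sketch of that scoring step, assuming scipy is available, is shown below; both vectors are fabricated placeholders for illustration, not data or predictions from the study.

```python
# Sketch of the tonality scoring described in Table 4, footnote a:
# Spearman correlation of probe-tone ratings with predicted tonality.
from scipy.stats import spearmanr

# Hypothetical predicted tonality levels for the 12 chromatic probe tones
# (higher = more tonally stable) -- placeholder ranks, not the published ones.
predicted = [12, 2, 7, 3, 10, 9, 1, 11, 4, 8, 5, 6]
# Hypothetical listener ratings of the same 12 probe tones on a 1-7 scale.
ratings = [7, 2, 4, 2, 6, 5, 1, 7, 3, 5, 2, 3]

rho, p = spearmanr(predicted, ratings)
print(f"tonality score (Spearman rho) = {rho:.2f}")
```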

Limited sparing was also noted for the Rhythm Same/Different test, which assessed the ability to discriminate rhythmic changes. KB's score on the Rhythm Same/Different test was above chance, although well below the range of control scores. Although he failed to show incidental memory for melodies he had been exposed to during the assessment (the Incidental Memory test), KB performed surprisingly well on two additional tests. In the first, the Familiarity Decision test, participants were asked to classify melodies as familiar or unfamiliar. Familiar melodies included both well-known song and instrumental melodies; unfamiliar melodies were novel. On the Familiarity Decision test, KB stated that 7 of the 10 well-known melodies were familiar, and 9 of the 10 novel melodies were unfamiliar. On average, controls judged 97% of the well-known melodies as familiar and 95% of the novel melodies as unfamiliar. This level of performance from KB was quite remarkable, given his loss of pitch and rhythm processing. Even more remarkable, the results of the Familiarity Decision test suggested that KB was able to recognise the melody lines of well-known songs, but not well-known instrumental pieces. KB recognised all five of the well-known songs in the experimental set on the basis of their melodies alone. He also recognised two instrumental pieces, the Wedding March and the William Tell Overture, but volunteered that he had at one time learned lyrics for these melodies. He failed to recognise the opening theme from Beethoven's Fifth Symphony and Ravel's Boléro (both of which were highly familiar to age-matched controls), and recognised the Blue Danube Waltz simply as a waltz. The second test of melody recognition on which KB performed well was the Incremental Melodies test. KB and controls correctly identified all 10 song melodies, requiring a median number of only 4 notes to do so (range for KB, 3–5 notes; range for controls, 2–9 notes).
1 By American convention, the numeral following the note name refers to octave location. C4 is middle C (262 Hz), C3 is the octave below C4, C5 the octave above, and so forth.

Discussion
The results of this initial set of tests were consistent with those of previous studies (e.g., Peretz, 1994) in which neurological damage has been shown to result in loss or impairment of (previously intact) abilities to discriminate pitch and rhythm, process tonal relationships, and recognise familiar melodies. KB's pitch processing abilities were shown to be somewhat more impaired than those of two of the three amusic patients described in detail by Peretz and colleagues (CN and GL) and comparable to those of a third, IR (e.g., Peretz, 1996; Peretz et al., 1997; Peretz et al., 1994). Most pertinently, CN and GL demonstrated marked dissociations between processing of pitch and rhythm (intact) and processing of tonality (impaired). KB and IR, in contrast, were impaired on both pitch and rhythm tasks, yet KB was able to recognise and identify familiar melodies, while IR was above chance when categorising familiar and unfamiliar melodies. In the present study, controls matched for age and music experience were able to complete all of the tests successfully, though they did not score as high on these tests as younger controls from previous experiments. Nevertheless their scores were consistently higher than KB's, a result indicating that ageing alone cannot account for the losses displayed by KB. In contrast to his verbal skills, which appeared to be largely intact, KB experienced problems with pitch processing and with the perception and reproduction of rhythms. It is likely that these profound problems in basic music processing contributed to KB's failure to show incidental learning of novel melodies. In all, the testing suggested a limited sparing in only two areas: first, in the ability to distinguish two notes on the basis of pitch height, and second, in the ability to discriminate two simple rhythmic patterns presented without melody. What is intriguing is the observation that, despite his profound loss of pitch processing abilities and sense of tonality, and his obvious rhythmic perception and reproduction problems, KB demonstrated a preserved ability to recognise and name 7 of the 10 well-known melodies included in the Familiarity Decision test.



Moreover, the results of the Incremental Melodies test suggested that in KB, as in controls, recognition of familiar song melodies was achieved almost immediately, after the first few notes had been presented. The possibility that this surprising preservation of melodic recognition ability in KB might apply only to song melodies and not to instrumental melodies motivated the next set of tests.


EXPERIMENT 2: MELODY RECOGNITION AND IDENTIFICATION


Experiment 2 reports the results of three tests of melody recognition and identification. Recognition is defined as the judgement that an item or event (in this case a melody) has been previously encountered (Mandler, 1980). Identification, in addition, requires correctly naming the title or lyrics of the melody. The Famous Melodies test sampled song and instrumental melodies from popular, folk, and classical genres. The Television Themes test sampled song and instrumental melodies from theme music drawn from television programmes. The Mismatched Songs test explored recognition and identification of familiar song melodies when the melodies were accompanied by mismatched lyrics.

Famous Melodies test


A number of studies support the widely held notion that adults of all ages have a remarkable capacity to remember familiar songs (Bartlett & Snelus, 1980; Halpern, 1984; Hébert & Peretz, 1997), famous and obscure classical instrumental themes (Java, Kaminska, & Gardiner, 1995), and television themes (Maylor, 1991). Two previous studies, both with university students, have contrasted recognition memory for song and instrumental melodies. Both showed slightly higher recognition rates for song over instrumental melodies (Gardiner, Kaminska, Java, Clarke, & Mayer, 1990; Peretz, Babaï, Lussier, Hébert, & Gagnon, 1995). The first goal of the Famous Melodies test was to document in KB selective sparing of song melody recognition abilities. The second goal was to compare recognition for familiar song versus instrumental melodies within the same sample of older adults.

Materials and methods

The Famous Melodies test consisted of a set of 39 instrumental and 68 song melodies selected to be familiar to a listener of KB's age and cultural background. Song melodies were defined as those originally written with lyrics, most commonly heard as songs on radio or recordings, and most commonly sung to lyrics by amateur singers such as KB and the controls. Instrumental melodies were not associated with lyrics and were defined as melodies originally written as instrumental melodies, most commonly heard as instrumental melodies on radio or recordings, and most commonly played as instrumental melodies by amateur musicians. The melodies were drawn from several sources, including World-famous piano pieces (1943), Jumbo: The children's book (Johns, 1980), The book of world-famous music: Classical, popular, and folk (Fuld, 1995), and the Reader's Digest treasury of popular songs that will live forever (Simon, 1982). Also included in the test were eight novel melodies, composed by the first author in the style of the familiar melodies. Each melody was limited to the opening phrase and presented as a monophonic melody line, with rhythm, original key, and original tempo left intact. The overall range for song melodies was G3 to A4, and for instrumental melodies was F#3 to A4. The most frequent notes for both types of melody were C4 and D4. Melodies ranged from 7 to 35 s in duration. Notes were edited to sound equally loud. The set of 115 song, instrumental, and novel melodies was presented in a single random order to all participants, in the context of a larger set of melodies used for another experiment. After hearing each melody, KB and controls were asked to indicate whether they recognised it or not.

If the melody was recognised, they were then asked to identify the melody by stating lyrics, title, or any other identifying information. When a melody was recognised, controls were asked to rate the degree of familiarity they had with the melody on a scale of 1–10. A "1" indicated very little familiarity, while a "10" indicated that the melody was highly familiar to the participant. KB was not asked to rate degree of familiarity because pilot testing revealed he was unable to discriminate levels of familiarity. A 3 s pause was inserted between melodies on the tape, but participants were instructed to pause the tape for a longer period between trials if needed. All responses were recorded by the first author.

Results and discussion

Results are shown in Figure 2. For controls, melody recognition was very high overall for both sets of melodies. They recognised slightly but significantly more song melodies than instrumental melodies, t(19) = 5.22, p < .001, and identified more song melodies than instrumental melodies, t(19) = 22.20, p < .001.

Both sets of melodies were judged to be highly familiar, with the average familiarity rating for song melodies higher than that of instrumental melodies: 9.4 vs. 8.7 on the 10-point scale, t(19) = 4.16, p < .001. False recognition of novel melodies occurred on 25% of trials, and false identification occurred only once. Novel melodies were assigned an average familiarity rating of 4.8 on the 10-point scale. Similar to controls, KB's recognition and identification of song melodies was very high (with 88% of song melodies being recognised, and 75% being correctly identified). In marked contrast to controls, however, he was able to recognise only 7 (18%) of the 39 instrumental melodies as familiar. Four of these could not subsequently be identified. Interestingly, KB reported having previously learned comic lyrics to the remaining three instrumental melodies, and was able to supply these lyrics. KB did not recognise any of the novel melodies. Results from this test lend support to our earlier impression that there exists in KB a dissociation between the ability to recognise melodies learned with lyrics (i.e., songs) and those learned as instrumental pieces.

Figure 2. Mean percentage song (N = 68), instrumental (N = 39), and novel melodies (N = 8) recognised and identified (+SE) by controls (N = 20) and KB on the Famous Melodies test. * Indicates KB obtained a score of 0%.

Television Themes test


The goal of the Television Themes test was to replicate the results of the Famous Melodies test with a different source of musical materials of a similar musical style.


Materials and methods The test sampled theme music from serialised television programmes, drawn from The TV Fake Book (Leonard, 1995). Themes were shortened and edited to a monophonic melody line, in a manner identical to that described above for the Famous Melodies test, and were recorded on tape. A pilot set of themes was tested with five controls. The 81 themes included in the pilot set were originally written as either songs (N = 36) or instrumentals (N = 45) and were typically heard only during viewing of the television programmes. The five controls were asked to indicate whether they were familiar with or had previously heard each theme. Themes not recognised by at least three of the five controls were discarded from the final experimental set. The final set included 24 vocal themes and 21 instrumental themes. The overall range for both vocal themes and instrumental themes was F3 to G4, with C4 the most frequent note. The task for KB and the remaining 15 controls was to indicate whether each melody was recognised or not, and, if recognised, to identify the melody by stating the title, lyrics, or any identifying information that came to mind. Participants were not told in advance that the set of melodies consisted of themes from television programmes.

Figure 3. Mean percentage song (N = 24) and instrumental (N = 21) themes recognised and identified (+SE) by controls (N = 20) and KB on the Television Themes test. * Indicates KB obtained a score of 0%.

Results and discussion

As shown in Figure 3, performance scores were generally lower for the Television Themes test than for the Famous Melodies test. In a previous report by Maylor (1991), elderly participants recognised between 68 and 92% of a pool of television themes sometimes or regularly watched (not categorised as song and instrumental themes).

Maylor's participants were able to identify as many as 50.4% of melodies from recent television programmes that were regularly watched. The lower scores obtained from the present controls might be attributable, in part, to the fact that controls reported having watched only 63% of the television shows on average (range 35–98%). However, KB and his spouse reported that he had seen all the programmes represented in the test. Nevertheless, an important result of the Famous Melodies test was replicated: KB showed marked impairment in his ability to recognise instrumental themes (none of the controls recognised as few instrumental themes as KB) while his recognition of song themes, in contrast, was shown to be relatively preserved. As in the Famous Melodies test, controls in the present study recognised significantly more song themes than instrumental themes (73% vs. 55%), t(19) = 6.74, p < .001. Correct identifications were also more common for song themes than for instrumental themes (18% vs. 9%), t(19) = 4.93, p < .001. KB's ability to recognise song themes was markedly superior to his ability to recognise instrumental themes (46% vs. 5%). In addition, although he correctly identified 21% of the song themes, he was unable to identify the one instrumental theme that he had recognised (the theme from Bonanza).
Mismatched Songs test


The results of the Famous Melodies test and the Television Themes test together lend support to the notion that songs are processed and/or stored differently from instrumental melodies. One suggestion that has been made is that the words and melodies of songs are processed and stored in a partially or fully integrated form (Crowder et al., 1990; Samson & Zatorre, 1991; Serafine et al., 1984). If lyrics and melodies are stored in an integrated form, then one might predict that a task involving recognition of melody in the presence of mismatched lyrics should prove difficult for both KB and controls. This prediction motivated the next test. The main question examined in the Mismatched Songs test was whether KB and controls could recognise song melodies when they were sung, not to their original lyrics, but to either a set of novel lyrics or to the lyrics of other, highly familiar songs.


Materials and methods

For this test, a set of songs was developed consisting of: (a) novel lyrics set to familiar melodies (N = 5); (b) lyrics of familiar songs set to novel melodies (N = 5); or (c) familiar lyrics set to familiar-but-mismatched melodies (e.g., My Bonnie sung to the tune of Swing Low) (N = 6). Novel melodies and lyrics were composed by the first author with the intent to preserve the style of the familiar songs. The overall range of the songs was G3 to C5, the author's comfortable singing range. Twenty-one different familiar songs were used. Eleven contributed melody; 10 contributed lyrics; and 1 song contributed both. Two of the songs that contributed lyrics had been found to be familiar to KB and controls in pilot testing. The remaining 19 songs were drawn from the set of 68 songs in the Famous Melodies test according to the following criteria: First, they had all been recognised by KB in that test; second, they were all amenable to mismatching of melody and lyrics; and third, they had all been recognised and rated as highly familiar by the majority of controls. (Fifteen of the melodies had been recognised by all 20 controls, and the remaining four were recognised by 19 controls.

The average familiarity rating was 9.71, with 10 melodies receiving the maximum familiarity rating.) In the interest of time, the familiar songs used in this test were not presented to participants in their original versions. It was evident that recognising the melodies presented with their original lyrics would be a trivial task for all participants, including KB, who had previously demonstrated recognition of the song melodies and had correctly provided the lyrics. Only the opening melodic and lyric phrases were used. All phrases were sung by the first author and recorded on tape in a single random order. Participants were instructed to listen carefully to both melody and lyrics, to state whether the melody was recognised or not, to state whether the lyrics were recognised or not, and to identify melody and lyrics if recognised. Each trial was played once, and presentation was self-paced. Responses were scored for the correct detection of the mismatch and the correct specification of the type of mismatch (e.g., familiar lyrics set to an unfamiliar tune). Following the test, controls were shown a list of the titles for songs used and asked to verify their familiarity with each one. In addition, they were asked to describe the strategies employed to recognise the melody and lyrics.

Results and discussion

Table 5 provides percentage detection of mismatch and percentage recognition for lyrics and melodies, presented separately, for KB and controls. In all three conditions, controls were able to detect the presence of a mismatch on virtually every trial: i.e., there was no effect of Type of Mismatch on detection performance, F(2, 57) = 0.62, MSE = 33.37, p = .54, yielding an overall detection accuracy of 98.1% (a value not significantly different from 100%, χ2(59) = 21.58). Controls accurately recognised familiar melodies and familiar lyrics, and rejected novel melodies and novel lyrics. Thus, and most important, controls were able to specify the precise nature of the mismatch in all three conditions.

Table 5. Results of the Mismatched Songs test

Type of mismatch | Mismatch detected (%): KB / Controls | Lyrics recognised (%): KB / Controls | Melody recognised (%): KB / Controls
Novel lyrics with familiar melody (N = 5) | 0 / 99.0 (80–100) | 0 / 3.0 (0–20) | 0 / 99.0 (80–100)
Familiar lyrics with novel melody (N = 5) | 100 / 97.0 (80–100) | 100 / 100 | 0 / 12.0 (0–60)
Familiar lyrics with familiar-but-mismatched melody (N = 6) | 100 / 98.3 (83–100) | 100 / 99.2 (83–100) | 0 / 97.5 (83–100)

Control participant scores are means. Ranges are presented in parentheses.

For controls, recognition of familiar melodies was not impaired by the presence of mismatched lyrics. Identification, however, was affected. The identification rate for the 11 familiar melodies in the Famous Melodies test (when no lyrics were presented) was 86.4%. When familiar melodies in the present test were sung with novel lyrics, identification dropped to 71%, paired-sample t(19) = 2.42, p = .025; when they were sung with the lyrics from other, familiar songs, identification dropped to 74%, paired-sample t = 2.13, p < .05. Each control reported trying different strategies over the course of the 16 trials. Two strategies were reported by all. The first strategy was to attend to both lyrics and music at the same time as the song was unfolding. The second strategy was to pay attention to either words or music, make a decision as to familiarity, and then switch to the other. The third strategy, reported by five controls, was to make a deliberate decision on the familiarity of the lyrics first, and then, upon completion of the song, replay the melody in their minds before making a decision regarding its familiarity and identity. KB, like the controls, was able to distinguish between novel and familiar lyrics and could also tell that familiar lyrics were not accompanied by their original melodies. On these two judgements he obtained 100% accuracy. However, unlike the controls, KB displayed a total loss of recognition of the familiar melodies that he had previously recognised in the Famous Melodies test. Thus he was unable to determine whether familiar lyrics were accompanied by a novel or familiar melody, and was unable to recognise familiar melodies sung to novel lyrics. The presence of competing lyrics interfered with KB's previously demonstrated ability to recognise song melodies.

Summary
The results from this series of tests indicate that, relative to age-matched controls, KB demonstrates a dissociation between the preserved ability to recognise and identify song melodies and the inability to recognise and identify instrumental melodies. In the first two tests, song recognition was shown to be relatively well preserved, while instrumental melody recognition was severely impaired. The third test, however, revealed some problems with KB's song recognition: relative to controls, he experienced difficulty recognising familiar song melodies in the presence of competing lyrics. Because he was able to detect instances in which familiar lyrics were accompanied by the wrong melody, we may infer that some melodic information was being processed. It is also possible that his ability to detect such mismatches was aided by the presence of temporal differences and stress differences imposed on lyric syllables when sung to different melodies. However, his inability to specify whether the melody heard on those trials was novel or familiar suggests that his melody recognition was seriously impaired under these conditions.

POST-HOC ANALYSIS OF FAMILIAR MELODIES


One possible explanation for KB's differential performance with song and instrumental melodies is that the melodies differed in musical characteristics (e.g., in the number and type of musical intervals, which might, in the case of songs, be limited by the

range of the human voice). To examine this possibility, a number of post-hoc comparisons were carried out on the materials presented in the Famous Melodies test.

Measures of musical characteristics


Pitch measures for each melody included the number of note onsets, the rate of presentation of notes, the range in semitones between the highest and lowest notes, the average size in semitones of the interval between successive notes, and the ratio of the number of direction changes to the number of note onsets. Direction changes were defined as pitch reversals, i.e., the change from an upward frequency movement to a downward frequency movement and vice versa. The total number of direction changes was tallied and divided by the total number of note onsets. The resultant value may be interpreted as a measure of contour complexity that controls for differences in total numbers of note onsets: values closer to 1.0 indicate more frequent pitch reversals, and values closer to 0.0 indicate less frequent pitch reversals.

Rhythm and meter measures for each melody included the total number of different note durations, the percentage of notes accounted for by the two most frequent durations, and the meter, duple or triple.

The tonal strength of the pitch distributions for each melody was determined by a key-finding algorithm (Krumhansl & Schmuckler, cited in Krumhansl, 1990). Correlations were obtained between the distribution of pitches in each sequence and the standardised tonal hierarchy for each of the 24 major and minor keys. The standardised tonal hierarchies for C major and C minor are reported in Krumhansl and Kessler (1982), and the set of probe-tone values is given in Krumhansl (1990, p. 30). Values for each of the other keys were obtained by orienting the set to each of the different tonic notes. For each sequence the highest correlation so obtained was selected to represent the tonal strength of the distribution.
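For concreteness, the two computational measures just described can be sketched in a few lines of code. The sketch below assumes melodies coded as lists of MIDI note numbers with durations in beats; the major and minor profile values are those commonly cited from Krumhansl (1990) and should be checked against that source. This is an illustration of the measures, not the authors' original software.

```python
import numpy as np

# Probe-tone profiles for C major and C minor as commonly cited from
# Krumhansl (1990); verify against the source before serious use.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def contour_complexity(midi_notes):
    """Ratio of pitch-direction reversals to note onsets (near 0 = few,
    near 1 = many). Repeated pitches are skipped here; the authors'
    handling of ties is not specified."""
    directions = [np.sign(b - a) for a, b in zip(midi_notes, midi_notes[1:])
                  if b != a]
    reversals = sum(1 for d1, d2 in zip(directions, directions[1:]) if d1 != d2)
    return reversals / len(midi_notes)

def tonal_strength(midi_notes, durations):
    """Key-finding in the Krumhansl-Schmuckler style: the highest correlation
    between the duration-weighted pitch-class distribution and the 24
    transposed major/minor profiles."""
    dist = np.zeros(12)
    for note, dur in zip(midi_notes, durations):
        dist[note % 12] += dur
    best = -1.0
    for profile in (MAJOR, MINOR):
        for tonic in range(12):
            # np.roll orients the C profile to each of the 12 tonics.
            best = max(best, np.corrcoef(dist, np.roll(profile, tonic))[0, 1])
    return best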


Analyses of melodic expectancy characteristics determined the extent to which melodies conformed to certain bottom-up principles outlined in Narmour's (1990) Implication-Realisation theory of melodic expectancy. The principles, as outlined and coded by Krumhansl (1995), were Registral Direction, Intervallic Difference, Proximity, Registral Return, and Closure. An algorithm based on the Krumhansl coding (Russo & Cuddy, 1996) was used to obtain fulfillment scores on each principle for each melody. The algorithm computed, for each successive interval beyond the first, whether or not the interval fulfilled the expectancy created by the previous interval according to the specified principle. The fulfillment score for each principle was the ratio of the number of fulfilled intervals to the total number of successive intervals (excluding the first). Scores of 0 indicated no conformance to the principle and scores of 1 indicated complete conformance.
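The fulfillment-score computation has the following general shape. The proximity predicate below is a deliberately simplified stand-in (a realised interval of five semitones or fewer counts as proximate); the actual coding of the five principles follows Krumhansl (1995) and is more elaborate.

```python
def fulfillment_score(midi_notes, fulfills):
    """Fraction of intervals beyond the first that fulfil the expectancy
    set up by the preceding (implicative) interval."""
    intervals = [b - a for a, b in zip(midi_notes, midi_notes[1:])]
    pairs = list(zip(intervals, intervals[1:]))  # (implicative, realised)
    if not pairs:
        return 0.0
    return sum(fulfills(imp, real) for imp, real in pairs) / len(pairs)

# Illustrative stand-in for the Proximity principle only; not
# Krumhansl's (1995) actual coding.
proximity = lambda implicative, realised: abs(realised) <= 5

score = fulfillment_score([60, 62, 64, 65, 72, 71], proximity)  # 0.75
```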

Results
Results of the analyses of musical characteristics are shown in Table 6. Two-sample t-tests were applied to all comparisons between song and instrumental melodies, with the exception of the metric measure; because this measure involved a frequency count, a chi-square test was applied instead.

Table 6. Musical characteristics of the Famous Melodies test: Pitch, rhythm and meter, tonal strength, and melodic expectancy

                                            Song melodies    Instrumental melodies
                                            Mean (SD)        Mean (SD)
Pitch
  Number of note onsets*                    41.2 (16.23)     50.4 (22.8)
  Presentation rate (notes/s)***            2.3 (0.55)       3.2 (1.65)
  Range (semitones)***                      12.5 (2.76)      16.5 (5.01)
  Interval size (semitones)**               2.4 (0.52)       2.8 (0.99)
  Ratio of number of direction changes
    to number of notes                      0.43 (0.14)      0.44 (0.11)
Rhythm and meter
  Number of different note durations*       4.9 (1.39)       4.3 (1.70)
  Percentage of notes accounted for by
    two most frequent durations*            79.6 (11.2)      84.3 (15.3)
  Melodies in duple meter (total)           44               20
  Melodies in triple meter (total)          24               19
Tonal strength                              0.77 (0.14)      0.72 (0.15)
Melodic expectancy
  Registral direction                       0.41 (0.13)      0.43 (0.12)
  Intervallic difference                    0.77 (0.11)      0.76 (0.11)
  Registral return                          0.44 (0.13)      0.42 (0.16)
  Proximity                                 0.62 (0.13)      0.55 (0.20)
  Closure                                   0.58 (0.12)      0.57 (0.11)

*p < .05; **p < .01; ***p < .001.


The results indicate that song and instrumental melodies did not differ on the musical characteristics thought to reflect pitch structure and to influence melodic organization (tonal strength and melodic expectancy). Nor did they differ in contour complexity or in the distribution of duple and triple meter. Instrumental melodies, however, were found to differ from song melodies on six other musical characteristics: instrumental melodies were slightly longer, faster in presentation rate, larger in pitch range and average interval size, and less complex on two rhythmic measures. The question therefore arises whether these characteristics might be responsible for KB's poorer recognition of instrumental versus song melodies.

To address this question we categorised the melodies into subsets, two for each of the six characteristics. For one subset, the range of measures was below the median for that characteristic; for the other, the range was above the median. Extreme values were dropped so that, for each subset, the mean for the song melodies did not differ significantly from the mean for the instrumental melodies. Means and standard deviations for the musical characteristics for each subset are given in the first two columns of Table 7. Next we calculated, for each subset, KB's recognition score for the song melodies and the instrumental melodies. These scores are given in the third and fourth columns of Table 7, which report the ratio of the number of melodies recognised to the number of melodies retained in the subset. It can be seen that for every subset the recognition score is higher for song than for instrumental melodies. All differences were significant (by tests of proportions) beyond the .0002 level. Thus, for all six musical characteristics, the difference in recognition between song and instrumental melodies held for subsets of melodies selected to be statistically equivalent. There is therefore no support for an account of KB's difficulty with instrumental melodies based on these musical characteristics.

Table 7. Musical characteristics and KB recognition scores for selected subsets from the Famous Melodies test

                                       Mean (SD)                        Recognition score
Subset                                 Song           Instrumental      Song     Instrumental
Number of note onsets
  14-39                                30.5 (6.4)     30.1 (4.8)        33/38    4/15
  40-98                                53.7 (17.9)    59.2 (12.6)       27/30    3/22
Presentation rate (notes/s)
  0.98-2.49                            1.87 (0.03)    2.07 (0.32)       38/42    4/11
  2.50-4.73                            3.12 (0.56)    3.24 (0.60)       22/26    2/24
Range (semitones)
  4-13                                 10.8 (1.79)    10.5 (2.83)       36/42    2/9
  14-16                                14.5 (0.80)    15.1 (0.99)       20/22    3/10
Interval size (semitones)
  0.96-2.4                             2.02 (0.29)    1.92 (0.46)       35/39    4/15
  2.5-3.7                              2.87 (0.31)    3.03 (0.37)       25/29    3/17
Number of different note durations
  3-4                                  3.7 (0.48)     3.5 (0.51)        21/25    4/15
  5-8                                  5.7 (1.19)     6.0 (0.71)        39/43    3/17
Percentage of notes accounted for by two most frequent durations
  50-82                                72.2 (0.09)    67.3 (0.09)       34/38    2/15
  83-97                                88.9 (0.03)    91.1 (0.04)       26/30    2/13

For the selected melody subsets in this table, means for musical characteristics do not differ significantly between song and instrumental melodies. Recognition scores for song melodies are consistently and significantly higher than for instrumental melodies, p < .0002.
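The tests of proportions reported for Table 7 can be reconstructed as standard two-sample z-tests on the recognition ratios; the exact formulation used by the authors is not stated, so the sketch below is one reasonable reading, shown for the first subset (33/38 song vs. 4/15 instrumental).

```python
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Two-sample z-test for equality of proportions (pooled variance)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Songs with 14-39 note onsets: 33/38 recognised vs. 4/15 instrumental.
z, p = two_proportion_z(33, 38, 4, 15)   # z ≈ 4.3, p well below .0002
```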

This conclusion can be bolstered by logical arguments as well. Given that KB and the controls were typically able to achieve song melody recognition within the first few notes (Experiment 1, Incremental Melodies test), differences in the average number of notes provided by a melody may not be relevant to the song/instrumental melody difference. Moreover, the lower rhythmic complexity of the instrumental melodies might have simplified the recognition task for KB relative to song melodies, but such was not the case. In summary, the post-hoc analyses did not reveal a consistent set of differences in musical characteristics that could explain the sparing of song recognition for KB alongside his marked and selective failure of instrumental melody recognition.

EXPERIMENT 3: LEARNING OF SONG AND INSTRUMENTAL MELODIES


Three findings reported above suggest that, during their initial learning, songs (with lyrics) and instrumental melodies are processed and stored in different ways: (1) the remarkable dissociation between song and instrumental melody recognition observed in KB; (2) the consistently higher rates of recognition and correct identification for song as opposed to instrumental melodies reported for control participants in Experiment 2; and (3) the fact that the only instrumental melodies correctly identified by KB in the Famous Melodies test were those to which he had, at one point, learned comic lyrics. Although KB's capacity to learn new instrumental melodies had been tested incidentally in Experiment 1 and found to be severely impaired (see results of the Incidental Memory test), his capacity for new song learning was unclear. In the experiment described below, we examined the possibility that the learning of new melodies might be possible for KB if those melodies were presented to him in the context of songs with lyrics. It was hypothesised that KB would be able to learn both words and melody for novel songs over time, but would not be able to learn novel melodies sung to "la".

Materials and methods


Initially, KB's ability to learn novel melodies was examined using a paradigm adapted from Samson and Zatorre (1992). In this paradigm, a set of stimuli is presented, followed by pairs of items. Each pair contains one stimulus from the set and one novel stimulus, or foil. The task is to state which member of the pair was presented in the original set. The original set is then presented again, and once more followed by pairs of items, this time with different foils. The procedure is repeated until a designated recognition criterion is reached. Short-term learning is measured by the number of repetitions needed to reach criterion. For our study, comparisons were made between participants' ability to learn two different sets of novel melodies: one set sung with lyrics, and the other sung to "la". Pilot testing indicated that, even after several modifications to simplify the procedure, KB was unable to learn either set of test materials.

At this point, a new procedure was introduced, one that allowed extensive exposure to the learning materials over many months. For this test of KB's ability to learn melodies, a set of 12 novel melodies of similar length and style was composed by the first author and recorded on tape, in random order, as instrumental pieces with a piano timbre within the range G3 to C4. The melodies were written in the style of North American folk songs; as such, they were highly tonal, nonmodulating diatonic melodies outlining simple harmonic progressions, with regular phrase structures and recurring rhythmic and melodic motifs. Next, a second tape of these materials was created in which four of the melodies were sung to novel lyrics, four were sung to "la", and four were left in their original, instrumental versions. The sung renditions were performed and recorded while the vocalist listened to the piano versions of each melody through headphones; in this way, differences in intonation and timing between the sung and instrumental versions were minimised. This latter tape (which included the sung renditions) was played for KB on 26 occasions, approximately once a week over a 6-month period. KB's instruction on each occasion was simply to listen to the tape.

At the end of the 6-month period, KB's ability to recognise the test materials in the exposure set was assessed. The original instrumental versions of the 12 test melodies were combined with a set of 36 additional control melodies: 12 well-known song melodies, 12 familiar melodies that had been melodically altered (included as pilot materials for later experiments), and 12 novel instrumental melodies (foils). The complete set of 48 melodies (each played in the same piano timbre used for the four instrumental melodies in the learning phase of the experiment) was presented to KB in a single, random order. His task was to indicate after each trial whether he recognised the melody and, if so, to provide a title, lyrics, or any other associations that came to mind.
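For readers unfamiliar with the Samson and Zatorre (1992) procedure described at the start of this section, its logic can be summarised in a short sketch; the stimuli, the response function, and the criterion value are placeholders rather than the study's actual parameters.

```python
import random

def learn_to_criterion(study_set, foil_pool, respond, criterion=1.0):
    """Run study-test cycles until performance reaches criterion.

    respond(pair) stands in for the participant's forced choice;
    study_set, foil_pool, and criterion are illustrative parameters.
    Returns the number of repetitions needed, or None if the foil
    pool runs out first.
    """
    foils = list(foil_pool)
    random.shuffle(foils)
    reps = 0
    while len(foils) >= len(study_set):
        reps += 1                                       # one more study presentation
        cycle_foils = [foils.pop() for _ in study_set]  # fresh foils each cycle
        correct = 0
        for old, foil in zip(study_set, cycle_foils):
            pair = random.sample([old, foil], 2)        # randomise within-pair order
            correct += (respond(pair) == old)           # which member was studied?
        if correct / len(study_set) >= criterion:
            return reps
    return None
```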


Figure 4. Percentage of Learning test melodies recognised by KB. * Indicates KB obtained a score of 0%.

Results and discussion


KB recognised all 12 well-known song melodies played in their original versions, and 4 of the 12 familiar song melodies that had been melodically altered. He also reported that the melody line was familiar for two of the four songs from the exposure set that had been presented to him in the context of songs with lyrics over the preceding 6-month period. None of the remaining 10 test items, and none of the 12 novel instrumental foils, were recognised. The data are summarised in Figure 4.

KB's inability to perform the short-term learning task most likely resulted from his documented pitch and rhythmic processing deficits. Despite these deficits, KB demonstrated a limited ability to learn new melodies, with repeated exposure over a lengthy period, provided that these melodies were presented in the context of songs with lyrics.

GENERAL DISCUSSION
We have reported a dissociation between song and instrumental melody recognition for KB, an amateur musician who suffered a right-hemisphere stroke with fronto-parietal involvement. Results from a wide array of tests indicated preserved general intelligence and language skills, sparing of recognition of environmental sounds and musical instruments, and limited sparing of simple perceptual judgements of pitch height and rhythmic pattern. Overall, however, KB's musical deficits were sufficiently severe to warrant a diagnosis of amusia. KB's difficulties strongly implicate musical memory, both for discrimination and recognition of novel melodies and for recognition/identification of familiar instrumental melodies. Sparing of song melody recognition and identification in the presence of severe musical loss is therefore the most striking finding of this report.

Two potential accounts of differences between song and instrumental melodies are inadequate to explain the observed dissociation: one based on musical features, the other on relative familiarity. Post-hoc tests of musical features revealed certain differences between song and instrumental melodies but, when the features were statistically controlled, scores remained superior for song melody as opposed to instrumental melody recognition. Three findings contraindicate relative familiarity. First, although controls' familiarity ratings for the Famous Melodies test favoured song over instrumental melodies (as did recognition and identification rates in both the Famous Melodies and the Television Themes tests), the difference was not large compared to the difference shown by KB. Given the similarity between the musical backgrounds of KB and the controls, it seems likely that he, too, would have been very familiar with both types of melodies. Second, familiarity alone cannot explain KB's inability to recognise familiar songs presented with competing lyrics (Mismatched Songs test). Third, only melodies presented with lyrics were learned by KB, despite his equivalent exposure to, and thus familiarity with, melodies sung to "la" and instrumental melodies (Experiment 3).

Our arguments against accounts based on differences in musical characteristics and relative familiarity of song and instrumental melodies would be stronger if another patient were found demonstrating superior recognition and identification of familiar instrumental, as opposed to song, melodies (i.e., if a double dissociation were documented). In the meantime, however, it is instructive to consider

alternate explanations for the dissociation observed in KB. In the following discussion we propose that during song acquisition the temporal contiguity of lyrics and melody results in an elaborated representation with special properties. We draw upon the association-by-contiguity hypothesis (Crowder et al., 1990), a class of associative models put forward by Stevens, McAuley, and Humphreys (1998), and Peretz's (1993) model of melody recognition.

An associationist position (Crowder et al., 1990; Stevens et al., 1998) is one of several put forth to account for the integration effect (e.g., Samson & Zatorre, 1992; Serafine et al., 1984). As noted earlier, the paradigm for the integration effect engages novel songs, and the results may not directly generalise to well-known songs. Nevertheless, the associationist position may contribute to an explanation of KB's dissociation. The association-by-contiguity hypothesis suggests that temporal contiguity suffices for association; thus, "melody and text are connected in memory, hence they act as recall cues for each other, yet each is stored with its independent integrity intact" (Crowder et al., 1990, p. 474). The class of associative model termed conjunctive representation by Stevens et al. (1998) is compatible; they suggest that melody and text are represented both by item information for the separate components and relational information for their pairing. The contiguous presentation of melody and lyrics in song may result in a particularly rich store of relational information. In the case of instrumental music the relational information is less salient, because such contexts as the title of the piece are not temporally contiguous with the melody.

Next, in line with Peretz (1993), we propose how such notions might be implemented in the case of KB. When a normal listener hears a familiar song, two distinct but interconnected systems are engaged. One, the melody analysis system, leads to activation of a stored template of the melody in a dedicated tune input lexicon. The other, the speech analysis system, leads to activation of the stored templates of individual words in a dedicated speech input lexicon. Activation of one or both lexicons generates recognition in the listener, a sense of familiarity. Moreover, repeated coincidental activation of the two lexicons during song learning allows for the establishment of direct links between the two representations. Activation in one system influences the level of activity in the other system, producing, through a process of spreading activation, recognition and identification of song.

We will argue that this proposal has considerable explanatory power for many of our present findings. First, it could account for the observation that controls found song melodies easier to recognise and identify than instrumental pieces, overall. Note that, according to this scheme, during the processing of a familiar instrumental melody there would be no activation of the speech analysis system nor of the speech input lexicon. Thus, the network of information relating to the instrumental piece would be less elaborate than that for a familiar song, given that it would not include lyrics or concepts associated with those lyrics. With less elaboration, it may be inferred that recognition is less likely.

Second, the present proposal could account for the severe disruption to KB's basic music abilities (Experiment 1), the remarkable sparing of his ability to recognise the melody lines of familiar songs (Experiment 2), and his limited residual capacity for learning new songs (Experiment 3). The loss of KB's basic music abilities reflects extensive damage to his melody analysis system. Exposure to a familiar song melody, however, may result in just enough activation to generate a simple, degraded melody template. Recall that in Experiment 1 KB showed limited sparing in two domains: first, in his ability to distinguish two notes in the mid- to high-frequency ranges on the basis of pitch height; and, second, in his ability to discriminate simple rhythmic patterns. These residual capacities might provide the basis for the creation of this simple melody template. This template, while not containing a rich, detailed, and highly accurate mapping of musical events, might nonetheless produce a level of activation sufficient to influence the level of activity in KB's declarative memory and in his speech input lexicon, producing recognition. KB's demonstrated (though severely limited) ability to learn new song melodies could be explained through repeated activation of a similar pathway of associative links.
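One possible formalisation of this proposal, offered purely as an illustration, treats recognition as a thresholded sum of direct activation in the tune lexicon and activation spreading across an associative link from the speech lexicon. All numerical values below are invented for the purpose of the demonstration and carry no empirical weight.

```python
def song_recognition(melody_activation, has_lyric_link, link_strength=0.8,
                     lyric_activation=1.0, threshold=0.5):
    """Toy spreading-activation account: weak tune-lexicon activation can be
    boosted by an associated (intact) lyric representation. All parameters
    are invented for illustration only."""
    total = melody_activation
    if has_lyric_link:
        # Spreading activation from the speech lexicon back to the tune trace.
        total += link_strength * lyric_activation
    return total >= threshold

degraded = 0.2   # crude template from KB's damaged melody analysis system
print(song_recognition(degraded, has_lyric_link=True))    # song: recognised
print(song_recognition(degraded, has_lyric_link=False))   # instrumental: not
```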


Finally, the proposal can address the difficulty experienced by KB during the Mismatched Songs test (Experiment 2). The fact that he did not suffer from a language impairment suggests that his speech analysis system, his speech input lexicon, and the representations of word meanings in his declarative memory were intact. Given this, it is not surprising that he was able to specify accurately whether the lyrics he heard were novel or familiar across trials. As well, the activation of the representation of the lyrics to a familiar song would create, through activation of associative links between the two lexicons and memory for relational information, a set of melodic expectancies. If these expectancies were inconsistent with the simple melody template generated by KB in response to the incoming melody, the result would be the observed pattern of correct detection of a mismatch. KB's inability to recognise the melody correctly in this situation might reflect the effects of interference created in the speech input lexicon between two competing patterns of activation: a strong pattern produced through exposure to the lyrics, and a weaker pattern produced through spread of activation from the crude melody template in the melody input lexicon. Controls, being able to exploit the resources of their intact melody analysis systems and to activate a detailed representation of the melodies, would have been able to overcome any such interference and achieve recognition of both lyrics and melodies.

Although presentation of novel lyrics would not lead to a set of melodic expectancies, it would lead to activation of the representations of individual words and their related meanings. Again, this pattern of activation would be inconsistent with that produced in response to the melody. Moreover, in the case of KB, it would far outweigh any activation produced by the simple melody template, thereby making melody recognition difficult or impossible.

For KB, song recognition has been spared even though the melody analysis system has been damaged. However, this damage is not complete: KB has residual ability to generate simple, crude representations of familiar melodies. In the case of song melodies, there is sufficient activation in the melody analysis system to co-activate an intact representation of both relational information and of the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist (unless, of course, as noted earlier, KB has previously associated words to the instrumental melody). In the absence of sufficient relational information and tightly connected associative links between melody and speech lexicons, recognition and identification of instrumental melodies does not occur.
Manuscript received 28 May 1999
Revised manuscript received 15 August 2000
Revised manuscript accepted 19 September 2000

REFERENCES
Anderson, C. (1996). Finale music notation software [Computer software]. Eden Prairie, MN: Coda Music Technology.
Bartlett, J.C., & Snelus, P. (1980). Lifespan memory for popular songs. American Journal of Psychology, 93, 551-560.
Besson, M., Faïta, F., Peretz, I., Bonnel, A.-M., & Requin, J. (1998). Singing in the brain: Independence of lyrics and tunes. Psychological Science, 9, 494-498.
Bigand, E. (1993). The influence of implicit harmony, rhythm and musical training on the abstraction of tension-relaxation schemas in tonal musical phrases. Contemporary Music Review, 9, 123-137.
Blair, J.R., & Spreen, O. (1989). Predicting premorbid IQ: A revision of the National Adult Reading Test. The Clinical Neuropsychologist, 3, 129-136.
Boltz, M. (1989). Rhythm and good endings: Effects of temporal structure on tonality judgments. Perception and Psychophysics, 46, 9-17.
Boltz, M., & Jones, M.R. (1986). Does rule recursion make melodies easier to reproduce? If not, what does? Cognitive Psychology, 18, 389-431.
Crowder, R.G., Serafine, M.L., & Repp, B.H. (1990). Physical interaction and association by contiguity in memory for the words and melodies of songs. Memory and Cognition, 18, 469-476.
Dowling, W.J., & Harwood, D.L. (1986). Music cognition. Orlando, FL: Academic Press.
Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover.
Gardiner, J.M., Kaminska, Z., Java, R.I., Clarke, E.F., & Mayer, P. (1990). The Tulving-Wiseman law and the recognition of recallable music. Memory and Cognition, 18, 632-637.
Goodglass, H., & Kaplan, E. (1983). Boston Diagnostic Aphasia Examination (BDAE). Philadelphia: Lee & Febiger. Distributed by Psychological Assessment Resources, Odessa, FL.
Gross, R. (1981). DX-Score [Computer software]. Rochester, NY: Eastman School of Music.
Halpern, A.R. (1984). Organization in memory for familiar songs. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 496-512.
Heaton, R.K. (1981). A manual for the Wisconsin Card Sorting Test. Odessa, FL: Psychological Assessment Resources.
Hébert, S., & Peretz, I. (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory and Cognition, 25, 518-533.
Java, R.I., Kaminska, A., & Gardiner, J.M. (1995). Recognition memory and awareness for famous and obscure musical themes. European Journal of Cognitive Psychology, 7, 41-53.
Johns, M. (Ed.). (1980). Jumbo: The children's book (3rd ed.). Miami Beach, FL: Hansen House.
Koh, C.K., Cuddy, L.L., & Jakobson, L.S. (in press). Associations and dissociations between music training, tonal and temporal processing, and cognitive skills. Proceedings of the New York Academy of Science: Biological Foundations of Music.
Kolinsky, M. (1969). Barbara Allen: Tonal versus melodic structure, Part II. Ethnomusicology, 13, 173.
Krumhansl, C.L. (1990). Cognitive foundations of musical pitch. New York: Oxford University Press.
Krumhansl, C.L. (1991). Music psychology: Tonal structures in perception and memory. Annual Review of Psychology, 42, 277-303.
Krumhansl, C.L. (1995). Music psychology and music theory: Problems and prospects. Music Theory Spectrum, 17, 53-80.
Krumhansl, C.L., & Kessler, E.J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89, 334-368.
Lengeling, G., Adam, C., & Schupp, R. (1990). C-Lab Notator SL/Creator SL (Version 3.1) [Computer software]. Hamburg, Germany: C-Lab Software GmbH.
Leonard, H. (1995). The TV fake book. Milwaukee, WI: Hal Leonard Corporation.
Liégeois-Chauvel, C., Peretz, I., Babaï, M., Laguitton, V., & Chauvel, P. (1998). Contribution of different cortical areas in the temporal lobes to music processing. Brain, 121, 1853-1867.
Mandler, G. (1980). Recognizing: The judgement of previous occurrence. Psychological Review, 87, 252-271.
Marin, O.S.M., & Perry, D.W. (1999). Neurological aspects of musical perception. In D. Deutsch (Ed.), The psychology of music (2nd ed., pp. 653-724). New York: Academic Press.
Maylor, E.A. (1991). Recognizing and naming tunes: Memory impairment in the elderly. Journals of Gerontology, 46, P207-P217.
Monahan, C.B., & Carterette, E.C. (1985). Pitch and duration as determinants of musical space. Music Perception, 3, 1-32.
Morrongiello, B.A., & Roes, C.L. (1990). Children's memory for new songs: Integration or independent storage of words and tunes? Journal of Experimental Child Psychology, 50, 25-38.
Narmour, E. (1990). The analysis and cognition of basic melodic structures. Chicago: University of Chicago Press.
Palmer, C., & Krumhansl, C.L. (1987a). Independent temporal and pitch structures in determination of musical phrases. Journal of Experimental Psychology: Human Perception and Performance, 13, 116-126.
Palmer, C., & Krumhansl, C.L. (1987b). Pitch and temporal contributions to musical phrase perception: Effects of harmony, performance timing, and familiarity. Perception and Psychophysics, 41, 505-518.
Patel, A.D. (1998). Syntactic processing in language and music: Different cognitive operations, similar neural resources? Music Perception, 16, 27-42.
Patel, A.D., & Peretz, I. (1997). Is music autonomous from language? A neuropsychological appraisal. In I. Deliège & J. Sloboda (Eds.), Perception and cognition of music (pp. 191-215). Hove, UK: Psychology Press.
Peretz, I. (1993). Auditory agnosia: A functional analysis. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 199-230). Oxford: Clarendon Press/Oxford University Press.
Peretz, I. (1994). Amusia: Specificity and multiplicity. In I. Deliège (Ed.), Proceedings of the 3rd International Conference on Music Perception and Cognition (pp. 37-38). Liège, Belgium: European Society for the Cognitive Sciences of Music.
Peretz, I. (1996). Can we lose memory for music? A case of music agnosia in a nonmusician. Journal of Cognitive Neuroscience, 8, 481-496.
Peretz, I., Babaï, M., Lussier, I., Hébert, S., & Gagnon, L. (1995). Corpus d'extraits musicaux: Indices relatifs à la familiarité, à l'âge d'acquisition et aux évocations verbales [A repertory of music extracts: Indicators of familiarity, age of acquisition, and verbal associations]. Canadian Journal of Experimental Psychology, 49, 211-238.
Peretz, I., Belleville, S., & Fontaine, S. (1997). Dissociations entre musique et langage après atteinte cérébrale: Un nouveau cas d'amusie sans aphasie [Dissociation between music and language following cerebral haemorrhage: Another instance of amusia without aphasia]. Canadian Journal of Experimental Psychology, 51, 354-368.
Peretz, I., & Kolinsky, R. (1993). Boundaries of separability between melody and rhythm in music discrimination: A neuropsychological perspective. Quarterly Journal of Experimental Psychology, 46A, 301-325.
Peretz, I., Kolinsky, R., Tramo, M., Labrecque, R., Hublet, C., Demeurisse, G., & Belleville, S. (1994). Functional dissociations following bilateral lesions of auditory cortex. Brain, 117, 1283-1301.
Polk, M., & Kertesz, A. (1993). Music and language in degenerative disease of the brain. Brain and Cognition, 22, 98-117.
Raven, J.C. (1965). Guide to using the Coloured Progressive Matrices. London: H.K. Lewis.
Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique [Psychological examination in cases of traumatic encephalopathy]. Archives de Psychologie, 28, 286-340.
Russo, F.A., & Cuddy, L.L. (1996). Predictive value of Narmour's principles for cohesiveness, pleasingness, and memory of Webern melodies. In I. Deliège (Ed.), Proceedings of the 3rd International Conference on Music Perception and Cognition (pp. 439-443). Liège, Belgium: European Society for the Cognitive Sciences of Music.
Samson, S., & Zatorre, R.J. (1991). Recognition memory for text and melody of songs after unilateral temporal lobe lesion: Evidence for dual encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 793-804.
Samson, S., & Zatorre, R.J. (1992). Learning and retention of melodic and verbal information after unilateral temporal lobectomy. Neuropsychologia, 30, 815-826.
Serafine, M.L., Crowder, R.G., & Repp, B.H. (1984). Integration of melody and text in memory for songs. Cognition, 16, 285-303.
Serafine, M.L., Davidson, J., Crowder, R.G., & Repp, B.H. (1986). On the nature of melody-text integration in memory for songs. Journal of Memory and Language, 25, 123-135.
Shipley, W.C. (1953). Shipley Institute of Living scale for measuring intellectual impairment. In A. Weider (Ed.), Contributions toward medical psychology: Theory and psychodiagnostic methods (pp. 751-756). New York: The Ronald Press Company.
Simon, W.L. (Ed.). (1982). Reader's Digest popular songs that will live forever. Pleasantville, NY: Reader's Digest Association Inc.
Steinke, W.R., Cuddy, L.L., & Holden, R.R. (1997). Dissociation of musical tonality and pitch memory from nonmusical cognitive abilities. Canadian Journal of Experimental Psychology, 51, 316-334.
Stevens, K., McAuley, J.D., & Humphreys, M.S. (1998). Relational information in memory for music: The interaction of melody, rhythm, text, and instrument. Noetica: Open Forum [On-line], 3(8). Available: http://www.cs.indiana.edu/Noetica/OpenForumIssue8/Stevens.html
Stone, C.P., Girdner, J., & Albrecht, R. (1946). An alternate form of the Wechsler Memory Scale. Journal of Psychology, 22, 199-206.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale (Rev. ed.). New York: The Psychological Corporation.
White, B. (1960). Recognition of distorted melodies. American Journal of Psychology, 73, 253-258.
World famous piano pieces. (1943). Toronto, Canada: Frederick Harris.
Zatorre, R.J. (1984). Musical perception and cerebral function: A critical review. Music Perception, 2, 196-221.

APPENDIX A
Experiment 1: Test procedures


Pitch Discrimination test
For the Pitch Discrimination test, a set of 55 pairs of piano tones was generated, 5 for each of 11 intervals ranging in size from unison to the octave. Each pair consisted of two pitches, each of 1 s duration, separated by a silent interval of 1 s duration. A 4 s silence separated trials. Each of the five trials for each interval type was played in a different octave range, from E1 to F7. For each of the 50 nonunison intervals, the pitch direction of the second note, higher or lower, was randomly determined. The total set of 55 interval pairs was recorded in a single random order. The task was to state whether the second note was higher than, lower than, or the same pitch as the first note. Three practice trials were provided, one for each of the three conditions: second note higher, lower, and the same as the first note.
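A stimulus set of this design could be generated along the following lines. The exact 11 interval sizes are not listed in the text, so the set below is an assumption, as is the placement of the five octave ranges within E1-F7 (MIDI 28-101).

```python
import random

INTERVALS = [0, 1, 2, 3, 4, 5, 7, 9, 10, 11, 12]   # assumed 11 sizes (semitones)
OCTAVE_STARTS = [28, 40, 52, 64, 76]               # five ranges within E1-F7 (MIDI)

trials = []
for size in INTERVALS:
    for base in OCTAVE_STARTS:                      # one trial per octave range
        first = base + random.randint(0, 11)
        direction = 0 if size == 0 else random.choice([1, -1])
        trials.append((first, first + direction * size))
random.shuffle(trials)                              # single random order, 55 pairs
```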


Rhythm Same/Different test

The purpose of the Rhythm Same/Different test was to obtain a measure of KB's ability to discriminate changes within rhythms. A set of 15 rhythmic patterns was constructed, each designed to be easily apprehended by normal listeners. The following construction rules were obeyed. First, the number of notes in each pattern was limited (range = 2-7 notes; mean = 4.6 notes). Second, each pattern contained only one or two different note durations, and only four note durations were employed in total (0.25, 0.5, 0.75, and 1 s, corresponding to eighth, quarter, dotted quarter, and half notes, respectively). Six patterns were isochronous and the remaining nine each contained two durations from the above set. Third, a regular beat structure was imposed on the patterns: notes tended to fall on the beat, with no syncopations.

An altered version of each rhythmic pattern was also generated, intended to be easily discriminable from the original. Altered versions were constructed by either lengthening or shortening the duration of one note in the group of notes (or figure) within each rhythm. Each pattern was then randomly paired with either the original or altered version of the pattern, yielding a set of 15 pairs. A second set of 15 pairs was then created by pairing each pattern with the version not used in the first set. The two sets were recorded on cassette tape in a single random order, yielding 30 trials. The patterns in each pair were separated by 1 s of silence; pairs were separated by 3 s. All notes were the same pitch (A4 = 440 Hz) and were recorded at the same loudness level. Participants were instructed to listen carefully to each pair of rhythms and to indicate whether the two rhythms were the same or different.
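The alteration rule can be illustrated with a small sketch that swaps one note's duration for an adjacent value in the allowed set; the original materials were hand-constructed, so the random choice here is illustrative only.

```python
import random

DURATIONS = [0.25, 0.5, 0.75, 1.0]   # eighth, quarter, dotted quarter, half (s)

def alter_rhythm(pattern):
    """Return a copy with one note lengthened or shortened by one step
    in the duration set (illustrative; originals were hand-built)."""
    altered = list(pattern)
    i = random.randrange(len(altered))
    j = DURATIONS.index(altered[i])
    neighbours = [k for k in (j - 1, j + 1) if 0 <= k < len(DURATIONS)]
    altered[i] = DURATIONS[random.choice(neighbours)]
    return altered
```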

Metronome Matching test

The Metronome Matching test was not administered to controls. An isochronous sequence of beats was produced by a metronome set in turn to five different rates or tempos, from moderately slow to fast (as indicated on the metronome). KB was asked to imitate the sound of the metronome by tapping on a tabletop with a pencil held in his right hand. The five metronome settings chosen were 120, 160, 200, 260, and 320 beats per minute (bpm); the corresponding times between beat onsets were 0.5, 0.37, 0.3, 0.23, and 0.19 s. At each setting approximately 10 beats were sounded on the metronome, and then the metronome was stopped. At this point KB was instructed to tap a pencil at the same rate as the beats produced by the metronome. He was instructed to listen carefully to the metronome before beginning to tap, and to continue to tap until asked to stop by the experimenter. The metronome and his tapping were recorded onto tape. The first author and two independent raters (each a highly trained musician) subsequently listened to the tape and judged whether KB matched each metronome rate. Judges were also asked to characterise KB's tapping for each metronome rate in terms of how closely he was able to approximate the metronome rate. Following independent listenings of the tape, raters were found to be in 100% agreement on KB's performance, in terms of both matching and approximation judgements.
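The beat-onset intervals quoted above follow directly from the metronome settings as IOI = 60/bpm; the following two lines reproduce them (the text rounds or truncates some values).

```python
for bpm in (120, 160, 200, 260, 320):
    print(bpm, round(60 / bpm, 3))   # 0.5, 0.375, 0.3, 0.231, 0.188 s
```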

Rhythm Tap-Back test

For the Rhythm Tap-Back test, KB and five controls were asked to reproduce a set of six short, simple rhythms. Five of the rhythms were constructed in the same manner as described above for the Rhythm Same/Different test. The remaining rhythm was identified to KB, prior to presentation, as the rhythm from the saying "Shave and a haircut, two bits", with which KB was known to be familiar. The experimenter, using a pencil on a tabletop, first tapped each rhythm. The pencil was then handed to the participant and the rhythm was tapped back. This procedure was repeated for each rhythm. All experimenter and participant rhythms were recorded onto tape.

Two sets of judgements were made on the rhythms. First, three independent raters with high music training listened to the tape recording for each participant. Each rater assigned a similarity rating, on a 10-point scale, to each pair of experimenter/participant rhythms: a "1" indicated no similarity, and a "10" indicated that the rhythms sounded identical. For the second set of judgements, each rater listened to an edited tape on which each individual rhythm tapped by the experimenter and by each participant was reproduced in a single random order. Each rater transcribed each of the rhythms into music notation. The transcriptions of each pair of experimenter/participant rhythms were then compared.

Incremental Melodies test


The Incremental Melodies test included the following 10 song melodies: Swanee River, Row Your Boat, Home on the Range, Red River Valley, Oh Susannah, Song of the Volga Boatmen, Hail Hail, Bicycle Built for Two, Auld Lang Syne, and Irish Jig. (Irish Jig was included as a song melody because KB had previously learned comic lyrics for this tune.) Each melody was recorded onto cassette tape starting with the first two notes, then the first three notes, the first four notes, and so on, until the complete version of the song was played in its original tempo and key. A 2 s pause separated each version of the melody. Participants were instructed to listen to each trial in succession, until they were able to identify each melody correctly by stating the title or the lyrics. At this point the trials for that melody were discontinued, and the number of notes required for identification was recorded. Melodies were recorded in a single random order.
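The incremental presentation amounts to a gating procedure over successively longer prefixes of the note list, as the minimal sketch below illustrates; the identification score is the prefix length at which the title or lyrics are first produced.

```python
def incremental_trials(notes, start=2):
    """Yield successively longer prefixes: first two notes, first three, ...,
    up to the complete melody (gating-style presentation)."""
    for length in range(start, len(notes) + 1):
        yield notes[:length]

# Identification score = number of notes heard when the title/lyrics
# are first produced correctly.
```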


APPENDIX B
Song and instrumental melodies used in the Famous Melodies test


Instrumental melodies
1. Chant sans Paroles - P. Tchaikovsky
2. The Sorcerer's Apprentice - P. Dukas
3. Irish Washerwoman Jig - Trad
4. Minute Waltz, Op. 64, No. 1 - F. Chopin
5. Hungarian Dance #5 - J. Brahms
6. Largo, New World Symphony, Op. 95 - A. Dvorak
7. Träumerei, Op. 15, No. 7 - R. Schumann
8. Prelude, Op. 28, No. 7 - F. Chopin
9. Minuet in G, Series 18, No. 12 - L. van Beethoven
10. Minuet (Don Giovanni) - W.A. Mozart
11. Joy of My Soul, Cantata 147 - J.S. Bach
12. The Entertainer - Scott Joplin
13. Prelude in C (Well-tempered Clavichord) - J.S. Bach
14. The Stars and Stripes Forever - John Philip Sousa
15. Baby Elephant Walk - Henry Mancini
16. Simple Confession - Francis Thome
17. Anvil Chorus - G. Verdi
18. Spring Song (#30, Lieder ohne Worte) - F. Mendelssohn
19. Nocturne, Op. 9, No. 2 - F. Chopin
20. March, Nutcracker Suite - P. Tchaikovsky
21. Flower Song - Gustav Lange
22. Barcarolle - J. Offenbach
23. Theme, 1st Mvmt, Piano Con. #1 - P. Tchaikovsky
24. Semper Fidelis - John Philip Sousa
25. La Cumparsita Tango - G. Rodriguez
26. Mexican Hat Dance - F. Patrichala
27. Pizzicato (Sylvia) - L. Delibes
28. Für Elise - L. van Beethoven
29. Dance of the Sugar Plum Fairies, Nutcracker Suite - P. Tchaikovsky
30. Chicken Reel - Joseph M. Daly
31. The Harmonious Blacksmith - G.F. Handel
32. Thunder and Blazes (Circus March) - Julius Fucik
33. The Blue Danube - Johann Strauss, Jr.
34. Waltz in A Flat, Op. 39, No. 15 - J. Brahms
35. Andante (Orpheus) - C.W. von Gluck
36. The Girl with the Flaxen Hair - Claude Debussy
37. Can Can Polka - J. Offenbach
38. Music Box Dancer - F. Mills
39. Valse Lento (Coppelia) - L. Delibes

Song melodies
1. Ode to Joy - L. van Beethoven
2. Blue Tail Fly (Jimmy Crack Corn) - Trad
3. Home on the Range - Trad
4. Red River Valley - Trad
5. In the Good Old Summertime - Evans and Shield
6. It Had to be You - Jones and Kahn
7. Smiles - Callahan and Roberts
8. Song of the Volga Boatmen - Trad
9. Hail, Hail the Gang's All Here - Trad
10. Aura Lee (Love Me Tender) - Trad
11. For He's A Jolly Good Fellow - Trad
12. Ach! Du Lieber Augustin (Have You Ever Seen a Lassie) - Trad
13. Swing Low, Sweet Chariot - Trad
14. While Strolling in the Park One Day - Ed Haley
15. By the Light of the Silvery Moon - Edwards and Madden
16. Greensleeves - Trad
17. Smile Awhile - Egans and Whiting
18. When Irish Eyes Are Smiling - Olcott and Graff
19. I'm Looking over a Four-Leaf Clover - Woods and Dixon
20. Silent Night - Mohr and Gruber
21. Four Strong Winds - Ian Tyson
22. The Flowers that Bloom in the Spring - Gilbert and Sullivan
23. Cockles and Mussels - Trad
24. The Yellow Rose of Texas - Trad
25. April Showers - Silvers and DeSylva
26. I'm a Yankee Doodle Dandy - George M. Cohan
27. Land of Hope and Glory - music by E. Elgar (Pomp and Circumstance) [d]
28. Heart and Soul - Loesser and Carmichael
29. Hush Little Baby - Trad
30. Scarborough Fair - Trad
31. My Wild Irish Rose - C. Olcott
32. Puff the Magic Dragon - Yarrow and Lipton
33. On the Sunny Side of the Street - Fields and McHugh
34. There Is a Tavern in the Town - Trad
35. English Country Gardens - Trad [e]
36. Carolina in the Morning - Donaldson and Kahn
37. Blowin' in the Wind - Bob Dylan
38. Bicycle Built for Two (Daisy Bell) - Harry Dacre
39. Blow the Man Down - Trad
40. My Bonnie - Trad
41. The Band Played On - Ward and Palmer
42. Happy Days are Here Again - Yellen and Ager
43. Paddle Your Own Canoe - Trad
44. Fascinating Rhythm - George and Ira Gershwin
45. Row, Row, Row Your Boat - Trad
46. Shenandoah - Trad
47. I'm Popeye the Sailor Man - Sammy Lerner
48. American Patrol - music by W. Meacham, words by Edgar Leslie [f]
49. As Time Goes By - H. Hupfeld
50. Rock of Ages - Trad
51. Cielito Lindo - Carlos Fernandez
52. In My Merry Oldsmobile - Gus Edwards
53. Michael Row the Boat Ashore - Trad
54. In the Shade of the Old Apple Tree - Williams and Van Alstyne
55. Swanee River (Old Folks at Home) - S. Foster

56. Oranges and Lemons - Trad
57. Tea for Two - Caesar and Youmans
58. Ta-ra-ra Boom Der-e - Henry Sayers
59. Auld Lang Syne - Trad
60. On Top of Old Smokey - Trad
61. There'll Be A Hot Time in the Old Town Tonight - Hayden and Metz
62. On Moonlight Bay - Madden and Wenrich
63. I Gave My Love A Cherry - Trad
64. I Left My Heart in San Francisco - Cross and Cory
65. When Johnny Comes Marching Home Again - Trad
66. Oh Susannah - S. Foster
67. When the Saints Go Marching In - Trad
68. Dixie Land - Dan Emmet

Notes:
a. In Johns, M. (Ed.). (1980). Jumbo: The children's book (3rd ed.). Miami Beach, FL: Hansen House.
b. Opening theme listed in Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover.
c. In World famous piano pieces. (1943). Toronto, Canada: Frederick Harris.
d. Hymn; lyrics: "Land of hope and glory, mother of the free" [Source: Fuld, 1995].
e. The earliest source of this melody was as a song from Quaker's Opera in 1728; it was later known as the song "Vicar of Bray", and was later popularised as "Country Gardens" by P. Grainger in 1919 [Source: Fuld, 1995]. The new title "English Country Gardens" and lyrics were written by R. Jordan in 1962 [Source: Lax, R., & Smith, F. (1989). The great song thesaurus (2nd ed.). New York: Oxford University Press].
f. The song "We Must Be Vigilant" by E. Leslie was sung to this melody in 1942 and during World War II [Source: Fuld, 1995].
