
LAB TEN

REVIEW
• Understand how parallel processing can be emulated in multitrack programs and why you would use this process.

• Understand the file organization capabilities of the Audio Region List in ProTools for displaying information and for working with audio files on your hard drive.

• Know how to record automation data in ProTools.

ASSIGNMENT FOUR: CREATIVE PROJECT TWO


The second creative project (but fourth assignment) can be either a continued exploration of musique concrète, now in stereo (Project 4A), or a soundscape composition (Project 4B).

PROJECT 4A:
STEREO ACOUSMATIC COMPOSITION
Assignment
Create a three- to five-minute stereo composition based entirely on musique concrète-style manipulations of source material. Do not include your source files!
or

PROJECT 4B:
SOUNDSCAPE COMPOSITION
Assignment
Create a four- to six-minute stereo composition in the soundscape genre. You might consider it as an aural dreamscape: a musical/sound accompaniment to an imaginary visual.


Due: Week 14 (Exam Week)


Upload your file to WebCT.

PROCEDURE
Using techniques practiced in previous assignments, create a three- to five-minute (Project 4A) or four- to six-minute (Project 4B) composition. Because this is a stereo piece, you will need to consider stereo placement and panning during the final mix. You will also need to consider reverberation, equalization, and other signal processing during the mix.

EVALUATION
Evaluation will be based upon:

• technical issues (lack of distortion, no extraneous clicks, clean recordings, etc.)

• electroacoustic elements (the placement of sounds in the stereo field and/or foreground/background, use of layers, appropriate processing, etc.)

• creative elements (balance between unity and variation, development of material, structure/form).

THE ASSIGNMENT, EXPLAINED


This final assignment lets you continue to explore the musique concrète style of the abstract composition that you created in Assignment Three, or try something different by creating a soundscape. The main difference between the two versions is that musique concrète manipulates objects abstractly, without taking into account any referential quality that the sound object may have, while the soundscape composition presents sounds that have a referential context.

In either case, your composition must be in stereo; therefore, panning (placing a sound in the left/right stereo field) will come into play. All the other considerations that went into Assignment Three, including sound design, gesture creation, and compositional approaches, should be considered in this assignment as well. The previous project was an exercise in musique concrète, and less emphasis was placed on creative aspects; this assignment, however, should take into account overall form, unity, and variation.

For example, it was difficult to create a successful ending for Assignment Three, because the one- to two-minute duration focused more on a single idea, a study in technique. This time, you will need to think seriously about how the piece begins, where it goes, and how it ends.
There are no limits on the source material you can use, nor on its duration. However, the caveats outlined in Assignment Two regarding your choice of sounds still apply. In the previous project, I wanted you to concentrate your efforts on getting the most variation out of a limited amount of material. Although this is an excellent compositional technique and method, for this project I want you to concentrate upon creating interesting music.

There is a difference in length between the two assignments because soundscape composition usually includes longer source material: actual soundscape recordings rather than shorter sound objects. As such, it is easier to create a longer composition.

ABOUT PROJECT 4B
Project 4B should not be considered a traditional soundscape
composition for the following reasons:

First, you will most likely not have access to field recording equipment. Soundscape composition obviously involves source material from actual soundscapes. There are a number of shorter soundscape recordings included in the Soundscapes and Sound Materials CD for use in this project, but they are somewhat limited in comparison to those used by the soundscape composers discussed in the Study Guide.

Second, soundscape composition ideally involves minimal intervention and/or manipulation on the part of the composer. However, for the purposes of this course, declaring a five-minute recording of downtown Vancouver at night as your finished project would not demonstrate to me that you have understood and mastered the techniques of this course!

Therefore, the emphasis in Project 4B should not be on acoustic ecology but on sound design and signal processing.
One suggestion that you might consider in this project is to create an aural dreamscape: a musical/sound accompaniment to an imaginary visual. This is not a requirement for this project, but a proposal for a direction in which you might begin to work. I make this suggestion because I recognize that many of you may never have created any music before, and the unlimited possibilities of composition may be rather intimidating. Therefore, if you have a specific concept to explore, you can focus immediately upon creative ways of fulfilling the assignment.
The reason I suggest a dreamscape is that in dreams, we (or at least I) have a vaguely realistic experience (being in a specific place, doing specific things, interacting with specific people); what makes it dreamlike is often the sudden and unrelated changes. Therefore, blending the realistic elements (using referential sounds) with surrealistic changes, unusual combinations, and alterations (using composition and signal processing) makes this an ideal concept with which to display your electroacoustic composition skills!
You can begin this project by considering a variety of scenes
for which you will create the accompanying sound design. Or,
instead of cinematic scenes, perhaps you will use more general
concepts that are referential.
Some possible scenes and concepts to consider:

• walking through various spaces/scenes

• hearing voices

• a crowd cheering (other than at a sporting event)

• a room full of birds

• watching the Indy Car Race

• a voice that alternates between being in the same room as the listener and being on a telephone

• a recognizable sound or voice that becomes demonic

• riding a piece of data on the Internet, etc. (the possibilities are almost endless).

One possible way to structure the project is as a series of happenings and/or settings through which you are taking the listener. Because it is a soundscape, the listener should recognize the scenes. But because it is possibly a dreamscape, the connection between the scenes should be quick and unexpected, and the relationship between them need not be obvious. A possible scenario would be walking down a long hallway and opening doors that lead to unusual and unrelated places (a forest, a nightclub, an Amish town, a bus ride, etc.). Such a scenario includes an overall form (the concept itself), a possible beginning and ending as well as interludes (walking in the hallway), sudden changes (opening and closing of the doors), unity (the walking and opening/closing doors as a through-line), and variation (the length of time spent in each room, the contents of each room).


A good deal of the time you spend working on this project may focus on precompositional planning and design. For example, deciding upon what scenes or concepts to portray, what sounds can suggest these scenes or concepts, and how you can move between them may take up the majority of your time.
The Importance of Signal Processing
I have suggested that you can structure Project 4B as a series of
different scenes; however, a composition that simply fades between
different soundscape recordings will not be considered a worthy
final project. What will truly make an assignment shine (and get
that good mark!) is considering how to make it dreamlike and/or
surrealistic. And the way to do that is to consider signal processing
as an integral part of the design.
How can you process the sounds dynamically so that they suggest
quick and unexpected changes? How can you combine sounds in
unexpected ways?

Just as we use signal processing to create new sounds from concrete sound objects in musique concrète, signal processing should be used in this soundscape composition to make sounds evolve, change, and combine with other sounds. The practical ways to do so will be discussed later in this lab.

Create unusual, unexpected, and interesting transitions and transformations: the more dreamlike, the better the mark.

MARKING
The mark will be broken down in the following way:
Technical Proficiency (10%)
This mark is simply for avoiding any extraneous clicks, pops, and distortion in your sounds. The first two often result from discontinuities in editing, but more often they arise from the simple lack of volume envelopes within your multitrack program. Remember that creating a region from a longer region can cause a click if the attack is not smooth. Clicks also tend to occur when you try to create a rhythmic loop or longer gesture; again, the careful use of volume envelopes can help here.


Distortion can result from mono bounces (avoid these!) and certain feedback processes, such as delays. Unfortunately, once a sound contains distortion, you cannot fix the signal; you have to redo the process with more careful control over the levels.
Quality of Sounds (25%)
This portion is broken down into the choice of sounds (10%) and the
signal processing (15%). What constitutes an interesting sound was
discussed in Lab Six, while what constitutes appropriate signal
processing has been discussed many times throughout the labs.
There is no rule that states "the more signal processing, the better"; nor is there one that says to create lots of weird sounds through excessive signal processing. Instead, listen to the sounds and decide what type of processing the sounds themselves suggest and what type of processing is required within the composition itself.
Electroacoustic Quality (40%)
This component carries the heaviest weighting since most of the
course has been devoted to these elements. Under this general
heading, these three areas are considered in the marking: good use
of layers (15%); consideration of spatial placement and movement,
both left to right (stereo) and front to back (reverb) (13%); and
dynamic range (12%).
The use of layers has also been discussed many times; these can be divided by type of activity or sound, frequency range, or function. Also consider varying the number of layers throughout the piece.
Spatial considerations now figure prominently: not only using
panning to place a sound in the left-to-right stereo field but also
moving sounds within the field. Reverb should also be used to
contrast sounds that are close to the listener and those that are
further away.
And finally, use dynamic range as a variable so that the whole piece isn't at the same volume level. Contrasting loud sounds and sections with extremely quiet ones explores the potential of digital audio's excellent dynamic range.
Creativity (25%)
Lastly, does your work hold the listener's attention? Interest and wit constitute fifteen per cent of this total, and the creation of gestures receives the remaining ten per cent. The former is most easily achieved through a balance of unity and variation. The latter is the technique that will allow you to avoid many pitfalls (outlined in detail in the last lab) by using sounds to create longer phrases.

SOME GENERAL HINTS

Many of the ideas covered in Assignment Three apply equally to this final project:

• Think in layers: perhaps a loop as background with other material as additional layers.

• Think about frequency ranges of various sounds as well as density (the number of different sounds at any given time plus their frequency ranges). Having several sounds within the same frequency range will cause interference and masking, and it will make it difficult for the listener to hear the different sounds.

• Plan out the structure, the macro envelope, and the kinds of sounds you wish to use. This suggestion applies to both Projects 4A and 4B; working away from the computer is a successful method of organizing the piece by avoiding the temptation of getting caught up in the sounds.

• Take care in the mixing; the dynamic shape can have a great impact upon the success of your work.

STEREO AND PANNING

Pan is short for panoramic, and in most cases, it involves the placement of a monophonic signal into a stereophonic field by altering the relative amplitude of the signal between the two output channels.
We have already discussed the fact that most listening
environments are stereo, with a separate signal in each of the left
and right channels. In the case of most multitrack programs, a mono
track, without any control over panning, still creates two audio
channels. In such cases, the program sends an equal amount of the
signal to both channels; the resulting sound created by two
identical signals in both speakers will be heard as monophonic.
Whether the monophonic source is a track within an audio
program or a microphone connected to a mixer, if it sends an
unequal amount of the signal to either the left or the right output
channel, the result will be a shift in the apparent position of the
sound between the speakers. If more signal is sent to the right
channel, then the sound will appear to originate more to the right,
for example. This method actually results in a phantom image, in
which the sound appears to originate between the two speakers. If
listened to on headphones, these sounds will seem to originate
inside the head.
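The amplitude-difference panning described above can be sketched in a few lines of code. This is an illustrative sketch only, not how any of these programs implement it; the constant-power (sin/cos) pan law shown is one common choice, and the function name and pan range are my own.

```python
import numpy as np

def pan_mono(signal, pan):
    """Place a mono signal in the stereo field by amplitude alone.

    pan ranges from -1.0 (hard left) through 0.0 (centre)
    to +1.0 (hard right). A constant-power (sin/cos) pan law
    keeps the perceived loudness roughly even across the field.
    """
    theta = (pan + 1.0) * np.pi / 4.0   # map pan to 0 .. pi/2
    left = np.cos(theta) * signal
    right = np.sin(theta) * signal
    return np.stack([left, right], axis=-1)

mono = np.ones(4)                 # a stand-in for a mono region
centre = pan_mono(mono, 0.0)      # equal signal in both channels
hard_right = pan_mono(mono, 1.0)  # all signal in the right channel
```

A centred pan sends equal amplitude to both channels, which is exactly the phantom-centre mono image described above.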

It should be pointed out that this method of stereo imaging is not a natural one; we do not hear sounds in the natural world from two discrete sources with amplitude differences between them. Stereo imaging in our brain is a much more complex phenomenon that involves several factors, including time delay, filtering, and amplitude differences. If a sound source is to our left, our left ear will receive the waveform before the right ear; this time delay is crucial in our spatial perception. Furthermore, the waveform that reaches the right ear may have traveled through our head; the resulting change in the signal (a filtering of high frequencies) will also be used by our brains in determining spatial location. Lastly, the slight amplitude difference between the left and right ears, caused by the waveform's loss of energy over the greater distance traveled, is only minimally responsible for our stereo imaging.
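As a rough illustration of the time-delay cue (not a technique any of these programs use for panning), the sketch below delays the far ear's copy of a signal by a few samples. The Woodworth approximation used for the delay, the default head radius, and all names are my own assumptions.

```python
import numpy as np

def itd_pan(signal, sr, azimuth_deg, head_radius=0.0875, c=343.0):
    """Crude interaural-time-delay sketch: delay the far ear.

    Uses the Woodworth approximation itd = r/c * (theta + sin(theta))
    for a source at azimuth theta (0 = straight ahead, +90 = right).
    """
    theta = np.radians(abs(azimuth_deg))
    itd_seconds = head_radius / c * (theta + np.sin(theta))
    delay = int(round(itd_seconds * sr))
    delayed = np.concatenate([np.zeros(delay), signal])[:len(signal)]
    if azimuth_deg >= 0:   # source on the right: the left ear hears it late
        return np.stack([delayed, signal], axis=-1)
    return np.stack([signal, delayed], axis=-1)

sr = 44100
click = np.zeros(sr // 100)
click[0] = 1.0                      # a one-sample impulse
stereo = itd_pan(click, sr, 90)     # hard right: left channel is delayed
```

For a source at 90 degrees this gives a delay on the order of 0.65 ms, i.e. roughly 29 samples at 44.1 kHz, which is the scale of cue our hearing uses.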
A lot of research has been done in stereo imaging recently,
and techniques have been developed that take much of this
research into account. There are programs and plug-ins available
that will take a monophonic sound and place it in a stereo field
through filtering and time delays; and there are others that will
take a stereo image and enhance it to make it seem even wider.
Panning in audio programs
Like most digital and analogue mixers, ProTools, Audacity, and Audition use only amplitude difference to create stereo imaging. (There are, however, an increasing number of plug-ins available that create more complex panning; search for "ambisonic", for example.) Making a session/project stereo is a simple matter of moving the pan slider of a track out of its default centre position.

TASK: MOVING A MONO SOURCE WITHIN THE STEREO FIELD

Setting Up a Session/Project

• In a new session/project, import, as a mono file, track 10 ("Key Jingle") from the Soundscapes and Sound Materials CD.

• Place it into a mono track, and begin playback.

• Locate the pan slider, and move the slider while the track is still playing.

You should immediately notice that the sound is moving between your two speakers. If you have correctly connected your system, then moving the pan slider to the left will move the sound to the left, and moving it to the right will shift the sound to the right.
Automating Panning

ProTools and Audition allow for the automation of the panning process. Unfortunately, Audacity does not.

• Display panning information in the track display.

The track will display panning automation in a similar way to volume. Unlike volume, however, the vertical scale represents left/right.

• Create panning automation using the tools available, either through breakpoints or drawing (via a pencil tool).

Establishing a relationship between the waveform and the panning automation is something with which you should experiment. Recording the panning automation while you are listening to your session can result in a very natural stereo movement.

Not much correlation between panning and waveform.

Extensive correlation between panning and waveform.


The lower track displays automation that was created based upon the visible underlying waveform data. Each new event (shown as a sudden onset transient, or increase in amplitude) has a unique panning position. After this initial attack, the panning is then either constant or moving.
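Breakpoint automation of the kind described above amounts to interpolating between (time, pan) pairs to get a pan value for every sample, then applying that per-sample pan. The sketch below is illustrative only; the breakpoint values, the linear pan law, and all names are invented.

```python
import numpy as np

def render_pan_automation(signal, sr, breakpoints):
    """Apply breakpoint pan automation to a mono signal.

    breakpoints: list of (time_seconds, pan) pairs, pan in -1..+1.
    Pan values between breakpoints are linearly interpolated,
    then applied with a simple linear pan law.
    """
    times, pans = zip(*breakpoints)
    t = np.arange(len(signal)) / sr
    pan = np.interp(t, times, pans)        # per-sample pan curve
    left = (1.0 - pan) / 2.0 * signal      # pan -1 -> all left
    right = (1.0 + pan) / 2.0 * signal     # pan +1 -> all right
    return np.stack([left, right], axis=-1)

sr = 1000                                  # low rate keeps the example small
signal = np.ones(sr)                       # one second of test signal
# sweep from hard left at 0.0 s to hard right at 1.0 s
stereo = render_pan_automation(signal, sr, [(0.0, -1.0), (1.0, 1.0)])
```

Drawing with a pencil tool is the same idea with many more, denser breakpoints.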


BOUNCING TRACKS THAT USE PANNING

Every track that you create should now contain panning information. Thus, you will now be creating stereo sessions. Setting panning in Audacity will also create stereo Audacity projects; you will simply not be able to use dynamic panning.

What happens when you bounce tracks that now include panning, for example, if you want to create a gesture that can later be imported back into your session/project?

TASK: BOUNCE A PANNED TRACK

• In a new session/project, import (as mono files) from the Soundscapes and Sound Materials CD tracks 10 ("Key Jingle"), 20 ("Briefcase"), and 32 ("City Ambience"), all of which were used in previous labs.

• Place "Briefcase" into a mono track, and create the following panning automation for the six events in the region:

Centre
Left
Moving right to left
Fifty per cent right
Fifty per cent left
Right/left

Briefcase, with the panning automation described above.


• Bounce this mono file that includes automation, making sure to create a stereo file on the bounce, then import the bounced stereo file back into your session/project.

• Place the imported file (the file may have been split into two mono files on import) into a stereo track adjacent to the original panned mono track.

• Compare the two tracks to each other.


Notice that the first event appears equally in both tracks, while the second sound event appears only in the left track. This is logical because the panning automation for the first event placed it equally in the left and right channels, while the second event was panned to the left channel.

Mute the mono track, and play the stereo track. Then mute the
stereo track and play the mono track.
The two tracks should sound identical.

USING STEREO SOURCE FILES

ProTools, Audacity, and Audition can all use stereo and mono tracks. This brings up the following question:

Should You Use Stereo Source Material?

There are a number of factors to consider when determining whether to use stereo source material or mono.

First, it is more difficult to work with stereo material. Unless you are dealing exclusively with stereo tracks, you will need to keep the tracks together while moving them, you will need to have two tracks available (i.e., empty) at all times, and you will need to process both tracks simultaneously. This is not always easy to do, and it is most often unnecessary.
Second, many types of processing will destroy the stereo
image. For example, reverberation emulates the reflection of sound
within a closed space, but the two mono files already represent
discrete left and right channels. The end result will be two mono
reverberated files.
Third, spatial movement can most often be considered an
abstract process in itself, one that should be used as an additional
method of development. Remember that sound objects in musique
concrète are to be treated as neutral: a mono sound object allows
spatialization; a stereo sound object does not.
Therefore, if you are considering doing Project 4A, you should
use mono files until bouncing gestures.
However, Project 4B is different. In dealing with soundscape
composition, we are attempting to create a referential environment
for the listener. For example, in using a soundscape recording of a
city, a stereo source will sound more real, giving a closer
impression of placing the listener in the soundscape itself.

Therefore, if you are considering doing Project 4B, you can use a combination of stereo files (for referential soundscapes) and mono files (for dynamic processing).

TASK: PANNING A MONO SOURCE VS. STEREO RECORDING

This task explores the difference between using a stereo source recording and using a monophonic version of the same source with panning.
Setting Up the Session

• In a new session/project, import track 32 ("City Ambience") from the Soundscapes and Sound Materials CD.

• Create a mono version of this stereo file, and import the mono version as well. Listen to the mono version.

As well as birds chirping and a child playing, the most prominent sounds are the car and truck. The car begins at about four seconds and continues through to ten seconds, while the truck overlaps from ten seconds to the end.
Panning the Track
We will attempt to make the cars move in the stereo field using
panning.

Set the panning to begin at eighty per cent right, then from
four seconds through ten seconds, move the panning to eighty
per cent left, leaving it there for the remainder of the region.

Panning a mono source to give a stereo quality.


• Listen to the track.

There is certainly an apparent movement to the sound; however, is it realistic? What happens to the birds throughout the recording?

• Listen to the stereo file, once through your speakers, and once through headphones.

Headphones make stereo imaging more obvious. Once a sound is played through speakers, the soundwaves will reflect off the surfaces in your listening environment and thus upset the left/right stereo balance.

Listen for the spatial placement of the following sounds:

• the chirping birds

• the child's voice

• the cars and trucks.


How does this version compare with the mono version by
itself? How does it compare with the panned mono version? How
would you describe the difference between the two versions?
The stereo version is somehow fuller, wider, and more realistic.

SOUNDSCAPE VS. MUSIQUE CONCRÈTE

The theoretical and aesthetic differences between musique concrète and soundscape composition have been discussed, and they are essentially concerned with whether the composer intends the sounds within a composition to have a meaning or whether they are supposed to be heard as abstract sound. Are there any practical implications in these differences?
Context
There are, of course, a number of practical differences. First, if a
sound must have an associative meaning for the listener, the sound
must be recognizable. This raises the issue of a sound's context. In Lab Four, the relationship of a sound object to the sound event and soundscape was discussed in detail; this is where a sound's context can be found. For example, the sound object of keys jingling could occur in any number of different sound events: opening a car door, opening the door of a house, opening a safe, opening a jail door, and so forth. Each of these events would suggest different meanings to a listener. Without the surrounding events (such as the actual door opening), the jingle of keys remains a sound object; however, it may suggest all, or any, of the above events. This ambiguity is something that we, as composers, may wish to play with.

Many sounds can be recognized outside of their natural context; others may depend upon the sound event or soundscape for their recognition. Certain day-to-day sounds have distinguishable characteristics (which we recognize from their frequency, timbre, and amplitude change) that might seem difficult to describe (think of the sound of turning pages in a newspaper), yet they would seem obvious as sound objects. Other sounds that might be easier to describe in terms of their acoustic parameters, such as turning a light switch on or off, would be difficult to recognize. (Can such sounds be considered anonymous sounds? Could you base a composition on this idea?)
Sometimes a specific sound may suggest something more
general. For example, the sounds of certain types of machinery
(such as a printing press and a bottling machine) may be
indistinguishable from one another to most listeners; however, the
concept of machinery (and industry, progress, work, labour, etc.)
can clearly be implied.
Abstract sounds that we may not actually hear can sometimes
be identified. For example, in one work, Canadian composer Robert Normandeau uses what I can only assume is the sound
of a fly caught under or inside a metal mixing bowl. I have never
personally heard a fly caught in a mixing bowl, but I have heard
flies, and I have heard objects inside mixing bowls; putting the two
separate sounds together is not a far stretch for a listener's imagination.
Finally, certain natural sounds may not have any meaning to
one listener, whereas to another they may have a great deal of
significance. One example would be the chimes on the SkyTrain; for those who have ridden Vancouver's rapid transit, the three distinctive pitches would be instantly recognized and would immediately evoke a reaction. This reaction would not be to the sound itself, but to its context, the listener's personal experiences of riding the train, which may be positive or negative. Of course, for those who have never ridden the train, the sound would have absolutely no semantic significance.
Time
Aside from context, the listener requires a certain amount of time to
recognize the sound. In more abstract composition and sound
design, we can generalize when we want a high sound or a fast sound; the origin of the sound matters less than its sonic qualities.
In presenting a sound to a listener, different durations of the
sound would provide different associations. In a task that you will


perform later, the initial sound object will be a crowd of people. The path of recognition for this sound might be as follows:

• Noise (no specific frequencies)

• Talking . . . human-generated sound

• Lots of talking . . . a gathering of people

• No other sounds, not a restaurant or bar . . . a sporting event?

• Not a lot of reverberation, but still indoors . . . people waiting for something

• Waiting for a speech? A concert?

• Quiet, reserved speech . . . not likely a rock concert. . . .

• A symphony or opera? Who is in the audience? White, upper-class people?

• My boss goes to operas . . . what a stuffed shirt. . . .

This final thought, which is obviously the most subjective, might not occur for twenty seconds, whereas the first recognition of speech would take only a second or two. Within five seconds, most listeners familiar with the sound could recognize it and place it quite accurately; once this recognition has taken place and the sound does not change in any meaningful way, the listener's personal associations with the sound might begin to form.
Therefore, in soundscape composition, in order to create this
association and personal connectedness within the listener, longer
source recordings are used.
Relationships
Once the listener has a relationship with the sound, we as
composers can begin to manipulate these associations.
What happens if you suddenly cut to another sound? The
relationship between the two sounds, whether it is a natural one or
not, is one that you have created. For example, after the crowd
soundscape, how about using the sound of an operating room?
(Most people will not recognize the sound of an operation from
personal experience, but we have seen such scenes in the media.)
What is the relationship between a passive audience and an
operation?
Processing the File
Another option open to the composer once a sound association has
been reached is to change the sound itself. This change would

amount to signal processing. The level of processing in soundscape composition can vary depending upon the composition's function. Processing can range from slight manipulation, which can highlight a natural feature of the sound, to extreme transformation to the point where the sound is unrecognizable. Either end of the continuum is an option, provided its consequence is considered.
For example, once a sound has been established, dynamic
processing (i.e., slowly introducing more and more processing to
the sound) can gradually lead the listener along the continuum
from recognizable to unrecognizable, identifiable to obscure.
Repetition of a sound source with different types of processing (suggested in previous labs as a device for creating variation yet maintaining unity in sound objects) is still an important concept. Once the listener is familiar with your sound, that sound will be more easily understood through various transformations.

TASK: PROCESSING AS A BRIDGE BETWEEN SOUNDS

This task will use processing as a method of relating two different sounds.

Setting Up a Session

• In a new session/project, import the following files from the Soundscapes and Sound Materials CD:

Track 56: "Audience"
Track 2: "River"

• Listen to them both separately.

These are two soundscape recordings with which we are familiar. Although some of the potential associations for "Audience" were suggested, as was the possibility of creating an association between it and any sound that followed, we will try to create a more abstract association, allowing listeners to assign their own semantic meaning.
What acoustic parameters can be identified as being similar between these two sounds? Both are noise-based sounds with lots of energy in the upper frequency bands (particularly "River"). One way to bridge these two sounds is to emphasize their similarities and use the same process on both.


A Strategy

The process we will use is to emphasize the high frequencies. Not only will we increase the gain of the upper frequencies, but we will also lower the gain (as much as possible) on the lower frequencies.
Processing the Files

We will process both regions through the same setting.

• Use an EQ to boost the high frequencies (above 5 kHz) in both files by approximately 8 dB. Then cut the frequencies below 1 kHz in both files by at least 12 dB.

• Listen to both processed files, and compare these to the originals.

Our processing emphasized the spectrum that already existed in both sounds. Because the originals were predominantly noise-based and high frequency, the processed versions should sound (obviously) related, but somehow surrealistic, or at least like a kind of heightened reality.
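The boost-and-cut move applied to both files can be sketched as a crude FFT-based EQ. Real EQ plug-ins use shelving or parametric filters rather than this brick-wall spectral gain, so treat the function, its cutoffs, and its names purely as an illustration of the 8 dB boost and 12 dB cut.

```python
import numpy as np

def tilt_eq(signal, sr, boost_db=8.0, cut_db=-12.0,
            hi_cutoff=5000.0, lo_cutoff=1000.0):
    """Boost everything above hi_cutoff, cut everything below lo_cutoff.

    A brick-wall spectral sketch of the lab's EQ setting: +8 dB
    above 5 kHz, -12 dB below 1 kHz, the band between untouched.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    gain = np.ones_like(freqs)
    gain[freqs >= hi_cutoff] = 10.0 ** (boost_db / 20.0)
    gain[freqs <= lo_cutoff] = 10.0 ** (cut_db / 20.0)
    return np.fft.irfft(spectrum * gain, n=len(signal))

sr = 44100
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 200 * t)      # below the 1 kHz cut
high = np.sin(2 * np.pi * 8000 * t)    # above the 5 kHz boost
processed = tilt_eq(low + high, sr)
```

Running both "Audience" and "River" through the same setting in this way is what makes their shared noise spectrum stand out.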
A Dynamic Relationship
We will now create an imitative dynamic process (real dynamic
processing will be introduced in the next lab) between the Audience
region and the processed Audience.

Place both files in adjacent tracks, both beginning at time 0:00.

Note that there will not be any phase cancellation between the
tracks, since the phase should not have been altered in the
processed version.

• Using volume automation, create a five-second fade-in on the original region, starting at time 0:00.

This will be the fade-in to the entire section.

• Create a ten-second fade-out on the original region, beginning at time :20.

• Solo this track, and listen to it.

Try to follow the identification process that was suggested


earlier; at what point would the listener begin to have a strong
reaction to this sound? Five seconds? Ten seconds? Fifteen?
Notice that the fade-out is much slower than the fade-in. The reason is that we want to get the section moving, and this first sound is not very dramatic. Because we want the audience to identify the sound, the fade-in occurs relatively quickly. However, once the sound has been established, removing it slowly allows the listener to shift focus gradually.

• On the processed version, turn the volume off for the first ten seconds, then create a ten-second fade-in, beginning at :10.
• Create a ten-second fade-out from :30 to :40.
Notice that the processed file is slowly fading in while the original is still heard at full volume. The fade-out of the original doesn't occur until the processed file is fully heard.
• Listen to the two tracks, particularly to the timbral change that occurs during the fades.
Because we boosted frequencies in the processed file, these frequencies will be heard while both the processed and original files are playing. If we had merely filtered out the lower frequencies, this subtle change would be masked while both were playing (since the missing frequencies are present in the original file!). Therefore, the effective result is that during the fade-in, you should hear the frequencies being boosted. Once the fade-in is complete, the original fades out, at which time the filtered-out lower frequencies become apparent as the original is removed.
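The automation moves in this section (original region fading in from 0:00 to :05 and out from :20 to :30; processed version silent until :10, fading in by :20 and out from :30 to :40) can be modeled as piecewise-linear gain envelopes. A minimal sketch in Python with NumPy; the `envelope` helper and the reduced sample rate are illustrative assumptions:

```python
import numpy as np

def envelope(breakpoints, sr, total_seconds):
    """Build a per-sample gain curve from (time, gain) breakpoints,
    linearly interpolated like volume-automation lines in a DAW."""
    t = np.arange(int(total_seconds * sr)) / sr
    times, gains = zip(*breakpoints)
    return np.interp(t, times, gains)

sr = 1000  # low rate keeps the demo small; real audio would use 44100
# Original "Audience": fade in 0:00-0:05, hold, fade out 0:20-0:30
orig_env = envelope([(0, 0), (5, 1), (20, 1), (30, 0), (40, 0)], sr, 40)
# Processed "Audience": silent to 0:10, fade in by 0:20, out 0:30-0:40
proc_env = envelope([(0, 0), (10, 0), (20, 1), (30, 1), (40, 0)], sr, 40)

# The section would then be mixed sample by sample as:
#   mix = original_audio * orig_env + processed_audio * proc_env
```

Note how the two envelopes overlap between :10 and :30, which is exactly where the timbral cross-fade described above takes place.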
Switching Sounds
Now we will do the opposite with "River." Instead of beginning with the original and moving to the processed version, we will begin with the processed version and fade into the original.

• Create two additional stereo tracks, and place "River" and the processed version of "River" in these tracks. Place these so that they begin at :25. They must begin at exactly the same time (right down to the millisecond); otherwise there will be unwanted cancellation of frequencies due to phasing.
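The warning about sample-accurate alignment is easy to demonstrate numerically: when two copies of the same material are summed, shared frequency components reinforce or cancel depending on the offset. A minimal Python/NumPy illustration using a pure tone (an assumption for clarity; with noise-based files like these, a misalignment shows up as comb filtering rather than total cancellation):

```python
import numpy as np

sr = 44100
freq = 441.0                 # period is exactly 100 samples at 44.1 kHz
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * freq * t)

aligned = tone + tone                 # same start time: reinforcement
shifted = tone + np.roll(tone, 50)    # half a period late: cancellation

# aligned peaks at twice the amplitude; shifted collapses toward silence
```

This is why the two "River" regions must start at exactly the same sample, not merely "about" the same time.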

• Duplicate both regions at least four times.
"Audience" is quite a long recording, whereas "River" is only about six seconds: by duplicating "River" four times, we make this gesture thirty seconds long. This would not necessarily work with most regions, since the end of one region and the beginning of the next would be quite apparent; however, the stochastic nature of this sound (with its complex changes at the microlevel yet little overall change) allows us essentially to loop the sound imperceptibly.
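The loop-by-duplication idea can be sketched numerically. Here broadband noise stands in for the "River" recording (an assumption for illustration); tiling the buffer end-to-end is the code equivalent of duplicating the region four times in the session:

```python
import numpy as np

sr = 8000                              # reduced rate for the sketch
rng = np.random.default_rng(1)
river = rng.standard_normal(6 * sr)    # one ~6-second "River" region
looped = np.tile(river, 5)             # original + 4 duplicates = 30 s

# Because the material has no attack, pitch, or rhythmic landmark,
# the seams between the tiled copies are effectively inaudible.
```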

• Fade in the processed version of "River" from :25 to :35, a ten-second fade-in.

• Fade out the processed version of "River" from :41 to :51.
• Listen to the three tracks ("Audience," the processed version of "Audience," and the processed version of "River"; mute the original version of "River") from the beginning until :51, paying particular attention to the transition between sounds.
Notice that by twenty-five seconds, once the original "Audience" has faded out, we are left with a rather abstract sound, but we have arrived at it very smoothly. In fact, we hear it more as an abstracted sound, since we have heard its progression. At this time, a related sound slowly unfolds. Although we can hear the similarity (it is also high-frequency based), because it is a newer sound, we will tend to concentrate upon it. During these next few seconds, the abstracted "Audience" sound is slowly removed.
• Create a five-second fade-in for the original "River," from about :40 until :45.
• Create a ten-second fade-out about ten seconds later. In my session, this would be from 1:00 to 1:10.
Your session should look something like the one below when it is displayed with volume automation:

[Figure: The above task completed in Audition, with four tracks of volume automation.]
• Listen to the entire session.
This is a very long, smooth gesture, with lots of time for the audience to listen carefully to the developing sounds and establish semantic relationships with them. Note that you have used only two source sounds, but you have created over a minute of material!

One question about this creation is how to continue the gesture. You have established a very long gesture made up of five- to ten-second fades, with any given sound lasting for at least ten seconds before a change occurs. If this type of gesture continues (events beginning and lasting about ten seconds), everything will become predictable, and thus uninteresting. Once a certain rhythm has been established, upsetting or varying the cycle is a way to maintain interest. For example, perhaps a few short sounds fading into silence would offer a contrast, after which a return to the ten-second rhythm would be soothing.
TO DO THIS WEEK
You should decide whether you want to continue pursuing a more abstract approach for the final project, or try the soundscape version and thereby explore the more referential aspects of sound.

Of course, don't feel that you need to limit yourself to one extreme or the other; many electroacoustic compositions successfully blend the two, or even alternate between them in different sections of the work.
In either case, you will need to think about what sounds you want to use for the project. Perhaps you began working on them in the second project, or perhaps you want to try something completely different. After exploring abstract sound in the previous assignment, you should have a better idea of what sounds will work successfully in this type of assignment.

If you haven't done so already, listen to all of the sounds on the "Soundscapes and Sound Materials" CD; maybe there are some recordings that will trigger an idea.

Once you have some idea of the sounds you want to use, think about how you can structure the composition. Think about how the piece will begin, and how it will end. Reread the unit on Compositional Strategies (Unit Seven), particularly the segments on form and structure; because this project is longer, and is not considered an exercise, you will need a larger plan by which to structure your work.