
Basic Mixing II

Mixing a Starter Mix and Static Mix.


In Basic Mixing I we explained the starter mix and the progression towards a static mix. Basically we covered dimensions 1 and 2 more than dimension 3. We did not apply any effects or add anything that was not there before. The starter mix aims for some togetherness and cleans up what is not needed, keeping what is needed with the help of the level fader, balance, EQ, compression, gate and limiter. Without adding effects that work on the mix as a whole, we try to make every instrument sound clear and at its best (starter mix), so each can be heard in its own range and together they sound as one mix. To get some headroom back you may have to switch back to the starter mix again. Changing one thing can affect the rest of the mix, so keeping track of the mix and the dimensions is a matter of checking and re-checking, and deserves constant attention. When a mix starts to get muddy because two instruments overlap in each other's frequency range (masking), you will need to correct this with separation. Remember that all instruments are placed inside the frequency spectrum; it is better to spread them out and create some headroom for them to be heard. There are quite a few tools for separation. Just as in conversation: as long as only one person is talking to you, you hear and understand well; when a whole crowd is talking, it is difficult to follow what is going on. It is a mixing fact that crowding up the mix with more and more sounds is not a good thing, so we cut out what is not needed. Making all instruments sound good in their own range is far better (dimensions). Constantly think about how the spectrum will change with what you add or remove, giving each instrument a place in the frequency range where it can shine but not intrude. Cutting out what is not needed may clear the way for other instruments to come more upfront. Instead of just boosting, try cutting (other instruments) and make some space. Basic Mixing I explains the starter mix, so read that first before you go on.
Introducing Dimension 3 (depth).
Applying dimension 3 takes us from the starter mix to the static mix (starter mix + dimension 3). Here we go further into adding effects and shaping the overall sound of the mix, the static reference mix. Adding reverb or delay (or any other effect) adds frequencies and level, so it costs some headroom. Effects can also affect placement: a stereo delay, for instance, may move an instrument out of its natural position. There are quite a few effects that affect the dimensions, and these are the tools for basic mixing. Still, we treat fader, level, balance, EQ, compression, gate and limiter separately from the rest, because these are the tools that are used most; read Basic Mixing I for more information on them. We also call finishing the dimensions finishing the static mix. The mix is called static because knobs, faders and settings apply to the whole mix: we do not automate or place events inside the timeline, we just set knobs and faders for the whole mix.
Effects.
Now for the most interesting, versatile and creative part of mixing: adding effects. Endless effects are available, in hardware or software, to create sounds or adjust them. We cannot discuss them all here, so we only focus on the most common ones, and at first on effects that work in dimension 3 (depth). Effects are often a welcome addition to a mix: a bit of reverb can do a good job, and distortion on a guitar can make it rock. Remember that each time you add an effect, it changes the frequency spectrum of the whole mix, usually adding frequencies and therefore filling up headroom more and more. A reverb may add a nice roomy sound, but it can also muddy up the mix as a whole. Cutting some lower frequencies out of the reverb signal can help clear it up again, especially the 0 Hz to 120 Hz (180 Hz) range. So knowing effects and what they do to the signal is important, keeping in mind what the effect does to the three dimensions, quality and reduction, headroom, etc. Just adding effects in a row may sound good at first; later on, when your ears are not fatigued, you might think differently. Do not rush into adding effects; think about what is needed for the mix to get better. For most effects we also like to cut the lower frequencies, because they can influence the bass range from 0 Hz to 120 Hz. Be gentle with effects; muddiness and fatigued ears are just around the corner. Because there is a vast amount of effects available, there is no general recipe for mixing. We all try to do our best, but here we enter the creative field and really are on our own. You can pick up tricks and learn from others, and there is a good deal of straightforward information on the net. It can be debated, it can be funny, it can be good or bad. Everything stands or falls with how much experience you have with mixing and how well you understand it. Time and learning are again factors of success. Whenever you are tired of not getting what you need out of your mix, be gentle with yourself: do a re-check or just stay away for a while and come back later. Do you really need all those effects to get a good sound? Remember: Less Is More! The more basic approach often works better and is faster and cleaner. Crowding up the spectrum with effects is never a good idea. The more natural effects sound to our ears, the better we can use them to affect the dimensions of the mix.
Track Effects.
Whenever you need a single track or instrument to sound different, you can add a track effect to it. This goes for all kinds of effects, but on single instrument tracks the fader, level, EQ, compression, gate and limiter are the ones most commonly used for mixing purposes. Keep everything adjustable per instrument or track; this helps even when adjusting the final mix. Track effects are most common on computers and digital systems, where you can place many of them, but processing power drops as you add more, so it can be rewarding to keep effects to a minimum. Less is more.
Send Effects.
On analogue mixers send effects might be all you have; digital systems have send effects as well. Whenever you need an effect that works across several instruments, you can route their signals to a send effect. Most likely the return of the effect comes up on the master fader. Send effects are efficient with processing power, because only one instance of the effect serves multiple instruments or tracks. It can also be fun to route send effects and be creative with sound, trying effects one after another before deciding what works best. Send effects work as a collective on all instruments sent to them. Having two or more send effect channels can help layer the mix, but we try to stay away from send effects when we could use groups instead.
Group Effects.
As we layered instruments and grouped them, we gave each layer of our mix a separate group track. A compressor for welding purposes or an EQ can be placed on a group track. Compared to a group track, send effects can be harder to keep track of: the routing of a send effect can be fed from different tracks or instruments, and sometimes this gets confusing. Place effects on group tracks when you can; otherwise use a send effect track. Especially when you would apply the same effect on several separate group tracks (repeated instances of the same reverb), you could choose to use just one instance on a send track instead.
Pre-Fader and Post-Fader.
The choice to place effects pre-fader or post-fader is a matter of purpose. Any effect placed pre-fader as an insert affects the track signal before the fader level, balance, etc. are applied. For track compression on vocals, for instance, we mainly use a pre-fader compressor. This way the threshold setting of the compressor is not affected by the fader setting of the track, and we can adjust the level of the vocals while keeping the same amount of compression. If we place the compressor post-fader, the threshold is affected by the setting of our track fader (and balance, etc.), so the amount of reduction changes with it. For vocals we choose pre-fader compression, so the amount of reduction stays the same when we adjust the single track. The same kind of reasoning applies to all other effects we place inside the mix. Placing pre-fader means the signal is first affected by the effects in place and then by the track mixer settings (fader, balance, pan, gain, etc.). Placing post-fader means first the track settings and then the effects. Post-fader effects are, for instance, reverb, delay, echo and other sound manipulation effects such as chorus, phasing and modulation.
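To illustrate why the order matters, here is a minimal sketch in Python with purely hypothetical numbers (a toy hard-knee compressor working on a single peak value, not a real signal chain): a pre-fader compressor reduces the same amount regardless of the fader, while a post-fader compressor reacts to the fader setting.

# Minimal sketch: compressor placement relative to the fader (hypothetical values).
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Toy hard-knee compressor: anything above the threshold is reduced by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

peak_db = -6.0    # vocal peak hitting the channel
fader_db = -10.0  # we pull the track down in the mix

# Pre-fader: compress first, then apply the fader.
pre_fader = compress_db(peak_db) + fader_db
# Post-fader: the fader changes the level that reaches the threshold.
post_fader = compress_db(peak_db + fader_db)

print(pre_fader)   # -25.0 -> 9 dB of gain reduction (-6 to -15), independent of the fader
print(post_fader)  # -17.5 -> only 1.5 dB of reduction, the fader pulled the peak near the threshold

The point of the sketch is only the ordering: with pre-fader compression the reduction is fixed, so fader moves change the vocal level without changing how hard it is compressed.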
Our Stage Plan according to dimension 3 (depth).
We have discussed and applied the first dimension (panorama) and second dimension (frequency spectrum) to get a good starter mix. Now that we want to finish off all dimensions according to our stage plan, we can apply some depth in dimension 3. A mix with all dimensions in place is the static mix. We are mostly talking about reverberation sounds that influence our hearing in perceiving depth; as we have finished dimensions 1 and 2 (2D), dimension 3 should now be our first concern. In dimension 1 we set panorama, in dimension 2 we set frequency range. By rolling off some trebles or highs we can already suggest distance in dimension 3, but dimension 3 is mainly a reverberation effect that makes our ears believe there is some room or distance. Suddenly the field becomes 3D, with all dimensions in place.
Depth.
Our hearing can calculate or guess distance (depth) by hearing the dry signal and its reverberations. Especially the pre-delay between the dry signal and the first reverberation makes us perceive depth. Reverberation occurs when a dry sound hits solid objects such as walls or anything else placed in a room. Even outdoor objects like water, mountains, valleys or tunnels somehow reflect sound back to us (echo). It is mainly the time between the dry signal (0 ms) and the first reverberation signals (> 0 ms, returning to our ears a bit later) that makes our brain understand depth or perceive distance. Pre-delay is therefore an important parameter of any delay or reverb effect when we are aiming for depth or distance in dimension 3, because the first transients of any sound are what our brain reacts to and recognizes, and this goes for the dry sound as well as for the reverberation sounds. The most used effects for perceiving depth or distance are reverb and delay. As explained before, in dimension 2 (frequency range) we can also make the dry signal appear distant by rolling off trebles or higher frequencies. Depth means distance. In dimension 1 (panorama), when we place a dry signal more to the left, the left speaker plays more than the right speaker does. But with reverberation in dimension 3, to make depth perceived as a room, we must transmit the 3D Spatial Information to the listener; we could, for example, place the reverberation on the opposite side, on the right. When a dry signal plays a note, the reverberations returning from the room slightly later in time (especially the first pre-delay) let our brain calculate some kind of distance (depth). In combination with panorama (dimension 1), we can use dimension 3 (and 2) to apply our stage plan. Apart from using treble or high frequency roll-off in dimension 2 to suggest depth, the commonly used effects for dimension 3 are reverb and delay. Apart from creative aspects that we will discuss later on, we use reverb or delay to surround the dry instrument (transients, sustain) in our stage plan, tracks or mix with some more naturally perceived acoustics. This is the 3D Spatial Information: the information needed to make our hearing and brain believe in depth or distance.
Delay.
Delay is the simplest effect: it repeats the dry input signal after a certain delay time. Basically a delay is a kind of reverberation, although less crowded than a reverb, using fewer reflections. A delay does not usually represent a room; it simply delays the dry signal until the first repeat is played back. The delayed signal may either be played back multiple times, or fed back into the input signal (feedback), to create the sound of a repeating, decaying echo. The first delay effects were achieved using tape loops. With some feedback the delay effect becomes more exciting. In reggae the echo or delay effect is used in various ways, and feedback is important for creating that 'dub' effect. Delays and gates are often synced to the tempo of the track.
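To make the feedback idea concrete, here is a minimal sketch of a digital delay line with feedback in Python/NumPy. The parameter values are illustrative only, not a recipe or any particular plugin.

import numpy as np

def feedback_delay(dry, sample_rate, delay_ms=375.0, feedback=0.4, mix=0.3):
    """Simple feedback delay: each repeat is fed back into the delay line,
    so the echoes decay over time (as long as feedback < 1)."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    out = np.zeros_like(dry)
    buffer = np.zeros(delay_samples)               # circular delay buffer
    idx = 0
    for n in range(len(dry)):
        delayed = buffer[idx]                      # what went into the line delay_ms ago
        out[n] = dry[n] + mix * delayed            # blend the echo under the dry signal
        buffer[idx] = dry[n] + feedback * delayed  # feed the echo back into the line
        idx = (idx + 1) % delay_samples
    return out

# Usage: 375 ms is one dotted eighth at 120 BPM, a common tempo-synced setting.
sr = 44100
click = np.zeros(sr)
click[0] = 1.0                                     # a single impulse to hear the repeats
echoed = feedback_delay(click, sr)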

Mostly the delay signal has some kind of ADSR or time setting, so the delayed signal fades out over time. A delay becomes interesting at faster tempos and can save you from reaching for a reverb instead. A delay is less muddy and fuzzy than a reverb, so a delay will usually keep instruments upfront. On modern systems the delay can be synced to the tempo in beats and bars. An early version of this idea is the multi tap delay.

Sometimes the delay has a step sequencer or matrix; nice settings are 3/16 and 8/16. Delays come in various shapes and sizes, and discussing them all would be a hassle, but in general mixing and for improving the sound of separate instruments, delay is a commonly used effect. Most of the time a delay is used as a creative tool, but it can also be used for perceiving depth. As a start it is often better to use a delay instead of a reverb; sometimes you can create a clearer reverb-like effect using a delay and some creative settings. Remember that a delay (especially the delayed return of the dry signal) can be perceived as depth or distance (dimension 3). Delay leaves more headroom than reverb and sounds more open. A ping-pong delay is a crossed-over delay that combines the left and right signals.

A ping-pong delay or stereo delay can affect the panorama, and with it the dimensions. Watch out for these kinds of stereo effects and only use them when you need them. A ping-pong or stereo delay can be creative, but it can also help avoid masking, temporarily unmasking a signal by swaying the automated stereo delay. The trick with mixing delay is to set it up inside the mix so that you adjust the delay to the point where you don't really hear it, but it is there. For main vocals that must stay upfront, we could use a delay, keeping the original main vocals audible while still having the ambient early reflections. Use a gate to control what passes into the delay or comes out of it; this separates the delay effect even more so it does not become a mix filler. To prevent muddiness, use EQ to cut the bottom end from 0 Hz to 120 Hz (180 Hz). Delay is commonly used as a send effect and less often as a track effect, so the best place for it is on a send or group. Remember that for perceiving depth or distance we need the dry signal to be heard unaffected; the delay sits on top of the dry signal. We can roll off some highs to create more distance or depth. When you need an instrument to sound upfront, keep the high trebles in place and use little or no pre-delay. When you need an instrument to sound distant, roll off some high trebles and use more pre-delay. Conflicting signals inside the 3D Spatial Information will confuse our brain: rolling off highs for distance and then setting no pre-delay sends conflicting information. The natural world of sound our hearing likes so much is sometimes hard to recreate while mixing.
Calculating Delay to tempo.
Delay interacts strongly with tempo. Percussive instruments (drums) that need to be heard rhythmically need their delays in sync with the tempo. You can fiddle around until you find a good setting, but calculating the delay time gives you a hint of where to start. For a tempo delay, calculate 240 / BPM = the delay time of one bar (4/4) in seconds; divide this by four to give the delay time per crotchet; multiply by 1000 to convert seconds to milliseconds. So when you know the BPM of a mix, we can calculate the delay time of one beat with:
60000 / BPM = delay time in ms.
Or for any other kind of note:
(60 / tempo in BPM) * 1000 ms * 0.75 (dotted quaver)
(60 / tempo in BPM) * 1000 ms * 2 (half note)
(60 / tempo in BPM) * 1000 ms * 0.667 (crotchet triplet)
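A minimal sketch of these calculations in Python, using the note-value factors listed above (the function name is just for illustration):

def delay_times_ms(bpm):
    """Delay times in milliseconds for common note values at a given tempo."""
    quarter = 60000.0 / bpm                 # one crotchet (quarter note) in ms
    return {
        "quarter (crotchet)":       quarter,
        "dotted eighth (x 0.75)":   quarter * 0.75,
        "half note (x 2)":          quarter * 2,
        "quarter triplet (x 0.667)": quarter * 2 / 3,
        "one bar of 4/4 (x 4)":     quarter * 4,
    }

# Example: at 120 BPM a quarter note is 500 ms and a dotted eighth is 375 ms.
print(delay_times_ms(120))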
It is good to have some control while mixing, so a separate controller can help to live-mix the delay or any other effect (we will discuss this under Dynamic Mixing). Long delay times are recognized by the brain as echo. Short delay times are recognized as ambience or psycho-acoustics (small room reverb, ambience) and can affect the spread of the sound (depth or distance). Reverse delay or backwards echo is a sample played backwards with a delay added, then reversed again. For a reggae dub delay, use a single delay return and feed the delay output back into itself. The aux send can be used in real time (or with automation, dynamic mixing) to dub over the original sound. (Boost some EQ around 3 kHz and roll off some highs and lows for dub.)
The most familiar use of delay processors is by guitarists in popular music, employing delay to produce densely overlaid textures in rhythms complementary to the tempo of the overall piece (a creative aspect). Electronic musicians (synths, sampling) use delay for similar effects, and, less frequently, vocalists and other instrumentalists use it to add a dense or ethereal quality to their playing without pushing them backwards on the stage, keeping it more upfront compared to reverb. Extremely long delays of 10 seconds or more are often used to create loops of a whole musical phrase. Sometimes a delay that is not synced to tempo is used for a solo instrument (playing a solo for a while and then returning to the normal song / static mix reference level).
Echoplex is a term often applied to the use of multiple echoes which recur in approximate synchronization
with a musical rhythm, so that the notes played combine and recombine in interesting ways. On computers
or digital systems this can be achieved by a step sequencer or matrix.
Doubling echo is produced by adding a short delay to a recorded sound. Delays of 30 ms to 50 ms are the most common; longer delay times become slapback echo, so sync them to tempo. Mixing the original and delayed sounds creates an effect similar to double tracking or a unison performance.
Slapback echo uses a longer delay time (75 ms to 250 ms), with little or no feedback. The effect is characteristic of vocals on 1950s rock and roll records, particularly those issued by Sun Studio. It is also sometimes used on instruments, particularly drums and percussion. Slapback was often produced by re-feeding the output signal from the playback head of a tape recorder to its record head; the physical space between the heads, the speed of the tape and the chosen volume were the main controlling factors. Analog and later digital delay machines can also easily produce the effect. Slapback delay sits between 20 and 80 ms, with no feedback; sync it to tempo to make it rhythmically correct.
Flanging, Chorus and Reverberation are all delay-based sound effects. With flanging and chorus, the delay
time is very short and usually modulated. With reverberation there are multiple delays and feedback so that
individual echoes are blurred together, recreating the sound of an acoustic space.
In audio reinforcement a very short delay often of only a few milliseconds, is used to compensate for the
relatively slow passage of sound across a large venue. The unmodified signal is not played, and the delayed
signal is set to leave the speakers at the same time or slightly later than the sounds passing from the stage.
This technique allows audio engineers to use additional speaker systems placed away from the stage, but
give the illusion that all sound originated from the stage. The purpose is to deliver sufficient sound volume
to the back of the venue without resorting to excessive sound volumes of a large sound system placed near
the stage exclusively.
A delay tail on the lead vocals makes the vocals appear warmer and fuller, without putting their frontal placement in jeopardy. The more the delay appears in the mix, the more it covers the vocals; ducking the delay on the first part of the vocal can clear up fuzziness. Delays can be used as a creative event but can also give a certain distance. Even more artistic is a tape echo (Band Echo) combined with a spring reverb; this is called dub. It is also common to give the main vocals a bit of ambient reverb (small room, drum booth) after the delay, to create some more togetherness with the rest of the mix.
Delays generate space when tempo-synced. Divide 60000 ms (one minute) by the song tempo (quarter notes per minute) to get the delay time in ms. Variations in delay time drive (shorter) or drag (longer) the rhythmic feeling; we could use automation for this, but that comes after we finish the static mix. Avoid phasing with delays under 10 ms. Single delays of 10 to 30 ms thicken up a sound while the original sound stays localized (upfront); such single delays are perceived as direct sound events (early reflections). Stereo delays are suited for a rich sound with a low-level room / ambience effect. Delays between 30 and 60 ms are called doubling effects (Beatles). Delays between 60 and 100 ms are slap echo (Elvis). Stereo delays under 100 ms create acoustic space; over 100 ms they create echo, distance and space. The longer the delay time, the more indirect the sound appears. Delay tends to blur sound less than reverb. To discretely create space, delay should be used very subtly, so that you miss it when the FX channel is muted but don't really perceive the delay when it is turned back on. Echo longer than 100 ms that is not tempo-synced is good for creating an effect that is clearly heard as such in the mix (solos).
Echo.
To simulate the effect of reverberation in a large hall or cavern, one or several delayed signals are added to the original signal. To be perceived as echo, the pre-delay has to be more than about 50 ms. Short of actually playing a sound in the desired environment, the effect of echo can be implemented using either digital or analog methods. Analog echo effects are implemented using tape delays (Band Echo) or spring reverbs. When large numbers of delayed signals are mixed over several seconds, the resulting sound has the effect of being presented in a large room, and it is more commonly called reverberation, or reverb for short. Reverse echo is a swelling effect created by reversing an audio signal and recording echo or delay while the signal runs in reverse; when played back forwards again, the last echoes are heard before the affected sound, creating a rush-like swell preceding and during playback. Jimmy Page of Led Zeppelin claims to be the inventor of this effect, which can be heard in the bridge of Whole Lotta Love. An echo is a reflection of sound, arriving at the listener some time after the direct sound (early reflections). Typical examples are the echo produced by the bottom of a well, by a building, or by the walls of an enclosed room. A true echo is a single reflection of the sound source (dry signal). The delay time is the extra distance travelled divided by the speed of sound (pre-delay). If so many reflections arrive at a listener that they are unable to distinguish between them, the proper term is reverberation. An echo can be explained as a wave that has been reflected by a discontinuity in the propagation medium and returns with sufficient magnitude and delay to be perceived. Echoes are reflected back from walls or hard surfaces like mountains. At audible frequencies, the human ear cannot distinguish an echo from the original sound if the delay is less than about 1/20 of a second (50 ms). Since the velocity of sound is approximately 343 m/s at a normal room temperature of about 20 C, the reflecting surface must be more than about 8.5 meters away from source and listener (roughly 17 m of extra travel) for an echo to be heard. Signals that return within 50 ms are perceived as ambience; delays between 100 ms and 300 ms with some feedback are clearly heard as echo.
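As a quick worked example of the relation between distance and delay time (speed of sound roughly 343 m/s at 20 C; the wall distance below is just an illustration):

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def predelay_ms(extra_path_m):
    """Delay of a reflection relative to the direct sound, given the extra
    distance the reflected sound travels (out to the surface and back)."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

# A wall 10 m behind the performer adds about 20 m of travel:
print(predelay_ms(20.0))        # ~58 ms -> just long enough to be heard as a discrete echo

# The ~50 ms audibility limit corresponds to roughly 17 m of extra travel,
# i.e. a reflecting surface about 8.5 m away from source and listener.
print(0.05 * SPEED_OF_SOUND)    # ~17 m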
Reverb.
Reverb stands for reverberation: reflections of sound hitting objects. The usual objects are walls, floor and ceiling, but any object that reflects the dry signal back to the listener contributes to the reverb signal. A reverb usually represents a room, hall, booth, cavern, cathedral or ambience. A reverb transmits more reflections than a delay, and can therefore easily overcrowd the mix. Deep sounds have more energy than high sounds; high frequencies lose more level than lower frequencies over the same distance. There are three areas of reverb perception. First, there is the whole issue of an appealing, good natural reverb sound. Second, the sense of distance (depth, pre-delay), which is shaped by the dry signal (direct sound energy, transients) and the start of the early reflections from the reverb (or room). Reflections in nearly any time frame will give a feeling that you are at some distance from the originating sound. This distance effect is made up of the original direct sound and its relationship to its duplicate delays. The direction of the echo or early reflections is also important and must be placed so that it sounds natural to our ears (dimension 3). We can use a nice roll-off on the trebles to add some more distance (dimension 2).
Quality of reverberation.
Go through your available reverbs and examine them all. A reverb may sound good while playing solo, yet sound bad when you hear the whole mix. Bad reverbs give weak stage depth in the final mix and sound fuzzy or muddy; they need a lot of reverberation power inside the mix to transmit the 3D Spatial Information to our listening ears.

A good reverb is perceived as depth by the listener (stage depth). A bad reverb is less effective at conveying depth and has to be set louder, so it will muddy or fuzz up your mix faster than a good reverb. Test your reverbs with a drum booth preset (ambience) and a dry drum track, and sort out which reverbs are best. If you have a reverb that sounds naturally good, and switching it off makes the drums sound flat again, then you have a good reverb! Write it down for later use, because when you need 3D Spatial Information inside your mix, you do not want to go through all your reverbs to find a good one (or make a bad decision by settling for a bad reverb). It is a timesaver when you already know which reverbs sound best; they can be used in other mixes as well. On today's digital systems, impulse response reverbs are likely to be a good way of transferring 3D Spatial Information. With a good set of naturally sampled rooms and ambiences, an impulse response reverb sounds most natural and gives depth in most cases without adding too much mud or fuzziness. Combining it in the mix with an algorithmic reverb (based on calculations only) can be a good way of balancing processing power against quality of sound. But never stay in the mix with a dull-sounding reverb that does not add anything and does not transfer the 3D Spatial Information that you need. It is crucial to know what reverb is about, and this can take quite some time to accomplish. Overdoing the reverb is a common beginner's problem; try setting the reverb level as you think it should be, and then reducing it by 4 to 5 dB. The masking effect applies to effects as well as to the original signal: unmasking a reverb path can mean you need less reverb level, have a cleared pathway and keep more headroom / dynamics. If confused, write the delay or reverb (depth) pathways into your stage plan, or pre-plan this whole subject. Masking is always there, but reducing it as much as we can is a better goal than just boosting and raising levels.
On a digital system, a good reverb needs processing power to shine; the calculations needed for a good reverb are immense. So do check out all your reverbs in the mix: a good reverb pays off in stage depth and can be heard at lower levels. It can transmit the 3D Spatial Information without overpowering the mix and will create more depth or distance, being persuasive with less power. Just add a touch of good reverb and you will notice that you need less level to transmit the 3D Spatial Information with the right kind of reverb. Even a good reverb must be set a bit higher in level than you might naturally want, just to transfer the acoustics to your ears. The acoustics or 3D Spatial Information contains the information of the dry signal and its reverberations and therefore lets us perceive distance and depth (dimension 3). This is only accomplished by getting the 3D Spatial Information onto the listener's ears. When you have a low-level reverb setting, you may not hear the 3D Spatial Information correctly and it falls behind in the mix (masking); then apply more and push the reverb to a higher level. With a good quality, correctly chosen reverb you can have enough reverb to transfer the 3D Spatial Information and still not flood the mix with reverb (fuzz or mud, masking, unmasking). Best is to switch between the dry mix and the mix with reverb and repeat this a few times (while listening to the whole mix), adjusting the reverb level until you are happy with the combination of the dry signal and the reverb on top of it. When the reverb sounds muddy or fuzzy while doing this, either choose another reverb, EQ the reverb, or remove some low frequencies (0 Hz to 120 Hz, up to 180 Hz or even more). Muddiness can easily be avoided with EQ, but maybe you can find good quality presets for different kinds of purposes: a wisely chosen reverb will just work better and produce less muddiness or fuzziness.
Reverb is a sound that returns all the countless reflections of a room (or any object in its path), from all directions and distances at various levels. These reflections can be much lower in level (-70 dB to -90 dB) than the dry input signal, but the listener will nevertheless perceive the 3D Spatial Information and can guess some kind of distance or depth. Even if noise is added, the spatial information is still there; we try to keep noise away from it all the same. Basically the dry signal (especially the transients) must come through unaffected, so the listener can hear the transients and measure distance by hearing the reverberation that follows (pre-delay). If a delay arrives within 15 ms of the original source signal, it will create imaging or panorama problems: for example, if you have a sound panned to the center and a delay of 1 ms to 15 ms on the right, what you will hear is the image in the center shifting to the left. This is caused by the characteristics of human hearing in its relationship to localization. The ear perceives localization because a sound wave arrives at one ear slightly later than at the other, due to the difference in path length. This is an innate survival mechanism of human hearing, otherwise known as the Haas effect. If a delay of 1 ms to 15 ms is brought back and panned to the same position as the original, you will create phasing effects. Our hearing also perceives louder or softer signals as being nearer or more distant. If a delayed signal arrives later than 15 ms but before roughly 100 ms, it creates more depth or distance (dimension 3): you have alerted your psycho-aural response, which tells you that you are listening to the sound in a reflective environment, and now the brain can guess the distance better. If you only heard the original dry sound (transients), without the psycho-aural response (reverberation transients), the effect would be that of standing in an open field (panorama and depth together). Our stage plan is based on dimensions 1 and 3 for the most part, but rolling off some highs on distant instruments or tracks in dimension 2 helps the listener perceive distance better. Dimension 2 can be used on the dry signal and also on its effects (reverberations). Where most mixes really go wrong is an unwisely chosen reverb (unwise according to the stage plan, maybe conflicting information), or contradictory information such as a large, distant reverb that keeps lots of highs.
Most people can imagine how a sound behaves in a church, a cathedral, a large hall, etc. Most of that sound is just natural events, originating from nature, what we hear in our own world in real life. When we mix a piece of music together it will soon sound dry and unnatural. Reverb is a tool to add that natural feel and make the listener feel at home. Flat mixes tend to have none of this natural reverb, so we can add some reverb (or other effects) to get a more natural sound (ambience). Mostly, when people have a reverb in hardware or as a plugin, they tend to search for a reverb that sounds 'good'. This method can be time consuming, but it is worthwhile to examine your effects, try them out extensively and come up with a good list of presets. You can try to make a dry trumpet sound natural by using a reverb and searching for a suitable sound, but it is better to keep it all consistent with the laws of natural hearing. Some people can imagine in their head how things will sound in their natural context. Stage planning in the three dimensions is important, but when it comes down to creating sounds in that 3D environment, imagination can be a helpful tool. Maybe the trumpet sounds best in the place where you can imagine it to be; then maybe you can select a suitable preset (such as a large hall preset) faster, and fiddle a bit with the controls to make the large hall fit. Anyway, some people just hunt down suitable presets, while others think before starting off, imagine how it might sound, make a stage plan and then take the most suitable preset straight away. Reverb or delay can enhance the natural sound of your mix. Every element of the three dimensions, such as volume, panning, EQ, compression and depth, can be seen as a control towards a natural sound. Like delay, reverb is a tool to control depth. Other effects such as flanging and phasing are more unnatural sounds. Effects are nice, but remember to know what purpose you are using them for. Mostly I think a natural sound is better than a completely dry sound, so every sound could at least use a small reverb or ambience; depth in the form of natural reverbs or delays / early reflections is what we hear 99% of the time in our daily lives. However minimally used in a mix, natural depth will ease the listener's mind and is likely to be better. However easy reverb and delay are to set up, it will always be difficult to mimic the natural world.
Basic Reverb rules.
When using more than one reverb, organize them by room size. Reverb tends to blur the mix more than delay. The balance between space and distance can be controlled with the effect level. Reverb length, particularly with gated reverbs and snare reverbs, should be tempo-synced; a snare reverb tends to end on the next full beat. Reverb with very short decay times creates discrete places. The delay time, together with the level, controls how much distance is created. Rich treble content indicates nearness; a lack of treble indicates distance. The main ambience, usually the drum ambience, should be mixed discretely. The problem with selecting presets is that reverb should never be judged in solo mode, but always listened to and selected in full mix mode. For instruments that are unfundamental (and maybe some fundamental ones), place the original dry signal left and the reverb signal right, or vice versa. Test a good reverb on a whole mix and see if it still stands out; you cannot judge a good reverb solo. Take a dry drum group and set up an ambience or small booth reverb. Switch the reverb on and off: a good reverb does not need to be very loud, but you should miss it when it is turned off. If the reverb sounds natural, then you have an excellent reverb preset or device.
Reverb Controls.
Pre-delay - The distance in time between the onset of the original sound and the beginning of the reverberation sound, expressed in milliseconds (ms). Pre-delay is an important parameter for setting distance or depth (dimension 3). Pre-delay here is the time span from the direct sound to the first reflections added by the reverb; the longer the time span, the greater the distance of the sound source from the listener. Pre-delay on percussive instruments (drums) must be used with great caution. For all percussion instruments, including drums and bass, use no pre-delay or less than 10 ms, checking the rhythm (we can use a high-treble roll-off to set distance instead). A high pre-delay on choirs and strings can send them to the back rows (stage planning). Pre-delay sets the distance in a reverb. High pre-delay times suggest closeness, but sound more fluttery and less tight. Pre-delay between 50 and 100 ms sounds slappy when not synced; drums and bass should have reverb with no pre-delay or a very low one (0 to 10 ms), and if it really has to be longer (rather don't), sync it to tempo (always). All sounds related to rhythm, such as drums and bass, should have reverb with almost no pre-delay, up to 10 ms; check the rhythmic consistency. High pre-delays of up to 60 ms are good for choirs and strings, to put them at the back of the stage. With very acoustic, natural mixes, follow the natural behavior of pre-delay: longer pre-delay times for nearby sources and shorter for far away. In pop music, use the opposite approach: short pre-delay for nearby and long for far away. With percussive instruments use short pre-delay times or sync to the rhythm/tempo. If reverb muddies up the dry signal, try a higher pre-delay value and sync it. The quality of the early reflections of a reverb is important, so only use the best reverb and delay/reflection plugins. A reverb may sound good as a large hall or big room / stadium, yet fail when early reflections or ambience are what is needed. A good reverb is half the work and means a good start, so keep track of reverb presets for later use. Instruments that need to be close or upfront should only use small reverbs or room/booth ambience reverbs, keeping the trebles alive (don't cut) and with no fuzziness or blur, so that the early reflections (the transients of the reverb) are clearly heard. Instruments that sit further back can use larger-spaced and duller reverbs.
Deep sounds have more energy than high sounds, and high frequencies lose more level than low frequencies over the same distance. The greater the distance between the listener and the sound event, the lower the proportion of high frequencies in the reverb signal. This is why a treble roll-off on the reverb signal is one of the most effective psycho-acoustic means of representing distance to a sound source, since our ears interpret this information subconsciously. Reverb should have more treble for close sounds at the front of the mix and less treble for sounds at the back.
Decay Time - The length of time it takes the reverberation to drop by 60 dB in level after the initial sound has stopped.
Diffusion - If the diffusion is set high (reflections very close together in time), the reverb sounds very smooth. If it is low, you might start to hear discrete delays that can clutter the sound.
Room size - The larger the number, the bigger the size of the reverb space, the bigger the room is perceived.
Some preset programs will introduce more early reflections into the reverb algorithm.
Modulation Rate and Depth - Randomly shifts the time and intensity of the early reflections, creating a
more authentic effect. If using a lot of this function you need to be aware of any pitch variances of signals
with a lot of harmonic content.
Density - The amount of first reflections, early reflections and the time difference between them. You also
have control over the amount of this effect in the reverb mix. Often used for creating good room sounds for
drums.
Frequency Controls - All reverb loses high frequency content over time. If you EQ a lot of high-end over the
diffused part of the reverb it tends to sound very unrealistic (use quality EQ or oversampling EQ). In most
Plate and Hall algorithms the high frequency response gradually tapers off over time. There are also
frequency level controls at various low frequencies to keep the reverb from sounding muddy.
Reverb is generally used as a group effect or send effect and sometimes as an insert effect (as a send or group effect it keeps the dry signal intact). Usually the reverb send is placed post-fader; that way, when you move the track fader, the balance between the dry signal and the reverb stays the same. Set the reverb mix or ratio to 100% wet: since the dry signal is already heard (the reverb is placed as a send effect), we do not have to mix the dry signal into the reverb again. Pre-delay and the frequency range of the reverb signal are perceived as depth. Test your mix at low levels and see if the reverb is still effective; reverb or 3D Spatial Information is easier to perceive at high levels (which also fatigue your ears), but your mix must still work when listened to at softer levels. A well unmasked, three-dimensional mix stands up when played at any level. A good reverb does not need to be as audible as a bad one, but you will miss it when it is muted from the mix.
The treble roll-off of a reverb signal is the most powerful way to suggest distance or depth in the third dimension, although strictly we are adjusting dimension 2 (frequency spectrum) to do it. Vocals, for instance, should sound upfront with their trebles active, so here we do not roll off. Choirs can be sent to the back of the stage with less treble, so here we roll off more. For events at the front, select rich reverbs; for events at the back, select duller reverbs. If needed, use an EQ in front of or behind the reverb to set the distance or correct the reverb signal. Don't contradict dimensions 2 and 3: when you set up an ambience reverb for close, upfront fundamental instruments, do not roll off their high frequency range, so everything stays upfront. Think ahead and avoid contradicting 3D Spatial Information; use a stage plan and act accordingly.
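As a minimal sketch of such a treble roll-off (the cutoff values are illustrative assumptions, not a rule), a simple Butterworth low-pass on a reverb return using SciPy:

import numpy as np
from scipy.signal import butter, sosfilt

def roll_off_trebles(reverb_return, sample_rate, cutoff_hz=4000.0):
    """Low-pass the reverb return: the lower the cutoff, the duller the reverb
    and the further back the source appears to sit."""
    sos = butter(2, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfilt(sos, reverb_return)

# Usage: a bright return (e.g. 8 kHz cutoff) for upfront sources,
# a dull one (e.g. 3 kHz) for sources placed at the back of the stage.
sr = 44100
noise = np.random.randn(sr)                  # stand-in for a reverb return signal
backstage = roll_off_trebles(noise, sr, cutoff_hz=3000.0)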
Again, the reverb is placed as a send effect or group effect. This way we can use the reverb for one or more instruments together (group tracks). Placed on group tracks it can give more welding and layering, more togetherness. As a send effect the reverb does not affect the dry signal, which agrees with our natural hearing: the dry signal is crucial and must be kept (transients intact). On top of the dry signal sits the reverbed signal, so our hearing accepts the distance / depth. The dry signal is always present in natural reverberation. As a creative effect we could use only the reverb signal, but for perceiving depth naturally we need the dry transient signal to be present as well as the reverb signal. Setting the reverb up as an insert effect is not common and is mostly done out of artistic freedom; even then, place the reverb post-fader and 100% wet and adjust the reverb controls until the sound is correct. Sometimes only one instrument needs a reverb of its own, so we could insert a reverb and mix the reverb on that instrument track (for instance the snare). Still, routing to a group track is best, even if that means just one single instrument is routed to this group. Group tracks or send tracks are good for reverbs because they save processing power and layer or weld the group for more togetherness, summing up towards the master bus fader. For instance, a reverb or delay can do some further welding and blending on a group track, forming a layer; each layer could have its own reverb. First resort to EQ and compression for groups, maybe some gating or limiting, then route to a reverb (delay, echo, effect). Maybe roll off some lows and highs first.
How many reverbs you need inside the mix depends on your mixing technique, but four or more reverbs on a basic mix are quite common. One well-chosen reverb can replace several badly chosen ones. Sometimes there is little need for reverb and the style of music needs to stay dry (just some ambience); sometimes there is room for a lot of reverb and it is needed to create the space (distance, depth). If required you can add a delay after the reverb; this way you can spread the reverb signal more (stereo delay, watch the correlation meter) so that it becomes clearer, transmits coherent 3D Spatial Information and avoids masking. Sometimes just a few timed automation events are needed to temporarily avoid masking. When the reverb sits behind the mix we call this the masking effect: the reverb is masked by the dry signal of the mix, or by individual instruments or tracks. Just adding some delay or a bit of panning can help the reverb jump out from behind its masking partner and be freed again. As a more drastic measure you could use some widening or a stereo expander (watch the correlation). Automation becomes handy when only a part of the timeline is masked. Syncing reverb to tempo is worth it on longer reverbs or delays.
Gated Reverb - A setting where the reverb stays at one level over time and then suddenly shuts off, often heard on snare drum sounds in the 80s. Gated reverbs are good for keeping rhythmic content intact. Basically a gated reverb is two devices in one: a reverb and a gate. For the 70s reverb-only effect, set the reverb send to pre-fader and lower the original sound's level fader, so only the reverb signal remains.
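To make the two-devices-in-one idea concrete, here is a minimal sketch of a crude gate (envelope follower plus threshold) applied to a reverb tail in NumPy. All values are illustrative assumptions, and the fake tail only stands in for a real reverb return.

import numpy as np

def gate(signal, sample_rate, threshold=0.05, release_ms=5.0):
    """Crude gate: track the signal envelope and mute everything below the threshold."""
    alpha = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))  # one-pole envelope follower
    env = np.zeros_like(signal)
    level = 0.0
    for n, x in enumerate(signal):
        level = max(abs(x), level * alpha)
        env[n] = level
    return np.where(env > threshold, signal, 0.0)

# Gated reverb = reverb followed by a gate: the tail holds, then cuts off abruptly
# instead of decaying away, the classic 80s snare sound.
sr = 44100
tail = np.random.randn(sr) * np.exp(-np.linspace(0, 6, sr))  # fake decaying reverb tail
gated = gate(tail, sr, threshold=0.1)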
Group and Send FX using Reverb or Delay.
We have to strike some kind of balance, because on digital systems processing power is limited and we cannot just throw in a lot of reverbs and hope for the best. Most likely your digital system can only cope with a few good reverbs in place. In the old days, recordings were done in rooms, separating the players and using multiple microphones to capture dry and reverberation signals. Of course it would be great to use a reverb for every instrument, but we can't, and it would also become complex to keep track of what you needed each reverb for in the first place. For dimension three we need the reverberation, so we must know why we use each reverb; commonly it is used for dimension placement and stage planning. Keeping track means some kind of bargain between complexity and the number of reverbs. More reverbs mean more mud and fuzz anyway, so keeping only a few good reverbs is the way to go. Maybe 4 to 6 reverbs for a full mix to shine is quite a good goal.
Delay after the Reverb.

A delay added after the reverb signal can help avoid masking of the reverb and make the 3D Spatial Information clearer. Only do this when the masking does not go away otherwise. Sometimes a delay can instead be placed in front of the reverb. Automation can help unmask individual parts or events.
Masking.
Masking, or the masking effect, hides your reverb behind the dry signal; the 3D Spatial Information that the reverb (or any other effect) adds will not be perceived as depth or distance. Masking also occurs when two signals / instruments play in the same frequency range from the same direction. Unmasking is when we correct this and clear the pathways for instrument signals to shine, saving headroom through reduced levels while still having a good mix. Basically we can maybe still hear the reverb somehow, but it is masked and therefore hidden behind louder and more sustained sounds. There are some solutions. The first is to question the instruments (or reverbs) that are sustaining and covering the transients; if that sustain is not needed, a compressor can help clear up some headroom, reduce the sustained sound or raise the transient sound (or use gating). The second is simply raising the level of the reverb (the common, easy solution), but before our ears understand the 3D Spatial Information of the reverb, you may have raised it too much (creating more fuzz or mud and leaving less headroom). A well-chosen, clear-sounding, good quality reverb solves this problem better. Bad reverbs cause overblown mixes and lose a lot of headroom while still not being perceived as depth. Preparation in dimensions 1 and 2 is crucial before adding dimension 3; then, with a good reverb, little level is necessary for our ears to recognize the 3D Spatial Information. Dimensions 1, 2 and 3 are all needed to perceive depth and make our ears understand the mix content (stage). When a reverb is sitting behind an instrument, changing the pan or balance on either the reverb or the instrument in question might do the trick and uncover the reverb (unmasking); panning is the first control to grab for, and the level next. Anyway, dimension 1 (panorama) and dimension 2 (frequency spectrum) are coherent with dimension 3. Maybe you decide to re-place a guitar track and set it more to the left; then the reverb (or effects) routed for that guitar must be looked after as well (more to the right). When you add depth to a mix by adding effects, it is better to have dimensions 1 and 2 more or less finished before starting on dimension 3. The closer your mix is to finished, the more anything you change has a cascading effect and requires thinking, maybe re-thinking, and more work. Whenever a reverb or delay (or both) is masked by other instruments or just can't be heard well enough, try to undo the masking effect by getting the 3D Spatial Information onto the listener's ears. With a good reverb in place you won't have to force it too much, and you avoid fuzziness and muddiness altogether. A mix can quickly get muddy, and using EQ to correct this is well accepted, but in the first place the sound of the reverb and its panning/level are what matter. So whatever signal you feed it, make sure it is cleaned of unwanted frequencies or material. Remember to sort your reverbs out and know which reverbs you like best; this gives you a head start, avoids complexity and saves time and frustration later in the mix. If the result is not mono compatible, try two identical reverb presets from two different devices, panned left and right, with both reverb devices receiving opposite send signals so that the left of the panorama is reverbed right and vice versa.
Keeping track of things.
It is good to note down, per track, why you set up a reverb or delay (or any other effect), why its settings are what they are and why you need it. Take care to describe the three-dimensional placement (stage plan). You can also write down all the reverbs that you like and keep track of them for later use while mixing. Software and digital mixers sometimes have digital notepads; otherwise keep pen and paper within reach. Modern DAWs often provide notepads per song, track, instrument or mixer, so keep notes and keep track of your info. You might have forgotten by the next day what brilliant solution you had the day before.
Starting a Mix and progression towards a Static Mix, Workflow of a mix.
Until now we have explained how any mix can be started once recording is done. Here is a 'to do' list for a quick overview. Each time we refer to an instrument or track, you can find specific information about it below this mixing section. For panorama, frequency spectrum, quality, reduction, compression, reverberation and other specific tools, refer to each instrument's section for details.
0. Record instruments or tracks with quality and with quality equipment before mixing. Keep the signals themselves free of noise, humming or other continuous sounds, and do not use a noise reduction plugin or system while recording. Some like to record with the Dolby button on. Be careful with placing effects, EQ or compression on recordings in progress; try to separate and record as dry as possible. Try to record in stereo; on digital systems use 32-bit float for internal processing purposes, and convert samples / files to 32-bit floating point.
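A minimal sketch of such a conversion, assuming the Python soundfile package is available (file names below are placeholders):

import soundfile as sf   # assumes the 'soundfile' package is installed

def to_32bit_float(in_path, out_path):
    """Re-write an audio file as a 32-bit floating point WAV for mixing headroom."""
    data, sample_rate = sf.read(in_path, dtype="float32")
    sf.write(out_path, data, sample_rate, subtype="FLOAT")

# Example (placeholder file names):
to_32bit_float("vocals_take1.wav", "vocals_take1_float32.wav")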
1. When starting a mix, set all faders at 0 dB and set pan or balance in the middle (center, unity). Remove any EQ, compression, effects or plugins, and turn all effects off. Set all equipment you are using for mixing to zero, dry, bypass, unity, centre, etc. Reset everything on your mixer.
2. Sort out your tracks from left to right on your mixer, placing the more fundamental instruments or tracks on the left side and spreading out to the right. Label every track. The tracks from left to right could look like this, for example: bass drum, snare, claps, hi-hat, overheads, toms, crash, others, bass, guitar 1, guitar 2, piano, e-piano, keyboards, synths, others, background vocals, main vocals. Next to the main vocals on the far right there is a place for each send track, ending with the master track. This can be debated, and you are free to set up your mixer any way you like. Modern small mixers or controllers only have room for 8 tracks at a time; spreading drums over channels 1 to 8 and the rest over channels 9 to 16 can help when switching back and forth on the mix setup (especially when using mix controllers). Label, sort and color-code tracks, assign them to group tracks and folders, and route them. Use the group solo function to check the routing. Prepare the mixer for a new start (starter mix).
3. Listen through every track (in solo mode), cutting out any unwanted signals like noise, pops, clicks or rumble. Any unwanted material must be removed; first choose to do this at a manual level (manual editing), from audio tracks down to samples. Some think this is tedious and some really like this phase. Some remove breathing noises manually from vocals, others use a de-esser. This manual editing may seem a bit time consuming, but it is better to remove it and be sure you only hear what you need. When you are using a sampler, you can clean up the samples. How you do this is not important, but take some time to clear up and clean up. Once you start mixing and listen to your whole mix or a combination of instruments and tracks, unwanted sounds can hide inside the mix (masking) and become hard to locate. So clean up while you can, when you can. Check each track for breaths, editing mistakes and clicks, and clean them. Only reduce noise or humming when needed; it is better to have every recording clean before using any noise reduction system.
4. Define your mixing strategy with a panorama sketch.
Mute all folders and tracks, with the exception of the drum folder. Start building up the rhythmic backbone, starting with the bass drum, followed by the snare, making use of panning, EQ, compression, gates and reverb (delay) until the drums present a powerful and rounded sound. If the bass is part of the drum group, add it to the mix after editing as required. Your next step is to build up the instruments that provide harmony and warmth; distribute them according to their complementary spectral properties to the left or right in the panorama. Create a good lead vocal sound and add it to the center. Balance the group levels of all groups edited so far. Distribute decorations and additions in a spectrally sensible manner around the existing basis. If an event sounds fuzzy, look for a spot within the three dimensions where it can be heard. If you cannot find the spot either with a good panning strategy or with EQing or layering, reconsider the reason for having this event at this place in time. Fine-tune volumes at extremely loud and very quiet listening levels.
Do a first check on the whole mix. Set the master fader at 0 dB. Set all balance to center and adjust all
faders until you're a bit satisfied. Do not use EQ, compression or effects. You're looking for a mix that is quite
straightforward and that comes from one direction, all from centre, by only adjusting each fader until you
find some dry mix that works for you. This must be easy to set up and should only take a few moments to do.
Don't worry and fiddle too much, we can be more precise later on. Also you could use some EQ to sort out the
bottom end of your mix: a low cut from 0 Hz to 30 Hz for Bassdrum and Bass, and a low cut from 0
Hz to 120 Hz (180 Hz) on all other tracks or instruments (including the rest of the drum set). Adjust the
starting frequency of the cut to just below the lowest main frequency of each instrument. Keeping what is needed
and deleting what is not needed, just use some EQ to tidy the bottom end (a filter sketch follows this step). By
doing this we at least guarantee that Bass drum and Bass have a clear path and the mix is cleared of any rumble,
pops or clicks in the bottom end range; as a result we now have more headroom. For some distance and reduction
the Bass drum and Bass can be rolled off in the higher trebles. To set some distance on other instruments or tracks
according to our stage plan, roll off some more highs. Do not pan the mix for now (keep dimensions 1 and 3 unaffected).
Just apply some reduction, quality, headroom, separation and togetherness.
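As a minimal sketch of the kind of low cut described above: the 30 Hz and 120 Hz corner frequencies come from the text, while the filter order, sample rate and test signals are purely illustrative, and numpy/scipy are assumed to be available (in a DAW you would simply use a high-pass EQ band).

# Sketch of the low-cut ("high-pass") filtering described above.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # sample rate in Hz (assumption)

def low_cut(signal, corner_hz, fs=FS, order=4):
    """Butterworth high-pass: removes content below corner_hz."""
    sos = butter(order, corner_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

# Example: clean rumble below 30 Hz from a synthetic bassdrum track and
# everything below 120 Hz from a synthetic guitar track.
t = np.arange(FS) / FS
bassdrum = np.sin(2 * np.pi * 80 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)   # 80 Hz thump + 10 Hz rumble
guitar = np.sin(2 * np.pi * 220 * t) + 0.2 * np.sin(2 * np.pi * 60 * t)    # 220 Hz note + 60 Hz hum

bassdrum_clean = low_cut(bassdrum, 30)   # keep the thump, drop the rumble
guitar_clean = low_cut(guitar, 120)      # keep the bass range free for bassdrum and bass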
5. Listen to your dry mix for a while. Decide by experience how to plan the dimensions. Draw a quick
picture; plan the stage inside the three dimensions. Plan the fundamentals like Bassdrum, Snare, Bass and
Main Vocals in the center, and build the rest of the instruments (the unfundamentals) around this, placing
unfundamentals more to the left or right; don't be afraid to pan. Do not touch anything right now, just
think it over or draw a quick sketch on paper. First we set dimension 1, panorama. Pan first before setting
fader levels again, apply the panning law and know that the relative volume of a signal changes when it is
panned (a small pan-law sketch follows this step). Completely apply all panning first, then adjust all fader
settings until you are satisfied. Keep adjusting balance (panning) and fader (level) until satisfied with your
stage planning for dimension 1 (panorama). Listen to your dry panned mix for a while. Fader and Pan are the
most important settings to start a mix and are mostly overlooked, so we tend to take more time here for
listening and adjusting.
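As a sketch of one common form of the panning law mentioned above, here is a constant-power (-3 dB centre) pan law; the exact law your DAW applies may differ, and the numbers are illustrative only.

# Constant-power pan law: at centre both channels sit about 3 dB down,
# which is why a signal's perceived level shifts when you pan it.
import numpy as np

def constant_power_pan(pan):
    """pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain)."""
    angle = (pan + 1.0) * np.pi / 4.0   # map -1..+1 onto 0..pi/2
    return np.cos(angle), np.sin(angle)

for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, r = constant_power_pan(p)
    print(f"pan {p:+.1f}: L {20*np.log10(l + 1e-12):7.1f} dB  R {20*np.log10(r + 1e-12):7.1f} dB")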
6. Now that we have some notion of where to place instruments, it is time to listen and decide which instruments
need EQ or compression in order to adjust their frequency range (dimension 2) and coherence with other
instruments. By doing this we can also save some headroom. We can adjust for quality and reduction.
We have made a separate instrument section below for reference. We need to adjust every
instrument for its spectral content, mostly applying EQ where needed, with a steep filter for bottom end cut-offs
(separation, saving headroom). Headroom in the bottom end (0 - 120 Hz) should be reserved for the Bass
and some lower Bassdrum thump sound (kick); cut all other instruments in the lower range. Separation and
headroom for the whole mix, and for the Bass and Bassdrum (or any other fundamental instrument), is what
we are after mainly. For quality each instrument or track can be adjusted until it sounds good; keep in mind not to fill
the misery area from 120 Hz to 350 Hz. Try to avoid boosting the mids of all instruments; instead choose a
few, and leave the rest alone or cut. Mainly use tools like EQ, Compression, Gate or any other dynamic tool. Also
for distance we can roll off some highs on each instrument or track. Remember when you adjust an
instrument or track, to bring its level back into the mix directly afterwards.
7. Now solo the most fundamental instrument (likely the bassdrum, though some start with the
main vocals). The bassdrum should be on the left side of the mixer. Here we have chosen the
bassdrum as the most fundamental instrument, as is most common. Solo play the bassdrum and
watch the master VU meter. Keep the level at -6 dB to -10 dB on the VU meter by setting the bass drum fader
accordingly. Next add the Snare or Bass (you decide) and use its fader to set the level. Do not touch the
bass drum fader; adjust only the level of the instrument you're working on. Each time you add a track to your
mix, set the corresponding fader level, until you have worked your way through to the right. Looking for some
togetherness and getting levels just right for a dry mix is crucial here. Just keep adding and adjusting
until finished. When finished, check the VU meter level: you must have some headroom left, otherwise
set every instrument or track fader back by the same amount (see the sketch after this step), leaving some headroom for later on.
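A minimal sketch of the arithmetic behind pulling every fader back by the same amount: the -6 dB target and the example levels are assumptions, not values from the text, and the point is only that a uniform trim in dB keeps the relative balance intact.

# Win back headroom by trimming all track faders by the same amount in dB.
faders_db = {"bassdrum": -6.0, "snare": -8.5, "bass": -7.0, "vocals": -5.0}
mix_peak_db = -1.5      # example reading from the master meter
target_peak_db = -6.0   # headroom we want to keep

trim = mix_peak_db - target_peak_db   # 4.5 dB too hot
faders_db = {name: level - trim for name, level in faders_db.items()}
print(faders_db)        # balance between tracks is unchanged, master fader untouched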
8. Decide how you're going to separate the Bassdrum from the Bass, making them sound well in the lower
frequency range. Start off with the Bassdrum (listening in solo mode), roll off some subs from 0 Hz to 30 Hz
(50 Hz), and roll off some of the highs > 8 KHz for some distance according to stage planning (behind main
vocals and bass). Create a good Bassdrum sound. For quality and reduction on the bassdrum refer to the
instrument section below. Maybe add just a tiny touch of reverb, with little pre-delay (no pre-delay at all
for rhythmic content). Only use an ambient reverb or a small room/drumbooth reverb; we can use it for the
whole drumset, so it can sit on a group or send. Then aim for a nice -6 dB to -10 dB level at the master VU
meter while playing. Remember this is your reference track or most fundamental instrument. This
reference fundamental instrument is used to set all other instruments against. Instead of the Bassdrum, maybe
the Main Vocals or any other instrument could serve as the most fundamental. But keep in mind that the likely
fundamentals are lower frequency instruments or tracks, as we need the center of the speakers to produce
the lower bottom end fundamental frequencies (left and right playing together). All other instruments are
measured off against this reference (most likely the bassdrum track), which always stays in the center of the
panorama. It is best not to sway around the center; just keep it dead centre, or the added signals must refer to
center. Events that move from left to right over the timeline are not recommended at all. Keep your most
fundamental instrument (bass drum) in the center at all times. So listen through the whole bassdrum track solo,
adjust it, and just be certain it stays in the center all the time.
9. Next, for the Bass just roll off some very low subs (0 - 30 Hz) and roll off some highs ( > 8 KHz). Solo the
bass and create a good sound; refer to the bass instrument section below for instrument-specific details while mixing.
Listen to Bassdrum and Bass together, then only set the Bass fader and listen to the combination of
Bass drum and Bass (do not touch the bassdrum fader, this is your static reference). Do anything needed (add EQ,
Compression, etc.) to correct the bass signal now. Set the level of the Bass until it sounds and feels right
(togetherness). Also keep the Bass always in the center.
10. Then introduce the Snare. For this you can decide to solo the Snare, apply a low cut for separation and
create a good sound (see the snare instrument section). The snare usually needs a larger reverb. Do anything
you need to correct and enhance the snare signal now. Then, in combination with the Bassdrum, set the
Snare fader (solo bassdrum and snare). Introduce the Bass and keep refining the Snare fader.
Maybe you do not find the right settings at the start; keep fiddling, soloing and playing together, setting only the
Snare or Bass faders. Just find some fader settings that work out best, then leave them alone.
11. Introduce the Main Vocals, first in solo mode. They must be upfront, so no trebles are cut here. Just roll
off the bottom end to separate them from bassdrum and bass. You can always fine-tune the roll-off frequency
later on, when you are unhappy with the vocal sound. Use a stereo EQ filter setting to balance the vocals even
more towards the center. Then try to make a good sounding vocal (see the main vocal section below). This can
mean dropping a de-esser in place or a delay / ambience room reverb (we already have one in place for the
drums and bass). Or use some fine EQing to get the vocals really sounding correct. Maybe some compression.
Then unsolo the vocals and again adjust the fader to set them into the mix. Remember vocals must be heard clearly
upfront; if not, reconsider now.
12. Then add the Hihat, placing it with balance slightly right, according to position. Roll off a great deal of
the lows from the Hihat. See the specific hihat section below for more details. Add the overheads and give
them some distance by rolling off some highs. Maybe a Stereo Expander can widen the overheads (a small
widening sketch follows this step); watch the correlation meter for mono compatibility issues.
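As a sketch of the mid/side trick behind a simple stereo expander, widening boosts the side signal relative to the mid. The width factor and the synthetic overhead signals are assumptions for illustration; a real expander plugin will have its own parameters.

# Simple mid/side widening: >1 widens, <1 narrows, 0 collapses to mono.
import numpy as np

def widen(left, right, width=1.4):
    mid = 0.5 * (left + right)    # what both speakers share
    side = 0.5 * (left - right)   # what differs between the speakers
    side *= width
    return mid + side, mid - side  # back to left/right

fs = 48_000
t = np.arange(fs) / fs
overhead_l = np.sin(2 * np.pi * 500 * t)
overhead_r = 0.8 * np.sin(2 * np.pi * 500 * t + 0.3)
wide_l, wide_r = widen(overhead_l, overhead_r)
# The wider you go, the lower the L/R correlation reads on the meter,
# so keep an eye on mono compatibility after widening.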
13. Continue to add each drumset instrument until finished, adjusting only the newly introduced one and staying
away from the instruments introduced earlier. Work out a good steady sound for the drums; spend some time to
create and finish off the drums first. Drums are important, and they sound so much better inside a mix when first
completed as a drumset. Only continue when you are happy and have completely finished all drumset events /
instruments.
14. Add Guitars, Keyboards, Synths, Percussion and any other instruments or tracks. Remember when you
place something left or right, you need coherence, so counteract. We can counteract instruments, and we can
counteract instruments with their reverb signals. Keep away from the center and be creative placing them
left or right (be courageous). So placing a guitar left might need the keyboards placed right as an opposite
coherence (counterweight). Work out your mix in dimension 1 - panorama, pan, balance and fader level
first. Then adjust dimension 2 - the frequency spectrum of each instrument, by adding EQ or compression as an
insert effect on each individual instrument. Cut lows and highs where needed. EQ and compression can also
adjust the internal fundamental frequency range, so making your individual instruments sound their best is
of course recommended. According to our stage planning, we try to stay within the dimension 1 and 2
boundaries more and tend to place dimension 3 later on.
15. When you have choirs, you need them at the back of the stage, so roll off some highs, and to keep
them out of the lower frequency range roll off the lows also. Here on the choir we can also use a
stereo expander to widen it in the background. According to the panning laws we spread the background
vocals or choirs (lower voices more centered and higher voices more outwards). By widening the overheads
and choir we keep them away from the already crowded centre path.
16. Next, for the rest of the instruments (all instruments), decide where to roll off more bottom end in order
to keep the lower frequency range of your mix available only for Bassdrum and Bass, to separate and avoid
masking, leaving some headroom. Also avoid letting instruments that are not needed there play inside the
120 - 350 Hz misery range. Solo first, cut where needed and create a good sound (use your stage plan and the
dimensions). This can mean some heavy balancing, EQing with steep filters or compression, just so they do not interfere.
16. Repeat until you have finished off all instruments. Remember it is not recommended to adjust an
instrument's or track's fader after it has been set. Do anything to make the track / instrument sound better
now. When working on each instrument or track, try to adjust that track only, without adjusting the other
tracks.
17. According to your stage plan, you must now have set up Level, Balance and Frequency Range for each
separated instrument or track, and maybe you have already rolled off some high trebles on some of the
instruments or tracks that are more distant, all according to our stage plan. We only applied dimension 3
where needed, mostly ambience for upfront instruments and larger, duller reverbs for more distant
instruments.
18. Listen to the Drumset, Snare and Bass together. Maybe create a Group Track and route them to it. This
will be your first Group of many to come. Some like to route the drums to their own group; you
can then also route the bass to its own group. This will keep them separated as individual instrument groups.
19. Next assign instruments that are close to each other, and can form a layer together, to groups. Maybe a
group for guitars, another one for piano, e-piano and keyboards (synths), a group for the background vocals
(choirs) and a group for the main vocals. Assign Group Tracks for each range of instruments. For now do
not use any effects on the groups. If you like to use an enhancer while mixing, use it on a separate group and
route only instruments or tracks that need to be upfront to it, but we don't use it for now.
20. Try listening to the whole mix again; by muting or soloing instruments you can find out if the placement
of each instrument or track is correct and according to our stage plan. Otherwise keep correcting dimensions 1, 2
and 3. Be sure you have found some kind of clean sounding mix, exactly according to our stage plan, before
you go on. If not, keep fiddling about until you are satisfied. Try to stay inside dimensions 1 and 2 by
using only Fader, Balance, EQ or Compression (gate, limiter). Then correct dimension 3. Maybe this will
take an hour or so; it is crucial to get it correct.
21. Listen to the whole mix and decide whether its level, pan, balance, EQ, compression, gate and limiter are
set up correctly. If not, keep adjusting the mix until satisfied, working only instrument or track based (see the
specific instrument details below). We tend not to use any effects on groups, sends or on the master track.
22. Now we should have a mix that is clear (dry), where instruments can be heard, that still has some
togetherness and some sense of dimensions 1, 2 and 3, sounding correct as planned. Even though
separation seems to contradict its opposite, togetherness, it is possible to have a combination of both. A
mix thrives on separation from the start (dimensions 1 and 2) and gets some kind of layering. Only add some
reverb or delay in dimension 3 to create depth when we are sure that we are happy with dimensions 1
and 2 first. Be aware of masking and learn to understand it very well; learn how to unmask.
23. Now it's time to glue the mix more by adding to the groups where needed; hopefully we have created
enough headroom for these additions. EQ and Compression on a group can weld or glue instruments
together, making groups appear as layers for mixing purposes, summing up towards the master bus fader
output. Compression on a group can give the feeling of a layer (togetherness, gluing or welding) and give
some coherence to grouped instruments. Also by using EQ in front of the compressor (only place an EQ or
compressor when needed) you can sort out the frequency range by cutting lows or highs (or compressing); this
way the threshold of the compressor will only react to a cleaned, dry input sound. You must cut lows
when they are not needed and would affect other instruments that are more fundamental in that range. You can
cut highs (trebles) when you need the group to be set back into distance (depth) or when you know these
frequencies simply do not exist at all (preventing noise, humming, clicks, etc.). Finally we plan the three
dimensions. Remember that Panorama is looked at and adjusted first, then Frequency Range, then depth.
24. For working out depth on a dry sounding mix we can use reverb or a delay (most common) to give some
space (dimension 3). The group tracks are likely places to add 3d spatial information (placement, depth) to a
mix. So a good reverb or reverberation effect on a group or send track will give room characteristics and
placement. As we have combined instrument sets, we can now use the group function to combine overall
effects (like reverb, delay, compression, EQ, etc.). You know you need at least a few reverbs
for Drums, Snare, Bass and Vocals alone (ambient small room or drumbooth); we can route all
instruments that need it to this group. Each group can differ in room and reverberation settings (see the specific
instrument details below). Place a reverb where you need it most, but be sparing, and rather on groups. You
can decide to place a reverb on its own single track or on a group, depending on the purpose. By using
groups (or sends) you can save a few reverbs and keep it tidy. Now you understand you have at
least a few good reverbs running in the mix, just to sort out dimension 3. Choose good quality reverbs. We
did not even consider using reverb as an artistic (creative) factor; reverb or reverberation is common for
dimensional placement (3d spatial information). You can understand that a mix with 4 to 8 reverbs is
common, because almost each different set of instruments (Tracks, Groups, Sends, Layers) needs placement
and depth, as well as some welding for togetherness. Use a bright reverb for upfront instruments or tracks,
and a duller reverb for distant instruments or tracks. Use compression on a group when you need to weld
the instruments more together. Sum up the groups towards the master bus fader output.
25. With all these reverbs in place, avoid muddiness. So maybe EQ or filter the lower bottom end of the
reverb with a good cut from 0 Hz to 30 Hz (50 Hz or much higher) on fundamentals, and a good cut from 0
Hz to 120 Hz (at least, > 180 Hz) on unfundamentals. When instruments or effects are masking, separate
them with Balance, Pan, EQ, Compression or some delay after the reverb. Or in more extreme cases use a
stereo expander after the reverb to make the panorama even wider, or maybe use timed events as automation
(only when we need it, as a last resort). Crowded mixes can be widened and heard as if playing outside the
speakers; this gives some more room in the field (stage planning). Basically be reluctant to place the stereo
expander and use it only as a last resort. Watch the correlation meter or goniometer when you are using the
stereo expander or are working in dimension 3 with reverb. Check for mono compatibility; maybe you can
leave the correlation meter in sight at all times (a small correlation sketch follows this step). According to your
dimensional planning, now add depth in dimension 3, by adding those reverbs (delays) that are really needed to
create depth and transmit the 3d spatial information to the listener. According to our stage planning, some
instruments need to be upfront and some set more backwards. If a set of instruments like a drum group needs a
particular reverb, place the reverb on the group track. If a reverb only affects a single track or instrument (the
snare for instance), place the reverb on the single track (snare track), or still place it on a group so other
instruments can make use of it and benefit. For instance with the snare reverb, you can place it on the instrument
track to make a difference, but this will keep it from being used by other instruments. Try to have reverbs and
delays available on group or send tracks, instead of using them on single tracks. Transients are always recognized
first by human hearing for calculating depth and distance; we need the dry signal to be present
(transients, on top of the reverb signal). The dry transients must be heard, as well as the reverb signal (the
transients of the reverb signal as well as the original dry signal). So our mixing tactics must let all
necessary transients be heard, to be perceived as natural depth (dimension 3). Any confusion created by
not applying dimension 3 correctly will affect listening pleasure. Any conflicting information will confuse
the listener.
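As a sketch of what a correlation meter reads after widening or heavy reverb, here is the basic correlation calculation between the left and right channels. The test signals and thresholds are assumptions for illustration; a hardware or plugin meter does the same continuously on short windows.

# Correlation between channels: +1 is mono-like, around 0 is wide/uncorrelated,
# negative values warn of phase cancellation when the mix is summed to mono.
import numpy as np

def stereo_correlation(left, right):
    l = left - left.mean()
    r = right - right.mean()
    return float(np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r) + 1e-12))

fs = 48_000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 440 * t + np.pi * 0.9)   # heavily widened, nearly out of phase

c = stereo_correlation(left, right)
if c < 0:
    print(f"correlation {c:+.2f}: mono compatibility problem, back off the widening")
else:
    print(f"correlation {c:+.2f}: ok")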
26. Now here is where the routine starts to fade, as you have set up a mix that is consistent in placement in
the three dimensions. You have worked the mix towards togetherness and a balance of clarity (separation, quality
and reduction); now it's time to be more creative and invest some time. With guidelines 0 to 25
we have at least started the mix with some rules or routine. To finish off what you have started, do a
check, a re-check and a double check on your placement. Check levels, peaks and frequencies. Use hearing,
listening and visual methods (spectrum analyzers, correlation meter, goniometer, peak/RMS meter, etc.) to
come to the right kind of decisions or conclusions. Now you should have a well sounding static mix to which
you can add more quality, effects or automation; because you still have some headroom left you
can be creative and mix further towards the end result.
27. You could place a Limiter on the master track, just to catch some peaks. On most digital systems you
will be signaled by a LED when you're passing over 0 dB. Even on 32 bit float digital systems,
going over 0 dB is not recommended. On a 24 bit or 16 bit system always stay below 0 dB. When you use
samples or audio repeatedly inside your mix, from drum samplers or instrument samplers, it would be
a hassle to find out what bit depth they are all played at. Better to use a common bit depth and sample rate
(preferably 32 bit floating point). So staying below 0 dB everywhere seems to be a good solution for not
harming any parts of your mix (else convert). If you have a limiter placed on the master track, just be aware
it's for peak scraping only, mostly a brickwall limiter with a threshold of -0.3 dB or a low peak reduction of
1 dB or 2 dB. Do not use any more limiters than that. When your mix tends to be too loud and hits the
master limiter, set back each instrument fader or its corresponding group fader by the same
amount (creating some more headroom, see the peak check sketch after this step). Do not touch the master fader;
this always stays at 0 dB. Only when your master fader is the last control to your speakers, as an amplifier, can
you change it sparingly; better is to find a solution that keeps the master fader at 0 dB at all cost. Listen to your
mix at loud and soft levels. Sometimes instruments disappear when playing at soft levels. A mix must stand up
loud and soft. But most of the time listen at soft levels while mixing; do not over-excite your monitor speakers
or fatigue your ears. You can train your ears better working at softer monitor levels.
28. Summing up: Bass and Bass drum should occupy the lower frequency range
of 30 Hz to 120 Hz only, without interference from other instruments. Bass and Bass drum should own the
lower range by themselves. This can mean all other instruments and their effects are partly or completely
cut in the 0 Hz to 120 Hz (180 Hz) frequency range, thus avoiding the bass range. Spend some time working
on the misery area from 120 Hz to 350 Hz, where most instruments have a piece.
29. Keeping other instruments (unfundamentals) in their range and placing them left and right (opposite
each other's opponent instrument, to counteract) keeps them out of the center. Center is the place for
Bassdrum, Snare, Bass and Main Vocals. Keep the main vocals upfront by not cutting their trebles.
Setting other instruments more left or right (away from the centre path) does not mean they are perceived
as such. When they are accompanied by 3D spatial information (pan, frequency, depth), in the form of a
reverberation sound opposite their dry signal, or by cutting trebles to make some distance, you are placing
them inside the three dimensions. Reverb or delay can work as a counterweight. When placing a dry
instrument on the left, a reverb placed on the right can work as an opposite filler. Also comparable
instruments can work as opposites. We tend to layer them in groups. The groups can finally be used to work
towards a finished mix (static reference mix), using techniques on groups like EQ, Compression and effects
for more welding together, and using groups or sends for our reverberation needs. Mostly we do not like
anything on the master track; some like a limiter in place to scrape some peaks or for peak warning
messages. Try to watch the master track VU meter while mixing and try to keep it below 0 dB.
30. Keeping a balanced overall sound coming from both speakers means planning the three dimensions and
mixing towards this goal. Avoid masking of reverberation by adding a touch more or by panning (balancing) it
away. Sometimes a stereo delay might work behind the reverb to avoid masking. Watch the correlation
meter for mono compatibility. Do checks and re-checks and make sure your planning and mixing rules are
applied. Listen to the mix dry (without those reverbs) and check that you have not used too much reverb, but just
enough to transmit the 3d spatial information to the human ears.
31. Quality is a general rule. Of course it is important how each separate instrument sounds in quality,
since while you are busy generating a nice mix, the individual sounds (solo) make up the mix (summing). You can
adjust any sound, track or instrument however you like, with the use of EQ, Compression and other effects. Beef it
up, make it nice. So when using effects (especially reverb), do not hesitate to use the best instead of the
most efficient. Avoid muddiness and fuzziness, apply separation (use the dimensions, the whole stage). Use
the rules for quality and reduction. Use the panning laws. Refer to the specific instrument details below. And
finally, as all instruments play as a mix on the master track, your mix must sound damn good! Only when
you're happy with your mix, as is, should you continue. Else revert to the basics of mixing (repeat the
mixing steps above), add or remove until you're happy with your final sound. As a final static mixing stage,
adjust the groups or just play around with them until you find a nice coherent static mix.
32. Until now we have worked from the starter mix towards the static mix, and have finished it until satisfied.
Basic Mixing III will explain dynamic mixing. But for now we will skip the dynamic mixing and jump to a
pre-master. A pre-master can be a good tool to hear and analyze the mix, before we continue with the
dynamics of the mix. What final sound of a mix is best? Well, this is more complicated to explain, and depends
on style and preference. But maybe you can remember the chart below? You can read about it in Basic
Mixing I!
33. Use volume automation for introducing events and for song structure dynamics. Use panorama
and stereo expander automation for clearing up the last remaining fuzzy spots. Carry out further
automation. Do creative fine tuning to refine details. Constantly experiment in order to improve events that do
not yet sound right. Set the brickwall limiter in the master section to -0.3 dB. Export the mixdown, 32 bit, no
fade in or out and a bit of clean silence. Use the mute button composition-wise when needed or create new
pleasant combinations in time. Remember the more instruments play at the same time, the more worries
and corrections are needed. Also a song or track will sound dull and uniform when all instruments play from start
to finish; consider composition-wise events and cut when needed. Less is better than more.

Anyway, repeated mixing will give you a notion beforehand of what a finished static mix should
sound like. Experience and understanding might be the main factor in learning to apply this. For checking a
mix, a spectrum analyzer can be a worthy visual tool. For instance you could check your mix against other
commercial recordings, using the A/B method. AAMS Auto Audio Mastering System can be a good tool to
help you analyze your mix and get some suggestions for better mixing results. You can train your hearing by
listening to a lot of good commercially available music on your mixing monitors, or just listen to a lot of
commercially available music anywhere you can. At least you then know what quality your monitor speakers
play at, and you know what commercial music sounds like. When in doubt while mixing, take some
distance again. Compare your mix to other music. First revert back to dimension 1, then 2 and then 3.
Check how much headroom you have left. Listen with clean ears; listening to hours of music can fatigue your
ears. Then it might be good to leave the mix for the next day and start with a fresh mind and fresh
ears, or just take a good (>15 minute) nap. Sometimes this is needed to really interpret well. Also pre-mastering
a mix can help clarify more. You can use AAMS Auto Audio Mastering for this purpose. A
mastered mix is perceived as louder and also stands up more against other commercial recordings. Pre-mastering
can sometimes reveal more (what is good, what is bad); even what is not heard inside the mix
suddenly becomes clear in the pre-master. Let somebody else listen to your mix (pre-master) and you will
get some feedback; depending on the style of your music mix and this person's taste, choose how to
interpret this advice or criticism. Don't be worried by other people's criticism, use it to your advantage.
Do not bypass or hurry; you will never get anywhere near a finished mix by bypassing the rules of engagement,
bypassing the natural laws of sound. A finished starter mix to static reference mix takes up to 4 hours of time
(maybe more or less). A well finished static mix can take up to 12 hours of time. Together, finishing off the
static mix altogether can take up to 16 hours. Remember that it takes the dimensions, quality, reduction,
separation as well as togetherness to finish off a mix completely (static reference mix). Better be educated
about these subjects and purposes; if not you could be stumbling through mud and fuzz for a long time! When
you know by experience what you're doing, time will decrease fast. Only continue with dynamic mixing
when satisfied with the static mix!
FX example.
Send FX1, < 600 ms, Small Reverb, Ambience on Drums and some Bass, no or little pre-delay, slight trebles
roll off (overhead, bass drum, loop, bass, snare, etc).
Send FX2, 1/4 note delay, medium to large reverb space, snare, no pre-delay, no treble roll off. Shorten the
snare track with a gate. Experiment with a thick gated reverb.
Send FX3, > 1200 ms, big room, background events, chorus strings, up to 60ms pre-delay, trebles roll off
strong.
Send FX4, 600 - 1200 ms, ambience, lead vocals, no pre-delay or 1/8th note, no trebles roll off.
Send FX5, Decay depends on style, delay or reverb delay combination, lead vocals if needed.
Send FX6, Decay depends on style, guitar & keyboard if needed, L10/R20.
Send FX7, Delay effect, strong instruments (vocals), solos.
Send FX8, Chorus.
Send FXx, Reverb layering. For instance give percussion tracks a medium, thick room with quality. The
return is processed with a little widening to counteract the masking effect and to place the percussion
behind the drums. A little pre-delay on the reverb and slightly attenuated trebles.
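As a sketch of the arithmetic behind the tempo-synced note values in the list above (1/4 note, 1/8 note, and so on), here is how delay time follows from BPM; the 120 BPM tempo is only an example.

# Turn tempo into note-value delay times; also useful for syncing gates
# and reverb tails to the beat.
def delay_ms(bpm, note_fraction):
    """note_fraction: 1/4 for a quarter note, 1/8 for an eighth note, etc."""
    quarter_ms = 60_000.0 / bpm            # one beat (quarter note) in milliseconds
    return quarter_ms * (note_fraction * 4)

bpm = 120
for frac, name in ((1/4, "1/4"), (1/8, "1/8"), (1/16, "1/16"), (1/32, "1/32")):
    print(f"{name} note at {bpm} BPM = {delay_ms(bpm, frac):.1f} ms")
# A 1/4 note at 120 BPM is 500 ms, a 1/8 note 250 ms, and so on.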
General and Specific Instrument Details.
First let's explore some instruments and basic settings for the whole frequency range of the mix. Deep
sounds (unfundamental and even fundamental) spread in a circular form and their direction can hardly be
detected below 100 Hz, so avoid relying on their placement, whereas high frequencies spread directionally and
are easy to locate. First of all, the panning rule: fundamental instruments go in the center, unfundamental
instruments are not centered but placed more outwards.
Between, 0 Hz to 30 Hz (50 Hz), Bottom End. Stay away from the bottom end range unless you are mixing
with and for subs! Mostly this range from 0 Hz to 30 Hz (50 Hz) is heavily reduced for all instruments,
tracks, effects or sounding events, fundamental or unfundamental (cut). The bass takes the lowest one-and-a-half
octaves in the center, not a place for any other instrument; keep it free for the bass. Above that is the bottom
sector of the bassdrum, 80 to 100 Hz, narrow-banded: the thump or kick. Between, 30 Hz to 120 Hz, Bass range.
This bass range is mainly for Bass drum and Bass only. The only instrument that can go as low as
30 Hz is the Bass, therefore the Bass drum could be cut from 0 Hz to about 60 Hz. Do carefully remove all
other unwanted instruments, tracks or events happening inside this bass range (0 Hz to 120 Hz). Cut all
other instruments, tracks and events heavily with a steep cutoff filter, and be wary of boosting the bass range
frequencies. You should be able to find an instrument's lowest main frequency and cut just below this
frequency.
Between, 120 Hz to 350 Hz, Misery Area. A frequency range where most instruments play, inside this misery
area, almost every instrument will play some of its main frequencies. Best left alone for the most part.
However, you can make an outstanding mix when you know how to work inside this misery area!
Between, 350 Hz to 1 KHz, Generally it can be worthwhile applying a cut to some of the instruments in the
mix to bring more clarity to the bass within the overall mix.
Between, 1 KHz to 2 KHz, Irritating; perceived as loudness by beginners.
Between, 350 Hz to 2 KHz, Nasal, woody and piercing. A mix can sound nasal over here.
Between, 2 KHz to 3 KHz, Generally often used to make instruments stand out in a mix.
Between 2 KHz and 8 KHz, is speech related, vocals can shine over here. A vocal lift at 2.5 KHz to 4 KHz is
common over here. The skin of the bass drum is also over here.
Between, 6 KHz to 10 KHz, Boost, to add definition to the sound of instruments, edge, and ring.
Between 8 KHz to 12 KHz, Treble range, cymbals, high percussion, s-sounds, chimes, etc.
Between, 10 KHz to 22 KHz, Trebles area, follow stage plan for setting distance (Roll Off).
Between, 12 KHz and 22 KHz, Upper Trebles, air; can aid a mix but overdoing it makes things worse.
Drums in general.

Panorama: The location of the drum set is crucial; according to your stage planning, try to keep it natural.
Basically Base drum and Snare are panned at centre, the rest of the drum set more left or right (according
to drumset position, keeping them out of center).
Quality: Drums need to be at a constant volume. Rarely do they change in level throughout a track or mix.
Reduction: Apply a good steep low cut from 0 Hz to 30 Hz (50 Hz) for the Bassdrum. Other instruments of
the drumset can be cut from 0 Hz to 120 Hz at least, to keep the bass range clear. For every instrument
inside the drumset, roll off some highs anywhere from 10 KHz to 22 KHz to set the distance (must be behind
Bass and Main Vocals according to your stage plan).
Compression: Drum compression is an art to say the least. The different amounts and styles of compression
can completely and utterly change the way the drums sound. Knowing how to compress can save you from a
weak sounding mix. The attack and how big you make the transient peak is the most identifiable part of a
hit. A too fast attack setting will cut the transient peak and your drum won't hit hard. But if you have a
slower attack that initiates the compressor right after the transient peak, it will accentuate the hit
(transients). After the compression kicks in and brings down part of the sustain, the signal falls below
the threshold and slowly releases, bringing up the decay, making the drum last longer and sound larger and
more full. This is really a very general overview; you must experiment and listen to find the desired sound.
Percussive elements (drums) with long attack time settings (10 ms to 30 ms or more) enhance the transients;
some more assertiveness, punch or bite is applied to the transients this way. Also, setting up the compressor in
Opto mode allows percussive instruments to behave faster, and that is a good thing. For all drums that are
directly rhythmic and percussive, use Opto mode. This will get your drum set more clear and defined. Keep
the Snare, Bass drum and some Hihats short (only the transients pass the compressor unaffected) while
reducing their sustain. You can always recreate a bit of depth by introducing ambience with a good quality
reverb. By using a ducking gate or side chain compressor, you could compress the rest of the mix or certain
instruments instead, for example Bass compressed with the Bassdrum on a sidechain.
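As a minimal sketch of the attack/release behaviour described above (a slow attack lets the transient through, the release shapes how the sustain comes back), here is a very basic feed-forward compressor; it is not any particular plugin, and all parameter values are illustrative.

# Simple peak compressor: envelope follower + gain reduction above threshold.
import numpy as np

def compress(signal, fs, threshold_db=-18.0, ratio=4.0, attack_ms=20.0, release_ms=120.0):
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        coeff = att if level > env else rel        # attack while rising, release while falling
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20 * np.log10(env + 1e-12)
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)   # reduce only what exceeds the threshold
        out[i] = x * 10 ** (gain_db / 20.0)
    return out

fs = 48_000
t = np.arange(fs // 4) / fs
drum = np.sin(2 * np.pi * 200 * t) * np.exp(-12 * t)   # decaying drum-like hit
squashed = compress(drum, fs)
# With attack_ms around 10-30 ms the initial hit passes almost untouched while
# the sustain after it is pulled down, which accentuates the transient.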
Reverb: Drum rooms or drum booths (ambient) are the recording industry's standard for drums. In the early
days, when only acoustic drums were around, the only way to stop the drums from interfering with
other instruments was placing them in separate rooms, so the microphones used could only pick up what
was needed. We perceive drum booths as natural (ambient). Drums are mostly placed back on the stage
(behind vocals and bass), cutting some trebles from the reverb signal to create depth or distance. You can
give drums some reverb but not too much, just enough to transfer the 3d spatial information; be sparing, only
the snare needs a larger reverb or more ambience. Be sparing with reverb on drums. Only when you mute
the reverb should you notice the change to the dry signal (inside the whole mix, do not solo). Only give the drums
enough reverb to transmit the 3d spatial information to the listener. Listen dry and with reverb and
decide how much is needed. In the drum section, apply a little less reverb to the bassdrum than to the other
drum tracks. The bassdrum can sound flabby when too much reverb is applied. The snare can have a larger
and louder reverb. Mostly the reverb tail will end rhythmically short, just before the next beat or bar
appears. Reverb can be long (though interfering with the rhythmic content) and afterwards shortened with a
gate; to make the drums rhythmically stand inside the mix we sync them to tempo when we can. Avoid mud by
setting a low cut EQ in front of the reverb or behind it (cutting). When reverb is applied on drums, try to
make the reverb sound in rhythm with the dry signal, maybe by gating the reverb sustain in sync with the
tempo. Use mostly no pre-delay or < 10 ms, checking the rhythm (we can use a high trebles roll off for
setting distance instead). Missing spatial information can make drums dry and unnatural, so apply
enough reverb that the depth and distance are clearly heard, and use the best reverb you can find. When the
reverb becomes obvious, most likely you have gone too far. Just set the reverb so that the 3d spatial
information comes across, but is not overcrowding or too powerful. The more natural the reverb
sounds, the better (quality). Try to stay away from pre-delay; set it at 0 ms to stay in rhythm. Use small
rooms or ambient reverb, with a bit of pre-delay from 0 ms to 10 ms (0 ms please for rhythmical content).
Just roll off some trebles after the reverb to set the stage plan even more. Try assigning the toms, snare and
hihat a short crisp plate program, with a reverb time of 1.2 seconds. A reverb with a longer decay time can
be used on the overheads. Cymbals can be enhanced by a longer reverb. Generally, up-tempo songs require
a shorter reverb time to allow the reverb to decay between beats and thus avoid blurring the sound (sync);
the sketch below shows the arithmetic.
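A small sketch of the "let the reverb decay between beats" rule of thumb: compare the reverb time to the beat interval at the song tempo. The 128 BPM tempo and 1200 ms decay are assumptions for illustration.

# Compare a reverb decay time to the beat interval at a given tempo.
def beat_interval_ms(bpm):
    return 60_000.0 / bpm

bpm = 128
reverb_time_ms = 1200
beats_covered = reverb_time_ms / beat_interval_ms(bpm)
print(f"one beat at {bpm} BPM lasts {beat_interval_ms(bpm):.0f} ms")
print(f"a {reverb_time_ms} ms reverb rings over {beats_covered:.1f} beats")
# If the tail rings well past the next beat, shorten the decay or gate it
# in sync with the tempo to keep the drums rhythmically clear.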
Basedrum.

Panorama: The Bassdrum belongs in the center (fundamental). Throughout the timeline the bassdrum belongs
in the dead centre of the panorama (especially the lows). If at any time the bassdrum is more left or right, adjust
until the bass drum is dead centre again (goniometer, correlation meter). Some simple conversions / effects
can be used, like converting to mono and then back to stereo again, to keep the bassdrum dead centre at
all times during the mix. Especially when working with bassdrum samples, be aware that they must sit straight
in the centre. Beware of stereo; maybe even make the channel track mono (then you are sure the signal stays in
the middle of the original signal). Especially the low range, 50 - 120 Hz, must come straight from the center;
sudden left or right events in this range are better avoided. Watch the correlation meter or goniometer with
bassdrums.
Frequency Range:
Cut, 0 Hz to 30 Hz (50 Hz), Reduction, Separation.
Bottom, 60 Hz to 100 Hz, Find the Boom, kick or thump.
Around, 80 - 90 Hz, Solid Bottom (club) End.
Cut, 80 Hz, 60's Records!
Cut, 120 Hz to 250 Hz, Muddy, lose it, Separation.
Cut, 400 Hz, Open, Less Woody.
Around, 1 KHz, Knock.
Around, 2.5 KHz, Slap Attack.
Boost, 2.5 KHz to 4 KHz, Kick Drum Definition Presence.
Boost, 6 KHz, Click High End.
Roll off, 10 KHz to 22 KHz, Trebles to set distance and reduction.

EQ: The Kick Thump (head or hole) and the Skin are the two basic frequency ranges to find. It can be very
handy to create, out of a single bassdrum (sample or real), two signal tracks: one track with the 0 - 120 Hz
frequency range and one track with the rest of the highs (see the split sketch below). This makes our purposes
and plans for the bassdrum easier to adjust until they sound correct. The bassdrum is most important for keeping
track of the rhythm. The Skin results in higher frequencies, 2.5 to 5 KHz. Apply some boost or cut over here. Mostly a
boost will make the Bassdrum more rhythmic inside the whole song or track, which is a good thing. Sometimes
the Skin sound will extend towards 7.5 KHz. The Thump or Head/Hole has its bottom range between 60 and
100 Hz. Use a bell filter with a medium Q, hunt down the Skin and Thump frequency ranges until you find
the hotspots of both. Now you know what to reduce, and can manage the two hotspot frequency ranges. The
lower the bassdrum, the harder it is to edit; between 75 and 100 Hz is the best main frequency. In clubs pressure
develops at 90 Hz, because of the speakers' output. Below 60 Hz or deeper you have to be very careful and
avoid swaying in panning; keep it centered at all times.
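As a sketch of the two-track split suggested above, here is a simple low/high split at 120 Hz; the crossover frequency comes from the text, while the filter order and the synthetic bassdrum are assumptions (in a DAW you would duplicate the track and EQ each copy).

# Split a bassdrum into a 0-120 Hz "thump" part and a "skin" part above it.
import numpy as np
from scipy.signal import butter, sosfilt

def split_bassdrum(signal, fs, crossover_hz=120, order=4):
    lows_sos = butter(order, crossover_hz, btype="lowpass", fs=fs, output="sos")
    highs_sos = butter(order, crossover_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(lows_sos, signal), sosfilt(highs_sos, signal)

fs = 48_000
t = np.arange(fs // 2) / fs
bassdrum = (np.sin(2 * np.pi * 80 * t) * np.exp(-6 * t)          # thump around 80 Hz
            + 0.4 * np.sin(2 * np.pi * 3000 * t) * np.exp(-40 * t))  # skin click around 3 kHz

thump, skin = split_bassdrum(bassdrum, fs)   # EQ/compress each half, then sum them back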
General Quality: Apply a little cut at 120 - 300 Hz and some boost between 60 Hz and 100 Hz (when needed,
Thump or Kick). The main frequency ranges are from 60 Hz to 100 Hz for the bottom boom and 2.5 KHz to 5
KHz for the head or skin attack. Search in these areas to improve the quality of the bass drum
(boost when necessary). When the sound has a tendency to boom or resonate, try cutting between 200 Hz and
400 Hz. To create a modern sound, boost slightly in the 6 KHz to 12 KHz range, to accentuate the transient
click when the beater hits the skin. All this makes the bassdrum more clear, so it can be spotted by the
listener for rhythmic understanding.
General Reduction: Apply a steep cut from 0 Hz to 30 Hz (50 Hz); adjust by listening so it keeps the bass range or
bottom end clear, but does not affect the boom, thump or kick (around 80 Hz). The Bassdrum has a specific
middle frequency between 60 and 100 Hz, for instance 80 Hz. Then you can be sure you can cut the Bassdrum's
lower frequencies from 0 Hz to 60 Hz; this will leave some more room for the Bass to play. A lower midrange
EQ cut can help in the 120 Hz to 350 Hz misery area; the bassdrum has no purpose here, so cut. For bassdrum
and bass, thin out some of the 180 - 250 Hz range by a few dB. Apply a mid cut from 1 KHz to 3 KHz, where the
bassdrum really does not need much power. Roll off some highs from 10 KHz to 22 KHz that are not needed; this
will also affect distance, so set it according to stage planning.
Compression: The compressor has two functions. First, restricting dynamics (high level peaks, top end
limiting, compression) for the occasional overs. Second, getting more punch through the transients: a
long compression attack time is more percussive and avoids the sustain, and as a rhythmic aspect it
keeps the bassdrum localized in the center. Use Opto mode for all percussive drums.
Gates: Sometimes a noise gate is used for this purpose, in sync with the rhythm. A bassdrum can be gated
short and then given an ambience reverb (from the drum group). A short gated bass drum can sound good with
a cut sustain and transients intact (short impulses are less tonal, so especially for the bassdrum it is good to keep
it short). Also, when the Thump/Kick frequency range is short, it contributes less to the loudness of the whole mix
(lower frequencies carry more power). The shorter you can make the bassdrum the better; with the ambient
reverb or room reverb the tail will be recreated and will be more clear rhythmically. Sync to tempo (32nd note)
when needed. If you need very deep notes or a very deep bassdrum, remember the rule: the deeper, the
shorter.
Reverberation: Be careful and apply the rule 'less is more'. Too much will affect the Skin sound and
therefore make the bassdrum less rhythmic inside the mix (masking, fluttering). Use no pre-delay; if needed stay
below 10 ms. A good ambient room or small room reverb / drumbooth can be used for the whole drum group, so a
good reverb is already available. Just send in some of the bassdrum. When you have used sustain
compression or gating, a reverb can help to set some space and depth. The bassdrum is usually left
dryer, treated with a short reverb, to stop it sounding indistinct and cloudy (muddy). Be careful with the
reverb loudness; this can make the bassdrum flabby rather than punchy and dynamic. Set the reverberation
pre-delay at zero to be in sync with the rhythm. An ambience reverb is just enough (small reverb). A large
reverb is almost never used; this will easily make the bassdrum reverb flabby, muddy and overcrowding.
When the bassdrum reverb is switched off (bypassed) you must recognize the dry signal. When turned on, the
reverb must add something, but not too much, just enough to convey the 3d spatial information. Keep the
bassdrum reverb lowest in level compared to all other reverbs used on other drum set instruments or tracks,
even the bass. Use no pre-delay or < 10 ms to set the distance (drums are a bit behind the bass and main vocals
according to our stage planning). You can maybe roll off some high trebles by EQing the reverb signal (> 10
KHz) instead. Remember we already rolled off some highs for reduction. Be sure the reverbed signal does
not contain too much low end < 60 Hz, so it does not affect the bass. Remember a small frequency range and a
shortened bassdrum Thump is best rhythmically. The Bassdrum needs the least amount of ambience reverb.

Snare.

Panorama: The Snare belongs in the center (fundamental), or, according to the snare position (stage planning),
maybe a little left or right (not much, very slightly). Beware of stereo; maybe even make the channel track
mono, plugins can do this job and keep the snare straight in the center.
Frequency Range:
Cut, 0 Hz to 120 Hz, Reduction, Separation.
Between, 120 Hz to 400 Hz, Fatness Power, Wood.
Cut, 400 Hz, Snap.
Between, 400 Hz to 800 Hz, Body Thunk Sound.
Between, 800 Hz to 1.2 KHz, Power.
Cut, 1 KHz, Mellow.
Boost, 2 KHz, Bite.
Around, 4 KHz to 7 KHz, Crispness, Boxy.
Boost, 8 KHz, Sizzle.
Roll Off, 10 KHz to 22 KHz, Distance, Reduction.
EQ: Like the bassdrum, the snare has two core frequency areas: the lows at 120 - 300 Hz and the strainer (high
bands). Use a low cut at 80 Hz. Maybe pitch or tune the snare. Also, splitting up the signal two ways makes
processing easier.
General Quality: Anywhere from 120 Hz to 1 KHz, Boost or Cut. Boost around 240 Hz for more fatness,
wood or power. Get some bite at 2 KHz. Crispness at 5 KHz; for more sizzle boost 8 KHz. Lose some
boxiness at 6 KHz. For quality correction the ranges 110 Hz to 250 Hz (Bottom Snare) and from 3 KHz to 7
KHz are good boosting ranges. Accentuate the stick impact and rim shots at about 5 KHz. The rattle lies
mostly between 5 KHz and 10 KHz. The bang is in the range of 1 KHz to 3 KHz. Body resonance can be
found at 100 Hz to 250 Hz.
General Reduction: To separate the Snare from the Bassdrum (frequency range collisions in dimension 2), use
a steep EQ low cut up to < 120 Hz. The midrange around 1 KHz is where some nice EQ cutting can be done
to leave some headroom. Try applying some midrange cut to the rhythm section to make vocals and other
instruments more clearly heard. Roll off some highs from 10 KHz to 22 KHz to set the distance according to
the stage planning.
Composition and Tuning: A snare tuned to the chords or composition can be crucial. A snare with tonal
content (tuned) can be more realistic and in tune with the song. You can be certain that a snare that is off
tune (one note plus or minus) can already sound horrible. Use pitch to adjust the tonality (set the right
toned snare). Some use pitch on all snare hits and adjust them hit by hit throughout the composition. Again
this may sound tedious and time consuming, but still a pitch-tuned snare is best.
Compression: First is top end limiting for the dynamic range (keep the transients intact but leave some
headroom free). Second, use compression with a long attack time for more transients (or maybe a gate),
creating more snappiness and percussiveness with longer attack times. Third, adjust and control the
strainer sound (the strings of the snare) with a faster release time. The snare will sound best when the transients
are loud and of good quality. Using a gate or compressor to wipe away the sustaining snare sound is
perfectly correct (especially in sync with the tempo). Use Opto mode for all percussive drums.
Gates: All right, we tend to give the snare a good wide big reverb to make an open sound. Short snares are
very nice, especially when going into the big reverb, so we need a gate to only allow the transients and
important sounds of the snare through. We can also cut the snare sample by manual edits. Mostly it is then
mixed into an ambience reverb (group) for an ambient result. Maybe place a noise gate after the reverb
device. For longer snares, sync the gate to the tempo.
Reverberation: Without a sustaining snare, we can now choose a decent reverb and make the snare sound the
way we need. To make the snare different, give it a medium sized Room Reverb on its own single track. It is
perfectly all right to choose a large reverb for the snare alone (a way bigger, larger room than, for instance, we
use on the bassdrum or the rest of the drumset). To separate the snare from the other drums, a large reverb will
help it stand out more, or use a short crisp plate program. Snare drums are traditionally treated with a plate
reverb; hall settings also work well. Try 0.5 sec for a short reverb, and over 2 secs for an obvious effect. Use
no pre-delay or < 10 ms, checking the rhythm (use a high trebles roll off for setting distance instead).
Place a gate after the reverb and bring it within the beat of the snare (sync). If you don't roll off the highs of
the snare or this reverb, the snare will sit nicely upfront (just cut out the lows). Place the snare a little behind
the bassdrum, at its natural placement.
Hihat.

Panorama: The Hihat can be placed slightly left or right, according to its natural position more to the right. As the
rest of the drumkit is all unfundamental, we place them in the panorama according to our stage plan. The
Hihat needs a low-cut filter < 250 Hz. The more deep frequencies in the hihat's origin, the more you
place it to the right (outwards). With the hihat slightly to the right, place the shaker left (counterweight instruments).
Frequency Range:
Cut, 0 Hz to 200 Hz (500 Hz), Reduction, Separation.
Boost, 800 Hz, Fullness.
Cut, 1.5 KHz, Smoothness.
Boost, 4 KHz to 5 KHz, Edge, and Crispiness.
Boost, 10 KHz, Sparkle.
Cut, > 15 KHz, Roll Off, Distance, Reduction.
Quality: Look for some Fullness, Smoothness, Edge and Sparkle, cut or boost. This all depends on the
actual sound of the hihat; most usually they dominate the 8 KHz to 12 KHz area, so first apply a 3 dB boost
and then surf the area until you find a sound which is suitable for the mix. The main frequencies are the ring
from 7 KHz to 10 KHz, the stick noise at about 5 KHz, and a clang in the range of 500 Hz to 1 KHz. Use an
oversampling EQ of quality.
Reduction: Apply a steep filter cut from 0 Hz to 200 Hz (500 Hz). Depending on the kind of hihat, cut a lot
more, up to about 3 KHz (5 KHz). Hihats can contain frequencies well below 5 KHz, but these don't necessarily
contribute to the sound; they only serve to take up space in the mix (headroom). Roll off for distance > 15 KHz
according to stage planning.
Compression: Do not use a gate on hihats. Mostly hihat signals are not compressed at all, or only slightly
when some dynamic events need to be reduced or gained; use note or event based manual edits or
automation to correct those parts. The hihat has no natural dynamic content in the lower frequency ranges,
so it has little effect on the dynamics, though it is audible.
Reverberation: Use no pre-delay or < 10 ms, checking the rhythm (use a high treble roll off for setting
distance instead). Hihats work well with a short to medium bright reverb setting. Try adding a high level of
early reflections (ambience) to add interest and detail. Dance music uses little or no hihat reverb to retain
the timing and impact of the dry sound (else sync to tempo). Try a short crisp plate program.
Overheads.

Panorama: Keep centered, maybe use a stereo expander to set them as wide as possible. Or use them as a
counterweight for other instruments (especially the drumset).

Frequency Range:
Cut, 0 Hz to 200 Hz (400 Hz), Reduction, Separation.
Cut, 1 KHz, Openness.
Boost, 12 KHz, Zing, Air.
Quality: Add some lustre around 4 KHz with a high shelf EQ. When processing highs use a quality or
oversampling EQ.
Reduction: Apply a low-cut filter from 0 Hz to 200 Hz (400 Hz). Roll off some high trebles to send the
overhead to the back of the stage (according to your stage plan).
Reverberation: Use no pre-delay or < 10 ms, checking the rhythm (use a high trebles roll off for setting
distance instead). Try a longer decay time > 1.2 s, using a hall programme with a decay of about 1.5 seconds.
Cymbals.

Panorama: Place according to stage position; normal cymbals (not crash cymbals) need to be close to the
hihat on the right side, either more left or right. Sometimes at center but widened like the overheads, or
used as a counterweight. Crash cymbals can be placed anywhere; be sure to pan them as wide as possible.
Frequency Range:
Cut, 0 Hz to 100 Hz (400 Hz), Reduction, Separation.
Boost, 100 Hz to 300 Hz, Clunk stick, Clang or Gong.
Cut, 200 Hz to 400 Hz, to thin cymbals, Separation.
Cut, 1 KHz, Openness, Boxy.
Cut, 1.5 KHz to 6 KHz, Ring.
Boost, 7.5 KHz to 12 KHz, Shimmer, and Sizzle.
Boost, 12 KHz, Zing.
10 KHz to 16 KHz, Air, Crispy Cymbals.
Roll Off, > 12 KHz, Limiting, Distance.
Quality: Main clang or gong sound at 200 Hz, crispness at 5 kHz. There are many types of cymbals, splash,
china, effect cymbals, orchestral cymbals, marching band, gongs and specialty stuff. Therefore only adjust
the cymbals marginally. Reducing is always better when you adjust cymbals. Cymbals can be overcrowding
and irritating. The main resonance lies below 1 KHz, in the range from 75 Hz to 300 Hz. From 1 KHz to 3
KHz is the bang of the beat and from 5 KHz to 10 KHz is the click. Resonance is from 8 KHz to 15 KHz.
Pay attention to the quality of plugins used, adding some brilliance. When processing highs use a quality or
oversampling EQ.
Reduction: Cut anywhere from 100 Hz to 350 Hz. A gentle roll off in the lows of 12 to 24 dB so it does not phase
with the snare.
Compression: Do not try using a gate. Mostly cymbal signals are not compressed at all, or only slightly when
some dynamic events need to be reduced or gained; use note or event based manual edits or automation to
correct those parts. Cymbals have no natural dynamic content in the lower frequency ranges, so they have little
effect on the dynamics, though they are audible. Crash cymbals are often played only sporadically, therefore
compression is unsuitable and can be a hassle to set up. As with toms, when they are just sporadic
we tend to only use compression when the input signal is more constant over the timeline and can
be managed more. Else use manual edits instead.
Reverberation: The cymbal track is already a kind of close ambient room sound. Use no pre-delay or < 10
ms, checking the rhythm (use a high trebles roll off for setting distance instead). Cymbals particularly can
be enhanced by a longer reverb.

Toms.

Panorama: Hi Tom placed slightly or far Right, Low Tom placed far left, or the opposite. Remaining Toms
placed in between, like their natural stage positions. Mid tom slightly left, floor tom far left.
Frequency Range:
Cut, 0 Hz to 30 Hz (50 Hz), Less Muddy Mix events, Separation.
Between, 80 Hz to 300 Hz, Fullness, Boom.
Boost, 400 Hz to 800 Hz, Warmth.
Between, 1 KHz to 3 KHz, Ring.
Between, 3 KHz to 8 KHz, Attack.
Between, 8 KHz to 16 KHz, Air, Distance.
Quality: If the toms sound weak, use a bell EQ about 100 Hz to 200 Hz (150 Hz mid frequency) or identify
the exact center frequency for each tom. Rack toms fullness at 240 Hz, attack at 5 kHz. Floor toms fullness at
80 - 120 Hz, attack at 5 kHz. Each tom has only one frequency range, so this can be adjusted with only one
single steep bell filter; toms mostly have just a small frequency range sweet spot. Roll off some trebles to set
the distance.
Reduction: Cutting out the lower bottom end < 120 Hz or more can free up some headroom, but as toms are
not continuous events, maybe < 50 Hz is quite ok, to keep some power.
Compression: A noise gate on the toms can remove some sustain or some compression artifacts, keeping the
transients intact. Sometimes a manual edit does not take any more time, as toms only appear in sudden
events. Use Opto mode for all percussive drums. As with crash cymbals, when they are just sporadic we tend to
only use compression when the input signal is more constant over the timeline and can be managed
more. Else use manual edits instead.
Reverberation: For toms maybe a large snare reverb (same as snare) can bring them out. Toms have a
natural sustain, so don't need much reverb. Plate and small room settings are good for pop, with metal
benefiting from longer settings. Hall settings are good for a big tom sound or try a short crisp plate
program.
Percussion.

Panorama: Percussive elements are often panned left or right and kept away from the center, set as far
outwards as possible. A stereo expander can bring the percussion elements even more outwards. We like to
pan percussion more left or right, not centered. Bongos to the left and far behind in distance, congas far to
the right and also far behind in distance. Panned outwards they remain unmasked by other signals; set them in
the distance when other instruments already overcrowd the stage.
Frequency Range:
Cut, 0 Hz to 120 Hz, Reduction, Separation.
Cut, 200 Hz to 400 Hz, Higher frequency percussion.
Between, 200 Hz to 240 Hz, Resonance.
Around, 5 KHz, Presence, Slap.
Between, 10 KHz to 16 KHz, Air, Crisp Percussion.
Quality: Resonance at 200 Hz to 240 Hz, presence slap at 5 KHz. For distance and depth roll off some high
trebles, sending them more to the backstage.

Reduction: Roll off some lows from 0 Hz to 120 Hz or more; as percussion sits outwards it does not need low
frequency transmission (panning laws). Roll off lower frequencies according to stage placement.
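Since "panning laws" (lower frequency content toward the center, higher content further outwards) come up here and in several of the following placement sections, here is a minimal sketch of that rule in Python; the frequency bounds and the logarithmic mapping are illustrative assumptions, not fixed rules.

import math

def pan_from_frequency(dominant_hz, lo=120.0, hi=8000.0):
    # 0.0 = dead center, 1.0 = fully outward (left or right is your choice)
    if dominant_hz <= lo:
        return 0.0
    if dominant_hz >= hi:
        return 1.0
    # logarithmic mapping, since pitch perception is roughly logarithmic
    return (math.log10(dominant_hz) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))

for name, freq in [("bass", 80), ("conga", 400), ("shaker", 6000)]:
    print(name, round(pan_from_frequency(freq), 2))   # bass 0.0, conga ~0.29, shaker ~0.93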
Compression: Compression can help bring forward the transients while reducing the sustaining sounds
(keep some headroom). Use Opto mode for all percussive drums.
Reverberation: As for reverbs or delay, the reverb placed for the snare (Group, Send) can be useful for
percussion too; we tend not to use the ambient reverb of the whole drumset, maybe just a little to glue the
percussion together with the rest of the drumset. Percussion requires a long reverb with little pre-delay and a
little high frequency cut, or is damped by the reverb setting. For the Percussion (Group, Send) use a medium
sized Room with 1.5 to 2 seconds of reverb time, a pre-delay of 15 ms and a medium roll off in frequency
(damping or EQ). Maybe place a stereo expander afterwards to widen the percussion outwards. The masking
effect might hide the reverb, so set the loudness of the reverb high enough to get some 3D spatial information
transmitted. A stereo expander after the reverb signal and some automation can solve the hiding problem;
just watch the correlation meter while you widen. A delay can help sweeten the percussion, but only when
you sync the delay to tempo and keep the pre-delay short. For the reverb, use no pre-delay or < 10 ms,
checking the rhythm (use a high trebles roll off for setting distance instead). Transients are more important
for percussive sounds; since percussion is mostly placed backstage, we need the original transients to be
heard for the rhythmic content. Percussion instruments placed consciously toward the rear need a large
reverb with some pre-delay and filtered trebles. Reverb can be generously applied here, so the masking effect
stays away or is overpowered by the reverb, which carries the 3D spatial information. Reverb layering, for
instance, gives percussion tracks a medium, thick room with quality. The return is processed with a little
widening to counteract the masking effect and to place the percussion behind the drums, with a little
pre-delay on the reverb and slightly attenuated trebles.
Bass.

Panorama: The bass is a fundamental (next to the bass drum). A bass that is not played dead center through
the whole track timeline, but rolls a bit from left to right, offsets the balance of the mix, because the bass
carries heavy lower frequency components. Bass is always placed at the center and if not, it is correction
time. So really keep the bass dead centered. Sudden bass events that sway left or right make the bass less
effective and make the transmission of both speakers less effective. Maybe convert the bass to mono or place
a mono converter at the end of the channel; convert bass samples to mono. The bass needs to be dead
center at all times! Use a correlation meter and, most of all, the goniometer to check that the bass stays dead
center in all events.
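A minimal sketch, assuming the bass track is available as two NumPy arrays left and right: the correlation check mirrors what the correlation meter and goniometer show, and the mono fold places the signal dead center. The names and test values are illustrative only.

import numpy as np

def stereo_correlation(left, right):
    # +1.0 = both channels identical (dead center), 0 = unrelated, -1.0 = out of phase
    return float(np.corrcoef(left, right)[0, 1])

def fold_to_mono(left, right):
    # sum both channels equally; the result sits dead center on playback
    mono = 0.5 * (left + right)
    return mono, mono.copy()

# dummy data: a 55 Hz bass tone with a slightly shifted, quieter copy on the right
sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 55 * t)
right = 0.9 * np.sin(2 * np.pi * 55 * t + 0.1)
print(round(stereo_correlation(left, right), 3))   # close to +1.0, so nearly centered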
Frequency Range:
Bass Note Range: 33 Hz (C1) to 523 Hz (C5).
Bass Guitar Note Range: 31 Hz (B-1) to 392 Hz (G2).
Roll Off, 0 Hz to 30 Hz, Reduction, Separation.
Boost, 60 Hz to 100 Hz, Bottom, Small Frequency Range, Careful and Exact.
Cut, 60 Hz, Humming Noises, Eliminate.
Boost, 100 Hz to 120 Hz, Pointy, Prominent, Fat.
Between, 120 Hz to 300 Hz, Warmer.
Around, 200 Hz, Leave it or cut.
Around, 250 Hz, Nasty Bass Frequencies, Cut.
Between, 400 Hz to 800 Hz, Clarity.
Between, 500 Hz to 1.5 KHz, Pluck noise.
Around, 800 Hz, Mid Tops, Fret noise.
Around, 2 KHz, Presence and Definition.
Between, 2 KHz to 6 KHz, Edge, String noise.
Roll Off, > 10 KHz, All Highs, Distance, Reduction.
Reggae Bass Sound:
Boost, 40 Hz, +10 dB.
Boost, 80 Hz, +12 dB.
Cut, 160 Hz, -8 dB.
Cut, 240 Hz, -6 dB.
Cut, 600 Hz, -15 dB.
Boost, 1 KHz to 1.5 KHz, +1 to 3 dB.
EQ: Frequency-wise the bass needs room. All low frequency content from 0 Hz to 120 Hz should be bass only,
especially in the center. Only the small 80 Hz to 100 Hz bass drum bottom kick range is welcome there; other
instruments should have a big cut here.
Quality: To get some quality maybe boost a bit in the 40 Hz to 70 Hz range, though it is usually better not to,
so be careful. For that wooden sound try the 750 Hz to 1 KHz range. Main areas: bottom at 60 Hz to 80 Hz,
attack and pluck at 750 Hz to 1 KHz. 800 Hz to 1200 Hz is the nasal, or woody, part. String noise pop at 2.5 KHz.
Reduction: Bass needs space to play; any unnecessary sound event in the lower frequency range will create a
muddy bass (masking). So it is best to cut the other instruments anywhere from 0 Hz to 120 Hz at least, but we
pay special attention to the rest of the fundamentals: bass drum, snare and main vocals. For bass this means a
lot: we expect that the range 0 Hz to 30 Hz can be cut steeply, while leaving 30 Hz to 120 Hz (180 Hz) intact.
The only instrument that can go as low as 30 Hz is the bass; no other instrument will get this low. So for bass
you can cut from 0 Hz to 30 Hz, just to get rid of all flabbiness, pops, rumble and bottom end sounds. A cut or
damp in the 120 Hz to 350 Hz (500 Hz) misery range will get some more headroom back; listen to how much
you can cut here. For bass drum and bass, thin out 180 Hz to 250 Hz by a few dB. Roll off some highs > 10 KHz
to set the distance according to stage planning. The bass does not really need any high frequencies, so cut
them anyway. The bass must fall behind the main vocals in distance.
Compression: Control the balance of heavy notes and stop notes, damped or dead notes. Dynamic limiting of
sudden level peaks and irregular playing. Creation of sustain on long notes (especially in songs with a low
tempo), with a long release time synced to tempo. Boosting quieter side notes (funk with fast release times
must exclude sustain). Supporting rhythmics and percussiveness with long attack times (transients).
Sometimes multiband compression can help a bass, but only resort to it when all else fails. A well played
bassline is worth millions; it makes manual level / volume / muting adjustments easy and leaves fewer
compression tricks needed for avoiding dead notes, or just edit them out of the bassline. Use a bit slower
attack and slower release, so you leave or accentuate the transient of the hit. Of course, if you want to smooth
out the bass and bury it in the track by getting rid of the attack, use a fast attack on the compressor. Attack
time can be set between 10 ms and 40 ms. For getting the bass more sustain, use a ratio of 4:1 (to 6:1) and
reduce the threshold until it hits. Watch the gain reduction meter and try to get the bass as stable in sustain
as you can. Use some gain to compensate for the reduction. Set the release so that the sustain reduction is
stable. Longer attack times of 5 ms to 30 ms let the transients (the first part of a note) pass through. The
compressor attack time can be used to control the snappiness and therefore the definition at the start of each
note. The amount of compression controls the balance between the heavy sounding notes and corrects
damped sounds and dead notes; the dead notes can be emphasized well with the compressor by sustaining
them. A short attack time cuts the transients more, therefore raising the sustain. The length of the bass notes
is the groove. Basslines sound especially good when notes sustain equally on each second or fourth quarter
note. A compressor can also be used to shape dynamics, limiting peaks (creating sustain for long tones in
tempo with the rhythm). A long release time (according to tempo) is more rhythmical. If the bass sounds weak
in the transients you can support it with a long attack time. The release time must be set for the bassist's
playing style; short notes will need a fast release time. Watch for background noise pumping.
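Several of the settings above (release time, sustain length) are tied to the song tempo. A minimal sketch of that arithmetic, assuming the BPM is known; the note values printed are just examples:

def note_length_ms(bpm, fraction=0.25):
    # one quarter note (one beat) lasts 60000 / bpm milliseconds
    quarter = 60000.0 / bpm
    return quarter * (fraction / 0.25)

bpm = 100
for fraction, name in [(0.125, "eighth"), (0.25, "quarter"), (0.5, "half")]:
    print(f"{name} note at {bpm} BPM: {note_length_ms(bpm, fraction):.0f} ms")
# a quarter note at 100 BPM is 600 ms, a starting point for a rhythmical release time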
Reverberation: Try not to use it. Else try an ambient reverb (like the bass drum on the drum group) and keep
it very subtle or inaudible. Maybe a small Room or Ambient reverb, with a bit of pre-delay from 0 ms to 10
ms (0 ms please for rhythmical content); when set higher, sync it to tempo. Just roll off some trebles after the
reverb to set the stage again. Use no pre-delay on reverbs (maybe a bit just to set the distance behind the
main vocals, but you could use a treble roll off instead). Bass just needs enough ambient reverb, just slightly
more than the bass drum has.
Chorus: Double the bass signal, splitting it into two signals, lows and highs. On the highs the chorus can do its
job without phasing > 250 Hz (so as not to sway the lower frequencies around, keeping them centered). Use
chorus and phase on a bass only above 250 Hz. Maybe you have already split the bass signal into two
frequency parts; then just use chorus on the higher part > 120 Hz to 250 Hz.
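A minimal sketch of the split described above, assuming the bass is a mono NumPy array bass at sample rate sr and using SciPy filters; the 250 Hz crossover follows the text, everything else is illustrative.

import numpy as np
from scipy.signal import butter, sosfilt

def split_bass(bass, sr, crossover_hz=250.0):
    sos_low = butter(4, crossover_hz, btype="lowpass", fs=sr, output="sos")
    sos_high = butter(4, crossover_hz, btype="highpass", fs=sr, output="sos")
    lows = sosfilt(sos_low, bass)    # keep dry and dead center
    highs = sosfilt(sos_high, bass)  # send only this part to the chorus effect
    return lows, highs

# usage with a dummy one-second E2 bass note
sr = 44100
bass = np.sin(2 * np.pi * 82.4 * np.arange(sr) / sr)
lows, highs = split_bass(bass, sr)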
Guitar Acoustic.

Panorama: In mixes guitars mostly come in pairs, so we can use this to set one left and one right, to keep a
balanced feel and to stay upfront. Guitars might be understood as crucial, but they are not fundamental, so
they are not placed in the center. When only a single guitar is played, maybe use some other instrument
on the opposite side (counteract). When acoustic guitar and vocals are the only fundamental mix
components, we have to use different mixing techniques: vocals centered upfront and acoustic guitar
off-center with a widening effect more outwards. Even for solos a switched filter can become a good tool.
Keeping the main vocal upfront at center, we try to avoid masking the main vocals.
Frequency Range:
Cut, 0 Hz to 80 Hz (120 Hz), Reduction, Separation.
Between, 80 Hz to 120 Hz, Power.
Between, 200 Hz to 400 Hz, Boom, Warmth.
Cut, 500 Hz to 1 KHz, Brittleness.
Between, 1 KHz to 1.5 KHz, Strumming.
Between, 2 KHz to 3 KHz, Abrasion, Bite.
Between, 2 KHz to 5 KHz, Clarity, Honky.
Between, 5 KHz to 10 KHz, Nasal.
Quality: Check out the highs with a spectral analyzer. To add sparkle, try some gentle boost at 10 KHz using a
band pass filter with a medium bandwidth. Bottom at 80 Hz to 120 Hz, body at 240 Hz, clarity at 2.5 KHz
to 5 KHz. Apply a little cut at 300 Hz. Roll off a bit of the high trebles to set the distance. Guitars soloed
can sound thin, while in the mix they sound good.
Reduction: Cut the lows from 0 Hz to 80/120 Hz (250 Hz/400 Hz) depending on the note range and on whether
the guitar events are combined with vocals (automation). Be sure everything below 100 Hz is cut by -15 dB. A
good steep roll off in EQ from 0 Hz to 120 Hz (250 Hz) can help free up some headroom and get rid of some
nasty guitar body sounds; the correlation also gets better. Also, when the highs are not nicely rolled off, do so
with a roll off in EQ for distance. Sometimes the lead vocals and guitar play at the same time, meaning both
need to stay upfront in distance. We can switch cutoff filters (> 250 Hz to > 400 Hz) while the main vocals are
sounding. Use a quality oversampling EQ on the range > 8 KHz.
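The dB figures used throughout (like the -15 dB cut below 100 Hz above) translate to linear gain as in this small helper; a minimal sketch, only for checking numbers on a meter or in a filter design:

def db_to_gain(db):
    # amplitude ratio corresponding to a dB value
    return 10.0 ** (db / 20.0)

print(round(db_to_gain(-15.0), 3))   # ~0.178, the level left after a -15 dB cut
print(round(db_to_gain(-3.0), 3))    # ~0.708, the familiar -3 dB point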
Compression: The guitar can sound weak when an EQ is adjusted in front of the compressor; the EQ then
specifies the frequency range and broadband content and removes low rumble and other unwanted signals.
Attack time can be set between 10 ms and 40 ms for more transients and a percussive, rhythmic feel. A fast
attack usually goes with a somewhat fast release. Compress with a ratio of 4:1, an attack of about 5 ms, hold
250 ms and release 50 ms (100 ms to 250 ms) for the sustained sounds. Make up for gain. You might even
need to add an effect that generates more warmth. Uncompressed guitars are difficult to handle inside the
mix. For a more percussive, rhythmic approach use Opto mode; for a softer, more contained sound use RMS mode.
Reverberation: A short bright plate reverb can work well on steel-strung acoustic guitars. Applying some
reverb or delay (or any other guitar effect) can help to counteract on the opposite side. Many guitar players
prefer the sound of the spring reverb in their amplifiers. Maybe set up a delay afterwards. Maybe use a small
dose of ambient reverb available on a group or available send.
Delay: Delay can work out better for guitars that must stay upfront; a reverb will draw them more to the back.
Delays are clearer and less muddy/fuzzy, so this again helps to keep the main guitar upfront but still
give it some space. Maybe place a widener or expander behind it.
Guitar Electric.

Panorama: In mixes electric guitars mostly come in pairs, so we can use this to set one left and one right, to
keep a balanced feel. Guitars might be understood as crucial, but they are not fundamental, so they are not
placed in the center, where it is already overcrowded. Don't be shy with panning settings; go more outwards.
When only a single electric guitar is played, maybe use some other instrument on the opposite side
(counteract). When the guitar and vocals are the only fundamental mix components, we have to use
different mixing techniques: vocals centered upfront and guitar off-center with a widening effect more
outwards. We can use a switched filter setting depending on whether the main vocals are sounding or not.
Even for solos a switched filter can become a good tool. Keeping the main vocal upfront at center, we try to
avoid masking the main vocals.
Frequency Range:
Cut, 0 Hz to 80 Hz (120 Hz), Reduction, Separation.
Between, 125 Hz to 250 Hz (400 Hz), Warmth.
Boost, 500 Hz, Body.
Cut, 500 Hz to 1 KHz, Brittleness.
Boost, 2 KHz to 3 KHz, Abrasion, Bite.
Filter, 2.5 KHz, +3 dB LF / +6 dB MF.
Between, 3 KHz to 5 KHz, Crisp.
Roll Off, 4 KHz to 4.5 KHz, Irritating.
Boost, 6 KHz, Distorted Guitars.
Quality: Fullness at 240 Hz, bite at 2.5 KHz. Clean electric guitars can be treated like acoustic guitars.
Apply a little cut at 300 Hz. Roll off a bit of the high trebles to set the distance.
Reduction: Cut the lows from 0 Hz to 120 Hz (250 Hz) depending on the note range and reduction needs (are
the main vocals sounding?). Be sure everything below 100 Hz is cut by -15 dB; the correlation gets better.
Also, when the highs are not nicely rolled off, do so with a roll off in EQ for distance. Sometimes the lead
vocalist plays guitar at the same time, meaning both need to stay upfront in distance. Check out the highs
with a spectral analyzer. To add sparkle, try some gentle boost at 10 KHz using a band pass filter with a
medium bandwidth, and use a quality oversampling EQ.
Compression: Use a ratio of 4:1 with an attack of about 7 ms, hold 250 ms, and release 50 ms. A fast attack
usually goes with a somewhat fast release. Enhance the transients when needed; electric guitars tend to have
enough sustain. Make up for gain. You might even need to add an effect that generates more warmth. For
sustain, set a fast attack time and a release of around 250 ms, when really needed. Set the ratio from 4:1
upwards, and apply gain reduction up to 12 dB.
Reverberation: Applying some reverb or delay (or any other guitar effect) can help to counteract on the
opposite side. Many guitar players prefer the sound of the spring reverb in their amplifiers. Maybe set up a single delay.
Delay: Delay can work out better and makes any instrument stay more upfront, while a reverb will draw it
more to the back. Delays are clearer and less muddy/fuzzy, so this again helps to make it stand upfront but
still have some space (unmasking). Metal guitars are often panned completely left or right and sometimes
make use of heavy delay.
Piano.

Panorama: It is most likely that we will place the piano (as an unfundamental instrument) by panning it
more left or right. When it appears as a fundamental instrument (alongside the main vocals) we can try to
widen and expand it around the main vocals. We could counteract the piano with another instrument or a
reverberation device on the opposite side of the panorama, or even use a widened reverb tail that progresses
outwards. A piano can get any placement, left or right, when played by band members. Sometimes, however,
the main vocalist will also play the piano, so maybe set the main vocals a bit to the left or right, with the
piano opposite. In this case we do not roll off any trebles and keep them both upfront. We can also place the
piano slightly behind the main vocals at center (as fundamentals), and switch or mute the piano's filter when
it plays solo or when the main vocals are sounding.

Frequency Range:
Piano Note Range: 28 Hz (A-1) to 3651 Hz (B7).
Cut, anywhere from 30 Hz to 120 Hz, Reduction, Separation.
Boost, 100 Hz, Power.
Around, 250 Hz, Clarity.
Boost, 1 KHz to 3 KHz, More Aggressive.
Boost, 2 KHz, Harmonics.
Boost, 6 KHz, Attack.
Boost, 12 KHz, Sparkle, Air.
Quality: Bottom at 80 Hz to 120 Hz, presence at 2.5 to 5 KHz, crisp attack at 10 KHz, honky-tonk sound
(sharp Q) at 2.5 KHz. Roll off some highs when needed for distance.
Reduction: Difficult to master inside a mix, basically because a piano can play a wide range of notes across
all octaves. Depending on the mixing purpose, we can address two situations. First, we have a mix where
bass drum and bass are already playing as fundamental instruments. In this case the piano becomes
unfundamental. For the bass range to stay clear, we can cut a lot from 0 Hz to 120 Hz out of the bottom end
of the piano. Second, when we have no bass drum or no bass playing, or both are absent, the piano can be
more fundamental; we can leave some lower frequencies inside the spectrum and be more careful rolling off
the lows. Anyway, a good EQ cut from 0 Hz to 30 Hz (50 Hz) is always applied. Still, we like to roll off all
frequencies below 120 Hz. We can roll off some high trebles for distance.
Reverberation: Depending on stage planning, we can add some ambience reverb for upfront, and depth by
using a larger reverb. Maybe use some delay instead (specially when needed upfront).
Delay: Delay can work out better and makes any instrument stay more upfront, while a reverb will draw it
more to the back. Delays are clearer and less muddy/fuzzy, so this again helps to make it stand upfront but
still have some space.
Epiano.

Panorama: An epiano can get any placement when played by band members, still not at the already
overcrowded center. It is more likely we will place the epiano (as an unfundamental instrument) by panning
it more left or right. We could counteract the epiano with another instrument (piano) or a reverberation
device on the opposite side of the panorama. When fundamental, however, it is sometimes combined with the
main vocals, so maybe set the main vocals a bit to the left or right, with the epiano opposite. In this case we
do not roll off any trebles and keep them both upfront. We can also place the epiano slightly behind the main
vocals at center (as fundamentals), and switch or mute the epiano's filter when it plays solo or when the main
vocals are sounding.
Quality: A famous epiano is the Rhodes. It is less difficult to master inside a mix. Roll off some highs when
needed for distance.
Reduction: Depending on the mixing purpose, we can address two situations. First, we have a mix where
bass drum and bass are already playing as fundamental instruments. For the bass range to stay clear, we can
cut a lot from 0 Hz to 120 Hz out of the bottom end of the epiano. Second, when we have no bass drum or no
bass playing, or both are absent, the epiano can be more fundamental; we can leave some lower frequencies
inside the spectrum and be more careful rolling off the highs (keep them upfront). Anyway, a good EQ cut
from 0 Hz to 30 Hz (50 Hz) is always applied. Still, we like to roll off all frequencies below 120 Hz when the
epiano is unfundamental.
Reverberation: Depending on stage planning, we can add some ambience reverb for upfront, and depth by
using a larger reverb. Maybe use some delay instead (specially when needed upfront).

Delay: Delay can work out better and makes any instrument stay more upfront, while a reverb will draw it
more to the back. Delays are clearer and less muddy/fuzzy, so this again helps to make it stand upfront but
still have some space.
Organ.

Panorama: The organ can combine with piano and epiano, maybe placing the organ left and the (e)piano
right. Counteracting is common, placing left or right. Remember that an organ using a rotary (Leslie) effect
can move inside the panorama. Keep track of where the organ is supposed to be placed according to your
stage planning.
Frequency Range:
Roll Off, 300 Hz, Lowers Power.
Boost, 2 KHz to 3 KHz.
Quality: Bottom at 80 Hz to 120 Hz, body at 240 Hz, presence at 2.5 KHz. Roll off some highs for distance.
Reduction: For the Bass range to stay clear, we can cut a lot from 0 Hz to 120 Hz out of the bottom end and
bass range.
Reverberation: Depending on stage planning, we can add some ambience reverb for upfront, and depth by
using a larger reverb. Maybe use some delay instead (specially when needed upfront).
Delay: Delay can work out better and makes any instrument stay more upfront, while a reverb will draw it
more to the back. Delays are clearer and less muddy/fuzzy, so this again helps to make it stand upfront but
still have some space.
Keyboards.

Panorama: More left or right; we use counteracting and our stage plan to decide where to place these
instruments. Keyboards often sweep in panning, just keep them out of the center and reduce the lows. Pan
more left or right, but maybe use a stereo expander when they are more fundamental.
Quality: Keyboards can usually play lots of different instruments altogether. Keyboards always use a low-cut
filter anywhere between 50 Hz and 150 Hz. Look out for DC offset and remove it.
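A minimal sketch of that clean-up, assuming the keyboard track is a mono NumPy array audio at sample rate sr; the 80 Hz cutoff sits inside the 50 to 150 Hz window mentioned above and is otherwise an arbitrary choice.

import numpy as np
from scipy.signal import butter, sosfilt

def clean_keyboard(audio, sr, lowcut_hz=80.0):
    audio = audio - np.mean(audio)   # remove DC offset (the constant bias, if any)
    sos = butter(2, lowcut_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)       # low-cut filter below the chosen frequency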
Reduction: Dividing each keyboard instrument onto a separate track makes the mix more adjustable.
Depending on the instrument played by the keyboard, decide what to do. Playing a bass, piano, epiano,
synth or brass with a keyboard means deciding where their frequency ranges are. A cut from 0 Hz to 50 Hz is
always applied, but when leaving the bass range alone we should cut towards 120 Hz or even higher. Also we
can control the trebles to set some distance according to the individual stage planning of each instrument.
Sometimes a keyboard plays bass or bass drum (drums); then we refer to them as bass and drums and react
accordingly (making them fundamental instruments). Sometimes keyboards play percussive instruments; we
react accordingly as if they were real percussive instruments.
Reverberation: For background keyboards or background sounds (Group), use a large reverb with a pre-delay
of about 25 ms (check the snare reverb for starters). Use an EQ to roll off the highs strongly and the reverb
sends it all to the back in distance. Maybe a modulation delay.
Delay: Delay can work out better for keyboards that must stay upfront, while a reverb will draw them more
to the back. Delays are clearer and less muddy/fuzzy, so this again helps to keep the keyboards upfront but
still give them some space.
Synthesizers.

Panorama: Panned more left or right; we use counteracting and our stage plan to decide where to place
these instruments. Synths can play bass sounds as well, so decide on tactics according to the instrument sound.
Quality: Synthesizers usually can play lots of different instruments altogether, mostly analog or digital
artificial sounds.
Reduction: Dividing each instrument onto a separate track makes the mix more adjustable. Depending on the
instrument played, decide what to do and where its frequency range lies. A cut from 0 Hz to 50 Hz is always
applied, but when leaving the bass range alone we should cut towards 120 Hz or even higher. Also we can
control the trebles to set some distance according to the individual stage planning of each instrument.
Sometimes a synth plays bass or bass drum (drums); then we refer to them as bass and bass drum and react
accordingly (making them fundamental instruments). Sometimes a synth plays percussive instruments; we
react accordingly as if they were real percussive instruments.
Compression: Most synthesizers don't need compression, so be sparing. Analog filter sweeping can be
compressed for peaks with a ratio of 4:1 to 6:1.
Reverberation: Adding delay can support the synth sound to become more natural and fitting inside a mix,
used as a creative effect. To set the distance we could roll off some highs. Maybe a Modulation delay. To
thicken synth sounds, try a bright reverb with predominant early reflections. A short, high level reverb
makes the synth sound like multiple instruments in an acoustic space.
Delay: Delay can work out better and makes any instrument stay more upfront, while a reverb will draw it
more to the back. Delays are clearer and less muddy/fuzzy, so this again helps to make it stand upfront but
still have some space.
Violins.

Panorama: Depending on the frequency range of the alt-violins and the higher violins, we place them more
outwards (according to panning laws). For modern pop music we might set all strings behind the drummer,
spreading them out in the panorama (stereo expander), leaving the violas and cellos more centered and the
violin and alt-violin more outwards. For a more classical approach, use the orchestral stage plan to place all
stringed instruments.
Frequency Range:
Violin Note Range: 195 Hz (G3) to 3136 Hz (G7).
Viola Note Range: 131 Hz (C3) to 2093 Hz (C7).
Cello Note Range: 65 Hz (C2) to 1047 Hz (C6).
Roll Off, 0 Hz to 120 Hz (180 Hz), Reduction, Separation.
Around, 200 Hz to 500 Hz, Fullness.
Cut, 800 Hz to 1 KHz, Recession.
Boost, 5 KHz to 10 KHz, Clarity.
Between, 7.5 KHz to 10 KHz, Scratchiness.
Between, 10 KHz to 16 KHz, Air, Sparkle (if present).
Roll Off, 10 KHz, Distance, Reduction.

Quality: Fullness at 240 Hz, scratchiness at 7.5 KHz to 10 KHz. Roll off highs for distance.
Reduction: Cut a lot of the lower frequency range, 0 Hz to 120 Hz (195 Hz), without harming the main
frequency range. Violas and cellos have a lower frequency range, so we might cut a little less from 0 Hz to
120 Hz. Still we like to cut all rumble and lower frequencies, and also roll off the highs to set the strings
behind the drummer.
Reverberation: High pre-delay for strings can send them to the back rows, and rolling off the highs of the
reverb can set them even further back.
Brass.

Panorama: Horns, Trumpets, Trombones and Tuba. Depending on their frequency range and placement,
decide where they fit in. Scattered across the whole panorama. Placing lower instruments more centered
and higher instruments more outwards (panning laws).
Frequency Range:
Trumpet Note Range: 115 Hz (E3) to 1047 Hz (C6).
Trombone Note Range: 82 Hz (E2) to 698 Hz (F5).
French Horn Note Range: 65 Hz (C2) to 698 Hz (F5).
Tuba Note Range: 37 Hz (D1) to 349 Hz (F4).
Piccolo Note Range: 587 Hz (D5) to 4186 Hz (C8).
Flute Note Range: 262 Hz (C4) to 2349 Hz (D7).
Oboe Note Range: 247 Hz (B3) to 349 Hz (F4).
Clarinet Note Range: 147 Hz (D3) to 349 Hz (F4).
Alto Sax Note Range: 147 Hz (D3) to 880 Hz (A5).
Tenor Sax Note Range: 98 Hz (G2) to 698 Hz (F5).
Baritone Sax Note Range: 73 Hz (D2) to 440 Hz (A4).
Bassoon Note Range: 62 Hz (B1) to 587 Hz (D5).
Cut, 0 Hz to 120 Hz (180 Hz), Reduction, Separation.
Between, 120 Hz to 550 Hz, Power, Warmth, Fullness.
Between, 1 KHz to 5 KHz, Honky, Contrast.
Between, 6 KHz to 8 KHz, Rasp, Harmonics, Solo.
Between, 5 KHz to 10 KHz, Shrill.
Roll Off, 12 KHz, Distance, Reduction.
Quality: Fullness at 120 Hz to 240 Hz, shrill at 5 KHz to 10 KHz. Roll off some highs according to distance.
Reduction: For the higher instruments like trumpets and some trombones, cut a lot from 0 Hz to 180 Hz.
For lower instruments like tuba and horns, cut a lot from 0 Hz to 120 Hz. We do not like the brass
instruments to sit behind the drummer, so do not roll off too much.
Compression: The trumpet is by far the loudest of the horns, with a large dynamic range that can reach
from soft melodies up to stabs and shouts, so overall levels are not very constant. When dealing with EQ and
compression, you'll often treat the horn section as a single unit (Group). Apply a good amount of
compression on peaks, but stay away from really compressing the main parts.
Reverberation: There's something that adds to the excitement of a horn section when you hear it from a
distance, when it's interacting with the room. We tend to use a more roomy reverb sound, hall. Reverb and
delay work very well with horns.
Orchestral Instruments.

Panorama: According to stage position.


Frequency Range:
Harp Note Range: 65 Hz (C2) to 2794 Hz (F7).
Harpsichord Note Range: 44 Hz (F1) to 1319 Hz (F6).
Xylophone Note Range: 392 Hz (G4) to 2093 Hz (C7).
Glockenspiel Note Range: 195 Hz (G3) to 349 Hz (F4).
Vibraphone Note Range: 175 Hz (F3) to 1397 Hz (F6).
Timpani Note Range: 73 Hz (D2) to 262 Hz (C4).
Marimba Note Range: 65 Hz (C2) to 2093 Hz (C7).
Quality: Keep track of the Note Ranges.
Reduction: Cut below lowest note.
Vocals.

Recording: Record doubled takes. Mix the double low so it is not obvious. Timing is important, so maybe
manually edit the audio. The doubled takes can differ in tuning and vocal quality, but most of the time they
do not need to be retuned at all.
Panorama: Main vocals are placed at center and upfront, dead in front of all fundamentals and
unfundamentals. Maybe have two different copies running left and right (doubling), but this must still result
in centered main vocals (avoid swaying around). A good trick is panning duplicates of the vocals left and
right. You can invert the right signal for a really dramatic effect. Pitch shifting the left copy -4 and the right
+4 can also make a more dramatic effect. However, the vocals should always align to the center.
Frequency Range:
Vocals Note Range: 82 Hz (E2) to 880 Hz (A5).
Cut, 0 Hz to 100 Hz (120 Hz), Roll Off, Reduction, Separation.
Fullness, 120 Hz.
Male Fundamentals, 100 Hz - 500 Hz, Power, Warmth.
Female Fundamentals, 120 Hz to 800 Hz, Power, Warmth.
Cut, 200 Hz to 400 Hz, Clarity.
Boost, 500 Hz, Body.
Boost, 315 Hz to 1 KHz, Telephone sound.
Boost, 800 Hz to 1 KHz, Thicken.
Vowels, 350 Hz to 2 KHz.
Cut, 600 Hz to 3 KHz, Lose Nasal Quality.
Consonants, 1.5 KHz to 4 KHz.
Between, 2 KHz to 5 KHz, Presence.
Between, 7 KHz to 10 KHz, Sibilance.
Around, 12 KHz, Sheen.
Around, 10 KHz to 16 KHz, Air.
Between, 16 KHz to 18 KHz, Crisp.
Words:

sAY, 600 Hz to 1.2 KHz.
cAt, 500 Hz to 1.2 KHz.
cAr, 600 Hz to 1.3 KHz.
glEE, 200 Hz to 400 Hz.
bId, 300 Hz to 600 Hz.
tOE, 350 Hz to 550 Hz.
cORd, 400 Hz to 700 Hz.
fOOl, 175 Hz to 400 Hz.
cUt, 500 Hz to 1.1 KHz.
EQ: If the vocals tend to sound closed up, boost some 120 Hz to 350 Hz. The male range is around 2 KHz and
the female range around 3 KHz, with a wide Q-factor (standard for vocal use). The range from 6 KHz to 8 KHz
(up to 12 KHz) contains sensitive sibilant sounds. Always boost subtly. Combining with a de-esser can help.
Even before EQ, look at some manual editing. Wideness and openness sit at 10 to 12 KHz and beyond. Use a
quality oversampling EQ on the highs. Sometimes a complete vocal track needs to be processed overall;
AAMS Auto Audio Mastering System can help with its reference vocal presets.
Quality: Filtering can make a difference for a chorus section (for instance one that is muddied or masked).
Boost some 3 KHz to 4 KHz for our hearing to recognize the vocals more naturally and upfront. Boost 6 KHz to
10 KHz to sweeten vocals; the higher the frequency you boost, the more airy and breathy the result will be
(and the more EQ quality you will need). Cut 2 KHz to 3 KHz to smoothen a harsh sounding vocal part. Cut
around 3 KHz to remove the hard edge of piercing vocals. When a vocal sounds boxy, apply some steep EQ at
150 Hz to 250 Hz; reducing these levels makes it less boxy (sounds more open). Boost 2 KHz to 3 KHz (5 KHz)
with a low Q-factor ranging from 1 KHz to 10 KHz to adjust speech comprehensibility. Some slight support
here is standard; any microphone will muffle the sound a bit, so we must compensate for this in the 2 KHz to
3 KHz range. Main adjustment ranges: fullness at 120 Hz, boominess at 200 Hz to 240 Hz, presence at 5 KHz,
sibilance at 7.5 KHz to 10 KHz. You could add a small amount of harmonic distortion or a tape emulation
effect. A good trick can be running duplicated, manipulated copies of the main vocal left and inverted right;
this will be heard in stereo, but not in mono.
Reduction: Roll off < 50 Hz (< 80 Hz downwards, steep); cut below this frequency on all vocal tracks. Use a
good low-cut from 0 Hz to 120 Hz. This should reduce the effect of any microphone pops. It is common to
use a high pass filter (at about 60 to 80 Hz) when recording vocals to eliminate rumble. The better vocals are
recorded, the better they can be placed inside the mix. Breaths are a question of style; cutting them is
common. If you duplicate a track, do not duplicate the breaths. You can edit out all breaths separately onto
their own track, and then remove all other breaths from the vocals. Sibilants and 'T' end sounds (rattling in a
chorus) can be faded out, mostly done by manually editing the vocal tracks. Use no roll off to make the main
vocals even more upfront (keeping the trebles intact), but do roll off the background vocals.
De-esser: Expanding or compressing frequencies between 6 KHz and 8 KHz covers the 'sss' and de-esser
range, using a bandpass filter. A good de-esser is crucial (extreme reduction, but no lisp effect). You can
also edit all 'sss' sounds manually. To make the vocal more open, boost the trebles from 10 KHz upwards (use
an oversampling EQ) to make them sound upfront. Consider manual editing before using a de-esser.
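A very rough sketch of the band detection a de-esser performs, assuming a mono vocal NumPy array vocal at sample rate sr; the threshold and reduction amounts are arbitrary assumptions, and a real de-esser reacts much more smoothly and usually ducks only the band itself.

import numpy as np
from scipy.signal import butter, sosfilt

def simple_deesser(vocal, sr, threshold=0.05, reduction=0.5):
    # isolate the 6-8 KHz sss band mentioned above
    sos = butter(4, [6000.0, 8000.0], btype="bandpass", fs=sr, output="sos")
    sss = sosfilt(sos, vocal)
    # crude envelope: rectify and average over roughly 5 ms
    win = max(1, int(sr * 0.005))
    env = np.convolve(np.abs(sss), np.ones(win) / win, mode="same")
    gain = np.where(env > threshold, reduction, 1.0)   # duck when the sss band peaks
    return vocal * gain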
Tune and Double: Autotune or tune the vocals. Maybe mix the original track together with the tuned track;
just copy a ghost track and manipulate it. You can even use some stereo expansion or widening. When you do
not have enough vocals or background vocals, copy them and double / tune / manipulate. Do not widen
copied tracks for the main vocals, but you can widen the background vocals with a stereo expander according
to panning laws.
Compression: 1176! To make all vocals sit in the mix, we need compression. Compression on vocals can
sound loud and hard on its own, but it will be fine inside the mix and keeps the vocals upfront. Background
choirs can be many voices, often compressed combined on a group. Use a fast attack and release; the ratio
depends on the recording and vocal style. Usually a soft knee compressor. Use longer attack times for the
transients, and the release time should be set to the song tempo (or shorter) with little sustain. The vocals
now have more presence and charisma, upfront. Start with a ratio of around 4:1 and work upwards for a
rockier vocal. Use a fairly fast attack time; the release time would normally be around 0.5 sec. A reduction of
12 dB is common for untrained vocalists. Be careful not to over compress; you can always add more later on.
A multiband compressor is a good tool for removing unwanted sounds from vocals: use it as a de-esser for
sss sounds, but also for other unwanted frequencies like pops, clicks and some rumble. The multibands can
also be used for different vocal applications. One band, 0 Hz to 120 Hz, mainly compresses rumble and pops;
use a fast attack. Another band, 3 KHz to 10 KHz, searches for de-essing sss sounds; start with a ratio of 5:1 to
8:1 and lower the threshold until the sss peaks are hit. Another band, 4 KHz to 8 KHz, can be used for
presence with light ratios of 1.5:1 to 3:1.
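The multiband split just described can be written down as a plain configuration; the band edges, roles and ratios are taken from the text above, while the dictionary layout itself is only an illustration, not any plugin's API.

vocal_multiband = [
    {"band_hz": (0, 120),      "role": "rumble and pops", "ratio": "fast attack"},
    {"band_hz": (3000, 10000), "role": "de-essing sss",   "ratio": "5:1 to 8:1"},
    {"band_hz": (4000, 8000),  "role": "presence",        "ratio": "1.5:1 to 3:1"},
]
for band in vocal_multiband:
    low, high = band["band_hz"]
    print(f"{low}-{high} Hz: {band['role']} ({band['ratio']})")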
Reverb and delay: Using a large reverb on the main vocal is not allowed; it is less direct and the singer will
sound pushed back. Use a small room or ambient reverb and be subtle, so the listener is not aware of it.
Combined with bigger rooms and delay, this helps make the vocals sound fuller without pushing them
backwards. Delay, instead of reverb: a delay can make main vocals fuller without placing them further back
on the stage. The more delay you use, the more attention you must pay to the center placement of the lead
vocals. Use the goniometer.
Reverberation: Reverbs for the lead vocals tend to be dry and require a high-quality oversampling device to
prevent the vocals from being pulled into a cloud of reverberation. You need a small, unobtrusive reverb with
attributes similar to a drum booth, often combined with a delay; that might blur less than a medium reverb.
A delay might be far better on main vocals, especially when you need them to be upfront, as in most cases. A
delay tail on the front vocals makes them appear warmer and fuller, without endangering the frontal
placement (panorama). The more the delay appears in the mix, the more it will cover the vocals; using
ducking (sidechained or not) on the first part of the vocals (the transients and a little of the sustain) can free
things up and lose some fuzziness. Record vocals dry and you can apply reverberation in style later on. Use a
generous amount of small room reverb on the main vocals instead of a larger size reverb, or double the main
vocals: add one track with a small room reverb and another with a bigger room through a delay (1/4 step)
and a gate to stay in rhythm (1/4 step). Maybe use a spaced echo. Anyway, it is better not to clutter the vocals
with reverbs and delays on top of each other (serial). Separate all reverb channels here (parallel), containing
the dry signal and the reverbed signals. Sometimes expand the reverb or delay outwards. For main vocals
(single track or group) use a vocal room, drum booth or small ambient reverb. Bright reverbs can sound
exciting, but emphasize sibilance. Use no pre-delay to set the vocals upfront. When combining with a delay,
using a medium reverb might be just too much. Main vocals: try to use a stereo reverb with a delay tail, and
place the reverb a little hidden. You might solo the reverb, listen to it and find it a bit loud; within the vocal
mix it might be just right, so don't be scared by this effect. The dry vocals will mask the reverb a bit. Placing a
choir at the back requires a long reverb, with a bit of pre-delay and damped high ends. The reverb can be set
quite high for our ears to accept the 3D spatial information and fight the masking effect. Experiment with a
stereo expander in the reverb's return. For vocals, delay can give more depth and placement inside a mix. Use
a stereo delay to add small amounts of delay (around 35 ms), and watch out for correlation effects.
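A minimal sketch of the short stereo delay mentioned last, assuming a mono vocal NumPy array vocal at sample rate sr; the 35 ms figure follows the text, the wet amount is an arbitrary assumption.

import numpy as np

def short_stereo_delay(vocal, sr, delay_ms=35.0, wet=0.3):
    samples = int(sr * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(samples), vocal[:-samples]])
    left = vocal                                 # dry on one side
    right = (1.0 - wet) * vocal + wet * delayed  # slightly delayed blend on the other
    return left, right                           # check the correlation meter afterwards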
Delay: Delay can work out better and makes any instrument stay more upfront, while a reverb will draw it
more to the back. Delays are clearer and less muddy/fuzzy, so this again helps to make it stand upfront but
still have some space. For lead vocal reverb and delay, it's all about the mix. Create a dry counterweight by
doubling the lead, add EQ, compression and maybe a short delay, and mix it back in. This way the lead vocals
are not pushed back too far, but at the same time sound fatter. A little stereo reverb with a delay tail for the
vocals may work.
Offside Vocals.
Panorama: Sometimes a main vocal singer is accompanied by one or more vocalists. They are mostly placed
more left and right of the centered main vocals, according to their stage position. As long as you counteract
and balance the stereo field, both speakers are playing the same kind of vocal loudness. The background
vocals are spread by panning laws: lower voices more in the middle and higher voices on the outsides.
Basically the settings for these accompanying vocals are the same as for the main vocals.
Background Vocals or Chorus.

Panorama: The chorus is always arranged so that the higher voices are more outside and the lower voices
more centered, according to panning laws. Use a stereo expander on the chorus to widen it even more. There
are also effects that can double or harmonize vocals.

Quality: When a vocal sounds boxy, apply some steep EQ at 150 Hz to 250 Hz; reducing these levels makes it
less boxy (sounds more open). Boost 2 KHz to 3 KHz (5 KHz) with a low Q-factor ranging from 1 KHz to 10
KHz to adjust speech comprehensibility. Some slight support here is standard; any microphone will muffle the
sound a bit, so we must compensate for this in the 2 KHz to 3 KHz range. For a bigger chorus you can
duplicate tracks and use an automatic tuner, pitch shifter or any modeler, slightly changing the color of each
copy. A chorus can be layered on several tracks; for recording a chorus maybe 4 to 16 (or more) vocals could
be used to generate a nice sounding chorus section. The more natural the vocals sound, the better. Roll off a
great deal of the highs for distance and set them at the back of the stage.
Reduction: Use a good low-cut or roll off 0 Hz to 120 Hz. To make the chorus more distant, lower the trebles
above 10 KHz to make them sound at the back of the stage (behind the drummer).
Pitch Shifter: A real-time pitch shifter set to shift -4 and +3, panned more left and right, can be used for
doubling and creative effects. Also worth pointing out are doubling, harmonizing and special vocal effects
like the vocoder or voice changers.
Reverberation: Backing vocals are placed toward the rear: a large reverb with some pre-delay and filtered
trebles. Reverb can be generously applied here, so the masking effect stays away or is overpowered by the
reverb, which carries the 3D spatial information.
Record vocals dry and you can apply reverberation in style later on. For background vocals or choirs (Group),
use a large reverb with a pre-delay of about 25 ms (check the snare reverb for starters). Use an EQ to roll off
the highs strongly and the reverb sends them all to the back in distance, where they belong. High pre-delay
for choirs can send them to the back rows. Try sending the background vocals to a group track. Set a
compressor that compresses the loud sections but leaves the quiet ones uncompressed; when this is used to
feed a reverb, the loud sections will be drier and the softer sections wetter.
De-esser: Frequencies between 6 KHz and 8 KHz are in the 'sss' and de-esser range. A good de-esser is crucial
(extreme reduction, but no lisp effect). You can also edit all 'sss' sounds manually.
Static Mix Reference.
Reading up to here, you should have enough information to finish off the Static Mix as a reference for
further mixing: using the dimensions, quality and reduction, as well as finding some stability with
separation and togetherness, unmasking as much as we can to have clear pathways and at the same time
save some headroom. Until now we have discussed first the Starter Mix, progressed towards a finished Static
Mix. It is basically called static because after setting up there is no automatic timeline movement of knobs,
faders and settings in the timeline of the mix. In the Static Mix we have set up quality, separation
(headroom) and the three dimensions (stage plan). We have discussed why it is better to start with
dimensions 1 and 2 (starter mix) before starting with dimension 3 (static mix). We would like to finish off
dimensions 1 and 2 as well as dimension 3 for a good static mix. Again, the Static Mix is our reference point
for further mixing purposes, so we need to be sure we have done our very best to get the highest possible
result before we progress to mixing more dynamically. Now is a great time to just listen and correct until you
are completely satisfied. Waiting a day and resting our fatigued ears might be a good idea for a later
re-check. Be 100% sure you have finished a good reference static mix, or else re-check or re-start, before progressing...
Dynamic Mix.
The rest (after finishing off the static mix reference) is dynamic mixing. Dynamic mixing takes into account
events that happen suddenly or on a timeline throughout the mix, and then most likely return to static
reference mix levels. Most digital sequencers offer a lot of automation possibilities. For controlling outboard
equipment like a control mixer or a plug-in, controllers can help to automate more easily. Understand that
dynamic mixing influences all timelined events, even if this is just hitting the mute button while recording an
automated mix. The mute button can be handy for a static mix, but when used automated we still see it as a
dynamic event.
Automation.

One important thing to know beforehand is that automation should only take place when you are finished
setting up the starter mix and static mix: adjusting Fader, Balance, EQ, Compression, Gate, Limiter,
Reverb and Delay, as well as setting up routing, the three dimensions for each instrument, and leaving some
headroom through separation and togetherness as a mix. This can seem a fiddly job that can take hours; it is.
Most of this Starter to Static mixing is technique, understanding the material you are working on. Be happy
with the Starter Mix and Static Mix before entering Dynamic Mixing (Automation). Starting too soon with
dynamic mixing will often mean you need to adjust the Static Mix or even the Starter Mix in a repeated kind
of fashion. This is basically not allowed, but sometimes necessary for correcting our mistakes; better not to
make these kinds of mistakes. When adjusting the Dynamic Mix, you might get into an endless loop of
adjusting. You will notice that you are swapping between both worlds (static and dynamic), constantly
making corrections. It is better to first have the starter mix and static mix completely finished and then go on
to dynamic mixing. When you think dynamic mixing is not needed, we commonly think differently. Dynamic
mixing can take longer than the starter mix and static mix altogether: when you have spent up to 4 hours on
a static mix, expect dynamic mixing to take up to 12 hours.
Automation Events.

We can do so much with automation to make a mix stand out more; we will not discuss every aspect, only a
few often used automation tricks:
1. Introducing new sound events, new instruments or new tracks. Automating the instrument fader level can
do some good sound information tricks. Let's say you have a mix running (playing) and at a certain point in
the timeline a guitar starts to play solo. New instruments (like this guitar solo) can be introduced at a louder
level at the start of the first introduction (say 1 bar in the tempo line or timeline). Then we reduce the solo
guitar to its normal level. Our hearing accepts newly introduced instruments better (spatial information)
when the transients are at first a bit louder than normally played. When the solo guitar plays onward,
automate back to its normal operating level. Normally this louder setting for 1 to 3 seconds will make our
hearing accept the solo guitar (recognition). This can be done by setting the fader of the solo guitar (Static
Mix Reference) and then automating the louder timeline event parts (see the fader sketch after this list).
2. Whenever we need more emphasis on a part of the mix. Sometimes a chorus or verse will be a bit
overcrowded, which can make that part of the timeline a bit hard to follow. For instance, in the chorus or
verse of a song other instruments (not fundamentals like drums or bass) may not come out clearly. A good
trick is to automate the whole drum group to a lower level while the chorus or verse part plays. Or maybe
automate the main vocal level a bit louder instead while playing the chorus or verse. Also maybe just
automate the reverb section of the drums and bass to a lower level while playing the chorus or verse.
3. Arranged fade-ins and fade-outs on single instruments or tracks. When you need to fade in or fade out a
whole mix, leave this for the mastering stage. We do not usually fade when a mix starts or ends, and we never
automate the master track. So what is left is automation fades to make individual instruments or tracks fade
in or out in a certain way. This is purely based on the material and creativity.
4. Panorama automation. When single spots appear obscure (like when the main vocals clash with the chorus
section). Sometimes while playing a mix certain events just seem to clutter, hiding behind other instruments
(masking), or there is some other reason to shortly adjust the panorama. We can create some timelined
panorama events by automation to avoid instruments overcrowding or masking. Sometimes we need to move
a reverb or delay for a short while to unmask it and make it heard better. You can correct this situation only
in the timeline part where it occurs: just shortly pan one instrument more left or right, and then return it to
its static mix position. An automated stereo expander can do a wonderful job here without touching the
panorama setting. When for instance the main vocals clash with the chorus vocals, a little automation on the
stereo expander of the chorus vocals can do the trick of unmasking. An automated widening / expanding
effect can also help.
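The fader move from trick 1 can be written down as a simple list of automation breakpoints with a small interpolation helper; the times and dB values here are purely illustrative assumptions.

# (time in seconds, fader offset in dB relative to the static mix reference)
solo_guitar_fader = [
    (32.0, 2.5),   # solo guitar enters a bit above its static level
    (34.5, 2.5),   # hold for a couple of seconds so the ear locks on
    (35.5, 0.0),   # ramp back down to the static mix reference level
]

def fader_db_at(t, points):
    # linear interpolation between automation breakpoints
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

print(fader_db_at(33.0, solo_guitar_fader))   # 2.5 during the hold
print(fader_db_at(36.0, solo_guitar_fader))   # back at the static level, 0.0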
You can only correct automation by using automation. We always return to the static mix reference level.
Whenever you have automated a fader, for instance, you can only correct it by automating it back to the
original static level. Setting the main vocal fader (as we do in the starter / static mix) does correct the level,
but once we have automated this fader, that setting will be overruled by the automation. Once you start using
automation, you can only correct it by automating more timelines or by editing the automation. Stay away
from changing the static mix and always use it as the reference. Don't use an offset while automating. Maybe
now you understand why finishing off a Starter Mix and Static Mix first is important as a reference, before
starting with Dynamic Mixing (Automation). Automation or dynamic mixing can be time consuming and may
take three times as long as finishing off a Starter Mix or Static Mix, but it can be very rewarding and creative.
Take as much time as needed to correct little instances. Do not add more effects until later on; first we try to
correct the static mix with automation, before we add.

Next is Basic Mixing III.
