
Digital Recording, Mixing and Mastering. Volume 4

Digital Recording

ADATs and DA88s have now virtually killed off analogue reel-to-reel machines, but what are the real advantages?

Perhaps the most surprising thing about digital multitrack recorders is that they were so long in coming (the affordable ones, anyway). By the time Alesis released their first ADAT machine in 1992, stereo DAT machines had already been around for four years, with the original R-DAT format actually proposed back in 1983. But the problems which beset Alesis in the development of their 8-track digital recorder - delaying its launch by some 18 months after an already lengthy R&D period - left no-one in any doubt as to the technical difficulties which had to be overcome.

Format-itus

One of the major problems was the choice of recording medium. Like Tascam, whose 8-track DA88 machine was developed around the same time as ADAT, Alesis faced a choice between using an existing tape format or developing an entirely new one. Alesis opted for the tried and trusted technology of S-VHS tape cassettes, already widely available through the domestic VCR market. Perhaps because of this, ADAT got a head start of around six months over its rival, the DA88, though the two were broadly similar in terms of audio performance.

Should you have been considering a move into digital multitrack recording at that time, your choice was by no means limited to Alesis and Tascam machines, or even to a digital tape format. By the early 90s, direct-to-disk recording systems were already a realisable goal for anyone with a PC or Macintosh and a large enough hard drive. Digital sound cards were available for both machines (and falling dramatically in price as far as the PC was concerned). Furthermore, with a large-screen graphical interface at your disposal, editing on a direct-to-disk system promised to be far more intuitive. Not only that, but being a 'non-linear' system (unlike tape), you also had the advantages of random access: being able to move instantly to any point in a recording for playback or editing. Although ADAT and DA88 tapes were in cartridge form, you experienced exactly the same delays as reel tape when it came to getting from one position in a recording to another.

The benefits...

So where does the attraction for machines like ADAT and the DA88 lie, given such stiff competition? Well, using a digital multitrack tape machine actually provides you with eight 'physical' tracks, each with its own output which can be independently mixed or processed. By contrast, direct-to-disk computer-based systems (with the exception of more advanced hardware/software packages like Pro Tools) offer only two outputs - with on-screen tracks having to be mixed down to a stereo output signal. From the point of view of the established commercial or home studio, there is also the over-riding advantage of being able to simply unplug an existing analogue machine and stand a digital multitracker in its place. Chances are, all the input and output levels will match, and the machines feature almost exactly the same transport controls, recording level meters and monitoring systems. A studio taking delivery of an ADAT or DA88 in the morning could be up and running the same afternoon.

Better still, there's no steep learning curve to get past - and that counts for a lot. As many manufacturers have found to their cost, you cannot simply ignore public familiarity with certain technology and disregard traditional perceptions of the way things work. You only have to look at the graphic imagery used in computer user interfaces for evidence of that. This is where digital multitrack tape scores heavily over rival hard disk systems.
It offers recording in a form people are familiar with through reel-to-reel and cassette machines. No backing-up problems, no system crashes - and no weighty instruction manuals to wade through.

But the advances over analogue tape systems are even more pronounced. Digital multitrackers offer significantly improved recording quality, lower noise, a more convenient tape format and much more accurate editing and control facilities. In fact, so marked are the improvements in these areas that many 16-track analogue studios have opted to change to an 8-track digital format, making use of the ability to bounce down tracks (with no loss in signal quality) to compensate for having fewer tracks. In any case, one of the features of both the ADAT and the DA88 is that they can be run in pairs (or even greater combinations) to achieve the required number of tracks.

The choice is yours...

If this sounds like the sort of technology you'd be comfortable with as the central component in your recording setup, you'll be happy to learn your choice now is a little broader than the two original machines. Alesis unveiled its successor to ADAT, the ADAT-XT, a little over a year ago, and joining them as development partners, Fostex have their own ADAT-format machine, the RD8. Both offer significant improvements over the original ADAT, many examples of which are now coming onto the second-hand market - providing you with an additional (somewhat cheaper) route into digital multitracking. As for Tascam, they have again adopted a different approach to Alesis. Instead of bringing out a successor to the original DA88, they extended the range to include the less expensive DA38 - a machine sharing the same basic design, but with additional features intended to appeal to the recording musician and domestic studio user.

What The Hell Is: Mixing?

You might think mixing should be easy, but you'd be wrong...

Almost every book I've read on the subject of mixing has likened it to baking a cake. You take a bunch of ingredients, mix them together in the right amounts, put them into 'the oven' and watch them emerge as something special: a whole which is greater than the sum of its parts. It's a fair enough analogy, I suppose, but personally I've always thought mixing is more like sex: everyone seems to do it, everyone knows why they do it, but everyone has to learn how to do it without being shown. Unfortunately there's no mixing equivalent of the 'knowledge' you pick up behind the bike sheds at school. You simply have to jump in (so to speak) and be prepared to get it wrong.

And people do get it wrong... spectacularly wrong in some cases. I have recordings of the Beatles produced by George Martin which place the entire band in the left-hand speaker, the vocals in the centre and a solitary tambourine on the right. This was obviously a problem of mixing associated with stereo recording (something Martin and the engineers at Abbey Road didn't quite get the hang of until the late 60s) but still very disconcerting to listen to. And you don't have to go that far back to hear the complete mess made of mixing music on TV. It wasn't uncommon to hear the lead vocalist, guitarist and snare drum in the mix, but no trace of the other instruments. I've also heard songs rendered unrecognisable by mixing harmony vocals much higher than the lead vocals, and gaps in the music where you could see the drummer doing something but you couldn't hear it. Admittedly, these are all pretty extreme examples, but they do serve to illustrate how critical the job of mixing is.

In your home studio, the chances are you'll be mixing your own music, and you'll know exactly what instruments are used and how you want them to sound. There should be no question of anything being 'lost in the mix' or not being given its rightful position. On the other hand, such familiarity can be a problem when it comes to being objective. After a couple of weeks listening to the same track, objectivity can leak away faster than best bitter from a cracked pint glass. This is precisely why so many bands and artists hand over responsibility for the mixing to someone else, often without them even being present. It's also why remixing a piece of music can produce such startlingly different results.

The history of mixing

The whole concept of mixing has changed dramatically over the years, having evolved through three distinct stages. I mention them here not as a history lesson, but because each still offers a valid way of recording and mixing. Before the arrival of multitrack, the onus was on musicians to get their performances right so they could be recorded in a single take. It was the job of the mixing engineer to achieve a good balance of instruments 'at source' - almost as if he were mixing live. This relied to a considerable extent on his musical judgement (or the lack thereof) and how he thought the band should sound. Admittedly, that meant that no one heard a bass drum on a pop record until around the mid-60s. But if you're familiar with these early recordings, you'll know this was a perfectly acceptable way of working, and it certainly proved itself as a means of capturing a live feel which has rarely been bettered.

With the advent of multitrack recording machines, it became possible to defer the mixing process until after each instrument had been successfully captured on tape. As a result, musicians could get very precious about their performances ("Yes I know we've done 28 takes, but I think I'll get it this time") and be in the pub before the mixing started. Then, the introduction of stereo recording and progressive increases in the number of tape tracks saw the mixing process grow in complexity, yet it remained essentially the same for the best part of 25 years. It's still a valid and flexible way of working, but it's easy to get carried away with the idea that more tracks always make life easier. In general, the more tracks you have, the more you'll fill, and the more you'll feel obliged to pour their contents into the mix.
On the other hand, by separating the mixing process from the composition and recording stages, you allow time for your objectivity to return and, providing you keep the original multitrack recordings, it's also much easier to remix a track at a later date.

The challenge to this way of working came with the introduction of the cassette multitracker (the Portastudio) in the mid-80s and the development of MIDI and computer-based sequencers. Not only was it possible to record and mix music to an acceptable standard at home, but the lines between writing and arranging, recording and mixing were beginning to blur. Mixing was becoming much more a part of the creative process, rather than simply an exercise in balancing levels and adding EQ. There was also a change to the concept of a piece of music having a fixed, definitive mix. These days, with radio and club edits and various mixes of a song included on a CD single, the potential of remixing has been fully explored. And with it has emerged a whole new generation of remix artists who can reach inside a track and turn it inside out. It's no longer enough to express a liking for a particular track; you also have to name a favourite mix.

The universal's not here... yet

Because of the changes that have overtaken music production over the past 15 years, it's become impossible to set down prescribed methods of mixing which are universally applicable. With sophisticated sequencers, direct-to-disk systems and digital recorders now competing alongside Portastudios and multitrack tape machines, musicians and remix artists have the freedom to choose their preferred method of working. Mixing is no longer a process which has to be carried out after recording has been completed. And despite the flood of cheap, high-quality designs which have come on to the market, it doesn't even need to be done on a mixing desk. Software mixing is now a reality for owners of inexpensive PCs, and there's even the prospect of digital mixing for the slightly better-heeled.

Even so, the general principles of mixing hold good. Before we look at these principles, a word about the basic requirements. Firstly, your ears. Keep them fresh (and clean, of course). Never ever attempt to mix a piece of music at the end of a long listening session. Take a break of at least an hour, and preferably overnight. The human ear is incredibly good at identifying problems with certain sounds, but not if it's had time to get used to them. Secondly, your monitoring system. It goes without saying that you should buy the best equipment you can afford. Without a reasonable system you'll have no idea how accurate an image of the music you're getting. But even if you do splash out on an amp and speakers, how do you know you're getting a true picture? The answer lies in listening to your mixes on as many other systems as possible, so that you know, for example, if you're tending to mix a little bass-heavy or aren't adding sufficient top end. Finally, don't think about mixing through headphones. Irrespective of what it may say on the box, headphones do not reproduce music in stereo. They reproduce it 'binaurally', which is quite different, and makes it all but impossible to set up an accurate stereo mix.

Feel your way

You can take any approach to mixing you feel is appropriate to your music, from the 'wall of sound' (favoured by people as disparate as Phil Spector and hardcore guitar bands), to a cleaner, more considered approach where space is created around each instrument in terms of both frequency and time. The latter approach is undoubtedly the more time-consuming. You need a good ear to determine the area of the frequency spectrum in which each sound predominates and to prevent too much overlap.
But that's what professional studio engineers and producers are able to do, and the results usually speak for themselves. The most basic function of mixing - the balancing of levels between individual instruments (or tracks) - is not something anyone can advise you about. You know how you want your music to sound, and the level controls are in your hands. But do bear in mind the likely destination for a particular mix. There's no mystery here. The primary requisite for the dance floor is a rhythm track which hits the punters in the solar plexus. But apply the same bottom end to a song destined for someone's car stereo and it'll cause major problems. Bass needs to be tailored quite specifically to the needs of a particular track. Using EQ, it's possible to strip away low frequencies to quite a high level before the ear will tell you anything is missing (though this is where having an accurate monitoring system is so important). Very low frequencies are often not audible but will soak up a high proportion of a speaker's available energy. Filtering them out can actually increase the perceived volume of the audible bass, and will certainly reduce distortion at high sound pressure levels.

As effective as EQ is in such applications, it can be something of a mixed blessing in the wrong hands. Use it to correct minor problems with individual sounds and to create space round certain instruments by filtering out unwanted frequencies, but don't rely on it as a universal panacea. Obviously, much will depend on the versatility of the controls; sweep and parametric EQs are much more effective at homing in on problem areas of the frequency spectrum. But they can just as easily be responsible for raising the profile of certain sounds till they just don't fit in any more. There's no clear dividing line between use and abuse, except to say that the ear is much more forgiving of frequencies which aren't there than those that are. So wherever possible, try cutting the frequencies you don't want, rather than boosting those you do.
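For anyone working inside a computer rather than on a desk, here's a minimal sketch of that cut-don't-boost advice in Python (numpy and scipy assumed): stripping inaudible sub-bass with a high-pass filter. The 30Hz corner frequency and the filenames are only examples.

# Minimal sketch: high-pass a mono 16-bit WAV to strip inaudible sub-bass,
# cutting what you don't want rather than boosting what you do.
# The 30 Hz corner frequency is only an example value.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr, audio = wavfile.read('bassline.wav')            # hypothetical mono input
audio = audio.astype(np.float64)

sos = butter(4, 30, btype='highpass', fs=sr, output='sos')  # 4th-order Butterworth
filtered = sosfilt(sos, audio)

wavfile.write('bassline_hpf.wav', sr, filtered.astype(np.int16))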

Wet, wet, what?

One of the areas of controversy which has divided musicians and producers for years is whether to record tracks 'dry' or 'wet'. No, it's nothing to do with towelling yourself off after you get out of the bath; it's down to whether you add effects such as reverb and delay to tracks before you record them, or whether you leave them dry and add your effects during the mixing process. There are pros and cons to either approach which need to be carefully considered. Record your tracks with effects and they're impossible to remove subsequently. If, at the mixing stage, you decide you have too much reverb on the vocals, you'll have to live with it, or re-record the performance. On the other hand, you may only have a single effects processor and want to use this for another effect on mixdown. So unless you do without the vocal reverb, you have no choice but to record with it.

Vocals need reverb like England needs Michael Owen, but overdo it and it's dead easy to lose the voice in a sea of mush. Reverb often has the effect of pushing vocals back in a mix. Great for preventing them sounding like they're sitting on top of it (as they often can when recorded dry), but not so good if it's masking an otherwise excellent performance. You can get round this by introducing a pre-delay to the reverb. This can be set up on most effects processors and can be applied to many instruments, but is particularly useful for creating space around a vocal, or bringing it forward while giving it an 'aura' of reverb. You'll need to experiment with the pre-delay setting, but around 30-50ms should do (there's a quick sketch of the idea at the end of this section).

The tendency of reverb to clutter up a mix is something you need to listen for very carefully. And it's vitally important that you choose a program with the right reverb time for each track. 'Hall' programs sound great in isolation but can clog up the music quicker than the mud at Glastonbury. Short reverbs are great for creating interesting room ambiences and don't take up as much space in the mix, but can sound unnatural. This is one argument for not adding reverb until mixdown. When all your instruments are 'in place' you can properly assess the type and quantity of reverb you'll need. If this isn't feasible (perhaps you only have one effects processor) try to keep reverb to the minimum needed to achieve the desired effect and limit reverb times. Long reverbs often don't have time to subside before being retriggered and can accumulate in your mix like Glastonbury mud (yes, I know I've said it already, but you should have seen it). Use pre-delays if they're available and don't reject the use of gated programs. The overuse of gating effects on drum sounds in the late 80s may have contributed to their current unpopularity, but they can be extremely useful in chopping off unnecessary reverb tails and creating space. Another trick is to limit the frequency response of reverb using either your mixer's controls or your processor's built-in EQ (if it has it). This is best done by monitoring the return signals from your reverb unit and cutting any unwanted frequencies, or limiting those which appear to be obscuring the sound.
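To see what pre-delay actually does, here's a rough Python/numpy sketch: the wet signal is simply padded with 30-50ms of silence before being mixed back under the dry vocal. The decaying-noise 'reverb' is a toy stand-in for a real algorithm, and vocal.npy is a hypothetical mono take.

# Toy sketch of reverb pre-delay: delay the wet signal 30-50 ms so the
# dry attack stays up front. The 'reverb' here is just decaying noise.
import numpy as np

sr = 44100
dry = np.load('vocal.npy')                       # hypothetical mono take
predelay_ms = 40                                 # experiment around 30-50 ms
gap = np.zeros(int(sr * predelay_ms / 1000))

impulse = np.random.randn(sr) * np.exp(-np.linspace(0, 8, sr)) * 0.05
wet = np.convolve(dry, impulse)[:len(dry)]       # roughly one second of tail

mix = dry + 0.3 * np.concatenate([gap, wet])[:len(dry)]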
Panning for gold

The art of panning instruments and sounds to create a convincing stereo image is one of the most important in mixing, yet is frequently misunderstood. So often, you hear demo tapes where the instrument placing appears to have been carried out quite arbitrarily. It's like sharing sweets: one for this side, one for that side, and one in the middle for luck. Panning is an essential part of mixing; a means of achieving balance in your music as well as creating the transparency of a stereo image that we all take for granted in commercial recordings, but which can be difficult to reproduce. Though I'm loath to talk about what usually happens in a mix (if we all did what 'usually happens', we'd still be playing whistles and banging hollow logs), there are a few basic ground rules which you really can't get away from.

The first is that the dominant, low-frequency instruments invariably sound better placed at or around the centre of the mix. I'm talking here about the bass drum, the bass guitar or synth, and any deep percussive instruments you may be using. Pan them too far left or right and your music will sound off-centre. Fine, if that's what you're aiming at, but there are much better ways of getting creative with your pan controls. One of the best is to set up some interesting rhythmic interplay using your different percussion sounds. Obviously, if you're using a sample loop for the drum track this may not be possible, but you could always augment it with additional percussion (such as cabasa or claves) and pan these to the left and right. Alternatively, try setting up a delay on one of your instruments and panning the dry and delayed signals to opposite sides of the mix.

Lead vocals are also placed at the centre of the mix in most recordings, though this has much to do with where you'd find the singer at a live performance. There is certainly nothing to prevent you experimenting with the positioning of the vocals, particularly where you have backing vocals as well, which can be placed in a similar position on the opposite side to the lead vocals, to balance things out. But again, hard panning left or right of any vocal parts can be difficult to live with. I should also remind you that pan controls are not static, and there's nothing to prevent you from panning instruments left and right during a recording. It's easily overdone, but in moderation it can provide a real sense of movement (quite literally) within a mix. A more subtle alternative would be to use a stereo chorus program on an effects unit which features autopanning. This leaves the dry signal in place, but shifts the chorusing between the left and right speakers.

And talking of effects brings us back to reverb, which can be used to create a convincing stereo image from any mono source. By panning the reverb outputs left and right, you can use it to produce a much broader, more expansive sound, even at short reverb times. On the other hand, reverb may be upsetting your stereo imaging by changing the apparent location of a specific instrument. If this does occur, try panning the reverb to exactly the same point in the stereo field as the dry signal, preferably sticking to a mono effect.

Instant mix fixes

To round things off, how about a couple of ways to provide an instant fix for your mix? If you've already mixed down to stereo and found the result disappointing, try sticking the entire mix through an aural enhancer (of the kind we looked at a couple of issues ago). Though not always successful in treating a complete mix, they can alter the overall sound in subtle and distinctive ways, particularly processors which affect the stereo imaging. Alternatively, give the track to someone else to mix. The results may not be to your liking (at first), but I guarantee they'll reveal a side to your music that wouldn't have emerged had you been sat behind the mixing desk. What have you got to lose?

Nigel Lord 09/98

15 Reverb tips

Our team of experts, producers and engineers give their best reverb tips

Diversify
Rather than trying to place everything in the mix in the same acoustic environment, why not use a couple of really diverse reverbs to add some strange depth to your tunes? A really dry, upfront vocal works nicely alongside a really 'drowned' string section or a small bright room setting on the drums.

Automate
Try automating return levels if you have a digital mixer, so that the reverb comes and goes in different sections of the song. By tweaking the aux send levels manually during the mix, you can add splashes of reverb on the fly to add interest to snares or vocal parts.

Take your time
Spend some time choosing or trying out different 'verbs. Different songs lend themselves towards different types and sounds. Don't just settle for what sounds good in solo...

Send that EQ
Remember you can always EQ the send. Most large consoles offer you a choice of high and low EQ on the aux sends. On small desks, route the instrument/voice to another channel via a group or aux send, float this from the mix and send it to the reverb effect. Now you can add EQ to the send and even automate it, as it's now on a fader. This is commonly used for those delays and reverbs that you want to move easily during the mix, such as a wetter vocal in the chorus.

Old tricks
Reverse reverb is an old trick, where you hear a vocal's reverb before the singer comes in, or a snare's before it plays. It's easy using tape, as you simply turn the tape over and record the reverb backwards. You can do it using a computer too, but you will have to move the audio to the right place after recording it.

Use combinations
A combination of reverbs on things can be good. A short setting for the snap sound with a longer bright plate can turn a biscuit-sounding snare into a more live sound.

Old school plate
In the old days it used to be called delay to plate. You sent the signal to a loop of tape, then sent that to the reverb. The speed of the tape set the delay - the time it took the signal to get from the record head to the playback head (a quick calculation below shows the sort of delay times involved). This gives, say, a voice a dry sound before the reverb comes in, giving a more upfront sound while keeping the wetness, which would usually take it to the back of a hall somewhere! Some people still use the tape method today for that old school sound.
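To get a feel for the numbers behind that delay-to-plate trick, here's a back-of-an-envelope Python calculation. The 5cm record-to-play head gap is illustrative; 19.05 and 38.1cm/s are the standard 7.5 and 15ips tape speeds.

# Tape delay-to-plate: the pre-delay is head spacing divided by tape speed.
head_gap_cm = 5.0                                # illustrative head spacing
for speed_cm_s in (19.05, 38.1):                 # 7.5 ips and 15 ips
    delay_ms = head_gap_cm / speed_cm_s * 1000
    print(f'{speed_cm_s} cm/s -> {delay_ms:.0f} ms of pre-delay')
# 19.05 cm/s -> 262 ms, 38.1 cm/s -> 131 ms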

Simple drum one
Early reflections on drums can also give more of a tail or decay.

Experiment
A nice gated 'verb on guitars, old spring 'verbs on snares, or even the mighty Space Echo can sound unique when balanced in the mix. That will give you more distance and room for placing things in a mix, while adding that extra sparkle to the sound.

More reverse
Reverse your sample, add reverb, then reverse your sample complete with reverb back around the right way again. This way, the reverb trail leads up into the sample, instead of trailing away from it.

And again!
For a different angle on the same reversed reverb theme, have the reversed reverb trail panned left on a separate track, then the original sample centre-stage (ie. mono), followed by a regular reverb trail on another track panned right. The result is a reverb that leads up into the sample and trails away afterwards, while panning across the stage, left to right.

Reverb over your mix
Pick out key instruments or sounds and highlight them with reverb, while using reverb sparingly, if not at all, on the remaining mix. You may have to adjust reverb send levels as the track progresses so you're not left with the track sounding dry where the reverbed sounds are no longer playing.

Reverb and bass
Usually, bass and reverb don't mix too well, unless you're specifically after a warehouse sound. Unfortunately, this effect results in a loss of definition in the bass regions. Run your reverb returns into a couple of spare channels on your mixer and back off the bass EQ, or add a high-pass plug-in EQ.

Go mono!
Don't forget to use mono reverbs at times as well. These won't conflict with your rich stereo reverbs.

Pre delay
This determines the time taken for the initial reflections to return from the room walls. Use a calculator from www.hitsquad.com/smm to get a pre delay value matched to your tempo (at 120bpm, a quarter note is 500ms and an eighth note 250ms). A common technique is to set the pre delay to eighth-notes and add the reverb to a straight quarter-note kick drum pattern to create an off-beat, bouncy feel.

FM

Cubase Tips - Tooled-Up

Do you know Cubase's editing tools inside and out? If not, read on for some handy tips that'll make your life easier...

One of the reasons Cubase has remained so popular must be its inherent flexibility. As the old saying goes, there is more than one way to skin a cat - and that, metaphorically at least, is exactly what Cubase allows for. There are many ways of achieving the same result tucked away within Cubase. Which method to use is largely down to the user's personal preferences, but there's also an element of 'horses for courses' involved too. The best way to know what method to use in what situation is to know what tools are available and what they actually do. So without further ado (and before I start rolling out some more of my Granny's old sayings) let's take a look at Cubase's various tools.

The Toolbox

As you will no doubt be aware, right-clicking in a window opens up a toolbox (for Mac users, it's [option]+click, or go to the Tools menu). The majority of Cubase's windows have their own set of tools available for quick and easy editing, some of which are found in all the editors and some of which are exclusive to only one. Most of the tools also have modifiers, accessed by holding down combinations of the Shift, Control and Alt keys (Command, Control and Shift on a Mac - but not necessarily in that order!). Here's a rundown of what each tool does...

Common tools

1. The Pointer. This is the most-used tool. Use it for selecting parts or events, either by clicking on-screen items or 'rubber-banding' groups of objects. Holding down Shift allows you to add or remove an item from the selection. Clicking and holding on an item or group of items turns the pointer into a hand and allows you to move the item freely around the screen. Hold down the Alt key before clicking and dragging to create a copy of the item. In the Arrange window, hold down Control when moving an item to create a ghost copy of a part (more on ghost parts later).

2. The Pencil. This tool is mainly used for creating and re-sizing items. The items can be parts in the Arrange window or events in the List and Key editors. To create an item, simply click and drag at the desired location in your song. In the Key and List editors, using combinations of the Shift and Control keys creates notes with varying velocity. There are four values of velocity that can be entered in this way, from 32 (Control and Shift keys held) to 127 (no keys held). To resize an item, select a pre-existing item with the Pointer, change to the Pencil, click on the selected item and drag it to the new size. In the Key and List editors it's possible to re-size more than one event at a time. Using the Pointer again, select the notes to be edited, switch to the Pencil, click on one of the selected events and drag it to the new size. All the other items selected are stretched or shortened by the same amount. Holding down the Control key while doing this will force all selected parts to the same size. Back in the Arrange window, there are some more handy little shortcuts to be had from the Pencil. You can 'drag out' copies of a part by holding down Alt, clicking on the part to be copied and dragging - the part will be copied repeatedly to fill the selection. Doing the same thing with the Control key held down will drag out ghost copies of a part.

3. The Eraser. This does exactly what it says on the tin. Use it to delete items by clicking on them or moving across them while holding down the mouse button. But you didn't really need me to tell you that, now did you?

4. The Magnifying Glass. This is really more of an audition tool than anything else.
In the Arrange window, clicking a part with the Magnifying Glass changes the part's appearance to represent the events it contains. Moving the mouse while keeping the button pressed then plays any events that the Magnifying Glass passes over. In the Edit windows, the Magnifying Glass simply plays the event it is clicked on.

Arrange window tools


5. The Q Match tool. This Arrange window tool is exceptionally useful, especially when dealing with drum and other rhythmic parts. Using this tool always involves two parts: a reference and a target. You simply select the Q Match tool, then drag the reference part and drop it on top of the target. The idea behind what this does is simple: to match the feel of the target part with that of the reference part (there's a conceptual sketch of this after the tool rundown). Let's say we had a two-bar unquantised hi-hat part. Now let's say that we had programmed a kick and snare pattern that had a slightly different feel. The aim here is to match the feel (i.e. the bar positions of the events) of the kick and snare with that of the hi-hat part. In the Arrange window, drag the hi-hat part onto the kick and snare part. A dialog box will pop up to ask if you want to include the accents (the velocities of the hi-hat part) - the options are self-explanatory. That done, and all being well, the result should be as tight as a gnat's chuff. So to speak. There is a little more to it than this, as the quantisation setting does come into play, but we shall cover that when we take a look at all of the quantisation methods in a future instalment.

6. The Mute tool. Use this tool to mute individual parts within an arrangement. That's it - there's nothing more to be said about it. Move along, now.

7. The Scissors. This tool is used for cutting a part into segments. In normal use, the Scissors simply create a split in a part at the point at which they are used. However, holding down the Alt key before clicking on a part creates splits along the full duration of the part. Each new cut has the same length as the first cut.

8. The Glue. The opposite of the Scissors tool. Clicking on a part with the Glue tool merges it with the following part on the same track, creating a single, longer part. The two parts in question do not have to be next to each other, as any space in between them is included in the new part. Holding down Alt when using the Glue tool will merge all parts from the selected part to the last part in the arrangement.

Editor tools

9. The Crosshair (a.k.a. the Compass). This is probably the most versatile tool in Cubase's armoury. Its function not only varies with which window you are using, but also with what area of the window you are using it in. Let's start by looking at its function in the Key editor, as this is where it is most useful and has the most modifications. As an example, consider an E major chord as voiced on a guitar. Using the Crosshair to click and drag a line that slopes from top left to bottom right adjusts the lengths of all the notes to follow this line. Doing a similar thing with the Alt key held down adjusts the start points while still keeping the note ends in their original positions. Hopefully, it will be clear at this point why I used a guitar-voiced chord - the Crosshair is perfect for creating a strummed effect. Holding down the Control key when using the Crosshair allows you to move either the start or end points of an event or group of events. Using the tool on the left half of a note moves the start point without affecting the end point; using the tool on the right half alters the end point without affecting the start. The Crosshair can also be used to edit data in the controller portion of the Edit window.
Its operation here varies depending upon what type of controller information you are editing, but the following general rules apply: using the tool with no extra keys held down simply alters any existing data to follow the line drawn - for example, ramping velocities. However, in the case of a MIDI continuous controller message, such as volume, there is not necessarily any volume data to edit (the volume stays at whatever level its last message told it to be - it doesn't have a continuous stream of 'volume=112' messages). To get around this, simply hold down the Alt key while drawing the desired fade (or filter sweep, or whatever). Now Cubase will create a series of controller messages to match the desired line, their spacing being determined by the editor's 'snap' setting (there's a rough sketch of this after the next tool).

10. The Brush. This tool is used for creating large numbers of events quickly and easily. In the Key and Drum Edit windows, clicking and dragging the Brush on the screen creates a line of notes whose length is defined by the current quantise value, and whose separation is defined by the current snap value. Combinations of the Shift and Control keys create different note velocities, in the same way as they do with the Pencil tool. You will notice that notes are only drawn in a horizontal line from the point of your initial click. Holding down Alt enables you to 'paint' events anywhere you want. In the List editor the Brush performs the same operation, but can be used to create any type of MIDI message, with the message type being defined by the 'ins.' setting at the top of the List Edit window. We shall cover this in more detail when we take a closer look at the editors.
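Conceptually, what gets created when you Alt-draw that line is just a series of evenly spaced controller messages. Here's a rough Python sketch of the idea using the mido library; the CC number, step count and snap spacing in ticks are all illustrative, and this is not Cubase's actual code.

# Sketch: a drawn volume fade becomes discrete CC7 messages, one per snap step.
import mido

def cc_ramp(start_val, end_val, steps, snap_ticks, channel=0, control=7):
    msgs = []
    for i in range(steps + 1):
        value = round(start_val + (end_val - start_val) * i / steps)
        msgs.append(mido.Message('control_change', channel=channel,
                                 control=control, value=value,
                                 time=snap_ticks if i else 0))
    return msgs

fade_out = cc_ramp(112, 0, steps=16, snap_ticks=120)   # 16-step fade to silence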


11. The Kickers. These two tools are used for nudging events earlier (to the left) or later (to the right) in a part. The amount an event is nudged by is defined by the editor's snap value.

12. The Drumstick. This tool, exclusive to the Drum editor, is used, rather unsurprisingly, to create events on the drum grid. Using the Shift and Control keys while entering notes adjusts the note's velocity (the velocity values that are created are defined for each individual instrument in the Drum editor). Notes in the Drum editor only appear as individual hits with no note length. The notes entered with the Drumstick do indeed have a length value attached to them, but normally this isn't important, as most synths' drum kit patches are set simply to trigger on an incoming note-on message and ignore the note length. If, however, you need to enter longer notes, these longer lengths can also be set on a per-instrument basis.
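As promised above, here's a conceptual sketch of what a Q Match-style feel transfer amounts to: each event in the target part is pulled to the nearest event position in the reference part. This is purely an illustration in Python, not Cubase's actual algorithm, and the tick values are invented.

# Conceptual 'feel' transfer: snap each target event to the nearest
# event position in the reference part.
def match_feel(target_ticks, reference_ticks):
    return [min(reference_ticks, key=lambda r: abs(r - t))
            for t in target_ticks]

hihat = [0, 118, 245, 362, 480, 601, 722, 838]     # loose reference part
kick_snare = [0, 240, 480, 720]                    # straight target part
print(match_feel(kick_snare, hihat))               # -> [0, 245, 480, 722]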

Adam Crute The Mix 05/00


Drum Fundamentals 2

The second part of our hands-on tutorial

Read Part 1 here. I hope you've been practising your up, down, tap and full strokes, as they are going to come in handy with this month's lesson, which involves tackling grooves.

Getting Started

Getting the right positioning of your kit is all-important. Make sure that all of your drums and cymbals are easy to reach. You don't want to be overstretching when you're playing. It may look very 'rawk' to have your cymbals six foot above the kit, but invariably you'll end up breaking your back trying to actually hit them. As a result your playing will suffer, not to mention your health, so it's best to just keep it simple. Place your feet on the appropriate pedals and adjust the stool - or throne - so that your thighs are flat whilst you're resting on them. Pivot on the ball of your foot when playing, drawing the power you need from your whole leg and not just your ankle. Once you start to 'groove', your legs will automatically bounce and not stick rigidly to the pedals. Cross your hands over (left over right or vice versa depending on your set-up) so that your hi-hat hand is above your snare drum hand. Now that you're all set to go, let's take a look at some basic patterns.

Groove 1 - click here for notation

OK, here's your bread-and-butter pop/rock groove. Simple, yet essential. All of the rock/pop grooves that you will play are just variations on this pattern. If you've never read music before, this is the easiest way to understand what's going on. The top line, indicated by the Xs, shows the notes to play on the hi-hat (play them as tap strokes). The bottom line is the bass drum and the middle is the snare (play the snare as a full or down stroke). We are in 4/4 time, which means that there are four crotchet beats to each bar. As you can see, there are four black notes spread between the bass and snare drums in each bar - these are the crotchets. The hi-hat is playing eight notes to the bar - these are quavers, or 1/8th notes. When counting out the bar, adopt the practice of saying '1-and-2-and-3-and-4-and' whilst playing the groove.

Start off by playing the bass and snare drum alone, which should sound on 1, 2, 3 and 4. The bass drum should be on 1 and 3, the snare on 2 and 4. This is known as the backbeat. Once you are comfortable playing this, add the hi-hat, which is played on all the beats that you are counting (1-and-2-and-3-and-4-and). Always play to a click if you have one. Start at around 80bpm (beats per minute) using either a metronome, a beat on a keyboard or, as a last resort, songs from a tape or CD. Once you are a master of this backbeat, consider yourself well on the way. Now try these.

Groove 2 - click here for notation

In this groove the bass drum plays an extra note just before the second snare drum of the bar. So the backbeat translates to your counting pattern as 1, 2, 3-and-4 (remember, beats 2 and 4 are on the snare drum). Both grooves are laid out on a grid below.
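Since the notation links don't reproduce here, the two grooves described so far can be laid out on an eighth-note grid instead. This little Python sketch follows the prose description exactly (x marks a hit, . a rest):

# Grooves 1 and 2 on an eighth-note grid ('1 and 2 and 3 and 4 and').
grooves = {
    'Groove 1 hi-hat': 'xxxxxxxx',
    'Groove 1 snare':  '..x...x.',
    'Groove 1 kick':   'x...x...',
    'Groove 2 kick':   'x...xx..',   # extra kick just before the second snare
}
print(f'{"count":16}' + ' '.join('1&2&3&4&'))
for name, pattern in grooves.items():
    print(f'{name:16}' + ' '.join(pattern))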


Groove 3 - click here for notation

This time around, the bass drum beat moves to an earlier place in the bar. Count the backbeat as follows: 1, 2-and-3, 4 (remember, the snare is on 2 and 4).

Groove 4 - click here for notation

This last pattern incorporates the use of a rest! Hang on, it's not that worrying. Simply count the backbeat like this: 1, 2-and, and-4 (snare drum beats!).

And finally...

Got it? Great! Now that we've managed to work out the maths of it all, it's time to play all these patterns over and over until we're grooving to the max! Make sure you use a click to keep yourself in time and try to let the beat flow through you. Relax at the kit, enjoy it, don't over-play. Music is about having fun, OK? Right, you have your orders - now get down the shed and start rocking!


Guitars And Sampling

How to use the old-fashioned plank in a hi-tech environment

It's a familiar story: you started playing guitar, did the rounds in a few dodgy local bands, played to 'select' audiences in the Dog & Arse and, to the endless stream of drummers your combo digested, described Sony's rejection letter as 'record company interest'. Ready to throw in the bandana, you boxed up the gig-scarred geetar and thumbed through the dreaded Sits Vac, fearing the inevitable. Then, from nowhere, MIDI came to you, along with samplers, synths, sequencers, the full monty. Band schmand! You could suddenly do it all on your own. At home. Yessss! So, while you lavished creativity with all the verve of a born-again bandmeister on the MIDI set-up, your guitar gently wept in the corner, gathering dust and untouched, aside from the occasional late-night, half-cut discordant strum. You rotten sod. How could you neglect the very thing that gave you your start in music? Time to make some reparations...

In truth, there's far more scope between guitars and sequencer/sampler set-ups than you might imagine. Beyond the use of short loops and effected noises, which, with a little imagination, can yield a host of interesting results, a sampler with a reasonable amount of memory can also take the role of a mastering device for extended sections of audio.

Sample examples

If you play guitar to any reasonable extent, you'll doubtless be familiar with the general restrictions of analogue recording - getting a good take can be a painstaking task in itself - but drop a sampler into the equation and you'll quickly find there's a lot more room to manoeuvre. One of the simplest examples of the benefits of sampling guitar parts is explained thus: imagine you want to record a lead guitar part, but, as ever, the playing is inconsistent. Sometimes it begins well, but you lose it halfway through. Then again, on that wild take there were some fantastic moments, but the mistakes and timing problems render it unusable. You get the picture. Supposing you record a number of takes into the sampler - the rigid, note-perfect version; the wild, improvised job, and so on. If you had enough channels on an analogue multitracker, you'd probably work out a composite take, made up of the best sections edited together. Your sampler will allow you to do this in the same way or, even better, to run, for example, four separate takes set to different MIDI channels and triggered from your sequencer. You could then use MIDI volume controller information to cut between them (sketched in code below). Obviously you'll need to insert the initial volume commands before the note trigger to ensure that the tracks you don't want to hear are set to 0, but from then on in you can switch between takes with ease. In this way you avoid all the fiddly editing and note shifting that a composite sample track would require, and retain all the original takes, in case you decide to change the configuration.

Chorus and flanging effects are easy to achieve if you double up a sample and play both at once. Panning the two samples left and right and modulating the pitch on one will give you a good repro of chorus, with more modulation taking you into flanging. Alternatively, you can pitch one of the samples up or down a tad and timestretch it to maintain the timing consistency between the two samples. Simultaneous triggering of a single sample throws up some interesting results, often creating an out-of-phase sound - run three or four triggers of the same sample concurrently, or build up the layers over a longer period for a gradually swelling sound.
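Here's a rough sketch of that take-switching idea in Python, using the mido library. CC7 (channel volume) is the standard MIDI controller for the job; the output port and the choice of channels 1-4 are illustrative.

# Cut between four sampled takes by zeroing MIDI volume (CC7) on every
# channel except the one you want to hear. Channel numbers are illustrative.
import mido

def select_take(active_channel, channels=(0, 1, 2, 3)):
    return [mido.Message('control_change', channel=ch, control=7,
                         value=127 if ch == active_channel else 0)
            for ch in channels]

out = mido.open_output()            # default MIDI output port
for msg in select_take(2):          # hear only take 3 (channel index 2)
    out.send(msg)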
Vacuum packing

If you do find yourself in a momentary creative vacuum, it's always worth playing around with reversed samples. OK, so it's a well-trodden path, but there's more to it than backward bloody cymbals. Psychedelic loons used to reverse tapes on multitrackers to get those oddball lead riffs that your old man lost it listening to, but with a sampler you can push one button to get the same effect.

The chord from nowhere is a great way of introducing a track, or leading into a wild section, and if you've laid down a riff or lead part that lacks the verve you anticipated, try copying and reversing it. Timing the reversed piece in and running it simultaneously with the normal part can throw up some intriguing developments; alternatively, work on a composite of the two, intermingling the normal and reversed parts. Copying parts and shifting the pitch in octaves is another way of adding depth to a sound. Obviously, the outcome is dependent on the nature of the part: a single note will simply be thickened by an upward or downward octave shift on the copy, whereas a riff will gain a double or half-tempo counter-riff, depending on whether you shift it up or down. Pitch shifting can, of course, be taken further, and not necessarily restricted to octave shifts - experimentation can lead you to a harmonised counter-rhythmic accompaniment to your original part. And why stop at one copy? Take it as far as you can and edit back to the parts you like.

Chord sequence

Sampled guitars also respond well to effects your sequencer can generate. Tremolos, delays, flanging and phasing, volume-based effects and layering all produce interesting results. You can create a range of delay effects, from reverb simulation to intricate multi-taps, quite simply by multi-triggering your samples. Experimenting with the timings and velocity values of the re-triggered sample will give you an infinite scope of delay possibilities. If you're using Cubase, the List or Key edit pages allow simple, graphical drawing-in of controller values; 'ramp' and 'v'-shaped velocity settings, for example, will throw up some neat variations, and offsetting the placement of your trigger notes lets you formulate more complex rhythmic effects.

Using MIDI to control effects

If you possess a multi-effects unit that has MIDI capabilities, then acquiring a MIDI foot controller can open up a host of possibilities, maximising the instantaneous control you can have over the unit. The universal standardisation of the MIDI 'language' means that any MIDI foot switch will be compatible with any MIDI effects unit, so buying this extra won't depend on your purchase of an expensive dedicated piece of kit - there are cheaper products on the market. The simplest function a MIDI foot controller allows is a program change message. This gives you the ability to assign certain effects patches or programs to the switches on your foot controller, providing instant and ordered access to the effects you wish to use. Mapping the program change information lets you place the effects switching in a logical order for the particular track you're playing - essential if you want to avoid any mis-selections during live playing. Getting more involved, a foot controller can also be employed to alter parameter information, such as overdrive intensity, compression, feedback on delays, repeat speed and so on. Using the standard 0-127 controller message values, the degree of parameter change can be altered via a rocking foot pedal assigned to the parameter of your choice, and with a foot switch assigned to parameter changes, you can stomp your way to any of up to 128 parameter values (depending on the complexity of your effects unit). EQ frequency, master volume, wah-wah swell and other effects can all be assigned to the foot pedal and selected at will.
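In MIDI terms, everything the foot controller sends boils down to ordinary program change and controller messages. A minimal Python/mido sketch; patch 12 and controller 11 (expression) are arbitrary examples, not taken from any real unit.

# What a MIDI foot controller sends: a program change to call up a patch,
# and a 0-127 controller sweep from the rocking pedal.
import mido

patch_select = mido.Message('program_change', channel=0, program=12)
pedal_sweep = [mido.Message('control_change', channel=0, control=11, value=v)
               for v in range(0, 128, 8)]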
The results can be enhanced further by copying a sample and assigning the two to left and right pan positions to produce ping-pong effects. Vary the frequency filters between the two samples and you'll arrive at a different effect again - interesting if you record sweeping filter data. Running two complementary guitar phrases, panned left and right, with the filter sweep of one ascending from a low frequency to a high one and the other doing vice versa, can work well with strummed chords or single bass notes. For tremolo, repeat a short trigger note at chosen points to dictate the speed. Ping-pong and velocity effects can enhance results in the same way as with delay sounds, as can any filter-sweeping effects you add. Chopping the sample up on your sequencer brings more rhythmic possibilities, so don't shy away from trying un-guitarish sequencer tricks, particularly in combination with the original part.

How to sound lo-fi

Instead of trying to make the most of your gear, why not make the least of it?

Sick and tired of spacious reverbs and 24-bit delays? Bored of subtle compression and glossy mixing? Then come with us on a journey into sound. Monophonic crap sound. Increasing numbers of artists, from trip hop to big beat, are using that dirty lo-tech feel we all know and love, and no doubt E-mu will eventually launch a Planet Dirty module, but in the meantime, how do you make those kinds of sounds? Armed only with this article and Portishead's lo-fi tips, you'll soon be able to shag your sonics in a variety of innovative ways with kit you already have, while next month we'll be concentrating on equipment tailor-made for the job.

Before we start, it's assumed you know how to use multi-effects and compressors already. If you don't, you should probably read your manuals carefully and try out any tutorials to get the hang of it all, otherwise you're going to pick up a lot of bad habits. Also, be warned: some of these techniques can result in howling feedback and bludgeoning noise, so monitor at a lower level than normal, unless you want to give your speakers or your ears a permanent lo-fi sound. If you have a spare compressor, stick it across the stereo mix to catch any sudden peaks.

Echo, echo, echo, echo, echo, echo, echo

That's the annoying thing about digital delays: they're too bloody good. You want something that mangles your sound with each repeat, not something that replays the same sample quieter. Something along the lines of "echo... eko... grecko... grackle... growing". To find the remedy we'll have to go back to the early 70s and the birth of dub. Starting as an offshoot of reggae, dub's sparse off-beat sound was one of the first musical styles rooted in the studio. The emptiness of the basic tracks left room for long evolving delays, feeding back into themselves. The trademark sound of dub delays can be heard on many records; for example, the recent(ish) Portishead remix of Karmacoma, or almost any track on The Orb's classic UFOrb album.

But how do you turn a mild-mannered delay into a feedback monster? First, set up a delay as normal, sending to it on auxiliary 1 and bringing the returns back to two mixer channels. Set the delay to the required time, and turn the feedback to zero. Send a sound to the delay at this point and you can hear that it repeats just once and then stops. Now feed the echo back into itself by sending it down aux 1 on the return channels. Be careful, as too high a level will cause an ear-splitting feedback loop as the delay repeats itself louder and louder. By varying the amount of auxiliary 1 being sent from the returns you can set the number of repeats. You'll hear an example of this on the first section of track 19 of the CD, which is set for numerous repeats and isn't that different from ordinary delay.

Adding EQ to the mix

The secret ingredient is EQ. Rolling off the top and bottom end of the return channel causes each successive delay to become a bit thinner. (This effect appears on the second slice of track 19, where the echoes soon become noticeably degraded.) The effect is similar to a vintage tape delay, such as the Watkins Copycat or Roland Space Echo. Next, if you have a sweeping EQ, apply a gentle 3 or 4dB boost to the frequency of your choice, and slowly sweep the frequency around as the delay repeats. Fairly soon the echo is almost unrecognisable, as on the third section. Alternatively, cut instead of boost the swept frequency for the phasier sound which you'll hear on section four.
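If you'd rather experiment in software first, here's a minimal Python/numpy sketch of the same wiring: a delay whose feedback path runs through a simple low-pass filter, so each successive repeat comes back a little duller, much like rolling off the return channel's EQ. All the settings are illustrative starting points.

# Dub-style delay: every repeat passes through the (filtered) feedback
# path again, so the echoes progressively lose top end.
import numpy as np

def dub_delay(x, sr, delay_ms=350, feedback=0.6, damp=0.3):
    d = int(sr * delay_ms / 1000)
    y = np.copy(x)
    lp = 0.0
    for n in range(d, len(y)):
        lp = (1 - damp) * y[n - d] + damp * lp   # one-pole low-pass in the loop
        y[n] += feedback * lp                    # feed the echo back in
    return y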
Riding the send level by hand, you should be able to keep the echoes going indefinitely. Letting them start to fade and then bringing them back is a great way of speeding up the mutations, or you can even try to overload the delay, as we've recorded on section five. Try recording five minutes of evolving echoes to DAT, and then go through it for interesting samples. This is a good practice to follow when wiring up unpredictable effects chains, as it can be hard to get the same sound twice. A lot of drum 'n' bass artists fill DAT after DAT with bizarre effects for sampling later, so if you want to be unique, do the same. When you're tired of delays, switch to a different effect: try phasing or reverb. Section six of track 19 uses a flanger and reverb, while slice seven features phaser, reverb and delay.

Downbeat and dirty

When it comes to beats, compression is everything. The whole Portishead vibe builds on the sound of drums drowning under the weight of the compressors, while the Chemicals' beats are block rockin' with the sound of hard-edged compression. The basic setting for a typical trip hop drum sound has very low threshold values. For big beat, ease up on the threshold (try -20dB) and reduce the ratio to 12:1. Increase the attack time slightly, until you hear the front end of each drum smack out hard, and set the release to between 40 and 80ms to allow a more dynamic sound. The first section of track 20 has the dry loop, and on the second section you'll hear three variations.

Compressors are normally used in insert points, but there are advantages to using them on a send. Set one up using the same wiring as the dub delay, merely replacing the delay with a compressor. For even better and more interesting sounds, leave the delay exactly where it is and put the compressor before it. Just take the cables out of the delay's input sockets and stick them in the compressor's. Then run leads from the compressor's outputs to the delay's inputs and set the delay time to 20ms or so, with no feedback. Now send your drum loop. You'll hear this effect on the third slice of track 20, which starts with just the compressor before the delay is switched in. You can hear the metallic quality caused by the compressor feeding back into itself. With a compressor in the loop it's impossible to blow your speakers, so welly up the gain on the returns until it's well into the red. Grab yourself an EQ and sweep it all over the place. It should sound something like the example on section four of track 20. Although it's no louder in terms of dBs, the subjective effect is of a huge volume increase. You can hear a feedback tone at the start and end of this track, caused by the compression when nothing is playing. Even a gate can yield creative results when used in this fashion.

Spring has sprung

A lot of lo-fi sound has old gear at its roots, like the 'fake' tape echo effect created earlier. Before the advent of digital effects, many studios used plate reverbs (essentially a resonating chunk of iron in a wardrobe-sized box) or, if they were on a budget, a spring reverb. If you've ever kicked or dropped a guitar amp then you'll probably have heard the brain-shattering crash of a spring reverb. Most modern effects units do a fairly convincing plate reverb, but not many offer a spring algorithm. You can make your own with a stereo delay by setting one delay to about 45ms and the other to about 25ms. Set the feedback very high and set any damping parameters to maximum, giving a very dull echo. Send the effect back into itself, as with the tape echo earlier, and you have something approaching the classic spring reverb (sketched in code at the end of this section). Listen to the first slice of track 21 for an idea of what you're aiming for.

An easy way of accessing grungy effects is to get your hands on some guitar pedals. They're mainly designed for live use, so they provide crude, larger-than-life sounds, and they're cheap - as little as £20 or £30 second-hand. And there's not much that can compete with a really crap guitar compressor when it comes to big beat madness. Similarly, phasers and flangers tend to be less subtle than their digital counterparts, while analogue delays degenerate into a soupy noise within seven or eight repeats. Highly recommended.
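The fake-spring recipe is easy to try in software too. This sketch reuses the dub_delay() function from the earlier sketch, with the 45ms and 25ms times, high feedback and heavy damping suggested above; the exact figures are starting points, not gospel.

# Fake spring reverb: two short, heavily damped, high-feedback delays summed.
def fake_spring(x, sr):
    left = dub_delay(x, sr, delay_ms=45, feedback=0.85, damp=0.9)
    right = dub_delay(x, sr, delay_ms=25, feedback=0.85, damp=0.9)
    return 0.5 * (left + right)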
People have overdriven everything from valve EQs to analogue tape machines to create a bigger, crunchier sound, so don't panic at the first sight of an overload light. Experiment a little: try overloading your sampler's input, or driving your effects boxes too hard. The classic 303 sound on Josh Wink's Higher State Of Consciousness relies on distorting the mixing desk, and the sound of tape saturation can be heard on most 70s rock drums. In the land of lo-fi, use your ears and not your lab coat to decide what sounds good.

Found sounds
Whether it's trip hop or big beat that you're making, a lot of lo-fi styles are loop-based, so you're gonna need some interesting loops. Using the methods already covered we've got some pretty gritty beats going, but what about atmospheric stuff? You can start by putting the radio on and switching to long wave (for possibly the first time in your life). Now find a station and then detune slightly away. The further you move from the original signal, the more the sound degenerates into a clangorous sort of ring modulation. Keep twiddling until you get a sound you like (the less recognisable the better), then record a section to DAT. This isn't always as straightforward as it sounds, as it can take a while to strike lucky with a phrase or piece of warped music. I had to sit through George Michael to get the sound on the second section of track 21, so you've been warned. Having got your ideal bit of noise you'll probably need to EQ out any whining tones, then sample it back off the DAT and use it. It's occasionally worth a quick foray into medium wave, but FM rarely produces anything worth hearing (the radio signal that is, not the mag!). Another source of unique sounds is digital feedback. This sound was most famously used by Garbage on Stupid Girl, underneath the vocals running up to the
chorus. And while not everybody has access to an 02R, the same effect can be attempted with a sampler or an audio sequencer. Route the outputs to a desk (so you can monitor your results) and then feed them down an auxiliary back to the inputs. Then fiddle with the input gain and/or EQ. The sounds on track 21, slice three were created by looping an 02R out through one of its internal reverbs and back into two inputs or, in layman's terms, stuffing its head up its own arse.

Bitty and gritty
Many dance acts have an old sampler hanging about, purely for the gritty sounds it produces. The Casio FZ series is particularly renowned for hardening up drum loops with its low sample rate. Fortunately for you, you don't have to buy a second-hand sampler to achieve this effect, as most current machines allow you to reduce the frequency bandwidth and/or bit-rate. In these days of big memories and cheap RAM, most people's samplers are left set to the highest sampling quality at all times, so get in there and set it to the lowest. Sampling uses a process known as anti-alias filtering to try to mask the effects of lower bandwidths. Unfortunately, this is an automatic process on many samplers, but if you're one of the lucky few who can switch it off then the grungy effect will be stronger. As mentioned previously, this sound is particularly suited to drum loops, giving them an antique flavour similar to crackly old vinyl. It's a sound you can hear a lot in hip hop, frequently used to make a sample stick out from the rest of the beats. Of course you can try it on anything - vocals, crusty old strings, even sections of the whole mix.

Lo-coder
Vocoders have led a chequered career, swinging from cool (Daft Punk, Underworld) to very sad (The Cylons, Sparky's Magic Piano). Either way, they were still largely used to process vocals until they were leapt upon by the more experimental members of the dance fraternity. Despite their name, vocoders are no more suited to voices than to any other sound source, being basically a bank of filters that analyse the EQ content of one sound and impose it on another. As most vocoders only operate on frequencies below 3 or 4kHz, they impart a pleasant woolliness to the sounds that they process. Whenever you use a vocoder, always allow yourself five minutes of mumbling "I will exterminate" and "We meet again, Obi Wan" just to get it out of your system, and then route two effects sends into it. Now experiment with sending different sounds from your track to work against each other. A modern classic is the sound of drums being imposed on a slow pad, giving a gated, hard-edged movement to the pad sound. Hear it on track 22, section one. This is also an ideal time to use some of your found sounds to impart a little oddness onto more conventional parts of the mix. By varying the depth of the vocoding you can create anything from a synth garble to a gentle organic movement. Track 22, section two starts at maximum effect, reducing to a slight colouration. Of course, you may not own a vocoder and think you can't afford one. Well, you're wrong. Vocoders are popping up cheaply all over the place, in multi-effects units and as software plug-ins, so don't worry... you'll be getting them free with cornflakes by the summer.

Burn the magic boxes
If you're on a real caveman tip you might still be finding all this a bit too modern and hi-tech, so here's a few real medieval tips. Instead of using effects boxes, why not use real acoustics instead?
Place a guitar amp at the end of one of your auxiliaries and put a microphone at the other end of the room (preferably in the kitchen or the toilet) and hey presto! Really grotty reverb. If you can't afford a guitar amp, use the useless (until now) pair of tiny speakers that came with your Walkman, or buy some; they're pretty cheap. Point a mic at them and try shaking them around, or putting them in different acoustic spaces. Another great technique can be achieved by putting both speakers in an empty fishbowl and moving the mic over the top of the bowl. That's all for now; see you in part two. And I leave you with the endearing image of a man recording a fishbowl at one in the morning. Now that's what I call lo-fi. Thanks to Studiocare Pro-Audio for the loan of the 02R and effects used in this article.


Mastering Cubase and Logic

Let us help you improve your MIDI manipulation skills...

There are many great paradoxes in life: when we go to bed at night we can't get to sleep, yet in the morning we can't wake up; those most capable of getting elected to power are often those least suitable to hold such a position. And, first and foremost, one of Cubase's most powerful editors often stays unused at the bottom of a menu due to its apparent complexity, yet is called the Logical Editor. All right, it's not the sort of thing that would keep you up at night (or, if it is, you really do have something better to be worrying about). But if you use Cubase and don't use the Logical Editor, I guarantee you are doing many things the long way round. Getting the most from this editor does require a basic understanding of MIDI and how its messages are structured, but as MIDI is a subject that can fill a book - or many different books - we won't go into it here, and will assume that you have a basic knowledge. The best way to describe the Logical Editor is as a kind of 'MIDI calculator' used to mathematically manipulate raw MIDI data. This could be for a simple task, like selecting all the instances of a particular note in a part, or for more creative uses, such as transforming note data into a controller message for a filter bank. At first glance, the editor can be a bit baffling, with lots of drop-down menus and fields labelled, rather vaguely, as Value 1, Value 2, and so on. Thankfully, there are two modes of operation: Easy and Expert. We will look at the former first. When you open the editor, you will notice that it is split into three distinct areas: Filter, Processing and Functions (Preset is just for storing settings for common operations, so it doesn't count). The basic principle is that the Filter specifies the MIDI events that will be passed along for processing. In the Processing area, mathematical functions can be set - or not, as the case may be - and applied to these MIDI events in the manner dictated by the Functions settings. The meaning of the Value 1 and Value 2 fields varies, depending on the type of event being edited. You see? I told you it was logical! Let's move on to an example...

Example 1
To achieve a useful result from the editor, you need to have an aim in mind before you start fiddling with the settings, so record a MIDI part into Cubase. Now, let's say that you wanted to delete all the instances of the note C3 that have a velocity value between 30 and 50 from the new part (hey, it's just an example, right?). Clicking the Event Type drop-down gives three choices: Ignore, Equal and Unequal - select Equal. You can now access the drop-down just below where you select the event type to be processed - for this example, select Note. You've just told the editor that you only want it to deal with note data and nothing more. (Leaving this field on Ignore would mean that all event types - modulation and aftertouch, for instance - would be passed on to the next stage of the filter. Selecting Unequal would mean the editor was to deal with everything but note events.)

...

The next field is Value 1. For a note event, this means the MIDI note number (and this is where the basic MIDI knowledge comes in handy). This time we have a greater choice of filter types to choose from, and all are self-explanatory - again, select Equal. Now we can enter the value of the note we want to deal with, either with the mouse or by manually entering the value. For this example, set the value to C3, or MIDI note number 60, as it will appear in the field. For a note event, the Value 2 setting represents the note's velocity. As we have decided we want to delete velocities between 30 and 50, we need to use the Inside filter. Now the second numerical field has become active: set the upper and lower settings to 50 and 30, respectively. Because the event we are dealing with has been recorded from a single source and doesn't contain events on multiple MIDI channels, the last field can be left as Ignore - if you are dealing with multiple parts, however, this can be useful. Have a glance back through the settings. We have now specified what events to delete. All that remains is to select the Delete option from the Functions drop-down menu and click the Do It button. The logic should be becoming clearer now...

Example 2
On to the Processing section: let's say that you had programmed a drum part using a single sound module as the sound source. Now you want to lift out all the events in the hi-hat line that have a velocity greater than 90 and send these MIDI events to a sampler that has a better loud hi-hat sound. The problem is that on the sound module the hi-hat was assigned to note F#1, yet on the sampler it is on C3. To make matters worse, the sampler's hi-hat sound is more sensitive to velocity values than the one on the sound module. (Well, it's more feasible than the last example!) Use your new-found knowledge of the Filter section to set it up appropriately. All the Processing fields default to Keep, meaning no processing will be carried out on the data. Have a look through the drop-down menus and see what options are available - again, these are self-explanatory and, on the whole, are simply basic arithmetic. For our example, changing the note number from F#1 to C3 (MIDI note number 42 to MIDI note number 60, in other words) is nothing more complex than setting the Value 1 drop-down to Plus and setting the numerical field to 18. There are various approaches to changing the velocity value in this example: you could use subtract to reduce the velocity data by a set amount; divide by two to halve all the velocity values; or use the Dyn option to constrain the velocity value between an upper and lower limit, yet retain the relative difference in velocity. To finish the operation, choose Extract from the Functions menu and click Do It. A new part will have been created in the Arrange window containing the new notes, all transformed and separate. Cubase 5 users, however, will be disappointed to find that the Extract function has been dropped (probably an oversight rather than a conscious decision), so these users would need to perform an operation like this in two stages: use the Select option to select the notes, then cut and paste them to a new part. This part can then be processed with the Logical Editor's Transform function. Once you get the hang of the Easy side of the Logical Editor, have a look at the Expert settings.
The new fields of Length and Bar Range add a whole new range of useful functions - the method of operation remains the same as for the Easy side of the editor. Hopefully, I've managed to make the Logical Editor seem somewhat more... well, logical, really. The more you use it, the more sense it will make.

...

MIDI Specification

MIDI (ie, Musical Instrument Digital Interface) consists of both a simple hardware interface and a more elaborate transmission protocol.

Hardware
MIDI is an asynchronous serial interface. The baud rate is 31.25 Kbaud (+/- 1%). There is 1 start bit, 8 data bits, and 1 stop bit (ie, 10 bits total), for a period of 320 microseconds per serial byte. The MIDI circuit is current loop, 5 mA. Logic 0 is current ON. One output drives one (and only one) input. To avoid ground loops and subsequent data errors, the input is opto-isolated. It requires less than 5 mA to turn on. The Sharp PC-900 and HP 6N138 optoisolators are satisfactory devices. Rise and fall time for the optoisolator should be less than 2 microseconds. The standard connector used for MIDI is a 5-pin DIN. Separate jacks (and cable runs) are used for input and output, clearly marked on a given device (ie, the MIDI IN and OUT are two separate DIN female panel-mount jacks). 50 feet is the recommended maximum cable length. Cables are shielded twisted pair, with the shield connecting pin 2 at both ends. The pair is pins 4 and 5. Pins 1 and 3 are not used, and should be left unconnected. A device may also be equipped with a MIDI THRU jack which is used to pass the MIDI IN signal to another device. The MIDI THRU transmission may not be performed correctly due to the delay time (caused by the response time of the opto-isolator) between the rising and falling edges of the square wave. These timing errors tend to add in the "wrong direction" as more devices are daisy-chained to other devices' MIDI THRU jacks. The result is that there is a limit to the number of devices that can be daisy-chained.

Schematic
A schematic of a MIDI (IN and OUT) interface

Messages
The MIDI protocol is made up of messages. A message consists of a string (ie, series) of 8-bit bytes. MIDI has many such defined messages. Some messages consist of only 1 byte. Other messages have 2 bytes. Still others have 3 bytes. One type of MIDI message can even have an unlimited number of bytes. The one thing that all messages have in common is that the first byte of the message is the Status byte. This is a special byte because it's the only byte that has bit #7 set. Any other following bytes in that message will not have bit #7 set. So, you can always detect the start of a MIDI message because that's when you receive a byte with bit #7 set. This will be a Status byte in the range 0x80 to 0xFF. The remaining bytes of the message (ie, the data bytes, if any) will be in the range 0x00 to 0x7F. (Note that I'm using the C programming language convention of prefacing a value with 0x to indicate hexadecimal.) The Status bytes of 0x80 to 0xEF are for messages that can be broadcast on any one of the 16 MIDI channels. Because of this, these are called Voice messages. (My own preference is to say that these messages belong in the Voice Category.) For these Status bytes, you break up the 8-bit byte into 2 4-bit nibbles. For example, a Status byte of 0x92 can be broken up into 2 nibbles with values of 9 (high nibble) and 2 (low nibble). The high nibble tells you what type of MIDI message this is. Here are the possible values for the high nibble, and what type of Voice Category message each represents:

8 = Note Off
9 = Note On
A = AfterTouch (ie, key pressure)
B = Control Change
C = Program (patch) change
D = Channel Pressure
E = Pitch Wheel

So, for our example status of 0x92, we see that its message type is Note On (ie, the high nibble is 9). What does the low nibble of 2 mean? This means that the message is on MIDI channel 2. There are 16 possible (logical) MIDI channels, with 0 being the first. So, this message is a Note On on channel 2. What status byte would specify a Program Change on channel 0? The high nibble would need to be C for a Program Change type of message, and the low nibble would need to be 0 for channel 0. Thus, the status byte would be 0xC0. How about a Program Change on channel 15 (ie, the last MIDI channel)? Again, the high nibble would be C, but the low nibble would be F (ie, the hexadecimal digit for 15). Thus, the status would be 0xCF.

NOTE: Although the MIDI Status byte counts the 16 MIDI channels as numbers 0 to F (ie, 15), all MIDI gear (including computer software) displays a channel number to the musician as 1 to 16. So, a Status byte sent on MIDI channel 0 is considered to be on "channel 1" as far as the musician is concerned. This discrepancy between the status byte's channel number, and what channel the musician "believes" that a MIDI message is on, is accepted because most humans start counting things from 1, rather than 0.

The Status bytes of 0xF0 to 0xFF are for messages that aren't on any particular channel (and therefore all daisy-chained MIDI devices can always "hear" and choose to act upon these messages. Contrast this with the Voice Category messages, where a MIDI device can be set to respond to those MIDI messages only on a specified channel). These status bytes are used for messages that carry information of interest to all MIDI devices, such as synchronizing all playback devices to a particular time. (By contrast, Voice Category messages deal with the individual musical parts that each instrument might play, so the channel nibble scheme allows a device to respond to its own MIDI channel while ignoring the Voice Category messages intended for another device on another channel.) These status bytes are further divided into two categories. Status bytes of 0xF0 to 0xF7 are called System Common messages. Status bytes of 0xF8 to 0xFF are called System Realtime messages. The implications of such will be discussed later. Actually, certain Status bytes within this range are not defined by the MIDI spec to date, and are reserved for future use. For example, Status bytes of 0xF4, 0xF5, 0xF9, and 0xFD are not used (though see the Tick message described later, which uses 0xF9). If a MIDI device ever receives such a Status, it should ignore that message. See Ignoring MIDI Messages. What follows is a description of each message type. The description tells what the message does, what its status byte is, and whether it has any subsequent data bytes and what information those carry. Generally, these descriptions take the view of a device receiving such messages (ie, what the device would typically be expected to do when receiving particular messages). When applicable, remarks about a device that transmits such messages may be made.
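Before moving on to the individual message types, here is a minimal C sketch of the status-byte and nibble handling described above (the function and variable names are mine, not part of the MIDI spec):

#include <stdio.h>

/* Nonzero if the byte starts a MIDI message (ie, bit #7 is set). */
static int is_status_byte(unsigned char b)
{
    return (b & 0x80) != 0;
}

static void classify(unsigned char status)
{
    if (is_status_byte(status) && status < 0xF0) {
        unsigned type    = status >> 4;   /* high nibble: message type, 8 to E */
        unsigned channel = status & 0x0F; /* low nibble: MIDI channel 0 to 15 */
        printf("type 0x%X on channel %u (displayed to the musician as %u)\n",
               type, channel, channel + 1);
    }
}

int main(void)
{
    classify(0x92); /* the Note On on channel 2 example from the text */
    classify(0xCF); /* Program Change on channel 15 */
    return 0;
}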

MIDI Specification

Note Off
Category: Voice
Purpose
Indicates that a particular note should be released. Essentially, this means that the note stops sounding, but some patches might have a long VCA release time that needs to slowly fade the sound out. Additionally, the device's Hold Pedal controller may be on, in which case the note's release is postponed until the Hold Pedal is released. In any event, this message either causes the VCA to move into the release stage, or if the Hold Pedal is on, indicates that the note should be released (by the device automatically) when the Hold Pedal is turned off. If the device is a MultiTimbral unit, then each one of its Parts may respond to Note Offs on its own channel. The Part that responds to a particular Note Off message is the one assigned to the message's MIDI channel.
Status
0x80 to 0x8F where the low nibble is the MIDI channel.
Data
Two data bytes follow the Status. The first data is the note number. There are 128 possible notes on a MIDI device, numbered 0 to 127 (where Middle C is note number 60). This indicates which note should be released. The second data byte is the velocity, a value from 0 to 127. This indicates how quickly the note should be released (where 127 is the fastest). It's up to a MIDI device how it uses velocity information. Often velocity will be used to tailor the VCA release time. MIDI devices that can generate Note Off messages, but don't implement velocity features, will transmit Note Off messages with a preset velocity of 64.
Errata
An All Notes Off controller message can be used to turn off all notes for which a device received Note On messages (without having received respective Note Off messages).

Note On
Category: Voice
Purpose
Indicates that a particular note should be played. Essentially, this means that the note starts sounding, but some patches might have a long VCA attack time that needs to slowly fade the sound in. In any case, this message indicates that a particular note should start playing (unless the velocity is 0, in which case, you really have a Note Off). If the device is a MultiTimbral unit, then each one of its Parts may sound Note Ons on its own channel. The Part that sounds a particular Note On message is the one assigned to the message's MIDI channel.
Status
0x90 to 0x9F where the low nibble is the MIDI channel.
Data
Two data bytes follow the Status. The first data is the note number. There are 128 possible notes on a MIDI device, numbered 0 to 127 (where Middle C is note number 60). This indicates which note should be played. The second data byte is the velocity, a value from 0 to 127. This indicates with how much force the note should be played (where 127 is the most force). It's up to a MIDI device how it uses velocity information. Often velocity is used to tailor the VCA attack time and/or attack level (and therefore the overall volume of the note). MIDI devices that can generate Note On messages, but don't implement velocity features, will transmit Note On messages with a preset velocity of 64. A Note On message that has a velocity of 0 is considered to actually be a Note Off message, and the respective note is therefore released. See the Note Off entry for a description of such. This "trick" was created in order to take advantage of running status. A device that recognizes MIDI Note On messages must be able to recognize both a real Note Off as well as a Note On with 0 velocity (as a Note Off). There are many devices that generate real Note Offs, and many other devices that use Note On with 0 velocity as a substitute.
Errata
In theory, every Note On should eventually be followed by a respective Note Off message (ie, when it's time to stop the note from sounding). Even if the note's sound fades out (due to some VCA envelope decay) before a Note Off for this note is received, at some later point a Note Off should be received. For example, if a MIDI device receives the following Note On:
0x90 0x3C 0x40 Note On/chan 0, Middle C, velocity could be anything except 0

Then, a respective Note Off should subsequently be received at some time, as so:
0x80 0x3C 0x40 Note Off/chan 0, Middle C, velocity could be anything

Instead of the above Note Off, a Note On with 0 velocity could be substituted as so:
0x90 0x3C 0x00 Really a Note Off/chan 0, Middle C, velocity must be 0

If a device receives a Note On for a note (number) that is already playing (ie, hasn't been turned off yet), it is the device's decision whether to layer another "voice" playing the same pitch, or cut off the voice playing the preceding note of that same pitch in order to "retrigger" that note.
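Here's a short C sketch of that velocity-0 rule (the note_on/note_off routines are hypothetical stand-ins for whatever your synth engine provides, and the names are mine):

#include <stdio.h>

/* Hypothetical synth callbacks, stubbed out for illustration. */
static void note_on(unsigned ch, unsigned note, unsigned vel)
{ printf("on  ch %u, note %u, vel %u\n", ch, note, vel); }
static void note_off(unsigned ch, unsigned note, unsigned vel)
{ printf("off ch %u, note %u, vel %u\n", ch, note, vel); }

/* Dispatch a complete 3-byte note message, folding the velocity-0 "trick"
   into an ordinary Note Off. */
void handle_note(unsigned char status, unsigned char note, unsigned char vel)
{
    unsigned channel = status & 0x0F;

    if ((status & 0xF0) == 0x90 && vel != 0)
        note_on(channel, note, vel);
    else
        note_off(channel, note, vel); /* real Note Off, or Note On with 0 velocity */
}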


Aftertouch
Category: Voice
Purpose
While a particular note is playing, pressure can be applied to it. Many electronic keyboards have pressure sensing circuitry that can detect with how much force a musician is holding down a key. The musician can then vary this pressure, even while he continues to hold down the key (and the note continues sounding). The Aftertouch message conveys the amount of pressure on a key at a given point. Since the musician can be continually varying his pressure, devices that generate Aftertouch typically send out many such messages while the musician is varying his pressure. Upon receiving Aftertouch, many devices typically use the message to vary a note's VCA and/or VCF envelope sustain level, or control LFO amount and/or rate being applied to the note's sound generation circuitry. But, it's up to the device how it chooses to respond to received Aftertouch (if at all). If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to Aftertouch. The Part affected by a particular Aftertouch message is the one assigned to the message's MIDI channel.
Status
0xA0 to 0xAF where the low nibble is the MIDI channel.
Data
Two data bytes follow the Status. The first data is the note number. There are 128 possible notes on a MIDI device, numbered 0 to 127 (where Middle C is note number 60). This indicates to which note the pressure is being applied. The second data byte is the pressure amount, a value from 0 to 127 (where 127 is the most pressure).
Errata
See the remarks under Channel Pressure.

Controller
Category: Voice
Purpose
Sets a particular controller's value. A controller is any switch, slider, knob, etc, that implements some function (usually) other than sounding or stopping notes (ie, which are the jobs of the Note On and Note Off messages respectively). There are 128 possible controllers on a MIDI device. These are numbered from 0 to 127. Some of these controller numbers are assigned to particular hardware controls on a MIDI device. For example, controller 1 is the Modulation Wheel. Other controller numbers are free to be arbitrarily interpreted by a MIDI device. For example, a drum box may have a slider controlling Tempo which it arbitrarily assigns to one of these free numbers. Then, when the drum box receives a Controller message with that controller number, it can adjust its tempo. A MIDI device need not have an actual physical control on it in order to respond to a particular controller. For example, even though a rackmount sound module may not have a Mod Wheel on it, the module will likely still respond to and utilize Modulation controller messages to modify its sound. If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to various controller numbers. The Part affected by a particular controller message is the one assigned to the message's MIDI channel.
Status
0xB0 to 0xBF where the low nibble is the MIDI channel.
Data
Two data bytes follow the Status. The first data is the controller number (0 to 127). This indicates which controller is affected by the received MIDI message. The second data byte is the value to which the controller should be set, a value from 0 to 127.
Errata
An All Controllers Off controller message can be used to reset all controllers (that a MIDI device implements) to default values. For example, the Mod Wheel is reset to its "off" position upon receipt of this message. See the list of Defined Controller Numbers for more information about particular controllers.

Program Change
Category: Voice
Purpose
To cause the MIDI device to change to a particular Program (which some devices refer to as Patch, or Instrument, or Preset, or whatever). Most sound modules have a variety of instrumental sounds, such as Piano, and Guitar, and Trumpet, and Flute, etc. Each one of these instruments is contained in a Program. So, changing the Program changes the instrumental sound that the MIDI device uses when it plays Note On messages. Of course, other MIDI messages also may modify the current Program's (ie, instrument's) sound. But, the Program Change message actually selects which instrument currently plays. There are 128 possible program numbers, from 0 to 127. If the device is a MultiTimbral unit, then it usually can play 16 "Parts" at once, each receiving data upon its own MIDI channel. This message will then change the instrument sound for only that Part which is set to the message's MIDI channel. For MIDI devices that don't have instrument sounds, such as a Reverb unit which may have several Preset "room algorithms" stored, the Program Change message is often used to select which Preset to use. As another example, a drum box may use Program Change to select a particular rhythm pattern (ie, drum beat).
Status
0xC0 to 0xCF where the low nibble is the MIDI channel.
Data
One data byte follows the status. It is the program number to change to, a number from 0 to 127.
Errata
On MIDI sound modules (ie, whose Programs are instrumental sounds), it became desirable to define a standard set of Programs in order to make sound modules more compatible. This specification is called the General MIDI Standard. Just like with MIDI channels 0 to 15 being displayed to a musician as channels 1 to 16, many MIDI devices display their Program numbers starting from 1 (even though a Program number of 0 in a Program Change message selects the first program in the device). On the other hand, this approach was never standardized, and some devices use vastly different schemes for the musician to select a Program. For example, some devices require the musician to specify a bank of Programs, and then select one within the bank (with each bank typically containing 8 to 10 Programs). So, the musician might specify the first Program as being bank 1, number 1. Nevertheless, a Program Change of number 0 would select that first Program.

Channel Pressure
Category: Voice
Purpose
While notes are playing, pressure can be applied to all of them. Many electronic keyboards have pressure sensing circuitry that can detect with how much force a musician is holding down keys. The musician can then vary this pressure, even while he continues to hold down the keys (and the notes continue sounding). The Channel Pressure message conveys the amount of overall pressure on the keys at a given point. Since the musician can be continually varying his pressure, devices that generate Channel Pressure typically send out many such messages while the musician is varying his pressure. Upon receiving Channel Pressure, many devices typically use the message to vary all of the sounding notes' VCA and/or VCF envelope sustain levels, or control LFO amount and/or rate being applied to the notes' sound generation circuitry. But, it's up to the device how it chooses to respond to received Channel Pressure (if at all). If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to Channel Pressure. The Part affected by a particular Channel Pressure message is the one assigned to the message's MIDI channel.
Status
0xD0 to 0xDF where the low nibble is the MIDI channel.
Data
One data byte follows the Status. It is the pressure amount, a value from 0 to 127 (where 127 is the most pressure).
Errata
What's the difference between AfterTouch and Channel Pressure? Well, AfterTouch messages are for individual keys (ie, an Aftertouch message only affects that one note whose number is in the message). Every key that you press down generates its own AfterTouch messages. If you press on one key harder than another, then the one key will generate AfterTouch messages with higher values than the other key. The net result is that some effect will be applied to the one key more than the other key. You have individual control over each key that you play. With Channel Pressure, one message is sent out for the entire keyboard. So, if you press one key harder than another, the module will average out the difference, and then just pretend that you're pressing both keys with the exact same pressure. The net result is that some effect gets applied to all sounding keys evenly. You don't have individual control per each key. A controller normally uses either Channel Pressure or AfterTouch, but usually not both. Most MIDI controllers don't generate AfterTouch because that requires a pressure sensor for each individual key on a MIDI keyboard, and this is an expensive feature to implement. For this reason, many cheaper units implement Channel Pressure instead of Aftertouch, as the former only requires one sensor for the entire keyboard's pressure. Of course, a device could implement both Aftertouch and Channel Pressure, in which case the Aftertouch messages for each individual key being held are generated, and then the average pressure is calculated and sent as Channel Pressure.

Pitch Wheel
Category: Voice
Purpose
To set the Pitch Wheel value. The pitch wheel is used to slide a note's pitch up or down in cents (ie, fractions of a half-step). If the device is a MultiTimbral unit, then each one of its Parts may respond differently (or not at all) to Pitch Wheel. The Part affected by a particular Pitch Wheel message is the one assigned to the message's MIDI channel.
Status
0xE0 to 0xEF where the low nibble is the MIDI channel.
Data
Two data bytes follow the status. The two bytes should be combined together to form a 14-bit value. The first data byte's bits 0 to 6 are bits 0 to 6 of the 14-bit value. The second data byte's bits 0 to 6 are really bits 7 to 13 of the 14-bit value. In other words, assuming that a C program has the first data byte in the variable First and the second data byte in the variable Second, here's how to combine them into a 14-bit value (actually 16-bit since most computer CPUs deal with 16-bit, not 14-bit, integers):
unsigned short CombineBytes(unsigned char First, unsigned char Second)
{
    unsigned short _14bit;

    _14bit = (unsigned short)Second;   /* the second data byte supplies bits 7 to 13 */
    _14bit <<= 7;
    _14bit |= (unsigned short)First;   /* the first data byte supplies bits 0 to 6 */
    return _14bit;
}

A combined value of 0x2000 is meant to indicate that the Pitch Wheel is centered (ie, the sounding notes aren't being transposed up or down). Higher values transpose pitch up, and lower values transpose pitch down.
Errata
The Pitch Wheel range is usually adjustable by the musician on each MIDI device. For example, although 0x2000 is always center position, on one MIDI device a 0x3000 could transpose the pitch up a whole step, whereas on another device that may result in only a half step up. The GM spec recommends that MIDI devices default to using the entire range of possible Pitch Wheel message values (ie, 0x0000 to 0x3FFF) as +/- 2 half steps transposition (ie, 4 half-steps total range). The Pitch Wheel Range (or Sensitivity) is adjusted via an RPN controller message.
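Transmitting a Pitch Wheel message requires the reverse operation: splitting a 14-bit value into two 7-bit data bytes. A sketch to complement CombineBytes above (the function name is mine):

void SplitBytes(unsigned short _14bit, unsigned char *First, unsigned char *Second)
{
    *First  = (unsigned char)(_14bit & 0x7F);        /* bits 0 to 6 */
    *Second = (unsigned char)((_14bit >> 7) & 0x7F); /* bits 7 to 13 */
}

Splitting the center value 0x2000 gives First = 0x00 and Second = 0x40, so a centered wheel on channel 0 would be transmitted as 0xE0 0x00 0x40.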

System Exclusive
Category: System Common
Purpose
Used to send some data that is specific to a MIDI device, such as a dump of its patch memory or sequencer data or waveform data. Also, SysEx may be used to transmit information that is particular to a device. For example, a SysEx message might be used to set the feedback level for an operator in an FM Synthesis device. This information would be useless to a sample playing device. On the other hand, virtually all devices respond to Modulation Wheel control, for example, so it makes sense to have a defined Modulation Controller message that all manufacturers can support for that purpose.
Status
Begins with 0xF0. Ends with a 0xF7 status (ie, after the data bytes).
Data
There can be any number of data bytes inbetween the initial 0xF0 and the final 0xF7. The most important is the first data byte (after the 0xF0), which should be a Manufacturer's ID.
Errata
Virtually every MIDI device defines the format of its own set of SysEx messages (ie, that only it understands). The only common ground between the SysEx messages of various models of MIDI devices is that all SysEx messages must begin with a 0xF0 status and end with a 0xF7 status. In other words, this is the only MIDI message that has 2 Status bytes, one at the start and the other at the end. Inbetween these two status bytes, any number of data bytes (all having bit #7 clear, ie, 0 to 127 value) may be sent. That's why SysEx needs a 0xF7 status byte at the end; so that a MIDI device will know when the end of the message occurs, even if the data within the message isn't understood by that device (ie, the device doesn't know exactly how many data bytes to expect before the 0xF7). Usually, the first data byte (after the 0xF0) will be a defined Manufacturer's ID. The IMA has assigned particular values of the ID byte to various manufacturers, so that a device can determine whether a SysEx message is intended for it. For example, a Roland device expects an ID byte of 0x41. If a Roland device receives a SysEx message whose ID byte isn't 0x41, then the device ignores all of the rest of the bytes up to and including the final 0xF7 which indicates that the SysEx message is finished. The purpose of the remaining data bytes, however many there may be, is determined by the manufacturer of a product. Typically, manufacturers follow the Manufacturer ID with a Model Number ID byte so that a device can not only determine that it's got a SysEx message for the correct manufacturer, but also has a SysEx message specifically for this model. Then, after the Model ID may be another byte that the device uses to determine what the SysEx message is supposed to be for, and therefore, how many more data bytes follow. Some manufacturers have a checksum byte (usually right before the 0xF7) which is used to check the integrity of the message's transmission. The 0xF7 Status byte is dedicated to marking the end of a SysEx message. It should never occur without a preceding 0xF0 Status. In the event that a device experiences such a condition (ie, maybe the MIDI cable was connected during the transmission of a SysEx message), the device should ignore the 0xF7. Furthermore, although the 0xF7 is supposed to mark the end of a SysEx message, in fact, any status (except for Realtime Category messages) will cause a SysEx message to be considered "done" (ie, actually "aborted" is a better description, since such a scenario indicates an abnormal MIDI condition). For example, if a 0x90 happened to be sent sometime after a 0xF0 (but before the 0xF7), then the SysEx message would be considered aborted at that point. It should be noted that, like all System Common messages, SysEx cancels any current running status. In other words, the next Voice Category message (after the SysEx message) must begin with a Status. Here are the assigned Manufacturer ID numbers:
Sequential Circuits 1
Big Briar 2
Octave / Plateau 3
Moog 4
Passport Designs 5
Lexicon 6
Kurzweil 7
Fender 8
Gulbransen 9
Delta Labs 0x0A
Sound Comp. 0x0B
General Electro 0x0C
Techmar 0x0D
Matthews Research 0x0E
Oberheim 0x10
PAIA 0x11

Simmons 0x12
DigiDesign 0x13
Fairlight 0x14
Peavey 0x1B
JL Cooper 0x15
Lowery 0x16
Lin 0x17
Emu 0x18
Bon Tempi 0x20
S.I.E.L. 0x21
SyntheAxe 0x23
Hohner 0x24
Crumar 0x25
Solton 0x26
Jellinghaus Ms 0x27
CTS 0x28
PPG 0x29
Elka 0x2F
Cheetah 0x36
Waldorf 0x3E
Kawai 0x40
Roland 0x41
Korg 0x42
Yamaha 0x43
Casio 0x44
Akai 0x45
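Putting the framing rules and the ID filtering together, here's a hedged C sketch of a SysEx receiver (the state variables and the store_sysex_byte routine are mine, and MY_ID stands for whichever Manufacturer's ID your device answers to):

#define MY_ID 0x41 /* example: a Roland device; purely an assumption here */

static int in_sysex = 0; /* nonzero while between 0xF0 and the closing 0xF7 */
static int byte_num = 0; /* position within the current SysEx message */

extern void store_sysex_byte(unsigned char b); /* hypothetical data handler */

void receive_byte(unsigned char b)
{
    if (b >= 0xF8) return;                    /* Realtime messages may interleave; ignore them here */
    if (b == 0xF0) { in_sysex = 1; byte_num = 0; return; }
    if (b & 0x80)  { in_sysex = 0; return; }  /* 0xF7, or any other status, ends (aborts) SysEx */
    if (!in_sysex) return;

    if (byte_num++ == 0 && b != MY_ID) {
        in_sysex = 0;                         /* wrong Manufacturer's ID: skip the rest */
        return;
    }
    store_sysex_byte(b);                      /* a data byte for our device-specific parser */
}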

The following 2 IDs are dedicated to Universal SysEx messages (ie, SysEx messages that products from numerous manufacturers may want to utilize). Since SysEx is the only defined MIDI message that can have a length longer than 3 bytes, it became a candidate for transmitting long strings of data. For example, many manufacturers make digital samplers. It became desirable for manufacturers to allow exchange of waveform data between each other's products. So, a standard protocol was developed called the MIDI Sample Dump Standard (SDS). Of course, since waveforms typically entail large amounts of data, SysEx messages (ie, containing over a hundred bytes each) seemed to be the most suitable vehicle to transmit the data over MIDI. But, it was decided not to use a particular manufacturer's ID. So, a universal ID was created. There's a universal ID meant for realtime messages (ie, ones that need to be responded to immediately), and one for non-realtime (ie, ones which can be processed when the device gets around to it).
RealTime ID 0x7F
Non-RealTime ID 0x7E

A general template for these two IDs was defined. After the ID byte is a SysEx Channel byte. This could be from 0 to 127 for a total of 128 SysEx channels. So, although "normal" SysEx messages have no MIDI channel like Voice Category messages do, a Universal SysEx message can be sent on one of 128 SysEx channels. This allows the musician to set various devices to ignore certain Universal SysEx messages (ie, if the device allows the musician to set its Base SysEx Channel. Most devices just set their Base SysEx channel to the same number as the Base Channel for Voice Category messages). On the other hand, a SysEx channel of 127 is actually meant to tell the device to "disregard the channel and pay attention to this message regardless". After the SysEx channel, the next two bytes are Sub IDs which tell what the SysEx is for. Microsoft has an oddball manufacturer ID. It consists of 3 bytes, rather than 1 byte, and it is 0x00 0x00 0x41. Note that the first byte is 0. The MMA sort of reserved the value 0 for the day when it would run out of the maximum 128 IDs available with using a single byte, and would need more bytes to represent

IDs for new manufacturers. Besides the SDS messages (covered later in the SDS section), there are two other defined Universal Messages:

GM System Enable/Disable
This enables or disables the GM Sound module or GM Patch Set in a device. Some devices have built-in GM modules or GM Patch Sets in addition to non-GM Patch Sets or non-GM modes of operation. When GM is enabled, it replaces any non-GM Patch Set or non-GM mode. This allows a device to have modes or Patch Sets that go beyond the limits of GM, and yet still have the capability to be switched into a GM-compliant mode when desirable.
0xF0  SysEx
0x7E  Non-Realtime
0x7F  The SysEx channel. Could be from 0x00 to 0x7F. Here we set it to "disregard channel".
0x09  Sub-ID -- GM System Enable/Disable
0xNN  Sub-ID2 -- NN=00 for disable, NN=01 for enable
0xF7  End of SysEx

Master Volume
This adjusts a device's master volume. Remember that in a multitimbral device, the Volume controller messages are used to control the volumes of the individual Parts. So, we need some message to control Master Volume. Here it is.
0xF0  SysEx
0x7F  Realtime
0x7F  The SysEx channel. Could be from 0x00 to 0x7F. Here we set it to "disregard channel".
0x04  Sub-ID -- Device Control
0x01  Sub-ID2 -- Master Volume
0xLL  Bits 0 to 6 of a 14-bit volume
0xMM  Bits 7 to 13 of a 14-bit volume
0xF7  End of SysEx
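Filling in that template from C is straightforward. A sketch (the function name and buffer layout are mine; the byte values come straight from the table above):

void build_master_volume(unsigned short volume14, unsigned char msg[8])
{
    msg[0] = 0xF0;                                    /* SysEx */
    msg[1] = 0x7F;                                    /* Realtime Universal ID */
    msg[2] = 0x7F;                                    /* SysEx channel: "disregard channel" */
    msg[3] = 0x04;                                    /* Sub-ID -- Device Control */
    msg[4] = 0x01;                                    /* Sub-ID2 -- Master Volume */
    msg[5] = (unsigned char)(volume14 & 0x7F);        /* 0xLL: bits 0 to 6 */
    msg[6] = (unsigned char)((volume14 >> 7) & 0x7F); /* 0xMM: bits 7 to 13 */
    msg[7] = 0xF7;                                    /* End of SysEx */
}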

A manufacturer must get a registered ID from the IMA if he wants to define his own SysEx messages, or use the following:
Educational Use 0x7D

This ID is for educational or development use only, and should never appear in a commercial design. On the other hand, it is permissible to use another manufacturer's defined SysEx message(s) in your own products. For example, if the Roland S-770 has a particular SysEx message that you could use verbatim in your own design, you're free to use that message (and therefore the Roland ID in it). But, you're not allowed to transmit a mutated version of any Roland message with a Roland ID. Only Roland can develop new messages that contain a Roland ID.


MTC Quarter Frame Message


Category: System Common
Purpose
Some master device that controls sequence playback sends this timing message to keep a slave device in sync with the master.
Status
0xF1
Data
One data byte follows the Status. It's the time code value, a number from 0 to 127.
Errata
This is one of the MIDI Time Code (MTC) series of messages. See MIDI Time Code.

Song Position Pointer


Category: System Common
Purpose
Some master device that controls sequence playback sends this message to force a slave device to cue the playback to a certain point in the song/sequence. In other words, this message sets the device's "Song Position". This message doesn't actually start the playback. It just sets up the device to be "ready to play" at a particular point in the song.
Status
0xF2
Data
Two data bytes follow the status. Just like with the Pitch Wheel, these two bytes are combined into a 14-bit value. (See Pitch Wheel remarks.) This 14-bit value is the MIDI Beat upon which to start the song. Songs are always assumed to start on a MIDI Beat of 0. Each MIDI Beat spans 6 MIDI Clocks. In other words, each MIDI Beat is a 16th note (since there are 24 MIDI Clocks in a quarter note).
Errata
Example: If a Song Position value of 8 is received, then a sequencer (or drum box) should cue playback to the third quarter note of the song. (8 MIDI Beats * 6 MIDI Clocks per MIDI Beat = 48 MIDI Clocks. Since there are 24 MIDI Clocks in a quarter note, the first quarter note occurs on a time of 0 MIDI Clocks, the second quarter note occurs upon the 24th MIDI Clock, and the third quarter note occurs on the 48th MIDI Clock.) Often, the slave device has its playback tempo synced to the master via MIDI Clock. See Syncing Sequence Playback.
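The arithmetic in that example is simple enough to capture in C (a trivial sketch, names mine):

/* Convert a Song Position (in MIDI Beats) to elapsed MIDI Clocks. */
unsigned long song_position_to_clocks(unsigned int midi_beats)
{
    return (unsigned long)midi_beats * 6; /* 6 MIDI Clocks per MIDI Beat (a 16th note) */
}
/* song_position_to_clocks(8) == 48, ie, cue to the third quarter note. */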

Song Select
Category: System Common
Purpose
Some master device that controls sequence playback sends this message to force a slave device to set a certain song for playback (ie, sequencing).
Status
0xF3
Data
One data byte follows the status. It's the song number, a value from 0 to 127.
Errata
Most devices display "song numbers" starting from 1 instead of 0. Some devices even use different labeling systems for songs, ie, bank 1, number 1 song. But, a Song Select message with song number 0 should always select the first song. When a device receives a Song Select message, it should cue the new song at MIDI Beat 0 (ie, the very beginning of the song), unless a subsequent Song Position Pointer message is received for a different MIDI Beat. In other words, the device resets its "Song Position" to 0. Often, the slave device has its playback tempo synced to the master via MIDI Clock. See Syncing Sequence Playback.

Tune Request
Category: System Common
Purpose
The device receiving this should perform a tuning calibration.
Status
0xF6
Data
None
Errata
Mostly used for sound modules with analog oscillator circuits.

MIDI Clock
Category: System Realtime
Purpose
Some master device that controls sequence playback sends this timing message to keep a slave device in sync with the master. A MIDI Clock message is sent at regular intervals (based upon the master's Tempo) in order to accomplish this.
Status
0xF8
Data
None
Errata
There are 24 MIDI Clocks in every quarter note. (12 MIDI Clocks in an eighth note, 6 MIDI Clocks in a 16th, etc.) Therefore, when a slave device counts down the receipt of 24 MIDI Clock messages, it knows that one quarter note has passed. When the slave counts off another 24 MIDI Clock messages, it knows that another quarter note has passed. Etc. Of course, the rate that the master sends these messages is based upon the master's tempo. For example, for a tempo of 120 BPM (ie, there are 120 quarter notes in every minute), the master sends a MIDI Clock every 20833 microseconds. (There are 1,000,000 microseconds in a second, and therefore 60,000,000 microseconds in a minute. At a tempo of 120 BPM, there are 120 quarter notes per minute. There are 24 MIDI Clocks in each quarter note. Therefore, there should be 24 * 120 MIDI Clocks per minute. So, each MIDI Clock is sent at a rate of 60,000,000/(24 * 120) microseconds.) A slave device might receive (from a master device) a Song Select message to cue a specific song to play (out of several songs), a Song Position Pointer message to cue that song to start on a particular beat, a MIDI Continue in order to start playback from that beat, periodic MIDI Clocks in order to keep the playback in sync with the master, and eventually a MIDI Stop to halt playback. See Syncing Sequence Playback.
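That timing calculation generalizes to any tempo; a one-line C sketch (names mine):

/* Microseconds between successive MIDI Clock messages at a given tempo. */
double midi_clock_period_us(double bpm)
{
    return 60000000.0 / (24.0 * bpm); /* 24 MIDI Clocks per quarter note */
}
/* midi_clock_period_us(120.0) is roughly 20833 microseconds, matching the example above. */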

Tick
Category: System Realtime
Purpose
Some master device that controls sequence playback sends this timing message to keep a slave device in sync with the master. A MIDI Tick message is sent at regular intervals of one message every 10 milliseconds.
Status
0xF9
Data
None
Errata
While a master device's "clock" is playing back, it will send a continuous stream of MIDI Tick events at a rate of one per every 10 milliseconds. A slave device might receive (from a master device) a Song Select message to cue a specific song to play (out of several songs), a Song Position Pointer message to cue that song to start on a particular beat, a MIDI Continue in order to start playback from that beat, periodic MIDI Ticks in order to keep the playback in sync with the master, and eventually a MIDI Stop to halt playback. See Syncing Sequence Playback.

MIDI Start
Category: System Realtime
Purpose
Some master device that controls sequence playback sends this message to make a slave device start playback of some song/sequence from the beginning (ie, MIDI Beat 0).
Status
0xFA
Data
None
Errata
A MIDI Start always begins playback at MIDI Beat 0 (ie, the very beginning of the song). So, when a slave device receives a MIDI Start, it automatically resets its "Song Position" to 0. If the device needs to start playback at some other point (either set by a previous Song Position Pointer message, or manually by the musician), then MIDI Continue is used instead of MIDI Start. Often, the slave device has its playback tempo synced to the master via MIDI Clock. See Syncing Sequence Playback.

MIDI Continue
Category: System Realtime
Purpose
Some master device that controls sequence playback sends this message to make a slave device resume playback from its current "Song Position". The current Song Position is the point where the song/sequence was previously stopped, or previously cued with a Song Position Pointer message.
Status
0xFB
Data
None
Errata
Often, the slave device has its playback tempo synced to the master via MIDI Clock. See Syncing Sequence Playback.

MIDI Stop
Category: System Realtime
Purpose
Some master device that controls sequence playback sends this message to make a slave device stop playback of a song/sequence.
Status
0xFC
Data
None
Errata
When a device receives a MIDI Stop, it should keep track of the point at which it stopped playback (ie, its stopped "Song Position"), in anticipation that a MIDI Continue might be received next. Often, the slave device has its playback tempo synced to the master via MIDI Clock. See Syncing Sequence Playback.

Active Sense
Category: System Realtime
Purpose
A device sends out an Active Sense message (at least once) every 300 milliseconds if there has been no other activity on the MIDI buss, to let other devices know that there is still a good MIDI connection between the devices.
Status
0xFE
Data
None
Errata
When a device receives an Active Sense message (from some other device), it should expect to receive additional Active Sense messages at a rate of one approximately every 300 milliseconds, whenever there is no activity on the MIDI buss during that time. (Of course, if there are other MIDI messages happening at least once every 300 mSec, then Active Sense won't ever be sent. An Active Sense only gets sent if there is a 300 mSec "moment of silence" on the MIDI buss. You could say that a device that sends out Active Sense "gets nervous" if it has nothing to do for over 300 mSec, and so sends an Active Sense just for the sake of reassuring other devices that this device still exists.) If a message is missed (ie, neither 0xFE nor any other MIDI message is received for over 300 mSec), then a device assumes that the MIDI connection is broken, and turns off all of its playing notes (which were turned on by incoming Note On messages, versus ones played on the local keyboard by a musician). Of course, if a device never receives an Active Sense message to begin with, it should not expect them at all. So, it takes one "nervous" device to start the process by initially sending out an Active Sense message to the other connected devices during a 300 mSec moment of silence on the MIDI buss. This is an optional feature that only a few devices implement (ie, notably Roland gear). Many devices don't ever initiate this minimal "safety" feature. Here's a flowchart for implementing Active Sense. It assumes that the device has a hardware timer that ticks once every millisecond. A variable named Timeout is used to count the passing milliseconds. Another variable named Flag is set when the device receives an Active Sense message from another device, and therefore expects to receive further Active Sense messages.
The logic for active sense detection
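The flowchart image hasn't survived in this text version, but here is a C sketch of the logic it describes, using the Timeout and Flag variables mentioned above (the once-per-millisecond timer hook and the note-silencing routine are hypothetical):

static volatile unsigned int Timeout; /* milliseconds since the last MIDI input */
static volatile int Flag;             /* set once an Active Sense has been received */

extern void silence_midi_notes(void); /* hypothetical: turn off notes begun by MIDI input */

/* Called once per millisecond by a hardware timer. */
void timer_tick(void)
{
    if (Flag && ++Timeout > 300) {
        silence_midi_notes(); /* no message for over 300 mSec: connection presumed broken */
        Flag = 0;             /* stop expecting Active Sense until it reappears */
        Timeout = 0;
    }
}

/* Called for every byte received at MIDI IN. */
void midi_in_byte(unsigned char b)
{
    Timeout = 0;              /* any activity restarts the 300 mSec countdown */
    if (b == 0xFE)
        Flag = 1;             /* from now on, expect one at least every 300 mSec */
}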

Reset
Category: System Realtime
Purpose
The device receiving this should reset itself to a default state, usually the same state as when the device was turned on. Often, this means to turn off all playing notes, turn the local keyboard on, clear running status, set Song Position to 0, stop any timed playback (of a sequence), and perform any other standard setup unique to the device. Also, a device may choose to kick itself into Omni On, Poly mode if it has no facilities for allowing the musician to store a default mode.
Status
0xFF
Data
None
Errata
A Reset message should never be sent automatically by any MIDI device. Rather, this should only be sent when a musician specifically tells a device to do so.

Controller Numbers

A Controller message has a Status byte of 0xB0 to 0xBF depending upon the MIDI channel. There are two more data bytes. The first data byte is the Controller Number. There are 128 possible controller numbers (ie, 0 to 127). Some numbers are defined for specific purposes. Others are undefined, and reserved for future use. The second byte is the "value" that the controller is to be set to. Most controllers implement an effect even while the MIDI device is generating sound, and the effect will be immediately noticeable. In other words, MIDI controller messages are meant to implement various effects by a musician while he's operating the device.

If the device is a MultiTimbral module, then each one of its Parts may respond differently (or not at all) to a particular controller number. Each Part usually has its own setting for every controller number, and the Part responds only to controller messages on the same channel as that to which the Part is assigned. So, controller messages for one Part do not affect the sound of another Part even while that other Part is playing.

Some controllers are continuous controllers, which simply means that their value can be set to any value within the range from 0 to 16,383 (for 14-bit coarse/fine resolution) or 0 to 127 (for 7-bit, coarse resolution). Other controllers are switches whose state may be either on or off. Such controllers will usually generate only one of two values; 0 for off, and 127 for on. But, a device should be able to respond to any received switch value from 0 to 127. If the device implements only an "on" and "off" state, then it should regard values of 0 to 63 as off, and any value of 64 to 127 as on.

Many (continuous) controller numbers are coarse adjustments, and have a respective fine adjustment controller number. For example, controller #1 is the coarse adjustment for Modulation Wheel. Using this controller number in a message, a device's Modulation Wheel can be adjusted in large (coarse) increments (ie, 128 steps). If finer adjustment (from a coarse setting) needs to be made, then controller #33 is the fine adjust for Modulation Wheel. For controllers that have coarse/fine pairs of numbers, there is thus a 14-bit resolution to the range. In other words, the Modulation Wheel can be set from 0x0000 to 0x3FFF (ie, one of 16,384 values). For this 14-bit value, bits 7 to 13 are the coarse adjust, and bits 0 to 6 are the fine adjust. For example, to set the Modulation Wheel to 0x2005, first you have to break it up into 2 bytes (as is done with Pitch Wheel messages). Take bits 0 to 6 and put them in a byte that is the fine adjust. Take bits 7 to 13 and put them right-justified in a byte that is the coarse adjust. Assuming a MIDI channel of 0, here's the coarse and fine Mod Wheel controller messages that a device would receive (coarse adjust first):
0xB0 0x01 0x40   Controller on chan 0, Mod Wheel coarse, bits 7 to 13 of the 14-bit value, right-justified (with high bit clear).
0xB0 0x21 0x05   Controller on chan 0, Mod Wheel fine (#33 is 0x21 hex), bits 0 to 6 of the 14-bit value (with high bit clear).
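To make that byte-splitting concrete, here's a minimal C sketch (an illustration only; the function name and the use of printf in place of a real MIDI output are my own assumptions) that builds the coarse and fine messages from a 14-bit value:

#include <stdio.h>

/* Emit the coarse and fine Controller messages for a 14-bit value.
   'chan' is the MIDI channel (0 to 15); 'coarse_ctl' is the coarse
   controller number, whose fine counterpart is coarse_ctl + 32. */
void send_14bit_controller(unsigned char chan, unsigned char coarse_ctl,
                           unsigned short value)
{
    unsigned char coarse = (value >> 7) & 0x7F; /* bits 7 to 13, right-justified */
    unsigned char fine   = value & 0x7F;        /* bits 0 to 6 */

    printf("0x%02X 0x%02X 0x%02X\n", 0xB0 | chan, coarse_ctl, coarse);
    printf("0x%02X 0x%02X 0x%02X\n", 0xB0 | chan, coarse_ctl + 32, fine);
}

int main(void)
{
    /* Mod Wheel (controller #1) to 0x2005 on channel 0, as in the text;
       prints 0xB0 0x01 0x40 followed by 0xB0 0x21 0x05 */
    send_14bit_controller(0, 1, 0x2005);
    return 0;
}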

Some devices do not implement fine adjust counterparts to coarse controllers. For example, some devices do not implement controller #33 for Mod Wheel fine adjust. Instead, the device only recognizes and responds to the Mod Wheel coarse controller number (#1). It is perfectly acceptable for a device to respond only to the coarse adjustment for a controller if the device desires 7-bit (rather than 14-bit) resolution; the device should then ignore that controller's respective fine adjust message. By the same token, if it's only desirable to make fine adjustments to the Mod Wheel without changing its current coarse setting (or vice versa), a device can be sent only a controller #33 message without a preceding controller #1 message (or vice versa). Thus, if a device can respond to both coarse and fine adjustments for a particular controller (ie, implements the full 14-bit resolution), it should be able to deal with either the coarse or fine controller message being sent without its counterpart following. The same holds true for other continuous (ie, coarse/fine pairs of) controllers.

Here's a list of the defined controllers. To the left is the controller number (ie, how the MIDI Controller message refers to a particular controller), and on the right is its name (ie, how a human might refer to the controller). A description of each controller follows the list. Each description shows the controller name and number, what the range is for the third byte of the message (ie, the "value" data byte), and what the controller does. For controllers that have separate coarse and fine settings, both controller numbers are shown.

MIDI devices should use these controller numbers for their defined purposes, as much as possible. For example, if a device is able to respond to the Volume controller (coarse adjustment), then it should expect that to be controller number 7. It should not use Portamento Time controller messages to adjust volume; that wouldn't make any sense. Other controllers, such as Foot Pedal, are more general purpose. That pedal could be controlling the tempo on a drum box, for example. But generally, the Foot Pedal shouldn't be used for purposes that other controllers are already dedicated to, such as adjusting Pan position. If there is no defined controller number for a particular, needed purpose, a device can use the General Purpose Sliders and Buttons, or NRPN for device-specific purposes. The device should use controller numbers 0 to 31 for coarse adjustments, and controller numbers 32 to 63 for the respective fine adjustments.

Defined Controllers
0    Bank Select
1    Modulation Wheel (coarse)
2    Breath controller (coarse)
4    Foot Pedal (coarse)
5    Portamento Time (coarse)
6    Data Entry (coarse)
7    Volume (coarse)
8    Balance (coarse)
10   Pan position (coarse)
11   Expression (coarse)
12   Effect Control 1 (coarse)
13   Effect Control 2 (coarse)
16   General Purpose Slider 1
17   General Purpose Slider 2
18   General Purpose Slider 3
19   General Purpose Slider 4
32   Bank Select (fine)
33   Modulation Wheel (fine)
34   Breath controller (fine)
36   Foot Pedal (fine)
37   Portamento Time (fine)
38   Data Entry (fine)
39   Volume (fine)
40   Balance (fine)
42   Pan position (fine)
43   Expression (fine)
44   Effect Control 1 (fine)
45   Effect Control 2 (fine)

64   Hold Pedal (on/off)
65   Portamento (on/off)
66   Sostenuto Pedal (on/off)
67   Soft Pedal (on/off)
68   Legato Pedal (on/off)
69   Hold 2 Pedal (on/off)
70   Sound Variation
71   Sound Timbre
72   Sound Release Time
73   Sound Attack Time
74   Sound Brightness
75   Sound Control 6
76   Sound Control 7
77   Sound Control 8
78   Sound Control 9
79   Sound Control 10
80   General Purpose Button 1 (on/off)
81   General Purpose Button 2 (on/off)
82   General Purpose Button 3 (on/off)
83   General Purpose Button 4 (on/off)
91   Effects Level
92   Tremolo Level
93   Chorus Level
94   Celeste Level
95   Phaser Level
96   Data Button increment
97   Data Button decrement
98   Non-registered Parameter (fine)
99   Non-registered Parameter (coarse)
100  Registered Parameter (fine)
101  Registered Parameter (coarse)
120  All Sound Off
121  All Controllers Off
122  Local Keyboard (on/off)
123  All Notes Off
124  Omni Mode Off
125  Omni Mode On
126  Mono Operation
127  Poly Operation

Bank Select
Number: 0 (coarse) 32 (fine)
Affects: Some MIDI devices have more than 128 Programs (ie, Patches, Instruments, Presets, etc). MIDI Program Change messages only support switching between 128 programs, so the Bank Select controller (sometimes called Bank Switch) is used to allow switching between groups of 128 programs. For example, let's say that a device has 512 Programs. It may divide these into 4 banks of 128 programs apiece. So, if you want program #129, that would actually be the first program within the second bank. You would send a Bank Select controller to switch to the second bank (ie, the first bank is #0), and then follow with a Program Change to select the first program in this bank. If a MultiTimbral device, then each Part usually can be set to its own Bank/Program.

On MultiTimbral devices that have a Drum Part, Bank Select is sometimes used to switch between "Drum Kits".
NOTE: When a Bank Select is received, the MIDI module doesn't actually change to a patch in the new bank. Rather, the Bank Select value is simply stored by the module without changing the current patch. Whenever a subsequent Program Change is received, the stored Bank Select is then used to switch to the specified patch in the new bank. For this reason, Bank Select must be sent before a Program Change when you want to change to a patch in a different bank. (Of course, if you simply wish to change to another patch in the same bank, there is no need to send a Bank Select first.)
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF.
NOTE: Most devices use the coarse adjust (#0) alone to switch banks, since most devices don't have more than 128 banks (of 128 patches each).
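As a sketch of that required ordering (the byte values follow from the 512-program example above; printing in place of a real MIDI output is my own simplification), selecting "program #129" looks like this:

#include <stdio.h>

/* Select the first patch of the second bank (ie, "program #129" of a
   512-program device) on MIDI channel 0. The Bank Select controller
   message must precede the Program Change. */
int main(void)
{
    unsigned char msgs[] = {
        0xB0, 0x00, 0x01, /* Controller/chan 0, Bank Select coarse, bank 1 (the second bank) */
        0xC0, 0x00        /* Program Change/chan 0, program 0 (the first program in the bank) */
    };
    for (int i = 0; i < (int)sizeof msgs; i++)
        printf("0x%02X ", msgs[i]);
    printf("\n");
    return 0;
}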

MOD Wheel
Number: 1 (coarse) 33 (fine)
Affects: Sets the MOD Wheel to a particular value. Usually, the MOD Wheel introduces some sort of (LFO) vibrato effect. If a MultiTimbral device, then each Part usually has its own MOD Wheel setting.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is no modulation effect.

Breath Controller
Number: 2 (coarse) 34 (fine)
Affects: Whatever the musician sets this controller to affect. Often, this is used to control a parameter such as one that Aftertouch can also control. After all, breath control is a wind player's way of varying pressure. If a MultiTimbral device, then each Part usually has its own Breath Controller setting.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is minimum breath pressure.

Foot Pedal

Number: 4 (coarse) 36 (fine)
Affects: Whatever the musician sets this controller to affect. Often, this is used to control a parameter such as one that Aftertouch may control. This foot pedal is a continuous controller (ie, a potentiometer). If a MultiTimbral device, then each Part usually has its own Foot Pedal value.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is minimum effect.

Portamento Time
Number: 5 (coarse) 37 (fine)
Affects: The rate at which portamento slides the pitch between 2 notes. If a MultiTimbral device, then each Part usually has its own Portamento Time.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is the slowest rate.

Data Entry Slider


Number: 6 (coarse) 38 (fine)
Affects: The value of some Registered or Non-Registered Parameter. Which parameter is affected depends upon a preceding RPN or NRPN message (which itself identifies the parameter's number). On some devices, this slider may not be used in conjunction with RPN or NRPN messages. Instead, the musician can set the slider to control a single parameter directly, often a parameter such as one that Aftertouch can control. If a MultiTimbral device, then each Part usually has its own RPN and NRPN settings, and its own Data Entry slider setting.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is minimum effect.

Volume
Number: 7 (coarse) 39 (fine)

Affects: The device's main volume level. If a MultiTimbral device, then each Part has its own volume. In this case, a device's master volume may be controlled by another method, such as the Universal SysEx Master Volume message, or it may take its volume from one of the Parts, or be controlled by a General Purpose Slider controller. The Expression controller also may affect the volume.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is no volume at all.
NOTE: Most devices ignore the fine adjust (#39) for Volume and just implement the coarse adjust (#7), because 14-bit resolution isn't needed for this. In this case, maximum is 127 and off is 0.

Balance
Number: 8 (coarse) 40 (fine)
Affects: The device's stereo balance (assuming that the device has stereo audio outputs). If a MultiTimbral device, then each Part usually has its own Balance. This is generally when Balance becomes useful, because then you can use the Pan, Volume, and Balance controllers to internally mix all of the Parts to the device's stereo outputs. Typically, Balance would be used on a Part that has stereo elements (where you wish to adjust the volume of the stereo elements without changing their pan positions), whereas Pan is more appropriate for a Part that is strictly a "mono instrument".
Value Range: 14-bit coarse/fine resolution, 16,384 possible settings, 0x0000 to 0x3FFF, where 0x2000 is center balance, 0x0000 emphasizes the left elements mostly, and 0x3FFF emphasizes the right elements mostly. Some devices only respond to coarse adjust (128 settings), where 64 is center, 0 is leftmost emphasis, and 127 is rightmost emphasis.
NOTE: Most devices ignore the fine adjust (#40) for Balance and just implement the coarse adjust (#8), because 14-bit resolution isn't needed for this.

Pan
Number: 10 (coarse) 42 (fine)
Affects: Where within the stereo field the device's sound will be placed (assuming that it has stereo audio outputs). If a MultiTimbral device, then each Part usually has its own pan position. This is generally when Pan becomes useful, because then you can use the Pan, Volume, and Balance controllers to internally mix all of the Parts to the device's stereo outputs.
Value Range: 14-bit coarse/fine resolution, 16,384 possible positions, 0x0000 to 0x3FFF, where 0x2000 is center position, 0x0000 is hard left, and 0x3FFF is hard right. Some devices only respond to coarse adjust (128 positions), where 64 is center, 0 is hard left, and 127 is hard right.
NOTE: Most devices ignore the fine adjust (#42) for Pan and just implement the coarse adjust (#10), because 14-bit resolution isn't needed for this.

Expression
Number: 11 (coarse) 43 (fine)
Affects: This is a percentage of Volume (ie, as set by the Main Volume controller). In other words, Expression divides the current volume into 16,384 steps (or 128 if 7-bit instead of 14-bit resolution is used). The Volume controller is used to set the overall volume of the entire musical part, whereas Expression is used for doing crescendos and decrescendos. By having both a master Volume and a sub-Volume (ie, Expression), it's possible to do crescendos and decrescendos without having to do algebraic calculations to maintain the relative balance between instruments. When Expression is at 100% (ie, the maximum of 0x3FFF), the volume represents the true setting of the Main Volume controller. Lower values of Expression begin to subtract from the volume. When Expression is 0% (ie, 0x0000), the volume is off. When Expression is 50% (ie, 0x1FFF), the volume is cut in half.

Here's how Expression is typically used. Let's assume only the MSB is used (ie, #11) and therefore only 128 steps are possible. Set the Expression for every MIDI channel to one initial value, for example 100. This gives you some leeway to increase the expression percentage (ie, up to 127, which is 100%) or decrease it. Now, set the channel (ie, instrument) "mix" using Main Volume controllers. Maybe you'll want the drums louder than the piano, so the former has a Volume controller value of 110 whereas the latter has a value of 90, for example. Now if, at some point, you want to drop the volumes of both instruments to half of their current Main Volumes, then send Expression values of 64 (ie, 64 represents a 50% volume percentage, since 64 is half of 128 steps). This would result in the drums now having an effective volume of 55 and the piano having an effective volume of 45. If you wanted to drop the volumes to 25% of their current Main Volumes, then send Expression values of 32. This would result in the drums now having an effective volume of approximately 27 and the piano having an effective volume of approximately 22. And yet, you haven't had to change their Main Volumes, and therefore still maintain the relative mix between the two instruments.

So think of the Main Volume controllers as being the individual faders upon a mixing console. You set up the instrumental balance (ie, mix) using these values. Then you use the Expression controllers as "group faders", whereby you can increase or decrease the volumes of one or more tracks without upsetting the relative balance between them. If a MultiTimbral device, then each Part usually has its own Expression level.

Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is minimum effect.

NOTE: Most devices ignore the fine adjust (#43) for Expression and just implement the coarse adjust (#11), because 14-bit resolution isn't needed for this.
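The percentage arithmetic above reduces to a single multiplication; here's a small C sketch (my own illustration, using the 7-bit values from the example, with 127 taken as 100%):

#include <stdio.h>

/* Effective 7-bit volume: Main Volume scaled by Expression, where an
   Expression value of 127 means 100% of the Main Volume setting. */
unsigned char effective_volume(unsigned char volume, unsigned char expression)
{
    return (unsigned char)(((unsigned int)volume * expression) / 127);
}

int main(void)
{
    /* Drums at Main Volume 110 and piano at 90, both dropped to
       50% with an Expression value of 64 */
    printf("drums: %u\n", effective_volume(110, 64)); /* prints 55 */
    printf("piano: %u\n", effective_volume(90, 64));  /* prints 45 */
    return 0;
}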

Effect Control 1
Number: 12 (coarse) 44 (fine)
Affects: This can control any parameter relating to an effects device, such as the Reverb Decay Time for a reverb unit built into a GM sound module. If a MultiTimbral device, then each Part usually has its own Effect Control 1.
NOTE: There are separate controllers for setting the Levels (ie, volumes) of Reverb, Chorus, Phase Shift, and other effects.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is minimum effect.

Effect Control 2
Number: 13 (coarse) 45 (fine)
Affects: This can control any parameter relating to an effects device, such as the Reverb Decay Time for a reverb unit built into a GM sound module. If a MultiTimbral device, then each Part usually has its own Effect Control 2.
NOTE: There are separate controllers for setting the Levels (ie, volumes) of Reverb, Chorus, Phase Shift, and other effects.
Value Range: 14-bit coarse/fine resolution, 0x0000 to 0x3FFF, where 0 is minimum effect.

General Purpose Slider


Number: 16, 17, 18, 19
Affects: Whatever the musician sets these controllers to affect. There are 4 General Purpose Sliders, with the above controller numbers. Often, these are used to control parameters such as those that Aftertouch can control. If a MultiTimbral device, then each Part usually has its own responses to the 4 General Purpose Sliders. Note that these sliders don't have a fine adjustment.
Value Range: 0x00 to 0x7F, where 0 is minimum effect.

Hold Pedal
Number: 64
Affects: When on, this holds (ie, sustains) notes that are playing, even if the musician releases the notes (ie, the Note Off effect is postponed until the musician switches the Hold Pedal off). If a MultiTimbral device, then each Part usually has its own Hold Pedal setting.
NOTE: When on, this also postpones any All Notes Off controller message on the same channel.
Value Range: 0 to 63 is off; 64 to 127 is on.
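All of the on/off pedal and button controllers below share this value convention, so a receiver can interpret any of them with a single test; a minimal C sketch:

#include <stdio.h>

/* Per the spec's recommendation, a switch controller's value byte of
   0 to 63 means off, and 64 to 127 means on. */
int switch_is_on(unsigned char value)
{
    return value >= 64;
}

int main(void)
{
    printf("%d %d %d\n", switch_is_on(0), switch_is_on(63), switch_is_on(127)); /* 0 0 1 */
    return 0;
}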

Portamento
Number: 65
Affects: Whether the portamento effect is on or off. If a MultiTimbral device, then each Part usually has its own portamento on/off setting.
NOTE: There is another controller to set the portamento time.
Value Range: 0 to 63 is off; 64 to 127 is on.

Sustenuto
Number: 66
Affects: Like the Hold Pedal controller, except this only sustains notes that are already on (ie, the device has received Note On messages, but the respective Note Offs haven't yet arrived) when the pedal is turned on. After the pedal is on, it continues to hold these initial notes all of the while that the pedal is on, but during that time all other arriving Note Ons are not held. So, this pedal implements a "chord hold" for the notes that are sounding when the pedal is turned on. If a MultiTimbral device, then each Part usually has its own Sostenuto setting.
NOTE: When on, this also postpones any All Notes Off controller message on the same channel for those notes being held.
Value Range: 0 to 63 is off; 64 to 127 is on.

Soft Pedal
Number: 67
Affects: When on, this lowers the volume of any notes played. If a MultiTimbral device, then each Part usually has its own Soft Pedal setting.
Value Range: 0 to 63 is off; 64 to 127 is on.

Legato Pedal
Number: 68
Affects: When on, this causes a legato effect between notes, which is usually achieved by skipping the attack portion of the VCA's envelope. Use of this controller allows a keyboard player to better simulate the phrasing of wind and brass players, who often play several notes with a single tonguing, or to simulate guitar pull-offs and hammer-ons (ie, where secondary notes are not picked). If a MultiTimbral device, then each Part usually has its own Legato Pedal setting.
Value Range: 0 to 63 is off; 64 to 127 is on.

Hold 2 Pedal
Number: 69
Affects: When on, this lengthens the release time of the playing notes' VCA (ie, it makes a note take longer to fade out after it's released than when this pedal is off). Unlike the other Hold Pedal controller, this pedal doesn't permanently sustain the note's sound until the musician releases the pedal. Even though the note takes longer to fade out when this pedal is on, the note may eventually fade out despite the musician still holding down the key and this pedal. If a MultiTimbral device, then each Part usually has its own Hold 2 Pedal setting.
Value Range: 0 to 63 is off; 64 to 127 is on.

Sound Variation
Number: 70
Affects: Any parameter associated with the circuitry that produces sound. For example, if a device uses looped digital waveforms to create sound, this controller may adjust the sample rate (ie, playback speed), for a "tuning" control. If a MultiTimbral device, then each Part usually has its own patch with its respective VCA, VCF, tuning, sound sources, etc, parameters that can be adjusted with this controller.
NOTE: There are other controllers for adjusting VCA attack and release times, VCF cutoff frequency, and other generic sound parameters.
Value Range: 0 to 127, with 0 being the minimum setting.

Sound Timbre
Number: 71
Affects: Controls the (VCF) filter's envelope levels (ie, how the filter shapes the "brightness" of the sound over time). If a MultiTimbral device, then each Part usually has its own patch with its respective VCF cutoff frequency that can be adjusted with this controller.
NOTE: There are other controllers for adjusting VCA attack and release times, and other generic sound parameters.
Value Range: 0 to 127, with 0 being the minimum setting.


Sound Release Time


Number: 72
Affects: Controls the (VCA) amp's envelope release time, for control over how long it takes a sound to fade out. If a MultiTimbral device, then each Part usually has its own patch with its respective VCA envelope that can be adjusted with this controller.
NOTE: There are other controllers for adjusting VCA attack time, VCF cutoff frequency, and other generic sound parameters.
Value Range: 0 to 127, with 0 being the minimum setting.

Sound Attack Time


Number: 73
Affects: Controls the (VCA) amp's envelope attack time, for control over how long it takes a sound to fade in. If a MultiTimbral device, then each Part usually has its own patch with its respective VCA envelope that can be adjusted with this controller.
NOTE: There are other controllers for adjusting VCA release time, VCF cutoff frequency, and other generic sound parameters.
Value Range: 0 to 127, with 0 being the minimum setting.

Sound Brightness
Number: 74
Affects: Controls the (VCF) filter's cutoff frequency, for an overall "brightness" control. If a MultiTimbral device, then each Part usually has its own patch with its respective VCF cutoff frequency that can be adjusted with this controller.
NOTE: There are other controllers for adjusting VCA attack and release times, and other generic sound parameters.
Value Range: 0 to 127, with 0 being the minimum setting.

Sound Control 6, 7, 8, 9, 10
Number: 75, 76, 77, 78, 79
Affects: These 5 controllers can be used to adjust any parameters associated with the circuitry that produces sound. For example, if a device uses looped digital waveforms to create sound, one controller may adjust the sample rate (ie, playback speed), for a "tuning" control. If a MultiTimbral device, then each Part usually has its own patch with its respective VCA, VCF, tuning, sound sources, etc, parameters that can be adjusted with these controllers.
NOTE: There are other controllers for adjusting VCA attack and release times, and VCF cutoff frequency.
Value Range: 0 to 127, with 0 being the minimum setting.

General Purpose Button


Number: 80, 81, 82, 83
Affects: Whatever the musician sets these controllers to affect. There are 4 General Purpose Buttons, with the above controller numbers. These are either on or off, so they are often used to implement on/off functions, such as a Metronome on/off switch on a sequencer. If a MultiTimbral device, then each Part usually has its own responses to the 4 General Purpose Buttons.
Value Range: 0 to 63 is off; 64 to 127 is on.

Effects Level
Number: 91
Affects: The effects amount (ie, level) for the device. Often, this is the reverb or delay level. If a MultiTimbral device, then each Part usually has its own effects level.
Value Range: 0 to 127, with 0 being no effect applied at all.

Tremolo Level

Number: 92
Affects: The tremolo amount (ie, level) for the device. If a MultiTimbral device, then each Part usually has its own tremolo level.
Value Range: 0 to 127, with 0 being no tremolo applied at all.

Chorus Level
Number: 93
Affects: The chorus effect amount (ie, level) for the device. If a MultiTimbral device, then each Part usually has its own chorus level.
Value Range: 0 to 127, with 0 being no chorus effect applied at all.

Celeste Level
Number: 94
Affects: The celeste (detune) amount (ie, level) for the device. If a MultiTimbral device, then each Part usually has its own celeste level.
Value Range: 0 to 127, with 0 being no celeste effect applied at all.

Phaser Level

Number: 95
Affects: The phaser effect amount (ie, level) for the device. If a MultiTimbral device, then each Part usually has its own phaser level.
Value Range: 0 to 127, with 0 being no phaser effect applied at all.

Data Button increment


Number: 96
Affects: Causes a Data Button to increment (ie, increase by 1) its current value. Usually, this data button's value is being used to set some Registered or Non-Registered Parameter. Which RPN or NRPN parameter is being affected depends upon a preceding RPN or NRPN message (which itself identifies the parameter's number).
Value Range: The value byte isn't used and defaults to 0.

Data Button decrement


Number: 97
Affects: Causes a Data Button to decrement (ie, decrease by 1) its current value. Usually, this data button's value is being used to set some Registered or Non-Registered Parameter. Which RPN or NRPN parameter is being affected depends upon a preceding RPN or NRPN message (which itself identifies the parameter's number).
Value Range: The value byte isn't used and defaults to 0.

Registered Parameter Number (RPN)


Number: 101 (coarse) 100 (fine)
Affects: Which parameter the Data Button Increment, Data Button Decrement, or Data Entry controllers affect. Since RPN has a coarse/fine pair (14-bit), the number of parameters that can be registered is 16,384. That's a lot of parameters that a MIDI device could allow to be controlled over MIDI. It's up to the IMA to assign Registered Parameter Numbers to specific functions.
Value Range: 0 to 16,383, where each value stands for a different RPN. Here are the currently registered parameter numbers:

Pitch Bend Range (ie, Sensitivity)   0x0000
NOTE: The coarse adjustment (usually set via Data Entry 6) sets the range in semitones. The fine adjustment (usually set via Data Entry 38) sets the range in cents. For example, to adjust the pitch wheel range to go up/down 2 semitones and 4 cents:
B0 65 00   Controller/chan 0, RPN coarse (101), Pitch Bend Range
B0 64 00   Controller/chan 0, RPN fine (100), Pitch Bend Range
B0 06 02   Controller/chan 0, Data Entry coarse, +/- 2 semitones
B0 26 04   Controller/chan 0, Data Entry fine, +/- 4 cents

Master Fine Tuning (ie, in cents)   0x0001
NOTE: Both the coarse and fine adjustments together form a 14-bit value that sets the fine tuning, where 0x2000 is A440 tuning.

Master Coarse Tuning (in half-steps)   0x0002
NOTE: Setting the coarse adjustment adjusts the tuning in semitones, where 0x40 is A440 tuning. There is no need to set a fine adjustment.

RPN Reset   0x3FFF
NOTE: No coarse or fine adjustments are applicable. This is a "dummy" parameter.

Here's the way that you use RPN. First, you decide which RPN you wish to control. Let's say that we wish to set Master Fine Tuning on a device. That's RPN 0x0001. We need to send the device the RPN coarse and RPN fine controller messages in order to tell it to affect RPN 0x0001. So, we divide the 0x0001 into 2 bytes, the fine byte and the coarse byte. The fine byte contains bits 0 to 6 of the 14-bit value. The coarse byte contains bits 7 to 13 of the 14-bit value, right-justified. So, here are the RPN coarse and fine messages (assuming that the device is responding to MIDI channel 0):
B0 65 00   Controller/chan 0, RPN coarse (101), bits 7 to 13 of 0x0001, right-justified (with high bit clear)
B0 64 01   Controller/chan 0, RPN fine (100), bits 0 to 6 of 0x0001 (with high bit clear)

Now we've just told the device that any Data Button Increment, Data Button Decrement, or Data Entry controllers it receives should affect the Master Fine Tuning. Let's say that we wish to set this tuning to the 14-bit value 0x2000 (which happens to be centered tuning). We could use the Data Entry (coarse and fine) controller messages like so to send that 0x2000:

B0 06 40   Controller/chan 0, Data Entry coarse (6), bits 7 to 13 of 0x2000, right-justified (with high bit clear)
B0 26 00   Controller/chan 0, Data Entry fine (38), bits 0 to 6 of 0x2000 (with high bit clear)

As a final example, let's say that we wish to increment the Master Fine Tuning by one (ie, to 0x2001). We could use the Data Entry messages again. Or, we could use the Data Button Increment, which doesn't have a coarse/fine pair of controller numbers like Data Entry.
B0 60 00   Controller/chan 0, Data Button Increment (96), last byte is unused

Of course, if the device receives RPN messages for another parameter, then the Data Button Increment, Data Button Decrement, and Data Entry controllers will switch to adjusting that parameter. RPN 0x3FFF (reset) forces the Data Button increment, Data Button decrement, and Data Entry controllers to not adjust any RPN (ie, disables those buttons' adjustment of any RPN).
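Pulling the whole sequence together, here's a C sketch (the helper names are my own, with printing standing in for a real MIDI output) that selects an RPN, sets it via Data Entry, and then sends the RPN 0x3FFF reset so that stray Data Entry messages can't alter the parameter afterwards:

#include <stdio.h>

/* Print one Controller message (stand-in for a real MIDI output). */
static void ctl(unsigned char chan, unsigned char num, unsigned char val)
{
    printf("0x%02X 0x%02X 0x%02X\n", 0xB0 | chan, num, val);
}

/* Select Registered Parameter 'rpn', set it to the 14-bit 'value',
   then send RPN 0x3FFF so subsequent Data Entry messages adjust nothing. */
void set_rpn(unsigned char chan, unsigned short rpn, unsigned short value)
{
    ctl(chan, 101, (rpn >> 7) & 0x7F);   /* RPN coarse */
    ctl(chan, 100, rpn & 0x7F);          /* RPN fine */
    ctl(chan, 6,   (value >> 7) & 0x7F); /* Data Entry coarse */
    ctl(chan, 38,  value & 0x7F);        /* Data Entry fine */
    ctl(chan, 101, 0x7F);                /* RPN reset (0x3FFF), coarse half */
    ctl(chan, 100, 0x7F);                /* RPN reset (0x3FFF), fine half */
}

int main(void)
{
    /* Master Fine Tuning (RPN 0x0001) to centered tuning (0x2000) on channel 0 */
    set_rpn(0, 0x0001, 0x2000);
    return 0;
}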

Non-Registered Parameter Number (NRPN)


Number: 99 (coarse) 98 (fine)
Affects: Which parameter the Data Button Increment, Data Button Decrement, or Data Entry controllers affect. Since NRPN has a coarse/fine pair (14-bit), the number of possible parameters is 16,384. That's a lot of parameters that a MIDI device could allow to be controlled over MIDI. It's entirely up to each manufacturer which parameter numbers are used for whatever purposes; these don't have to be registered with the IMA.
Value Range: The same scheme is used as for the Registered Parameter controller; refer to that. By contrast, the coarse/fine messages for NRPN, for the values in the preceding RPN example, would be:
B0 63 00   Controller/chan 0, NRPN coarse (99), bits 7 to 13 of 0x0001, right-justified
B0 62 01   Controller/chan 0, NRPN fine (98), bits 0 to 6 of 0x0001

NOTE: Since each device can define a particular NRPN controller number to control anything, it's possible that 2 devices may interpret the same NRPN number in different manners. Therefore, a device should allow a musician to disable receipt of NRPN, in the event that there is a conflict between the NRPN implementations of 2 daisy-chained devices.

All Controllers Off


Number: 121
Affects: Resets all controllers to their default states. This means that all switches (such as the Hold Pedal) are turned off, and all continuous controllers (such as the Mod Wheel) are set to minimum positions. If the device is MultiTimbral, this only affects the Part assigned to the MIDI channel upon which this message is received.
Value Range: The value byte isn't used and defaults to 0.

Local Keyboard on/off


Number: 122
Affects: Turns the device's keyboard on or off locally. If off, the keyboard is disconnected from the device's internal sound generation circuitry, so when the musician presses keys, the device doesn't trigger any of its internal sounds. But the keyboard still generates Note On, Note Off, Aftertouch, and Channel Pressure messages. In this way, a musician can eliminate a situation where MIDI messages get looped back (over MIDI cables) to the device that created those messages. Furthermore, if a device is only going to be played remotely via MIDI, then the keyboard may be turned off in order to allow the device to concentrate on dealing with MIDI messages rather than scanning the keyboard for depressed notes and varying pressure.
Value Range: 0 to 63 is off; 64 to 127 is on.

All Notes Off


Number: 123
Affects: Turns off all notes that were turned on by received Note On messages and which haven't yet been turned off by respective Note Off messages. This message is not supposed to turn off any notes that the musician is playing on the local keyboard. So, if a device can't distinguish between notes played via its MIDI IN and notes played on the local keyboard, it should not implement All Notes Off. Furthermore, if a device is in Omni On state, it should ignore this message on any channel.
NOTE: If the device's Hold Pedal controller is on, the notes aren't actually released until the Hold Pedal is turned off. See the All Sound Off controller message for turning off the sound of these notes immediately.
Value Range: The value byte isn't used and defaults to 0.


All Sound Off


Number: 120
Affects: Mutes all sounding notes that were turned on by received Note On messages and which haven't yet been turned off by respective Note Off messages. This message is not supposed to mute any notes that the musician is playing on the local keyboard. So, if a device can't distinguish between notes played via its MIDI IN and notes played on the local keyboard, it should not implement All Sound Off.
NOTE: The difference between this message and All Notes Off is that this message immediately mutes all sound on the device regardless of whether the Hold Pedal is on, and mutes the sound quickly regardless of any lengthy VCA release times. It's often used by sequencers to quickly mute all sound when the musician presses "Stop" in the middle of a song.
Value Range: The value byte isn't used and defaults to 0.

Omni Off
Number: 124
Affects: Turns Omni off. See the discussion on MIDI Modes.
Value Range: The value byte isn't used and defaults to 0.
NOTE: When a device receives an Omni Off message, it should automatically turn off all playing notes.

Omni On
Number: 125
Affects: Turns Omni on. See the discussion on MIDI Modes.
Value Range: The value byte isn't used and defaults to 0.
NOTE: When a device receives an Omni On message, it should automatically turn off all playing notes.


Monophonic Operation
Number: 126
Affects: Enables Monophonic operation (thus disabling Polyphonic operation). See the discussion on MIDI Modes.
Value Range: If Omni is off, this value tells how many MIDI channels the device is expected to respond to in Mono mode. In other words, if Omni is off, this value is used to select a limited set of the 16 MIDI channels (ie, 1 to 16) to respond to. Conversely, if Omni is on, this value is ignored completely, and the device only ever plays one note at a time (unless a MultiTimbral device). So, the following discussion is only relevant if Omni is off.

If the value is 0, then the number of MIDI channels that the device will respond to simultaneously will be equal to how many voices the device can sound simultaneously. In other words, if the device can sound at least 16 voices simultaneously, then it can respond to Voice Category messages on all 16 channels. Of course, being Monophonic operation, the device can only sound one note at a time per MIDI channel. So, it can sound a note on channel 1 and channel 2 simultaneously, for example, but can't sound 2 notes both on channel 1 simultaneously.

Of course, MultiTimbral devices completely violate the preceding theory. MultiTimbral devices always can play polyphonically on each MIDI channel. If the value is 0, what this means is that the device can play as many MIDI channels as it has Parts. So, if the device can play 16 of its patches simultaneously, then it can respond to Voice Category messages on all 16 channels.

If the value is not 0 (ie, 1 to 16), then that's how many MIDI channels the device is allowed to respond to. For example, a value of 1 would mean that the device would only be able to respond to 1 MIDI channel. Since the device is also limited to sounding only 1 note at a time on that MIDI channel, the device would truly be a Monophonic instrument, incapable of sounding more than one note at a time. If a device is asked to respond to more MIDI channels than it has voices to accommodate, then it will handle only as many MIDI channels as it has voices. For example, if an 8-voice synth, on Base Channel 0, receives the value 16 in the Mono message, then the synth will play messages on MIDI channels 0 to 7 and ignore messages on 8 to 15.

Again, MultiTimbral devices violate the above theory. A value of 1 would mean that the device would only be able to respond to 1 MIDI channel (and therefore only play 1 Part), but would do so polyphonically. If a MultiTimbral device is asked to respond to more MIDI channels than it has Parts to accommodate, then it will handle only as many MIDI channels as it has Parts. For example, if a device can play only 5 patches simultaneously and receives the value 8 in the Mono message, then the device will play 5 patches on MIDI channels 0 to 4 and ignore messages on channels 5 to 7.

Most devices capable of Monophonic operation allow the user to specify a Base Channel. This will be the lowest MIDI channel that the device responds to. For example, if a Mono message specifies that the device is to respond to only 2 channels, and its Base Channel is 2, then the device responds to channels 2 and 3.

NOTE: When a device receives a Mono Operation message, it should automatically turn off all playing notes.
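The channel arithmetic above can be sketched in a few lines of C (illustrative names, covering the Omni Off, non-MultiTimbral case; for a MultiTimbral device you'd substitute its Part count for 'voices'):

#include <stdio.h>

/* Given the Mono message's value, the device's voice count, and its
   Base Channel (0 to 15), compute the first and last MIDI channels
   the device will respond to. */
void mono_channel_range(unsigned char mono_value, unsigned char voices,
                        unsigned char base_channel,
                        unsigned char *first, unsigned char *last)
{
    unsigned int count = (mono_value == 0) ? voices : mono_value;
    if (count > voices)
        count = voices;              /* can't serve more channels than voices */
    if (count > 16u - base_channel)
        count = 16u - base_channel;  /* clamp to the 16 MIDI channels */
    *first = base_channel;
    *last  = (unsigned char)(base_channel + count - 1);
}

int main(void)
{
    unsigned char first, last;
    /* An 8-voice synth on Base Channel 0 receiving a Mono value of 16
       plays channels 0 to 7, as in the example above */
    mono_channel_range(16, 8, 0, &first, &last);
    printf("channels %u to %u\n", first, last);
    return 0;
}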

Polyphonic Operation
Number: 127
Affects: Enables Polyphonic operation (thus disabling Monophonic operation). See the discussion on MIDI Modes.
Value Range: The value byte isn't used and defaults to 0.
NOTE: When a device receives a Poly Operation message, it should automatically turn off all playing notes.

MIDI Modes
Some MIDI devices can be switched in and out of Omni state. When Omni is off, a MIDI device can only respond to Voice Category messages (ie, Status bytes of 0x80 to 0xEF) upon a limited number of channels, usually only 1. Typically, the device allows the musician to pick one of the 16 MIDI channels that the device will respond to. This is then referred to as the device's Base Channel. So, for example, if a device's Base Channel is set to 1, and a Voice Category message upon channel 2 arrives at the device's MIDI IN, the device ignores that message.

NOTE: Virtually all modern devices allow the musician to manually choose the Base Channel. A device may even define its own SysEx message that can change its Base Channel. Remember that SysEx messages are of the System Common Category, and therefore aren't (normally) tied to the Base Channel itself.

When Omni is on, a device doesn't respond to just one MIDI channel, but rather responds to all 16 MIDI channels. The only benefit of Omni On is that, regardless of which channel any message is received upon, a device always responds to the message. This makes it very foolproof for a musician to hook up two devices and always have one device respond to the other, regardless of any MIDI channel discrepancies between the device generating the data (ie, referred to as the transmitter) and the device receiving the data (ie, referred to as the receiver). Of course, if the musician daisy-chains another device, and he wants the 2 devices to play different musical parts, then he has to switch Omni off on both devices. Otherwise, a device with Omni on will respond to messages intended for the other device (as well as messages intended for itself).

NOTE: Omni can be switched on or off with the Omni On and Omni Off controller messages. But these messages must be received upon the device's Base Channel in order for the device to respond to them. What this implies is that even when a device is in Omni On state (ie, capable of responding to all 16 channels), it still has a Base Channel for the purpose of turning Omni on or off.

One might think that MultiTimbral devices employ Omni On. Because you typically may choose (up to) 16 different Patches, each playing its own musical part, you need the device to be able to respond to more than one MIDI channel so that you can assign each Patch to a different MIDI channel. Actually, MultiTimbral devices do not use Omni On for this purpose. Rather, the device regards itself as having 16 separate sound modules (ie, Parts) inside of it, with each module in Omni Off mode and capable of being set to its own Base Channel. Usually, you also have a "master" Base Channel, which may end up having to be set the same as one of the individual Parts. Most MultiTimbral devices offer the musician the choice of which particular channels to use, and which to ignore (if he doesn't need all 16 patches playing simultaneously on different channels). In this way, he can daisy-chain another MultiTimbral device and use any ignored channels (on the first device) with this second device. Unfortunately, the MIDI spec has no specific "MultiTimbral" mode message, so a little "creative reinterpretation" of Monophonic mode is employed, as you'll learn in a moment.

In addition to Omni On or Off, many devices can be switched between Polyphonic or Monophonic operation. In Polyphonic operation, a device can respond to more than one Note On upon a given channel. In other words, it can play chords on that channel. For example, assume that a device is responding to Voice Category messages on channel 1. If the device receives a Note On for middle C on channel 1, it will sound that note. If the device then receives a Note On for high C also on channel 1 (before receiving a Note Off for middle C), the device will sound the high C as well. Both notes will then be sounding simultaneously.

In Monophonic operation, a device can only respond to one Note On at a time upon a given channel. It can't play chords; only single-note "melodies". For example, assume that a device is responding to Voice Category messages on channel 1. If the device receives a Note On for middle C on channel 1, it will play that note. If the device then receives a Note On for high C (before receiving a Note Off for middle C), the device will automatically turn off the middle C before playing the high C.

So what's the use of forcing a device capable of playing chords into such a Monophonic state? Well, there are lots of Monophonic instruments in the world, for example most brass and woodwinds. They can only play one note at a time. If using a Trumpet Patch, a keyboard player might want to force a device into Monophonic operation in order to better simulate a Trumpet. Some devices have special effects that only work in Monophonic operation, such as Portamento, and smooth transition between notes (ie, skipping the VCA attack when moving from one Note On that "overlaps" another Note On -- this is often referred to as legato, and makes for a more realistic musical performance for brass and woodwind patches).

That's, in theory, how Mono operation is supposed to work, but MultiTimbral devices, created long after the MIDI spec was designed, had to subvert Mono operation into Polyphonic operation in order to come up with a "MultiTimbral mode", as you'll learn.
NOTE: A device can be switched between Polyphonic or Monophonic operation with the Poly Operation and Mono Operation controller messages. But these messages must be received upon the device's Base Channel in order for the device to respond to them.

Of course, a MIDI device could have Omni On and be in Polyphonic state. Or, the device could have Omni On but be in Monophonic state. Or, the device could have Omni Off and be in Polyphonic state. Or, the device could have Omni Off but be in Monophonic state. There are 4 possible combinations here, and MIDI refers to these as the 4 Modes. For example, Mode 1 is the aforementioned Omni On / Polyphonic state. Here are the 4 Modes:

Mode 1 - Omni On / Poly
The device plays all MIDI data received on all 16 MIDI channels. If a MultiTimbral device, then it often requires the musician to manually select which one Patch to play all 16 channels, and this setting is usually saved in "patch memory".

Mode 2 - Omni On / Mono
The device plays only one note out of all of the MIDI data received on all 16 MIDI channels. This mode is seldom implemented, because playing one note out of all the data happening on all 16 channels is not very useful.

Mode 3 - Omni Off / Poly
The device plays all MIDI data received on 1 specific MIDI channel. The musician usually gets to choose which channel he wants that to be. If a MultiTimbral device, then it often requires the musician to manually select which one Patch to play that MIDI channel, and this setting is usually saved in "patch memory".

Mode 4 - Omni Off / Mono
In theory, the device plays one note at a time on 1 (or more) specific MIDI channels. In practice, the manufacturers of MultiTimbral devices threw the entire concept of Monophonic out the window, and use this for "MultiTimbral mode". On a MultiTimbral device, this mode means that the device plays polyphonically on 1 (or more) specific MIDI channels.

The Mono Operation controller message has a Value associated with it. This Value is applicable in Mode 4 (whereas it's ignored in Mode 2), and determines how many MIDI channels are responded to. If 1, then on a non-MultiTimbral device this would give you a truly monophonic instrument. Of course, on a MultiTimbral device, it gives you the same thing as Mode 3. If the Value is 0, then a non-MultiTimbral device uses as many MIDI channels as it has voices. So, for an 8-voice synth, it would use 8 MIDI channels, and each of those channels would play one note at a time. For a MultiTimbral device, if the Value is 0, then the device uses as many MIDI channels as it has Parts. So, if a MultiTimbral device can play only 8 patches simultaneously, then it would use 8 MIDI channels, and each of those channels could play polyphonically.

Some devices do not support all of these modes. The device should ignore controller messages which attempt to switch it into an unsupported state, or switch to the next closest mode. If a device doesn't have some way of saving the musician's choice of Mode when the unit is turned off, the device should default to Mode 1 upon the next power up.

One final question arises. If a MultiTimbral device doesn't implement a true monophonic mode for Mode 4, then how do you get one of its Parts to play in that useful Monophonic state (ie, where you have Portamento and legato features)? Well, many MultiTimbral devices allow a musician to manually enable a "Solo Mode" per each Part. Some devices even use the Legato Pedal controller (or a General Purpose Button controller) to enable/disable that function, so that you can turn it on/off for each Part over MIDI.

NOTE: A device that can both generate MIDI messages (ie, perhaps from an electronic piano keyboard) as well as receive MIDI messages (ie, to be played on its internal sound circuitry) is allowed to have its transmitter set to a different Mode and MIDI channel than its receiver, if this is desired. In fact, on MultiTimbral devices with a keyboard, the keyboard often has to switch between MIDI channels so that the musician can access the Parts one at a time, without upsetting the MIDI channel assignments for those Parts.

RealTime Category Messages


Each RealTime Category message (ie, Status of 0xF8 to 0xFF) consists of only 1 byte, the Status. These messages are primarily concerned with timing/syncing functions, which means that they must be sent and received at specific times without any delays. Because of this, MIDI allows a RealTime message to be sent at any time, even interspersed within some other MIDI message. For example, a RealTime message could be sent in between the two data bytes of a Note On message. A device should always be prepared to handle such a situation: processing the 1-byte RealTime message, and then resuming processing of the previously interrupted message as if the RealTime message had never occurred. For more information about RealTime, read the sections Running Status, Ignoring MIDI Messages, and Syncing Sequence Playback.

Running Status
The MIDI spec allows for a MIDI message to be sent without its Status byte (ie, just its data bytes are sent) as long as the previous, transmitted message had the same Status. This is referred to as running status. Running status is simply a clever scheme to maximize the efficiency of MIDI transmission (by removing extraneous Status bytes).

The basic philosophy of running status is that a device must always remember the last Status byte that it received (except for RealTime), and if it doesn't receive a Status byte when expected (on subsequent messages), it should assume that it's dealing with a running status situation. A device that generates MIDI messages should always remember the last Status byte that it sent (except for RealTime), and if it needs to send another message with the same Status, the Status byte may be omitted.

Let's take an example of a device creating a stream of MIDI messages. Assume that the device needs to send 3 Note On messages (for middle C, E above middle C, and G above middle C) on channel 0. Here are the 3 MIDI messages to which I'm referring:
0x90 0x3C 0x7F
0x90 0x40 0x7F
0x90 0x43 0x7F

Notice that the Status bytes of all 3 messages are the same (ie, Note On, Channel 0). Therefore the device could implement running status for the latter 2 messages, sending the following bytes:
0x90 0x3C 0x7F
0x40 0x7F
0x43 0x7F

This allows the device to save time, since there are 2 fewer bytes to transmit. Indeed, if the message that the device sent before these 3 also happened to be a Note On message on channel 0, then the device could have omitted the first message's Status too.

Now let's take the perspective of a device receiving this stream. It receives the first message's Status (ie, 0x90) and thinks: "Here's a Note On Status on channel 0. I'll remember this Status byte. I know that there are 2 more data bytes in a Note On message. I'll expect those next." And it receives those 2 data bytes. Then it receives the data byte of the second message (ie, 0x40). Here's when the device thinks: "I didn't expect another data byte. I expected the Status byte of some message. This must be a running status message. The last Status byte that I received was 0x90, so I'll assume that this is the same Status. Therefore, this 0x40 is the first data byte of another Note On message on channel 0."

Remember that a Note On message with a velocity of 0 is really considered to be a Note Off. With this in mind, you could send a whole stream of note messages (ie, turning notes on and off) without needing a Status byte for all but the first message. All of the messages will be Note On Status, but the messages that really turn notes off will have 0 velocity. For example, here's how to play and release middle C utilizing running status:
0x90 0x3C 0x7F
0x3C 0x00   <-- This is really a Note Off because of the 0 velocity
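On the transmit side, running status amounts to remembering the last Status byte sent; a minimal C sketch (my own names, with printing standing in for a real MIDI output):

#include <stdio.h>

static unsigned char last_sent_status = 0;

/* Send a 3-byte Voice Category message, omitting the Status byte
   whenever it matches the previously transmitted Status. */
void send_voice_msg(unsigned char status, unsigned char d1, unsigned char d2)
{
    if (status != last_sent_status) {
        printf("0x%02X ", status);
        last_sent_status = status;
    }
    printf("0x%02X 0x%02X\n", d1, d2);
}

int main(void)
{
    send_voice_msg(0x90, 0x3C, 0x7F); /* Note On, middle C, channel 0 */
    send_voice_msg(0x90, 0x3C, 0x00); /* velocity 0: really a Note Off */
    return 0;
}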

RealTime Category messages (ie, Status of 0xF8 to 0xFF) do not affect running status in any way. Because a RealTime message consists of only 1 byte, and it may be received at any time, including interspersed with another message, it should be handled transparently. For example, if a 0xF8 byte was received in between any 2 bytes of the above examples, the 0xF8 should be processed immediately, and then the device should resume processing the example streams exactly as it would have otherwise. Because RealTime messages only consist of a Status, running status obviously can't be implemented on RealTime messages.

System Common Category messages (ie, Status of 0xF0 to 0xF7) cancel any running status. In other words, the message after a System Common message must begin with a Status byte. System Common messages themselves can't be implemented with running status. For example, if a Song Select message was sent immediately after another Song Select, the second message would still need a Status byte.

Running status is only implemented for Voice Category messages (ie, Status is 0x80 to 0xEF).

A recommended approach for a receiving device is to maintain its "running status buffer" as so:

1. Buffer is cleared (ie, set to 0) at power up.
2. Buffer stores the Status when a Voice Category Status (ie, 0x80 to 0xEF) is received.
3. Buffer is cleared when a System Common Category Status (ie, 0xF0 to 0xF7) is received.
4. Nothing is done to the buffer when a RealTime Category message is received.
5. Any data bytes are ignored when the buffer is 0.
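Those five rules translate almost directly into a byte-at-a-time receiver; here's a C sketch (my own names; actual message dispatching and data-byte counting are stubbed out with printfs):

#include <stdio.h>

static unsigned char running_status = 0; /* rule 1: cleared at power up */

/* Process one incoming MIDI byte per the running status rules above. */
void midi_receive_byte(unsigned char byte)
{
    if (byte >= 0xF8) {
        /* rule 4: RealTime Category, handle immediately, buffer untouched */
        printf("realtime 0x%02X\n", byte);
    } else if (byte >= 0xF0) {
        running_status = 0; /* rule 3: System Common clears the buffer */
        printf("system common 0x%02X\n", byte);
    } else if (byte >= 0x80) {
        running_status = byte; /* rule 2: store a Voice Category Status */
        printf("status 0x%02X\n", byte);
    } else if (running_status != 0) {
        /* a data byte under running status belongs to the stored Status */
        printf("data 0x%02X for status 0x%02X\n", byte, running_status);
    }
    /* rule 5: a data byte arriving while the buffer is 0 is ignored */
}

int main(void)
{
    /* a Note On followed by a running status Note Off (velocity 0),
       with an interspersed RealTime 0xF8 byte */
    unsigned char stream[] = { 0x90, 0x3C, 0x7F, 0x3C, 0xF8, 0x00 };
    for (int i = 0; i < (int)sizeof stream; i++)
        midi_receive_byte(stream[i]);
    return 0;
}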

Syncing Sequence Playback

A sequencer is a software program or hardware unit that "plays" a musical performance, complete with appropriate rhythmic and melodic inflections (ie, it plays musical notes in the context of a musical beat).

Often, it's necessary to synchronize a sequencer to some other device that is controlling a timed playback, such as a drum box playing its internal rhythm patterns, so that both play at the same instant and at the same tempo. Several MIDI messages are used to cue devices to start playback at a certain point in the sequence, make sure that the devices start simultaneously, and then keep the devices in sync until they are simultaneously stopped. One device, the master, sends these messages to the other device, the slave. The slave references its playback to these messages.

The message that controls the playback rate (ie, ultimately the tempo) is MIDI Clock. This is sent by the master at a rate dependent upon the master's tempo. Specifically, the master sends 24 MIDI Clocks, spaced at equal intervals, during every quarter note interval (12 MIDI Clocks are in an eighth note, 6 MIDI Clocks in a 16th, etc). Therefore, when a slave device counts down the receipt of 24 MIDI Clock messages, it knows that one quarter note has passed. When the slave counts off another 24 MIDI Clock messages, it knows that another quarter note has passed. For example, if a master is set at a tempo of 120 BPM (ie, there are 120 quarter notes in every minute), the master sends a MIDI Clock every 20833 microseconds. (There are 1,000,000 microseconds in a second, and therefore 60,000,000 microseconds in a minute. At a tempo of 120 BPM, there are 120 quarter notes per minute. There are 24 MIDI Clocks in each quarter note. Therefore, there should be 24 * 120 MIDI Clocks per minute, so each MIDI Clock is sent at a rate of 60,000,000/(24 * 120) microseconds.)

Alternately, if a sequencer wishes to control playback independent of tempo, it can use Tick messages. These are sent at a rate of 1 message every 10 milliseconds. Of course, it is then up to the slave device to maintain and update its clock based upon these messages. The slave will be doing its own counting off of how many milliseconds are supposed to be in each "beat" at the current tempo.

The master needs to be able to start the slave precisely when the master starts. The master does this by sending a MIDI Start message. The MIDI Start message alerts the slave that, upon receipt of the very next MIDI Clock message, the slave should start the playback of its sequence. In other words, the MIDI Start puts the slave in "play mode", and the receipt of that first MIDI Clock marks the initial downbeat of the song (ie, MIDI Beat 0). What this means is that (typically) the master sends out that MIDI Clock "downbeat" immediately after the MIDI Start. (In practice, most masters allow a 1 millisecond interval in between the MIDI Start and the subsequent MIDI Clock messages in order to give the slave an opportunity to prepare itself for playback.) In essence, a MIDI Start is just a warning to let the slave know that the next MIDI Clock represents the downbeat, and playback is to start then. Of course, the slave then begins counting off subsequent MIDI Clock messages, with every 6th being a passing 16th note, every 12th being a passing eighth note, and every 24th being a passing quarter note.

A master stops the slave simultaneously by sending a MIDI Stop message.
The master may then continue to send MIDI Clocks at the rate of its tempo, but the slave should ignore these and not advance its "song position". Of course, the slave may use these continuing MIDI Clocks to ascertain what the master's tempo is at all times.

Sometimes, a musician will want to start the playback point somewhere other than at the beginning of a song (ie, he may be recording an overdub in a certain part of the song). The master needs to tell the slave what beat to cue playback to. The master does this by sending a Song Position Pointer message. The 2 data bytes in a Song Position Pointer are a 14-bit value that determines the MIDI Beat upon which to start playback. Sequences are always assumed to start on a MIDI Beat of 0 (ie, the downbeat). Each MIDI Beat spans 6 MIDI Clocks. In other words, each MIDI Beat is a 16th note (since there are 24 MIDI Clocks in a quarter note, 4 MIDI Beats also fit in a quarter). So, a master can sync playback to a resolution of any particular 16th note. For example, if a Song Position value of 8 is received, then a slave should cue playback to the third quarter note of the song. (8 MIDI Beats * 6 MIDI Clocks per MIDI Beat = 48 MIDI Clocks. Since there are 24 MIDI Clocks in a quarter note, the first quarter note occurs at a time of 0 MIDI Clocks, the second quarter note occurs upon the 24th MIDI Clock, and the third quarter note occurs on the 48th MIDI Clock.)

A Song Position Pointer message should not be sent while the devices are in play. This message should only be sent while devices are stopped. Otherwise, a slave might take too long to cue its new start point and miss a MIDI Clock that it should be processing.

A MIDI Start always begins playback at MIDI Beat 0 (ie, the very beginning of the song). So, when a slave receives a MIDI Start, it automatically resets its "Song Position" to 0. If the master needs to start playback at some other point (as set by a Song Position Pointer message), then a MIDI Continue message is sent instead of MIDI Start. Like a MIDI Start, the MIDI Continue is immediately followed by a MIDI Clock "downbeat" in order to start playback then. The only difference with MIDI Continue is that this downbeat won't necessarily be the very start of the song. The downbeat will be at whichever point the playback was set via a Song Position Pointer message, or at the point where a MIDI Stop message was sent (whichever message last occurred). What this implies is that a slave must always remember its "current song position" in terms of MIDI Beats. The slave should keep track of the nearest previous MIDI Beat at which it stopped playback (ie, its stopped "Song Position"), in anticipation that a MIDI Continue might be received next.

Some playback devices have the capability of containing several sequences. These are usually numbered from 0 to however many sequences there are. If 2 such devices are synced, a musician typically may set up the sequences on each to match the other. For example, if the master is a sequencer with a reggae bass line for sequence 0, then a slaved drum box might have a reggae drum beat for sequence 0. The musician can then select the same sequence number on both devices simultaneously by having the master send a Song Select message whenever the musician selects that sequence on the master. When a slave receives a Song Select message, it should cue the new song at MIDI Beat 0 (ie, reset its "song position" to 0). The master should also assume that the newly selected song will start from beat 0. Of course, the master could send a subsequent Song Position Pointer message (after the Song Select) to cue the slave to a different MIDI Beat.

If a slave receives MIDI Start or MIDI Continue messages while it's in play, it should ignore those messages. Likewise, if it receives MIDI Stop messages while stopped, it ignores those.
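Both of the calculations in this section boil down to two constants (24 MIDI Clocks per quarter note, 6 MIDI Clocks per MIDI Beat); a small C sketch (my own function names):

#include <stdio.h>

/* Microseconds between MIDI Clock messages at a given tempo:
   60,000,000 usec per minute / (24 Clocks per quarter * BPM quarters). */
unsigned long usec_per_midi_clock(unsigned int bpm)
{
    return 60000000UL / (24UL * bpm);
}

/* Convert a Song Position Pointer value (a 14-bit count of MIDI Beats)
   into an offset in MIDI Clocks: each MIDI Beat spans 6 MIDI Clocks. */
unsigned long song_position_to_clocks(unsigned int midi_beats)
{
    return midi_beats * 6UL;
}

int main(void)
{
    printf("%lu usec between Clocks at 120 BPM\n", usec_per_midi_clock(120)); /* 20833 */
    printf("Song Position 8 = %lu Clocks\n", song_position_to_clocks(8));     /* 48: the third quarter note */
    return 0;
}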

Ignoring MIDI Messages


A device should be able to "ignore" all MIDI messages that it doesn't use, including currently undefined MIDI messages (ie, Status is 0xF4, 0xF5, 0xF9, or 0xFD). In other words, a device is expected to be able to deal with any MIDI message it could possibly be sent, even if that means simply ignoring messages that aren't applicable to its functions.

If a MIDI message is not a RealTime Category message, then the way to ignore the message is to throw away its Status and all data bytes (ie, bytes with bit #7 clear) up to the next received, non-RealTime Status byte. If a RealTime Category message is received interspersed with this message's data bytes (remember that all RealTime Category messages consist of only 1 byte, the Status), then the device will have to process that 1 Status byte, and then return to the process of skipping the initial message. Of course, if the next received, non-RealTime Status byte is for another message that the device doesn't use, then the "skip procedure" is repeated.

If the MIDI message is a RealTime Category message, then the way to ignore the message is to simply ignore that one Status byte. All RealTime messages have only 1 byte, the Status. Even the two undefined RealTime Category Status bytes of 0xF9 and 0xFD should be skipped in this manner. Remember that RealTime Category messages do not cancel running status and can be sent interspersed with some other message, so any data bytes after a RealTime Category message must belong to some other message.
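A minimal Python sketch of this skip procedure (my own illustration, not code from the specification) might look like the following; it discards an unused message's data bytes while still handling any interspersed RealTime Status bytes:

    def is_realtime(byte):
        return byte >= 0xF8  # RealTime Category Status bytes are 0xF8-0xFF

    def skip_unused_message(data, i):
        """Skip an unused non-RealTime message whose Status byte is at
        data[i]. Returns (interspersed RealTime bytes, index of the next
        non-RealTime Status byte)."""
        realtime = []
        i += 1  # throw away the unused Status byte itself
        while i < len(data):
            byte = data[i]
            if byte & 0x80:  # a Status byte (bit #7 set)
                if is_realtime(byte):
                    realtime.append(byte)  # process it, then resume skipping
                    i += 1
                else:
                    break  # next non-RealTime Status: the skip is finished
            else:
                i += 1  # a data byte (bit #7 clear): throw it away
        return realtime, i

    # An undefined 0xF4 message with stray data bytes, interrupted by a
    # MIDI Clock (0xF8), followed by a Note On (0x90):
    stream = [0xF4, 0x01, 0xF8, 0x02, 0x90, 0x3C, 0x40]
    print(skip_unused_message(stream, 0))
    # ([248], 4): one Clock was processed; parsing resumes at the Note On.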

Miking Up Acoustic Instruments

Must-learn secrets on miking and recording acoustic instruments

ACOUSTIC GUITAR
The majority of acoustic guitars used by electro-acoustic bands have built-in pick-ups, so they can go through the PA via a backline amp, or direct, usually via a DI box. But you may just be using your acoustic for one or two numbers, then switching back to electric, so it's hardly worth having a pick-up fitted. In this case, stick an instrument or general-purpose mike (most vocal mikes will cope if you haven't got one) on a boom stand and point it towards the body end of the guitar's neck from about six inches away. Be careful not to point the mike directly at the sound hole, as this can lead to feedback problems. Any feedback that does occur will be in the low/mid frequencies, so be prepared for it. You can experiment with mike position until you've got the sort of sound you're looking for, but don't forget that the further towards the headstock you put the mike, the more finger-on-string noise you're likely to get. And too much movement by the guitarist won't do a lot of good for the consistency of the overall sound.

MANDOLINS, BANJOS, ETC
Treat these instruments as small guitars and mike up similarly. Do take note that they can be somewhat harsh at the top end and lacking in the mid range, so you'll need to sweeten the mid and pull the top end somewhat to get a nice rounded tone.

DOUBLE BASS
If you have to mike up a double bass (many these days are already fitted with pick-ups), you need to get the mike as close as possible, and don't point it at the F holes [Arf! He said, "F holes!"] - just below the bridge yields the best results, but for a bright sound you can point the mike at the body end of the fretboard (not that it's got frets, so who knows what you call it - fingerboard perhaps?). Any feedback that does occur will be in the lower frequencies, unsurprisingly.

VIOLIN
Fiddles tend to be best dealt with by fitting some form of pick-up (there are lots around, varying from cheap piezos to expensive condenser mike-based models like the Hurford Studio Bug), or a tie-clip type mike - if you're going for the latter, try and get a cardioid one, or be prepared for feedback fighting. An omni-directional mike always seems superior, but you really do have to sacrifice a lot of volume. See my later comments on Beyer's tie-clip mikes for flute - they're good for fiddles too.

PIANO
Pianos are never the easiest of instruments to mike up to achieve a decent sound across the full range. One way is to use a boom stand and position the mike over the strings somewhere between middle C and the top end, with the front panel removed on an upright and, obviously, the lid open on a grand. If you've got plenty of mikes (and mixer channels), then use two of them, one near the bass end, one near the top, particularly if the piano is one of your lead instruments. On a grand piano, position one mike halfway down the inside (under the lid), and the other underneath the piano in the middle - it seems like a weird idea, but it works.

BODHRAN
Much used in electro-acoustic folk bands, and often called an Irish drum, you could describe the bodhran as a big tambourine without the jangly bits. Clip-on drum mikes like Beyer's TG-X 5, or AKG's C418 from their Micro Mic series, work very well with the bodhran, but if you can't afford such luxuries, an SM58 or similar, positioned about a foot away from the skin and slightly above, will do the job.

BRASS

...

Stick a mike on a straight or boom stand and treat your blowist like you would your vocalist. With seriously powerful brass you have to make sure your mike is capable of handling fairly high sound pressure levels - the good ol' SM58 works well. Most mike manufacturers make a range of fitments for brass to go with the mikes they recommend for these instruments (Shure's SM98 is a good example and gives excellent results). For the really active player, add a wireless transmitter (and receiver, of course). With powerful, high-end brass instruments like soprano saxes, keep the players well away from the mike or they'll drown everyone else as well as overloading the PA in a very nasty fashion indeedy.

WOODWIND
You're unlikely to come across many of these other than the flute. A decent vocal or general-purpose mike will do the job very adequately, but look out for high-end feedback. If your flautist wants a bit more freedom to move about, Beyer's MCE 5 and MCE 10 mikes are very popular, although not cheap, and Beyer make a mount specifically for flutes. The MCE 5 is omnidirectional so you have to be very careful avoiding feedback, but the sound sure is sweet.

HARMONICA
Unless you already have a specialist mike for this, a vocal mike will do just fine, but most harp players like to carry their own Shure Green Bullet, or a look-a-like (not necessarily always a sound-a-like), which will hopefully give them the sound they want, and you the feedback you don't. Be prepared to work hard at battling this, particularly if your harmonica player is an active sort. It may not be popular, but cutting back on the top end helps a lot, and keep it well away from monitors.

...


Mixing tips

Top-flight engineer and mastering king Chris Lewis reveals some tricks of the trade...

The mission is enticing but, just like a busty Bond lovely, it's not all it seems. Go South, 007, and meet a man called Chris Lewis who will give you an insight into the 'black art' of mastering. For not only is this man an acclaimed live and studio engineer, he also masters other people's mixes for a living, and is therefore the ideal candidate to provide The Mix with top tips on how to make tracks more 'radio friendly'. What's more, he does this mastering from his living room, which shows that professional results are possible from home. Hmm... sounds intriguing, M.

Accompanied by Q and his array of photographic gadgets, we set off cautiously for our target: MELT 2000 studios. However, when we arrive at our assigned meeting place, a converted chicken shed that's one of MELT 2000's three studios in their farmhouse complex in deepest rural Sussex, it soon becomes apparent that there's a lot more to this mission than simply a lesson in mastering. Lewis is putting the finishing touches to an MP3 file, which he is 'ISDNing' to Johannesburg for a concert that evening. "It's a female artist called Busi Mhlongo, who I recorded on my last trip to South Africa," he explains. "She is singing at the Kora festival tonight, and she needed a backing track - so I have taken a mix of one of her songs off Pro Tools."

After starting out as a roadie in the early '70s, Lewis graduated to becoming a live sound engineer for bands like Eddie and Sunshine, and Central Line, during the punk era. In 1982 he began a long residency at Ronnie Scott's Jazz Club in London. In 14 years at the club he engineered and recorded live some of the jazz greats - Art Blakey, Freddie Hubbard, and Clem Curtis, to name just a few - afterwards moving on to projects with Dr. John and Arturo Sandoval, before getting involved with B&W Records (now called MELT 2000) and its former owner, Robert Trunz.

"I joined forces with Robert about four years ago and helped him set up this studio. It really was a derelict chicken shed, so we had to start from scratch, installing the equipment and getting Recording Architecture to redesign the mastering room, put in floating walls, soundproofing and so on. The surprising thing is that although you can hear virtually nothing as soon as you leave the studio, we still get complaints from some of the neighbours up the road! I think it's just more a case of them not liking muso types in the vicinity; in terms of noise we have more problems from farm machinery than they have ever had from us!"

The background to MELT 2000 itself makes interesting reading, coming out of Lewis' and Trunz's shared interest in 'world' music and the fusion of styles from different cultures. "It's something that I became interested in at Ronnie Scott's," Lewis explains. "I engineered several Cuban bands during that time and thought they were fantastic." The informative MELT 2000 website (check it out at www.melt2000.com) also documents Trunz's conversion from equipment manufacturing boss to label owner while on a visit to South Africa, with the aim of "reinvigorating modern music by injecting the passion of some of the world's greatest players". In terms of Lewis' career, this has involved several trips to South Africa, both accompanying MELT 2000 artists on tours and collaborations, and recording new artists for release on the MELT 2000 label. One of the striking things about Lewis' South African projects is the minimal amount of gear he uses, especially when recording in the field.
"On my last trip I only took a few flight cases with me," he confesses. "I had a Mac Pro Tools system with a 23 gig hard drive and a Mackie HUI to control it. The only outboard was a DBX 1066 compressor, an ATI mic preamp and a Tascam DA-38, which I barely used. I think some of the bands I recorded were initially really disappointed when they walked in. They were expecting a huge great mixing desk and reel-to-reel tape machine and there they were confronted with just a computer!" To many enthusiasts and professionals the likelihood of coming up with successful commercial results from such a minimal set-up would seem to be small. However, Lewis believes that the methods he employs are the ideal solution to the music he is dealing with. "Because all of the people in the bands are such good players the real job is capturing the essence of the performance as accurately as possible. You don't really need loads of effects for

Page 2 of 3

that, perhaps just a little compression. I would say the most important part of my job is miking up the musicians well and coaxing the best performance out of them. Unfortunately, with the advent of electronic music, many of the skills of mic placement are being lost, yet it can make all the difference between a brilliant and terrible recording." It seems that the man-management element of Lewis' job is as important as any. "Many of these bands have never recorded in a traditional studio before, so anything you can do to put them at ease really helps. For example, when I worked with a band called Amampondo (from the East of the country and supposedly one of Nelson Mandela's favourites) on my last trip we actually recorded the entire album in an old barn, complete with dirt floor. I have also recorded projects outdoors and even in an old British fort from 130 years ago! Recording on location often makes the bands feel at home and therefore their performance sound more natural. "The other important element in recording in this way is to try not to let the technology get in the way of the performance. Never having been in a studio, some of the 'traditional' bands, especially, find it difficult to record using headphones. In this case I may hook up a pair of out-of-phase speakers behind the musicians instead, to make them feel more comfortable. Another trick is to avoid telling the musicians you are recording whenever possible. For some reason, as soon as they know, the performance immediately takes on a tentative quality. Quite often I will secretly tape the practice runs - which end up being better than the actual take itself." Listening to some of the South African material, what's striking is how incredibly warm and intimate it is, whilst maintaining the highest standards of recording. It's a quality that's hard to define; this music could have been recorded in a studio yet you simply can't place it there. What's more surprising is the knowledge it was all recorded on Pro Tools. Lewis agrees. "I have become a complete fan because the quality of the Pro Tools DSPs and effects is now so good. Of course I may still record analogue in the studio, but it's no longer a compromise, just different. The main thing is that you have to watch for digital clipping all the time and run your levels a little shy, whereas you actively want to drive tape hard with the extra warmth you get from a little saturation. However there are so many other things to be gained from using the Pro Tools system. Obviously its mobility, but also the fact that it's so efficient. You can be editing, mixing or striping to silence whilst at the same time recording. It's easy to adjust and very good at drop-ins." Well aware that we are two hours into the interview and we haven't even touched on mastering yet, we set off hot foot for Lewis' mastering studio: a lounge, complete with three-man sofa and coffee table. Isn't this taking the 'mastering at home' concept to the extreme? "Well it may look like a living room", says Lewis, "but I wouldn't really call it a 'home studio'. There's actually several tens of thousands of pounds of equipment in here, and as you can see, we have also spent money on the room's acoustic design. It's simply a case that Robert and I wanted a comfortable environment in which to master - somewhere that we could be relaxed - and this seemed appropriate." Again what is striking about Lewis' set-up is the minimal amount of equipment used. 
Apart from the computer that sits upon the converted coffee table 'rack', there are no more than about 10 pieces of mastering equipment, neatly segmented into analogue devices on the right and digital devices on the left. "I really do believe that a combination of analogue and digital devices is best," explains Lewis. "For example, I often find that analogue EQ is better and more forgiving for the top end of tracks, whilst I will often use digital devices for the bottom end."

In the bottom left of the rack is the studio's latest toy: a Drawmer Masterflow digital mastering processor. "I really like the Masterflow because it's so flexible and easy to use. The display is particularly good because it visualises all the different frequency bands in an intuitive way, allowing you to control crossover points and manipulate bandwidth swiftly. In addition, the 3-band compressor, width control, and valve emulator allow you to process elements of the mix discretely, say by fattening and narrowing the bass end with the valve emulator and the width control, whilst broadening the top end. I particularly like using these elements to create a tight pumping bass sound on dance tracks.

"By adjusting the crossover points it's also possible to start affecting the sub-bass frequencies at the same time, which is a powerful combination. The compression is really powerful too. Even using a ratio of 2:1 really blows your socks off! The only drawback with the Masterflow is that it doesn't have a remote control - otherwise it's spot on."

In general, Lewis' mastering methods seem to be very flexible, but he has one golden rule that he is very rarely willing to break: "I tend to avoid mastering projects I have also produced and engineered. It's simply a case of remaining fresh. When you have been that close to a project all the way along, you may miss things that a fresh pair of ears may pick up. So I will tend to take the work to a third party. However, in the case of my most recent South African recordings I'm the mastering engineer as well. I'm aware that my mixes tend to come out a little 'down' - so I like to have someone to bounce off. Luckily, having the studio in Robert's house means I always have another set of ears to call upon."


Apart from this one proviso, a mastering project can take any number of forms. "Sometimes the artist or producer will come along to the mastering session, but some people like to remain well clear. I am fairly open to either approach." However, it seems that the second method of working can require some additional man-management skills. "Mastering-wise there's definitely a trend for weird effects at the moment, especially with dance music. Some guys have a tendency to want to overdo it and you sometimes have to calm them down a bit."

This brings us onto the subject of mastering at home versus going to a professional mastering house. With all the cost-effective equipment on the market at the moment, surely there is a case for doing the job at home if you are working to a budget? Lewis disagrees. "This studio may be in someone's house but it's definitely not a home studio. In the end you cannot beat the experience of a professional mastering engineer. I have lost count of the number of 'commercial' recordings I have had to remaster because not enough care was taken with them in the first place. The art of the mastering engineer is to get the recording as loud as possible without killing the dynamics, yet often I receive tracks with levels that are one long red line, so there's absolutely nowhere else left to go. Even worse, some tracks suffer so much from digital clipping that they take on an overly aggressive, artificial feel."

Before taking our leave we listen to some of Lewis' latest projects, including new tracks by Juno Reactor, and Greg Hunter, who mixes traditional Egyptian musical styles with dance music. Both manage to be in-yer-face while retaining a warm subtlety and naturalness that belies the treatment they have received. Both tracks elegantly make the point that the gear may be affordable but - just like mixing - that's no guarantee it's going to produce the results you want. Success remains a matter of skill, subtlety and patience.


Perfect Monitor Placement

How to handle bass in the control room...

It's amazing how home and professional users alike can spend hours and hours tweaking sounds, programming a reverb, or editing in Pro Tools, but take only about five minutes flat to set up their monitors on a couple of breeze blocks or tea chests. It sounds amusing now, but if your livelihood depends on the music or sound you're producing, then your Heath-Robinson antics won't seem quite so funny when the TV soundtrack you've been commissioned to write sounds completely awful played through the Director's system, and you get the sack. This is the kind of scenario that might await you if you don't sort out any problems with the set-up of your studio monitoring system. You think I'm scaremongering? Well, maybe, but why take a chance on something that's so easy to fix? In this instalment, we're going to look at how to deal with bass frequencies in the studio, but first let's have a recap of the points we covered last time (for those of you who missed it!).

(1) Nearfield positioning: make sure that both speakers are pointing directly at the spot where you'll be mixing. Ideally you need a 60° angle between the monitors, and if they are above the height of your head, make sure they're pointing down towards your ears. Remember that for critical work (such as mixing) you need to be sitting in the sweet spot to get the correct impression of the music you're listening to. Moving out of the sweet spot will detrimentally affect the sound, so don't make any major decisions while away from your desk. Also, make sure that your nearfields are, indeed, in the nearfield; anything between 0.7 and 2 metres between your speakers and your ears is okay.

(2) Nearfields, in most cases, should be mounted on stands. Mounting them on the meterbridge may be convenient, but you'll get a whole load of reflected sound off the panel of the mixing desk, whereas you really only want to listen to direct sound. Mounting them on stands will avoid this, and is also a solution to problems caused by mixing in front of a computer monitor. It's also preferable to mount nearfields vertically, though of course some models are designed specifically for horizontal use.

(3) Symmetry of installation: it goes without saying (I hope) that your monitors should be the same distance from the listening point, but they should also be symmetrical within the room (i.e. the same distance from the side walls). Why? Because you'll hear reflected sound off these walls, and you need to hear the same on either side, or it'll mess up your stereo imaging. The same goes for gear in front of the monitors, which should also, if possible, be symmetrical, so if you have a rack in front of the desk on one side, try to have an identical one on the other side.

Okay then, let's move on to some more issues, this time regarding bass.

Bring on the bass
Why is it that you always seem to hear people talking about problems with bass in their studio, and rarely with high frequencies? Why does it seem to be that those low frequencies are so problematic? Or is it just myth? Well, it is true that low frequencies are more problematic, and the reason has to do with the radiation space in a room. The radiation space is the geometrical surroundings of a speaker, and is frequency dependent. If the speaker is in the middle of a field (for example), there are no limitations to its radiation space and it's known as 'free standing'. As soon as you place it in a room, however, the radiation space starts to be limited.
If your speakers are soffit (wall) mounted, or right next to the wall, they can 'see' only half that space, and if placed by two walls, the radiation space is divided by four. A typical corner, with two walls and a floor, would cut this to an eighth of the radiation space of our monitor in the middle of a field. But what does this mean in practice? A decrease in the radiation space means an increase in energy density in that immediate area (i.e. higher SPL levels), and the theoretical value is 6dB for every time the space is halved. So soffit-mounting your speakers will give 6dB of boost, and placing them in a corner (two walls plus floor) will give 18dB of boost. But remember that we said earlier that this is frequency dependent, and the way it works is that the radiation space is limited by 'reflecting surfaces that are large compared to the wavelengths'. So for low frequencies this means the walls, the
floor and the ceiling; for mid frequencies it's mainly the speaker baffle and objects near the speaker, while for high frequencies it's entirely the speaker baffle and driver itself. Now do you understand why it's the bass frequencies that cause all the trouble?

(4) +6dB bass boost at boundaries: now that we've understood the effects of limiting the radiation space of a speaker (i.e. placing it in a room), and that it's the low frequency effects that are most apparent, let's look at some practical examples. Many big studios choose to soffit-mount their monitors (more of which next month), and by doing this they will experience a 6dB boost in the low frequencies, since the radiation space is halved. In this case, the bass response of the speaker should be cut by at least 6dB to compensate. Placing a speaker very close to the back wall will have a similar result, and the same action should be taken. If your speakers are in the corners of a room, you'll experience a bass boost of between 12dB and 18dB, depending on how close they are to the floor (above about 1.3m the floor won't have a significant effect). Again, this needs to be rectified by cutting the bass response of the speaker. Great - your bass frequencies are under control? Or are they? There's another factor that has to be taken into account when deciding how far away from the walls you place them...

(5) Frequency cancellation caused by wall reflections: another reason why many studios wall-mount their speakers is to avoid cancellation of frequencies caused by wall reflections. This effect is easy to explain: if a speaker is a quarter wavelength away from a reflective wall, the reflected wave returns to the speaker with a half-cycle phase difference (2 x 1/4 cycle, kapiche?) - i.e. it's in antiphase. If this were a perfect reflection then the cancellation would be complete, though in reality this is unlikely to be the case (due to absorption by the surface). Nevertheless, the effect can be very audible indeed, and a problem that needs to be solved. Naturally, this will occur in more than one direction too, so several frequencies may experience cancellation. The solution? Apart from flush mounting (which eliminates reflections) you can either place the speaker very close to the wall, or well away from it. In the first case, the cancellation frequency is so high (work it out yourself using the following equation: wavelength = velocity of sound (343m/s) / frequency) that the effect is overlapped by higher directivity and a higher density of room resonance modes. If you take this option, remember that placing your speaker close to the wall will result in 6dB of bass boost (covered in point 4) and you'll need to take corrective action. The other solution is to move your speakers well away from the walls, where the cancellation frequency is so low that it'll have little effect on the music anyway. But just how far away? Well, using the above equation, a distance of three metres gives a cancellation frequency of 29Hz, which isn't going to cause anyone problems. Even at two metres, the cancellation frequency is an acceptable 43Hz (2m x 4 = 8m wavelength; 343/8 = 42.875Hz), but of course this all depends on how much space you have to spare.

(6) Positioning of subwoofers: subwoofers have been used increasingly in recent years in conjunction with nearfields to give an extended frequency response in the absence of (a) a large room, (b) the budget for a pair of full-range speakers, or (c) both of the above.
But does it really matter where you place the subwoofer? Frequencies in this bandwidth are largely omni-directional anyway, so surely it doesn't matter where they're placed? Well, not quite... (you knew I was going to say that, didn't you?). First we should look at the +6dB effect at boundaries (covered in point 4). If you have a large room, you can place the subwoofer 3m away from the closest walls, and you'll avoid the 1/4-wavelength frequency cancellations covered in point 5. The subwoofer can be either facing the floor, or into the room, but you will have to adjust the unit's response to take into account the effect of the boundary (the floor) by taking off 6dB of gain. In a smaller room, the unit should be placed close to the front wall(s) (between 20 and 80cm) to avoid any frequency cancellation due to the 1/4-wavelength phenomenon. If this is in the centre of the room, with two boundaries (floor and wall), then the response of the unit should be cut by 12dB, and in a corner (three boundaries) by 18dB. There is another factor to take into account, and that's the general behaviour of the room: its standing waves and axial modes. We will be covering these issues in next month's instalment, so we won't go into too much detail here. A quick explanation, however, is that placing the subwoofer in a pressure minimum (which, coincidentally, would generally be the case if you placed the unit on the floor behind the mixing desk and between the nearfields in a small room) makes it hard for the subwoofer to generate high enough SPLs. One solution is to move the sub slightly off the centre line, which puts it in a more balanced sound pressure zone.
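To make the numbers in points 4 to 6 concrete, here is a small Python sketch (my own illustration; the constants come from the article's own equation):

    SPEED_OF_SOUND = 343.0  # m/s, as used in the article

    def boundary_boost_db(boundaries):
        # Each nearby large boundary halves the radiation space: +6dB per halving.
        return 6 * boundaries

    def cancellation_frequency_hz(distance_m):
        # The reflection returns in antiphase when the wall is a quarter
        # wavelength away, so the cancelled frequency is f = c / (4 * d).
        return SPEED_OF_SOUND / (4 * distance_m)

    print(boundary_boost_db(1))                   # soffit-mounted: 6dB
    print(boundary_boost_db(3))                   # corner (two walls + floor): 18dB
    print(round(cancellation_frequency_hz(3.0)))  # 3m from the wall: 29Hz
    print(cancellation_frequency_hz(2.0))         # 2m from the wall: 42.875Hz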


In reality, experimentation will help you get the right sub position, and this means moving it around the studio and listening closely to the result. It should be quite obvious if there is 'something missing' or if you indeed have a full low-frequency response. Whatever you do, don't just shove the unit anywhere and expect to get a satisfactory result.

Chris Kempster, The Mix, 04/00


Recording Vocals (pt I)

How to get the very best vocals, starting with mic types and other hardware...

Ask people what the most important element in a recording is, and most will agree that it's the vocals. And no matter how good your music is, a sub-standard vocal can kill it quicker than Prince Naseem can duck n' dive. Whilst they certainly help, top-quality microphones and state-of-the-art recording equipment are not a prerequisite for a great vocal recording. Excellent results can be obtained using more modest equipment. Nor do you need the voice and talent of Frank Sinatra or Aretha Franklin - with a little patience and the use of a few simple techniques, any singer can end up with their best possible recorded vocal. Over the next few issues of The Mix this series will cover the basics of recording vocals, with techniques, hints and tips to help get the best possible performance recorded with the best possible sound. So let's start from the beginning - what equipment to use, and how to set it up.

Microphones
Both condenser and dynamic microphones can be used to record vocals. Generally, in professional studios, large-diaphragm condensers are used, as they have a refined sound with a wide dynamic range and extended frequency response. Many excellent vocals, however, have been recorded on commonly available dynamics like the Shure SM58. Choice of mic is down to what you have available, but in a situation where you have several different models, make your choice based on which mic suits the singer's voice for a particular song. Many engineers and producers will put up several mics initially to check which one sounds best.

Having chosen a microphone, it's preferable to mount it on a stand. Most mics come supplied with a mount, and the more expensive ones will have a suspended cradle mounting to isolate the microphone from shock and vibration. It is possible to record vocals using a hand-held mic like an SM58 but, in terms of the sound being recorded, there are several reasons why this is not ideal. Firstly, there is handling noise to consider - the sound of the singer moving their grip on the microphone and moving it around will be picked up. Secondly, unless the singer is very experienced with mic technique, the mic will be held at different distances from the mouth at various times, resulting in small changes in timbre and level. And thirdly, a hand-held mic rules out the possibility of using a pop shield. Now, having pointed out the drawbacks, it must be said that there will always be some singers who feel most comfortable using a hand-held mic. In this case, recording with a hand-held is the way to go, because a relaxed and confident singer is going to turn in a better performance than one who is uptight about having to stand still and sing into a stand-mounted mic. A slightly less-than-perfect sound is a small price to pay for a great vocal performance.

Arctic rolls
The next thing to consider about a microphone is its polar pattern. Many mics, especially less expensive ones, have a fixed polar pattern, usually cardioid. Others have a switchable polar pattern, but unless you are after a certain effect, switching to cardioid is preferable. Cardioid mics are the norm for recording vocals as they accept sound from directly in front and reject much of what comes from the back and sides. This is important in the context of the room where the vocal recording is to take place, as reflections from the walls may be picked up by the microphone, adding some of the ambient sound of that room to the vocal sound. This would obviously be more pronounced if an omni pattern were selected on the mic.
Now, there may be occasions where you will want to record in a particular room to pick up the sound of the room
or the reflections from, say, a window or wall in the room, if that will suit the track you are working on. But in most cases it is probably best to record a vocal in a dead-sounding area and add any ambience at the mixing stage using a reverb unit, because once ambience is recorded with a vocal, you are stuck with it. Some studios have acoustically-treated vocal booths, but if you have to record your vocals in a normal-sized room, a dead area can be created by siting acoustic screens around the microphone. The DIY approach to this for home recording is to hang curtains, duvets, blankets or something similar around the singing area. A couple of self-assembly bedroom tidy rails from the Argos catalogue make an inexpensive and practical framework to hang material on and construct a functional vocal booth. These can be disassembled and stored away when they're not needed.

6" is ideal
A singer's distance from the mic can make a lot of difference to the sound recorded. A distance of 6" or so is perhaps a good starting point, although experienced singers will work the mic by leaning into it for some passages and moving back for louder sections. Sing too far away from the mic and more of the room ambience will be picked up; sing closer to the mic and more of the proximity effect comes into play. Proximity effect is a pronounced boost in low frequencies which results in the voice sounding bassier when singing very close to the mic, and it can be successfully exploited by an experienced vocalist. It is best to try to keep a vocalist at a consistent distance from the mic, particularly when doing multiple takes and where he/she has to leave the booth to listen to playbacks and then go back and sing the odd line. If the same distance from the mic is maintained, variations in volume and timbre between takes are minimised, and dropped-in lines will sound more natural. Once a singer is at the optimum distance from the microphone, mark the position of their feet on the floor with gaffa tape so that they can go back to the same position each time, and don't forget to mark the position of the mic stand at the same time in case it is accidentally moved.

The height of the microphone on its stand in relation to the singer is also a factor to take into consideration. Some like to sing up to a mic suspended a little higher than them, but this can strain the voice if the head, neck and shoulders are stretched up. A mic that is suspended too low is also not ideal if it causes the singer to hunch over, although this at least puts less strain on the neck and shoulders. An advisable starting position is to have the capsule level with the singer's mouth and then move it if necessary to suit the singer's most comfortable stance. Having the capsule level with the singer's mouth creates its own problems, as it is more susceptible to blasts of air, but there are methods to counter this, the most important of which is the use of a pop shield. A pop shield is generally put up a couple of inches in front of the mic and its basic function is to stop plosives - the popping sounds from blasts of air usually produced by consonants such as 'B' and 'P'. The pop shield also serves to protect microphones from spit and moisture produced by the singer. Commercially available pop shields, which usually have a gooseneck and a clamp allowing direct fixing to the mic stand, are fairly expensive.
However, a home-made substitute can easily be constructed from a pair of tights stretched over a bent wire coathanger - just remember to wash them first if they've been previously worn! If you cannot attach the pop shield directly to the mic stand, try using a second mic stand purely as support for the pop shield. One useful trick is to fix a pencil vertically down the centre of the pop shield, as this tends to dissipate the energy of blasts of air before they reach the mic. If popping problems still persist, try getting the singer to sing slightly to the side, above or below the mic. If a singer has difficulty doing this and needs to focus directly on the mic, put up another mic that is not plugged in and site it right next to the real vocal mic. Then let the singer sing into this dummy mic.

Whisper to a scream
Microphones have to be connected to a pre-amplifier, and there are two options available. Connection can either be into the mic amp in a mixing desk's input channel, or into a standalone mic pre-amp. The latter provide a higher quality signal path to the recording medium than that provided by the average mixing desk. A budget mixing desk will have identical mic amps on all of its input channels, built to a price, so a standalone pre-amp, relatively more expensive than one desk input channel, ought to have better quality components and a cleaner signal path. Also, a shorter signal path to the recorder is usually provided by a
standalone pre-amp, which can connect directly to it, whereas the signal through a mixing desk may have to pass through the input channel, group busses and patchbay.

Compression is near-essential to even out the performance when recording vocals. The human voice has a huge dynamic range (from a whisper to a scream, to use the old cliché), and a compressor will 'squash' that range a little. Don't go over the top, though; reducing the peaks by a few dB ought to be sufficient. Once compression is recorded you can't take it off, so it's best to err on the side of caution. More compression can, of course, be added as needed at the mixing stage. Several of the standalone pre-amps on the market have their own compressor built in. When recording through a desk's input channel, a compressor should be connected via the channel's insert points.

EQ can also be applied when recording vocals, perhaps to remove a bit of nasal honk from a voice, to brighten up the sound a little, or, most usefully, to filter out some of the bottom end of the spectrum. Really low-frequency sounds, such as outside traffic rumble or the sound of the singer's feet moving on the floor, can be transmitted up the stand to the microphone, and an increase in bass due to the previously mentioned proximity effect can also be a problem. To get around this, switch in a high-pass or bass roll-off filter. Most mic pre-amps and desk channels, and some mics, will have a switchable filter operating at somewhere between 75Hz and 100Hz, cutting out most of the low end below that figure. EQ should, however, be applied with caution. Adding too much top-end boost, for example, can often exaggerate the sibilance of the voice. It's best to record a vocal flat, but if you feel the need for EQ, use it sparingly. And while you may be tempted to use a noise gate or downward expander to cut out noise between phrases, our advice is not to. It's too easy to chop the end off notes and make the vocal sound unnatural. Processing of this sort should be left to the mix stage, when time can be taken to set it up accurately.

Condenser or dynamic?
Although there are other designs, the microphones most commonly used in studios today fall into one of two categories - condenser or dynamic. A microphone is simply a device which converts acoustic energy (sound waves) into electrical energy, and the dynamic and the condenser each do that in their own way. This has consequences for the sound produced, and hence the use to which each is put. A condenser mic, also known as a capacitor mic, has a thin diaphragm that is supported around its rim at a small distance from a thicker backplate. The two form the electrodes of a simple capacitor, and are oppositely charged by the application of a polarising voltage. When the diaphragm moves in response to sound waves, the spacing of the diaphragm and backplate (and hence the capacitance) varies, and this is used to generate the output voltage. Because a voltage has to be supplied to the backplate and diaphragm, a mic of this nature needs a power supply. This usually comes in the form of 48V phantom power supplied from the mixing desk or mic pre-amp. Condenser mics are more difficult to manufacture than dynamics and are therefore more expensive; they are also not as rugged and are more susceptible to changes in atmospheric conditions, so should be stored, and handled, with care. In use, a condenser is generally more sensitive than a dynamic and has a better transient response.
It also has a wider frequency response, so can pick up more top end than a dynamic, making it very useful for instruments like cymbals, acoustic guitars, and vocals. Condensers can be built with two diaphragms, and by changing the voltage of the second diaphragm in relation to the first, the mic is capable of several different polar patterns - from omni-directional through cardioid to figure-of-eight. Some condensers are designed with a valve in the circuitry; these do not need phantom power as they usually come with their own power supply. Valve mics provide a different tonality from the standard condenser, with an added warmth in the sound. Another variation on the standard condenser design is the electret mic, which uses a permanently charged electret material to charge the capsule. These mics are usually cheaper than condensers and can often be run from a battery if you do not have a phantom power source. Dynamic mics work through the electromagnetic interaction between the field of a magnet and a moving-coil conductor. A coil of wire, surrounded by magnets, is fixed to the back of the diaphragm, the motion of which results in the coil cutting through the magnetic field, inducing an electric current in the coil.
Unlike condensers, dynamic mics do not require any power supply. They are more robust, and can cope with high sound pressure levels. Because dynamics are pressure-operated, their polar response can only be either omnidirectional or cardioid, and most hand-held dynamic vocal mics are cardioids. Dynamics are also limited in their high frequency response, some having an upper limit of 16k (a good capacitor mic will go up to 20k). Mics designed for stage use will often have a bass-end roll-off built in to counteract the proximity effect, and many have a presence peak built into their frequency response somewhere up around 5k. This is designed to help vocals cut through a mix. Some very well-known rock singers record their vocals with dynamic mics for that particular punchy sound.

Microphone Polar Patterns
There are four basic options when considering a mic's polar pattern:

A cardioid (or unidirectional) mic is so named because of its heart-shaped response. It will pick up sound mostly from the front. Dynamic cardioid microphones are popular for vocals because of their off-axis exclusion and robustness, but condenser cardioids are much better for the studio vocalist.

A figure-of-eight microphone picks up sound from both front and rear of the diaphragm, but because the opposite sides are out of phase, side-on sources get cancelled out. Figure-of-eight microphones have the potential for very accurate recordings.

A circle is the polar response of an 'ideal' omni-directional microphone. In practice, the response favours the 'open' side of the capsule at higher frequencies, so off-axis sources can be dull. Omni mics are particularly resistant to wind and handling noise.

A hypercardioid microphone mixes the cardioid and figure-of-eight patterns to produce a 'thin' cardioid with an out-of-phase area at the rear. Because of this, the hypercardioid is good for reducing the effect of reflected or off-axis sounds, such as room reflections.
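All four patterns above belong to one mathematical family, and a small Python sketch (my own illustration, not from the article) makes the mixing idea explicit: the response is a weighted blend of an omni term and a figure-of-eight (cosine) term.

    import math

    def polar_response(a, theta_deg):
        """First-order pattern: a + (1 - a) * cos(theta).
        a = 1.0 gives omni, 0.5 cardioid, roughly 0.25 hypercardioid,
        and 0.0 figure-of-eight."""
        return a + (1 - a) * math.cos(math.radians(theta_deg))

    print(polar_response(0.5, 0))     # cardioid, on-axis: 1.0
    print(polar_response(0.5, 180))   # cardioid, from the rear: 0.0
    print(polar_response(0.25, 180))  # hypercardioid rear lobe: -0.5 (out of phase)
    print(polar_response(0.0, 90))    # figure-of-eight, side-on: 0.0 (cancelled)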

Vocal Mics
£2,000+: Some 1950s valve vocal microphones are still in use today. The Neumann U47 and the AKG C12 are universally regarded as classics and sell for well in excess of £3,000 on the used market. AKG now produce a reissue of the C12 in the form of the C12 VR, while Neumann's valve range includes the M149 and the soon-to-be-released M147.

Below £2,000: The Neumann U87 and AKG C414 are the two most common vocal condensers used in studios today.

Below £1,000: Anyone wanting to buy a decent condenser under £1,000 for vocal use is spoilt for choice. AKG's SolidTube incorporates a valve in the design. Beyer's 834 and Audio Technica's 4033 and 4050 are all respected, and many of the eastern European imports give great results for a reasonable price. Australian-made Rode microphones represent excellent value for money, and AKG weigh in with several inexpensive contenders: the C4000B, C3000, C2000 and C1000S.

Below £100: There are loads of inexpensive dynamic mics available, but think of a dynamic vocal mic and you invariably come up with the Shure SM58. This rugged workhorse is the industry standard hand-held stage mic, but its partner, the SM57, will also give good recorded results.

(In part II, we look at setting up a headphone mix, and getting the best out of a singer.)

Trevor Curwen, The Mix


GOOD REFERENCES
Brian Knave, Electronic Musician, Jun 1, 2001

Judging by the steady flow of letters and phone calls we get asking our advice about what gear to buy, a good number of readers are well acquainted with cognitive overload. That's the term psychologists use to describe the paralysis that can set in when we are confronted by too many options (or too much information). Freedom of choice is great but, clearly, too many options can bewilder. Case in point: the EM 2001 Personal Studio Buyer's Guide lists 40 companies presently offering reference monitors, with more than 200 models to choose from. Bewildered? If so, you've come to the right place. This article will cover the various designs, components, and properties (including terminology) of reference monitors, as well as how they work - in short, all you need to know to make informed decisions when selecting close-field reference monitors for your personal studio. (Though many of the concepts discussed here apply equally well to monitors for surround arrays, those interested specifically in monitoring for 5.1 should also see "You're Surrounded" in the October 2000 EM.)

PRE ROLL
Speakers used in recording studios are called monitors and generally fall into two categories: main monitors and compact or close-field reference monitors. Mains, as they are called, are mostly found in the control rooms of large commercial studios, often flush-mounted in a false wall (called a soffit); close-field reference monitors are freestanding and usually sit atop the console bridge or on stands directly behind the console. Most personal studios don't have the space or funds for main monitors, so this article will focus on the compact reference monitor, a relatively recent studio tool.

The first compact monitor to see widespread use in recording studios was the JBL 4311, a 3-way design introduced in the late 1960s. The 4311 was quite large, however (it had a 12-inch woofer, a 5-inch midrange speaker, and a 1.4-inch tweeter), and today would qualify more as a mid-field monitor. As engineers increasingly realized the importance of hearing how their mixes sounded on car and television speakers, smaller reference monitors gained in popularity. One of the earliest favorites (around the mid-1970s) was the Auratone cube, which had a single 5-inch speaker. Car and home-stereo speakers kept improving, of course, so engineers were always on the lookout for better close-fields. One compact model that caught on big was the Yamaha NS-10M (see Fig. 1). A bookshelf-type speaker introduced in 1978 for home use, the NS-10M soon became a familiar sight in commercial studios, and it remains popular (or at least ubiquitous) to this day. Another significant development was the introduction in 1977 of the MDM-4 near-field monitor, made by audio pioneer Ed Long's company, Calibration Standard Instruments. The MDM-4s were great
monitors, but it was the then-revolutionary concept of near-field monitoring that secured a chapter in audio history for Long. (Long also originated the concept of time alignment for speakers and trademarked the term Time Align; more on this later.) Though no one could have predicted how prophetic the term near-field monitor would prove, Long clearly understood its significance and so had it trademarked. (That is why EM uses the term close-field monitor instead.)

ENVIRONMENTAL ISSUES
Curiously, as close-field reference monitors have become increasingly accurate over time, the original rationale for using them (to get a good indication of how mixes will translate to low-cost car and home-stereo speakers) has waned. But there are other good reasons close-field monitors have become all but indispensable in music production. For one, professional mix engineers are typically hired on a project-by-project basis, which means they may end up in a different studio from one day to the next. Close-field monitors, because they are portable enough to be carted from studio to studio, make for an ideal solution and guarantee, at the minimum, some level of sonic consistency, regardless of the room.

But don't the monitors sound different in different rooms? To a degree, they do. But another advantage of close-field monitors is that they can partially mitigate the effect of the room on what you hear. As their name makes clear, they are meant to be used in the near field, typically about three feet from the engineer's ears. At that distance, assuming the monitors are well positioned and used correctly, the sound can pass to the ears largely unaffected by surface reflections (from the walls, ceiling, console, and so forth) and the various sonic ills they can wreak. For the same reason, close-field monitoring is also a good solution for the personal studio, where sonic anomalies are the norm. As engineer, consultant, and all-around acoustics wizard Bob Hodas has so well demonstrated, however, it's foolhardy to think close-field monitors entirely spare you from the effects of room acoustics. "Near-field monitors can be accurate," explains Hodas, "only if care is taken in the placement of the speakers and room issues are not ignored." (Find more information at www.bobhodas.com/pub1.html.)

DIFFERENT WORLDS
A common misconception among those new to music production is that home-stereo speakers are adequate for monitoring. That is, in fact, not the case. The problem is one of purpose: whereas manufacturers design reference monitors to reproduce signals accurately, home-stereo speakers are specifically designed to make recordings sound better. Typically, that perceived improvement is accomplished by boosting low and high frequencies. Although it may sound like an enhancement to the average listener, such hype is really a move away from accuracy. Home-stereo speakers may also be engineered to de-emphasize midrange frequencies so as to mask problems in this critical range. That makes it difficult to hear what's going on in the midrange, which can tempt mixers to overcompensate with EQ. It can also lead to fatigue, because the ear must strain to hear the mids. Yet another reason home-stereo speakers are inappropriate for monitoring is that they are meant to be listened to in the far field, where much of the sound is reflected. But as we've seen, close-field monitors are designed to be used in the near field, in order to help minimize the effects of room acoustics.
Of course, it's important not to sit too close to near fields. Rather, they should be positioned far enough back to allow the sound from the speakers to blend into an apparent point source and stereo soundstage.
As you move in closer than three feet or so, the sound from each speaker becomes separately distinguishable, which is not what you want.

ELUSIVE BULL'S-EYE
Everyone can agree that reference monitors are meant to reproduce signals accurately. But what is accuracy? For our purposes, there are three objective tests that can be performed to help quantify accuracy in reference monitors. The tests measure frequency response, transient (or impulse) response, and distortion.

Frequency response is a measure of the changes in output level that occur as a monitor is fed a full spectrum of constant-level input frequencies. The output levels can be plotted as a line on a graph (called a frequency response plot) in relation to a nominal level represented as a median line, typically marked 0 dB (see Fig. 2). The monitor is said to have a flat or linear frequency response when that line corresponds closely to the median line, that is, does not fluctuate much above or below it from one frequency to the next. When they are written out, frequency-response specifications first designate a frequency range, which is typically somewhere between 40 and 60 Hz on the low end and 18 to 22 kHz on the high end. To complete the specification, the frequency range is followed by a range specifier, which is a plus/minus figure indicating, in decibels, the range of output fluctuation. For example, the spec 50 Hz-20 kHz (±1 dB) means that frequencies produced by the monitor between 50 Hz and 20 kHz will vary no more than 1 dB up or down (louder or quieter) from the input signal. (That spec would suggest a very flat monitor, by the way!) Note that the range specifier may also be expressed as two numbers, for example +1/-2 dB, which is useful when the response varies more in one direction than the other. Primary frequency-response measurements are made on-axis, that is, with the test mic directly facing the monitor, often at a distance of one meter. Also helpful are off-axis frequency response plots (measured with the mic at a 30-degree angle to the monitor, for example), which give an indication of how accurate the response will be, or how much it might change, as you reach for controls or gear located outside of the sweet spot. (The sweet spot is the ideal position to sit in relation to the monitors; it is determined by distance, angle, and listening.)

Transient or impulse response is a measure of the speaker's ability to reproduce the fast rise of a transient, and of the time it takes for the speaker to settle (stop moving) after reproducing the transient. The first characteristic is critical to accurate reproduction of instrument dynamics and transients (such as the attack of a drum hit or a string pluck). The second is important because a speaker that is still in motion from a previous waveform will mask the following waveform and thus muddle the sound (see Fig. 3).

Distortion refers to undesirable components of a signal - which is to say, anything added to the signal that was not there in the first place. For monitors it can be divided into two categories: harmonic distortion and intermodulation (IM) distortion. Harmonic distortion is any distortion related in some way to the original input signal. It includes second- and third-harmonic distortion, total harmonic distortion (THD), and noise (the types most commonly measured; see Fig. 4), as well as higher harmonic distortions (fifth, seventh, ninth, and so on).
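As a side note, total harmonic distortion follows a standard textbook definition (this Python sketch is mine, not the article's): it is the RMS sum of the harmonic levels relative to the fundamental.

    import math

    def thd_percent(fundamental, harmonics):
        """THD = sqrt(V2^2 + V3^2 + ...) / V1, expressed as a percentage.
        All levels are linear amplitudes, not dB."""
        return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

    # Invented example levels: a fundamental of 1.0 with second and third
    # harmonics at 0.3% and 0.4% of it.
    print(thd_percent(1.0, [0.003, 0.004]))  # 0.5 (percent)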
Intermodulation distortion is a form of self-noise that is generated by the speaker system in response to being excited by a dynamic, multifrequency signal; typically, it is more audible and more annoying than harmonic distortion. Frequency response, impulse response, and distortion levels should all be taken into account to get an
idea of a monitor's accuracy. However, frequency response is often the only measure mentioned in product literature and reviews, and even it gets short shrift on occasion. (In many instances, I have seen frequency specs given with no range specifier, and of course without it the specification is meaningless.) Few manufacturers provide an impulse response graph (even assuming they have measured impulse response), and often the only distortion specification given is THD + noise. In fact, the lack of established and agreed-upon standards for monitor (and microphone) specifications, both for measuring them and for reporting them, is a long-standing industry issue. Though it is true that specs don't tell the entire story, they are useful for corroborating what our ears tell us, and as such they can help educate us so that we can listen more exactingly.

MIRROR IMAGE
Now that we've established the raison d'être of the close-field monitor, let's take a look at its anatomy. We'll start with the internal components and work our way outward to the enclosure. Understanding how monitors are put together will help you know what to look for when deciding which best suit your needs. Interestingly, the devices on either end of the recording signal chain - microphones and monitors - are very similar. Both are types of transducers, or devices that transform energy from one form into another. The difference is in the direction of energy flow: microphones convert sound waves into electrical signals, and speakers convert electrical signals into sound waves. However, the components and operating principles of monitors and mics are essentially the same. The speakers most commonly used in close-field monitors work in the same way as moving-coil dynamic microphones do, only in reverse. (Actually, there is a correlative speaker for other types of microphones as well, including ribbons and condensers. However, we will limit the discussion to the moving-coil type in this article.)

In a moving-coil dynamic microphone, a thin, circular diaphragm is attached to a fine coil of wire positioned inside a gap in a permanent magnet. Sound waves move the diaphragm back and forth, causing the attached coil to move in its north/south magnetic field, thus generating a tiny electric current within the coil of wire. In a loudspeaker, the coil of wire is known as the voice coil. As the electric current (audio signal) fluctuates in the wire, it generates an oscillating magnetic field that pushes and pulls against the magnet, causing the voice coil and attached diaphragm (in this case, the speaker cone; see Fig. 5) to vibrate. In turn, the vibrating speaker cone agitates nearby air molecules, creating the sound waves that reach our ears. (The ear, by the way, is also a transducer. It has a diaphragm - the tympanic membrane, or eardrum - that converts acoustic sound waves into tiny electrochemical impulses, which the brain then interprets as sound.)

DRIVING LESSONS
A loudspeaker's magnet, voice coil, and diaphragm collectively form an assembly called a driver. (The moving-coil driver is the most common type, but there are other kinds as well.) Close-field monitors usually contain either two or three drivers, and thus are designated 2-way or 3-way, respectively. Standard 2-way monitors contain a woofer and tweeter; standard 3-ways contain a woofer, a tweeter, and a midrange driver. The woofer, of course, reproduces lower frequencies and the tweeter, the higher frequencies.
Cones and domes are the two most common types of diaphragms used in monitor drivers. Woofers and most midrange drivers employ cone diaphragms, typically made of treated paper, polypropylene, or more exotic materials such as Kevlar. (Note that the dome-shaped piece in the center of a woofer cone is a dust cap, not a dome.) Most moving-coil tweeters use a small dome, typically measuring one inch in diameter. One advantage of a small dome is that it exhibits fast transient response and a wide dispersion pattern, both of which are critical to the reproduction of upper frequencies. Domes are routinely made of treated paper too, but may also be made from a metal such as aluminum or titanium, or sometimes from stiffened silk, which some people believe sounds less harsh than metal. When monitors employ separate drivers, as 2-way and 3-way monitors do, the design is termed discrete. In discrete designs, the drivers are usually mounted on the front face of the enclosure as close together as possible, which helps the sound blend into a coherent point source at the sweet spot. Depending on the monitors, the sound can change dramatically as you move away from the sweet spot.

IT'S ABOUT TIME
Some companies, for example Tannoy, employ an alternative driver design in some of their monitors in which the tweeter is mounted in the center of the woofer cone (see Fig. 6). Though more expensive, this coaxial design is naturally more time coherent than discrete designs because the drivers are positioned on the same axis (as well as closer together). Indeed, the coaxial driver arrangement is one of the design elements (among others) that manufacturers have used to meet Ed Long's Time Align specification, mentioned before. Before we can understand how time alignment can improve a monitor's accuracy, we must first understand the timing problems inherent in conventional monitor designs. Discrete loudspeakers cause minute delays that spread sounds out in time, resulting in lost detail and a blurred or smeared sound. Specifically, sound from the woofer is delayed more than sound from the tweeter. This problem has two main sources, one structural, the other electronic. In a discrete monitor with a flat-face enclosure, the woofer voice coil is naturally set back further than the tweeter voice coil because of the extra depth of the cone in relation to the dome. The tweeter is therefore closer to your ears, causing the high frequencies to arrive slightly ahead of the lows. The problem is compounded by the crossover, an electronic circuit that splits the incoming signal into separate frequency bands and directs each band to the appropriate driver (more on crossovers momentarily). As it happens, crossovers also tend to delay low frequencies more than highs. With his Time Align scheme, Long was the first to specify corrections for these problems, including physically lining up the drivers and adjusting driver and crossover delay parameters. When correctly implemented, Time Alignment ensures that the time relationships of the fundamentals and overtones of sounds are the same when they reach the listener as they were in the electrical signal at the input terminals of the monitor. Over the years, some manufacturers have devised their own time-alignment schemes. You may recall, for example, the now-discontinued JBL 4200 series monitors, which employed protruding woofers designed to deliver low frequencies to the listener's ears simultaneously with highs from the tweeters.

WHEN I CROSS OVER
As mentioned, the crossover's job is to divide the incoming signal into separate bands and then send each band to the appropriate driver.
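To make the band-splitting (and the time-offset problem) concrete, here is a rough Python sketch using SciPy. The 2.5 kHz crossover point, the filter order and the 25 mm voice-coil setback are assumptions chosen for illustration; real crossover design involves far more care over how the two bands sum back together.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000   # sample rate, Hz
fc = 2500    # assumed crossover frequency for a typical 2-way monitor

# 4th-order Butterworth low/high sections (real designs vary widely)
lp = butter(4, fc, btype='low', fs=fs, output='sos')
hp = butter(4, fc, btype='high', fs=fs, output='sos')

signal = np.random.randn(fs)          # one second of noise as programme material
to_woofer = sosfilt(lp, signal)
to_tweeter = sosfilt(hp, signal)

# Time alignment: if the tweeter's voice coil sits, say, 25 mm closer to the
# listener than the woofer's (assumed figure), its sound arrives early.
setback = 0.025                              # metres
delay = int(round(setback / 343.0 * fs))     # about 3 samples at 48 kHz
if delay > 0:                                # delay the tweeter feed to compensate
    to_tweeter = np.concatenate([np.zeros(delay), to_tweeter])[:len(signal)]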
In inexpensive monitors, this is typically accomplished using simple lowpass and highpass filters that split the signal coming from the power amp. This is called a passive crossover. In more sophisticated systems, an active crossover splits the line-level signal before it gets to the power amp. This requires each driver to have its own power amp, and is called biamping in a 2-way monitor, triamping in a 3-way, and so on. Typically, monitors that have active crossovers incorporate internal power amps. These are called powered monitors. The terms active and powered, though often used interchangeably, actually refer to different things: active refers to the crossover, and powered to the fact that the amplifiers are part of the package. In other words, although active monitors are almost always powered, not all powered monitors are active. For example, Event Electronics at one time offered three versions of its popular 20/20 monitors: the straight 20/20 was unpowered and had a passive crossover; the 20/20p was powered but used a passive crossover; and the 20/20bas (biamplified system) was both powered and active. In addition to giving a more exacting crossover performance, powered, active monitors offer other advantages over passive designs. Perhaps most importantly, because the amps and electronics are specifically designed to match the drivers and enclosure, powered monitors eliminate the guesswork and the potential pitfalls of matching an external amp to your monitors. (For a discussion of matching power amps to passive monitors, see the sidebar A Good Match.) This means reduced risk of blowing the drivers and virtually no risk of overtaxing the amps. In addition, the internal wiring is much shorter, which cuts down on frequency loss, noise induction, and other gremlins attributable to long cable runs. The upshot is that a powered, active system provides a more reliable reference - no matter where you take the monitors, you can be sure the only variable is room acoustics.

BOX SET
The enclosure is a critical part of any reference monitor design. Compact monitors present a particular challenge to designers because diminutive enclosures do not support low frequencies well. For many small monitors, the lowest practical frequency is around 60 Hz. However, certain techniques allow manufacturers to extend the low-frequency response of their boxes. A common solution is to vent or port the enclosure (see Fig. 6). The concept of porting is quite complex, involving not only one or two visible holes, but also other acoustic-design constructions inside the cabinet. In this design, often termed a bass reflex system, the port helps tune the enclosure to resonate at frequencies lower than the woofer's natural rolloff. That is, as the frequencies drop below the monitor's lowest practical note, the enclosure begins to resonate at yet lower frequencies, essentially providing a bass boost. Although porting can extend the low-frequency response of the monitor well below a similarly sized but completely sealed enclosure (called an infinite baffle or acoustic suspension design), some people feel that the resulting bass extension is not a trustworthy reflection of what is really going on in the low end. (One noteworthy solution here is the incorporation of a subwoofer.) Ports tend to be round, oval, or slit-shaped, and usually are located on either the front or rear panel of compact monitors. Rear ports allow for a smaller front face, and therefore a more compact monitor, but they can also lead to sonic imbalances - the main one being excessive bass - in cases where the monitor is mounted too close to a wall or corner. Front ports help avoid this problem, but require a larger front face on the enclosure.
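The "tuning" a port gives a box can be estimated with the standard Helmholtz resonator formula, f = (c / 2π) x sqrt(S / (V x L)), where S is the port area, V the box volume and L the effective port length. Here is a back-of-envelope sketch; every figure, including the end-correction factor, is an assumption, and real bass-reflex design is considerably more involved.

import math

c = 343.0    # speed of sound, m/s
V = 0.015    # internal box volume, cubic metres (15 litres, assumed)
r = 0.025    # port radius, metres (assumed)
L = 0.12     # physical port length, metres (assumed)

S = math.pi * r ** 2    # port cross-sectional area
L_eff = L + 1.7 * r     # crude end correction (one common approximation)

f_b = (c / (2 * math.pi)) * math.sqrt(S / (V * L_eff))
print(f"Approximate box tuning: {f_b:.0f} Hz")   # roughly 49 Hz here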
Another problem with front ports is that they can reduce the structural integrity of the front baffle (which is already weakened by at least two large holes, one each for the woofer and tweeter). Some ported monitors provide port plugs, which can be helpful for reducing low-frequency output in case you are forced to mount the monitor near a wall or corner. (A different solution for this problem is increasingly found in powered/active monitors: contour switches that let you adjust the monitor's low- and high-frequency output to compensate for acoustical imbalances in the listening space.) Nowadays, most manufacturers build their enclosures from medium-density fiberboard (MDF), a material that offers better consistency and lower cost than wood. Grille cloths may or may not be provided with the monitors, but these are a cosmetic enhancement at best, and traditionally are removed for monitoring. Because an enclosure's front baffle shapes the sound as it leaves the drivers, all aspects of the baffle must be taken into account by the designers. For this reason, designers often round off corners and sharp edges, and the face of the enclosure is kept as smooth and spare as possible in order to minimize interferences like diffraction (breaking up of sound waves). One critical acoustic-design feature on the front baffle is the wave guide - a shallow, contoured cup surrounding the tweeter. The structure and the shape of the wave guide both affect high-frequency dispersion, which in turn affects other sound qualities such as imaging (see Fig. 7).

PERFORMANCE ISSUES
Now that we've laid the groundwork, let's tally up what constitutes a superior monitor. Specifically, what do you hear in better monitors that you don't hear in lower-quality ones? We already know one answer: accuracy. More than anything, the purpose and goal of a reference monitor is to transduce signals accurately. Monitoring is the last step in a long journey through the various processes required to get your music to its destination. Therefore reference monitors are your ultimate feedback system and the basis of all of the decisions you make about how to shape and process a mix. As we've seen, the technical recipe for accuracy has three basic ingredients: accurate frequency response, accurate impulse response, and low distortion. Superior monitors boast a very flat frequency response, typically within 3 dB of a nominal level. In addition, the frequency response should roll off smoothly at either end of the spectrum, as well as fall off evenly as you move away from the monitor or off axis. Also critical is a monitor's impulse response. Ideally, this should be a direct analog to changes in air pressure in response to transient electrical signals; a superior monitor keeps all the time-domain qualities of a signal intact, reproducing them in exactly the same time relation as they appear at the monitor's input terminals. In addition, in a superior monitor the frequencies issuing from discrete drivers are time aligned so as to compensate for the time misalignment inherent in discrete designs, as described earlier. That way, the highs, mids, and lows reach the listener's ear simultaneously. Both impulse response and time alignment (among other things) figure prominently in two other critical sonic qualities of a reference monitor: soundstage and imaging. Soundstage refers to the imaginary stage that forms between two speakers (including width and depth), and imaging refers to how well the monitors can localize individual instruments on the soundstage. Obviously, a good soundstage and precise imaging are necessary for accurate positioning of instruments within the stereo field. Distortion levels vary considerably from system to system. Whereas home-stereo speakers typically exhibit as much as 1 percent distortion above bass frequencies, some high-quality reference monitors may deliver as little as 0.1 percent.
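Those percentages are easier to weigh up when converted to decibels below the signal, which takes one line of Python:

import math

def distortion_db(percent):
    # Distortion products expressed in dB relative to the full signal
    return 20 * math.log10(percent / 100.0)

print(distortion_db(1.0))   # -40.0 dB: the typical hi-fi figure above the bass
print(distortion_db(0.1))   # -60.0 dB: a good reference monitor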
Though a low distortion spec is always desirable, some monitors with less-than-spectacular distortion specs still excel thanks to superiority by other measures. The human ear, however, is very sensitive to distortion, especially in the midrange (distortion is often a major contributor to ear fatigue). Another helpful specification is speaker sensitivity, or efficiency, which shows the monitor's output sound pressure level (in dB SPL) at a distance of 1 meter with an input signal of 1W. All things being equal (which they rarely are), speaker sensitivity has no determining effect on sound quality. However, if you are doing an A/B comparison of two or more sets of passive monitors and running them from the same power amp through a switching box, it is important to be aware of differing sensitivities. Our ears can readily perceive even slight differences in SPL, and our brains naturally perceive louder sources as sounding better. If you fail to compensate for any sensitivity differences - that is, to ensure that each monitor is playing back at the same level - you are more prone to reach incorrect assessments of monitors while comparing them. (There's a quick numerical sketch of this at the end of this section.)

FAITHFUL TRANSLATOR
Accuracy is important because, ostensibly at least, it guarantees that what we hear from our monitors is the audio truth. Unfortunately, though, objective measures don't really guarantee accuracy. As helpful as specs may be, they are not really an indicator of how a monitor sounds; two similar monitors with near-identical specs can sound very different, for example. Therefore, as in all things audio, careful listening must be the final measure. After all, monitoring is inherently subjective. But even if monitoring weren't subjective and reliable standards for accuracy could be decided on and agreed upon, the problem of wide-ranging sonic differences among playback systems would still persist. More important than accuracy is knowing how your mixes will translate to other speakers in other environments. That's the real bottom line. And the only way to gain that certainty is from experience. As they say, practice makes perfect - and it's no different with reference monitors than with musical instruments. After all, a monitor is a musical instrument of sorts. Thus the need to spend many hours, many days, many months working with a set of monitors, practicing on them, listening to your results on countless playback systems, always fine tuning, adjusting, figuring out what the quirks are, where the bumps and holes are, and how every little thing translates, until you reach a level of familiarity that allows you to work undaunted, confident that the mix you dial in will bear a strong resemblance to what the end-user ultimately hears. Regardless of what monitors you use, until you are intimately familiar with them, mixing will remain something of a guessing game. This point was brought home to me recently as I chatted with ace mix engineer Chris Lord-Alge. With multiple platinum credits to his name, Lord-Alge certainly qualifies as an expert on the subject of monitoring, at least in the sense that he knows what it takes to turn out mixes that sound great across the board, from boom box to high-end audiophile system. And just as surely, Lord-Alge has attained success enough to acquire and use any monitor he wants. So what monitors does he use? The latest, greatest, most expensive ones available? Not at all. Rather, Lord-Alge uses the same monitors he has mixed on for most of his career: a pair of Yamaha NS-10Ms. "The key thing with any monitors," explains Lord-Alge, "is that you get used to them. That's ultimately what makes them work for you. And 25 years on NS-10s hasn't led me wrong yet."
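Here is the promised sketch of the sensitivity arithmetic. The formula - sensitivity plus 10 log10 of the power, minus 20 log10 of the distance - ignores room effects entirely, and both monitors' specs below are hypothetical:

import math

def spl_at(sensitivity_db, watts, metres):
    # SPL predicted from the 1 W / 1 m sensitivity spec (free field, no room)
    return sensitivity_db + 10 * math.log10(watts) - 20 * math.log10(metres)

# Two hypothetical passive monitors on the same amp and switching box:
a = spl_at(sensitivity_db=90, watts=20, metres=1.5)
b = spl_at(sensitivity_db=87, watts=20, metres=1.5)
print(f"Monitor A: {a:.1f} dB SPL, Monitor B: {b:.1f} dB SPL")
# The 3 dB gap is easily audible - trim levels until they match before judging.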
CAN OF WORMS
This brings us to a can of worms I'd just as soon not open - but open it we must if we're to inquire seriously into the nature of reference monitoring. Anyone who has searched for the perfect monitor has run smack into this dilemma, which is best summed up by these questions: Who, ultimately, are you mixing for? The snooty audiophile with speakers that cost more than most folks' cars? Or the masses who listen to music on cheap systems? Lord-Alge's answer is enlightening: "Ninety-five percent of people listen to music in their car or on a cheap home stereo; 5 percent may have better systems; and maybe 1 percent have a $20,000 stereo. So if it doesn't sound good on something small, what's the point? You can mix in front of these huge, beautiful, pristine, $10,000 powered monitors all you want. But no one else has those monitors, so you're more likely to end up with a translation problem." Similarly, I learned a few years ago that John Leventhal, who was one of my heroes at the time, did the bulk of his mixing on a pair of small Radio Shack speakers. (Leventhal, a New York City-based guitarist, songwriter, and engineer, made his mark by producing Shawn Colvin's acclaimed 1989 record, Steady On.) Leventhal owns both a pair of Yamaha NS-10Ms and a pair of Radio Shack Optimus 7s. But he prefers the latter.

SIDEBAR: Now What? by Scott Wilkinson
Once you have selected your monitors, it's time to place them in
2001, IndustryClick Corp., a PRIMEDIA company. All rights reserved. This article is protected by United States copyright and other intellectual property laws and may not be reproduced, rewritten, distributed, redisseminated, transmitted, displayed, published or broadcast, directly or indirectly, in any medium without the prior written permission of IndustryClick Corp.


Sequencing

Does your sequencer music come out sounding... well, a bit mechanical? Here's how to inject some funk...

Music has secrets. You won't find them in the manuals, but they can take a really dull, average John Major stream of MIDI bytes and transform it into a monstrous chillin' slab of speaker-slamming noise that everyone wants to hear again. And again. And again. One of the best-kept professional secrets is feel. There's a lot of skill involved in creating a good feel, but a feel is always based on the groove - the music's foundation - and your sequencer has some wonderful groove editing features just waiting to help you. Before heading off to mouse-click and menu land, it's worth absorbing the golden rule of feel, the one that separates the computer programmers from the professionals: feel first, think later. When you've finished editing, forget completely about everything you've just done and concentrate on the final effect. How does it make you feel? Give the music a chance, and then ask yourself honestly - do you feel like it's grabbed you by somewhere sensitive and is trying to dance around the room with you? It's far too easy to lose sight of this magic when working with a sequencer. Whatever else may be happening, everything you do should improve the feel in some way.

To demonstrate this, take a Steinberg Cubase drum edit page programmed with a rhythm part, kept simple for the sake of clarity. Take a few moments to program this rhythm yourself on your own sequencer. The pattern doesn't start on the downbeat. This is deliberate. When working on a feel it's a good idea to leave some space at the start of the pattern so you can shift some of the instruments into this space. When you hit Play and listen to this pattern, you'll notice that everything is playing exactly in time. The result is Zzzzz... This is not what you want. In a good groove most of the instruments dance around the beat rather than playing exactly on it. They breathe and bop about a bit, just like real sweaty human musicians when they're having a good time. Although the relentless computer quality of the rhythm is good at battering its way into your braincells, too much order makes your ears impatient and bored because you know exactly what's going to happen next. An exciting feel relies on surprise and creates a completely different effect, one which keeps you hanging on for the next few notes because the last few didn't do quite what they were supposed to.

It doesn't take much to start creating this effect. Make a copy of the track(s) and mute the original(s). Tweak the hi-hats slightly and see what happens. Turn off both the hi-hat display quantise and the master Snap and Quantise settings. Select all the closed hi-hats - click outside them, hold the mouse button down and drag a box around them to "rubber-band" them - and drag them six clocks to the left so they play early. Do the same to the half-open hats, but move them (say) ten clocks. Listen to the rhythm again. The change isn't spectacular, but you should find that the result feels just that little bit more interesting. Do a comparison between the tracks if you aren't convinced. You probably won't be able to put your finger on the difference exactly. Don't worry about this. See if you can feel the difference instead. If you're not used to listening to music in this way it may take a while for the ecu to drop, but keep at it till you can feel the change. Spend some time playing with the different parts of the pattern, shifting them both ahead of the beat and behind it.
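If it helps to see the edit spelled out, here is a minimal Python sketch of the same move. The pattern is held as (tick, note, velocity) tuples at an assumed 96 clocks per quarter note, with GM-style drum note numbers (36 kick, 38 snare, 42 closed hat); your sequencer's clock resolution may well differ.

# One bar of a simple pattern, as (tick, note, velocity) tuples
pattern = [(0, 36, 100), (24, 42, 100), (48, 38, 100), (72, 42, 100)]

def shift(events, note, clocks):
    # Move every hit of one drum earlier (negative) or later (positive).
    # Leave space at the start of the pattern so early shifts stay positive.
    return [(t + clocks if n == note else t, n, v) for t, n, v in events]

groovier = shift(pattern, note=42, clocks=-6)   # closed hats six clocks early
print(groovier)   # [(0, 36, 100), (18, 42, 100), (48, 38, 100), (66, 42, 100)]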
You can get some pleasantly chilly effects by pushing the snare hits just slightly ahead of the bass drums, making a very fast double hit (called a "flam") that makes both sound bigger. Again, see if you can work out what each change does to the feel. The secret is to push or pull the sounds just far enough to make the pattern breathe, but not so far that it starts to sound weird or awkward. Once you've mastered this level, have a go at something a little bit more advanced - changing the position of each hit. You can do this by dragging the individual hits a few clocks either way with the mouse. Remember that you're still trying to make small and subtle changes rather than attempting to re-arrange the whole pattern. One way to start creating a compulsive groove is to pull some of the sounds in the first half of the pattern ahead of the beat and then pull the rest not quite so far ahead. Spend a while experimenting with different variations on this and some of the other ideas on these pages, listening for how the feel changes each time. Try to get this pattern sounding as good as you can possibly make it. Another way to change the groove is by using accents - small changes in the velocity and perhaps volume of each sound. The original pattern had all the sounds programmed at the same velocity. Try changing these and see what happens. Usually these kinds of edits work best if you pump up the downbeat and then nudge up the third beat slightly less. Don't accent all the sounds on a single beat - this usually sounds amateurish. Play mostly with the bass and maybe the snare, but try some off-beat accents on the 'hats just for fun. And then when you're ready... put it all together and see if you can transform that completely inoffensive and fairly dull pattern into an outrageously catchy and infectious groove that will have you leaping for the rest of your gear in a creative frenzy and churning out the next foot-stompingly famous monster track.

And so to... quantisation, the art of tidying up the timing of live playing. Quantisation used artlessly kills 99% of all known grooves completely stone-cold dead. Used creatively it can be the breath of life that saves what may be an otherwise dull groove. At its very best it takes a heavenly rhythm and makes it much much better. Some basics: straight 100% plain vanilla quantisation should be treated with extreme suspicion. Remember how boring that drum pattern was before you started getting it to groove? 100% quantise will undo all that careful work quicker than you can say "edit buffer," because it takes all your wonderful random-ish humanness and slams it right back exactly on to the mindlessly dull metronome beat. There are ways around this. If your sequencer only has a 100% quantise option with no frills, start saving your pennies for a better sequencer. For now though, if you're serious about your music you'll have to get used to the idea of putting real feel back in by hand, event by event, after you've quantised, or even improving your keyboard playing skills until you can consistently produce results that sound professional. Better sequencers usually have quantise options called something like "strength" and "capture." Strength sets how effective the quantise is: 100% pulls the notes dead on the beat; anything less pulls the notes less and less strongly towards it. To keep the feel of your playing, leave it set at about 10% to start with and quantise repeatedly till you get something that's in time but still has some live-sounding freshness. The "capture range" (or "window") option decides how far out of time a note has to be before it's changed. At a setting of 100% all notes are quantised. At 75% only those notes which are way off the beat are moved, while the ones which are more-or-less in time are left in peace. This means that if you use this setting and have made a really serious timing error the quantise will correct it, but it will leave just about everything else blissfully unchanged. These features can make your quantise operations sound more human, but they are still trying to strait-jacket your music into a metronomic feel. On the better sequencers you can quantise straight to a groove. (This has nothing to do with the "humanise" feature that some sequencers have, which is a special sort of random anti-quantise that takes your music and mucks around with the timing in a rather mindlessly useless way.) Instead, in the same way that you pushed some beats away from the "ideal" time when you were changing a feel by hand, you can set up a timing template for the quantisation that does exactly the same - only much more quickly. This simple swing pattern repeats over only two beats, but it is possible to program a template that subtly changes over one or more bars - in the same way as the rhythm pattern grooved by pushing the first half of the bar and then relaxing the second half slightly. You can even combine different techniques to create a swing that breathes, changing subtly over the bar so that each beat varies slightly. Take a lot of time to experiment with this feature - it's well worth it. A final tip: to keep a feel tight, use exactly the same feel on the bass line as on the bass drum part. The more percussive a sound is, the more important it becomes for it to play in time with everything around it.
Ideally all the bass parts should play at exactly the same times to keep the music from sounding messy.
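To pin down how strength and capture interact, here is a hypothetical quantise function in Python, working on absolute tick positions at an assumed 96 clocks per quarter note. Sequencers define the capture window in different ways; this sketch follows the behaviour described above, correcting only the seriously out-of-time notes.

def quantise(ticks, grid=24, strength=0.1, window=8):
    # grid: spacing of the quantise grid in clocks (24 = 16ths at 96 ppqn)
    # strength: 1.0 snaps notes dead on the beat; 0.1 nudges them gently
    # window: notes closer to the grid than this are left in peace
    out = []
    for t in ticks:
        nearest = round(t / grid) * grid
        offset = nearest - t
        if abs(offset) >= window:        # only correct serious timing errors
            t = t + offset * strength
        out.append(int(round(t)))
    return out

print(quantise([0, 27, 36, 71, 110]))   # run it repeatedly for a gradual tighten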


Stereo Monitoring

Our guide to buying, setting up and using monitors...

With one speaker on top of the wardrobe and the other behind the sofa, it's no surprise your mixes sound odd. Don't panic, let pro engineer Mick Williams guide you through buying, setting up and using monitors... Some people say the only valid excuse for a bad mix is bad monitoring. While the level of expertise of the person mixing is an obvious contributing factor, even experienced engineers would be hard-pressed to do their optimum work when faced with a monitoring system that doesn't accurately reproduce the frequency spectrum. In a nutshell, if you can't hear it properly you can't mix it properly, so accurate and effective monitoring is essential for any studio. But the complexities don't stop there. Having decided to buy some studio monitors, which do you choose? The point of having good quality monitors is so you can hear your music accurately enough to mix it so it will sound good on whatever system it's played back on. Since music can be played back over various different speaker systems, from large club sound systems to domestic hi-fi equipment or pocket-sized transistor radios, what you need are monitors that can accurately represent these myriad playback systems. However, there is no such thing as a standardised monitor, as all monitors sound different. Even two sets of the same type of monitor can sound different from each other in different rooms or when being driven by different amplifiers, so it's a case of buying a decent set of monitors and getting to know and trust them.

The pros of a pro
Professional studios usually tackle mixing by having several sets of monitors to switch between. There will be large monitors, usually soffit mounted (that's fixed in the wall to you and me) which represent the full sound spectrum including the bass end. And there will be smaller nearfield monitors placed closer to the mixing position which, due to the limitations of cabinet size, have a more limited bass response. Nearfield monitors simulate playback conditions comparable with your home listening environment, reproducing the quality of sound played back on, say, a standard hi-fi system. As few of us have the budget or space for huge monitors, it's nearfields that must be the speakers of choice. So, the first question must be, if nearfields are meant to sound like domestic hi-fi speakers, why not use your existing hi-fi speakers and save some money? The truth is, hi-fi speakers are often deliberately designed with 'colouration' that flatters the music, rather than reproducing it, warts 'n' all. Nearfield monitors reproduce the entire frequency range as accurately as possible with a minimum of distortion and colouration, so they're much truer to the real sound.

If you mix music on hi-fi speakers tuned to make the bass sound louder, you might not add enough bass to your mix, so it'll sound lightweight and lacking in bass when played on other systems. Studio monitors are also built more robustly to take higher sound levels; useful when you want to solo a particularly raucous sound at high volume.

Passive and active
When looking for monitors, you have a choice between passive and active systems. Passive speakers need a separate power amplifier whereas active speakers have the amp (or amps in the case of bi-amped systems where the tweeter and bass drivers have separate amps) built into the speaker enclosure. Active speakers mean you can't choose your own power amp, but their built-in amps are specifically designed to work with their speakers, creating an efficient, matched system. Most nearfield monitors have little bass reproduction below, say, 80Hz, so you won't hear the real low end. Still, if you need to hear these frequencies, the bass response can be extended with the addition of a sub-bass unit, which can sit out of the way under your mixing desk.

Monitor placement
When it comes to setting the position of your monitors, following a few basic rules will result in an accurate stereo image and reproduction of the frequency spectrum from your mixing position. Firstly, both speakers should be at the same height, and they should preferably be placed on a level with your head when you're at your favoured mixing position, with the tweeters around ear height. It's not always physically possible to place speakers at a height level with your head, so it's quite acceptable to mount them higher up, but in this case tilt them down so they're pointing at your head. Secondly, the speakers should be angled slightly towards your listening position so the sound focuses towards your head. The recommended textbook starting position is usually to have the speakers positioned to subtend an angle of 60 degrees to the listener. Basically, you sit at the apex of an equilateral triangle formed by yourself and the speakers; this is the 'sweet spot' where you'll find the most accurate representation of the sound (see diagram above). It's usually better to mount monitors vertically so the sound from the tweeter and the bass driver arrives at the ear at the same time, although some monitors, such as the Yamaha NS-10Ms, are designed to be placed horizontally. Also, the distance between the speakers shouldn't be more than about two metres or the central stereo image may suffer. You could also run into problems if the distance between the two speakers is greater than the distance between the speakers and the listener.
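The equilateral-triangle rule is easy to turn into numbers. A quick sketch (the 1.5 m listening distance is just an assumed example):

import math

listening_distance = 1.5   # metres from your head to each speaker (assumed)

# 60 degrees subtended means each speaker sits 30 degrees off centre,
# and the spacing between the speakers equals the listening distance.
half_angle = math.radians(30)
x = listening_distance * math.sin(half_angle)   # offset left/right of centre
y = listening_distance * math.cos(half_angle)   # distance in front of you

print(f"Speakers {2 * x:.2f} m apart, {y:.2f} m in front of the listener")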

Position in the room
Unless you monitor solely on headphones, it's a fact of life that the room you're in will affect the sound you hear. The size and shape of the room, together with the materials on the walls, ceiling and floor, can all exert an influence on the sound, as can any objects in the room. Sound from the speakers will be reflected from and absorbed by the various surfaces and objects, which can result in distinct echoes, reverb and certain frequencies being cancelled or reinforced. All these things, if they present a problem, can be tackled by acoustic treatment such as bass traps or heavy fabric draped on the walls, but any room influences can also be minimised by using nearfield speakers positioned correctly. Nearfield speakers tend to reduce any room effects, as they are closer to the listener, so the direct sound from the speakers dominates rather than any reflected sound. Whatever speakers you use though, it's always useful to minimise the effects of reflected sound as much as you can. Symmetrical positioning of the speakers in relation to the room is important. If the distance between the speakers and their adjacent walls is not identical on both left and right then any reflections from the walls will be different and may disrupt the stereo image. By the same token, any nearby racks of gear could cause reflections, so, if possible, try to arrange the two monitors on both sides of your mixing position rather than on just one. Reflections can also come from the surface of your mixing desk, but placing your speakers on stands behind the desk rather than sitting them over the meter bridge can minimise this problem.

Monitoring tips
Now your monitors are nicely set up, here are a few practical tips to help your mixing. The first one is don't monitor too loud for extended periods. Protracted listening at high volume can not only cause permanent ear damage, it can also wear out your concentration more quickly and dull your perception of top-end frequencies. There is always a temptation to turn things up because, let's face it, music usually sounds more exciting that way, but you soon become immune to the constant high level. It's far better to monitor at a reasonably low level and just turn it up occasionally for a quick high volume check. Monitoring at different volume levels is good practice anyway, as turning the level right down allows you to hear if things are jumping out of the mix. It's also a good way to check if, for example, the vocal or snare is too loud. Also - and this may seem a bit strange - listening to the track while standing outside the room gives you a different perspective that may prove useful. Try it and see. As there is no such thing as a standard monitor speaker, each speaker design provides its own version of the truth, so it stands to reason that, to get the best results, you need to know your own speakers inside out and to trust what they're telling you. The easiest way to get familiar with them is to play your favourite CDs through your monitors, both in isolation and while mixing your own tracks, and compare the sound. Presumably you'll have some music in your collection mixed in a professional studio on a top-class monitoring system, so comparing this to your mixes in progress, checking not only the overall sound but also specific areas, will do no harm.
I'm not talking about making slavish copies here, but it will help you check things like whether there is enough top end, whether the bass is too boomy, whether the midrange sounds too harsh, whether you've added enough reverb, whether the vocal sits well with the music and whether the drums are too loud. If you have access to several sets of speakers, so much the better. Switch between them from time to time to see how the music sounds on each set, and occasionally check how things are sounding on headphones. If you have just one set of studio monitors you can always run off mixes in progress every so often to play back on a ghetto blaster, domestic hi-fi system, car stereo or personal stereo. If you can get your mix to sound good on really crap speakers as well as decent ones, then you must be doing something right.

Mick Williams

...


Synth Programming (pt V)

Our final look at effects, arpeggiators and more...

OVER THE LAST four months, if you've been following this series and trying out some of the ideas on your own kit, you should have some idea how to make more interesting sounds than average. You should by now be able to make sounds which are completely average on their own, but still fit perfectly into a track. This month we're going to end by looking at effects, arpeggiators, and some of the more extreme synthesis options you can use if you want to try some more advanced ideas. So let's start with those effects. Effects can be as much a part of synth programming as the oscillators and filters, especially now that many effects options are just as sophisticated as those you'll find in a separate effects box. The usual suspects include:

Delay: usually just echo-o-o-o. You can set the time, the mix (full effect 'wet' or no effect 'dry') and the amount of feedback, which sets the number of repeats. Often useful on basslines to add some extra in-between-notes interest - use a time that's in sync with the tempo - and on pads to thicken them out.

Flange: jet-plane style whooshy sound. Have this sweeping away on a sequence line, or on a pad. The more treble there is in the original sound, the more obvious the effect is. Settings are modulation amount and rate (controlled by an internal LFO), and depth. And resonance - also known as feedback. Turn this up for a metallic effect.

Chorus: like flange, only thicker and runnier. Mostly used on pads. Uses similar controls to flange.

Phasing: like flanging, only more analogue sounding. Almost always great on basslines.

Distortion: there are basically three sorts. Amp simulators pretend to sound like guitar amps. Overdrive and 'pure' distortion just grunge the sound up. Digital distortion eliminates the detail in the sound by throwing most of the (digital) bits away and sounds really nasty; it's great for crunching up samples.

EQ: tone controls or, in other words, bass and treble. Sometimes you get a mid as well.

Enhancer: makes the sound more brash and bright.

Reverb: creates an artificial space around the sound - from subway to concert hall to aircraft hangar.

Compression: fattens up the sound.

What a combination
Some synths have combined effects as well, which give you separate effects on the left and right output channels, or two effects in series. Some also have individual effects for each voice in a multitimbral setup, and then one or two 'global' effects that add that extra something to the whole mix. For example, the Access Virus has an effects chain that includes delay, distortion, a phaser, and chorus on the main mix. On each individual voice you can have a phaser, distortion, bass boost, chorus/flanger and a ring modulator. So how do you put all these together? And are they icing or are they cake? That depends on the kind of patches you're using. It's tempting to throw everything into a patch to try and make it memorable, but that's usually a bad idea. The key thing to remember here is - as mentioned last month - don't listen to single sounds, but to how they sit together in a mix. As a rule of thumb, chorus and reverb tend to push things back a little, so they're great for adding atmosphere.
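Of that list, delay is the easiest to express in a few lines of code. Here is a bare-bones Python sketch of a single feedback delay line, with the time, feedback and wet/dry mix controls described above - illustrative rather than studio-grade:

import numpy as np

def feedback_delay(x, fs, time_s=0.5, feedback=0.4, mix=0.3):
    # time_s sets the echo spacing, feedback the number of audible repeats,
    # mix the wet/dry balance (0.0 = dry only, 1.0 = effect only)
    d = max(1, int(time_s * fs))
    line = np.zeros(len(x))   # what gets written into the delay line
    out = np.zeros(len(x))
    for n in range(len(x)):
        echoed = line[n - d] if n >= d else 0.0
        line[n] = x[n] + feedback * echoed
        out[n] = (1.0 - mix) * x[n] + mix * echoed
    return out

# In-tempo delay, as suggested above: at 120bpm a quarter note is 0.5 seconds
fs = 44100
click = np.zeros(fs * 2)
click[0] = 1.0
echoes = feedback_delay(click, fs, time_s=0.5)   # repeats at 0.5 s intervals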


Distortion tends to push things forwards so they stand out more. A second rule of thumb is to think of effects as being an essential part of the patch, rather than something stuck on afterwards. Try dialling in the effects while you're programming, rather than as an afterthought. Otherwise, a complete guide to everything that's possible would take an entire article, but for some starters take a look at the Effects specs box on p102. And then try them.

Into the rhythm
No synthesis series would be complete without mentioning arpeggiators. They're excellent tools, very easy to use and program, and a lot of synths have them. Some kinds of dance music - Goa and other flavours of trance, ambient and some older rave styles - would be impossible without them. So here's the beginner's guide. The basic, bog standard plain vanilla arpeggiator works by playing a sequence based on notes you're holding on the keyboard. (Some synth modules have arpeggiators too - they work instead on MIDI information you send to them.) Slam down any chord pattern, and the arpeggiator will play it - up, down, up and down, at random, over a single octave, or over two or three or four. Typically there's a 'hold' option, which means the arpeggiator keeps playing even when you let go. And MIDI clock sync can sometimes be divided down, so the arpeggiator plays half, quarter, eighth, 16th or 32nd notes. More sophisticated arpeggiators let you use 3/2, 3/8, 3/16 subdivisions for triplets, for more interesting rhythms. Most people tweak and twiddle with arpeggiators for a while, and then get bored with them. They're not that easy to play by hand. If you play lots of chords with three notes, and then play one with four notes, the rhythm changes awkwardly. The same thing happens if you hit an extra note by accident. The way round this is to sequence the notes, instead of playing them by hand. 90% of this year's (and every year's) Ibiza classics use this technique. You can re-record the MIDI output of the arpeggiator if you want to edit it. In fact, this is an excellent thing to do anyway, because arpeggiators can be a fantastic source of ideas for bass and sequence lines. Some synths have more interesting arpeggiation options. The Quasimidi Sirius has a range of preprogrammed patterns (which Quasimidi calls 'motifs') that either repeat a chord, or split it up in more interesting ways that go beyond playing the component notes. You can also program your own motifs. Another variation is to create a step-time rhythm pattern that only plays on certain beats. If you combine this with the riffing abilities of the typical arpeggiator, you can create hypnotic sequences that never repeat. But things become really colourful when you combine all of these to create a multitimbral sequence. Synths like the Waldorf Microwave and the various larger Novation machines let you do this. On these you can have a different type of arpeggiation happening on lots of different patches at once. You can combine drum patches with synth and bass sounds to create a mega-patch that plays an impressive mini-sequence. All you have to do is hit a single note on the keyboard to start it off.

Confused? Here's an example. Let's start with a bass drum patch on channel 1. Set this up so the arpeggiator plays on quarter notes. Then hi-hats on channel 2 playing 16ths. This gives an excellent foundation for the rest of the multi-patch. What you do next is up to you. A good option is to create a rhythm motif, and use it with a bassline.
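For the curious, the core of a bog-standard arpeggiator fits in a few lines. A rough Python sketch (the note numbers are MIDI; clocking one step per 16th out to a synth is left to your sequencer):

def arpeggiate(held, octaves=2, mode="up"):
    # held: the MIDI note numbers currently held down on the keyboard
    notes = sorted(held)
    seq = [n + 12 * o for o in range(octaves) for n in notes]
    if mode == "updown":
        seq = seq + seq[-2:0:-1]   # bounce back down without repeating the ends
    return seq

# An A minor chord, two octaves, up and down:
print(arpeggiate([57, 60, 64], octaves=2, mode="updown"))
# [57, 60, 64, 69, 72, 76, 72, 69, 64, 60]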
Another excellent option is to create a patch with a very slow LFO sweep on the filter, and some resonance to give it a bit of bite. Better still, do both. Using trial and error you can soon build up a complex rhythmic pattern that you can play with one finger. If you have other synths with arpeggiators you can use those as well, to build up an even more complex effect. It doesn't take long for a complete track to more or less write itself.

The hard stuff
Can you journey further into the world of weird sound possibilities and create limpid landscapes of shimmeringly seductive gorgeousness, drums with more aggression than an SAS unit facing down an alien invasion force, and basslines so chunky you could quarry out some rooms and live in them? In a literal sense, probably not. But if you want to go further than the average sound designer, here are some things to do, techniques to try, and tools to buy.

Vocoders: vocoders work by splitting an input ('analysis') sound into frequency bands, working out the volume in each band, and then applying those volumes to an equivalent set of frequency bands in a different ('synthesis') sound. (There's a bare-bones sketch of the idea at the end of this section.) When you do this, magic things happen. If the analysis sound is a voice, and the synthesis sound is a string pad, you get that talking pad effect that's been done to death. Try talking over basslines and drum parts too. Or don't talk at all; you can more or less vocode anything with anything (even itself) and still get an interesting result. You can also get interesting noises by retuning the different filters so they don't work on the same range of frequencies.

Resynthesis: this is where you pull a sound apart, a bit like sending light through a prism, hack around with the 'colours', and then put it back together again. With resynthesis you can morph sounds, exaggerate the way they change, tune unpitched sounds so they ring like a bell, blend sounds (so you can make talking drum loops - only with much higher quality than you'll get from a vocoder), create totally bizarre noises by randomising the 'colours', or vintage retro effects by throwing away a big chunk of the sound, like a kind of mega EQ.

Additive synthesis: this is a bit like resynthesis. When you do resynthesis it turns out that the colours you get can be reduced down to sets of sine waves, each with their own frequency and level. In other words, if you had enough sine wave oscillators, each with an independent pitch and level control, you could make any sound you wanted - real or imaginary. In practice, you need hundreds or thousands of oscillators to do this well. That's for each note. And if you're trying to synthesize real instruments, you also need to remember that the sounds change according to pitch and velocity. So high-quality synthesis of real instruments isn't possible yet. But you can still get some interesting and unusual effects like this.

Granular synthesis: if you've ever timestretched a sample you've already used granular synthesis. Granular synthesis works by chopping a sound up into tiny segments. You can then control the envelope, volume and the pitch of each segment. You can 'granulise' samples or start from scratch with little oscillator-produced blips. It sounds a simple technique, but it has a lot of applications. By scrunching up or expanding the gaps between granules you get timestretch. By changing their pitch you get pitchshift. By randomising the pitches you get a kind of 'sound cloud' effect. By randomising the times you can get monster reverb-like sounds. By using sine waves and precisely controlling the envelopes you can create interesting vocal-like effects. (This is sometimes known as formant synthesis.)

Physical modelling: instead of using waveforms, physical modelling actually simulates in software how all the mechanical or electronic bits of an instrument work together. So to make a guitar sound you create a complicated mathematical model of how the strings vibrate (this bit is called the 'exciter'), and how the soundboard amplifies the sound (this is known as the 'resonator'), and so on. It's very heady stuff, which is why most physical-modelling synths just give you a set of presets to play with, that you can tweak in a minimal kind of way. And in spite of the complications, it's not that hard to get your head round: 'exciters' work like oscillators, 'resonators' work like filters. The rest is more or less trial and error.

How to get hands-on
Apart from vocoding, which is available in many mid- to high-end analogue-style synths, these and other effects are mostly stuck in the realm of the experimental. This is bad news for anyone who wants instant gratification. But good news for people who want to make sounds that are literally like nothing ever heard before.
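As promised, here is a bare-bones channel vocoder sketch in Python with SciPy, following the recipe described above: split the analysis sound into bands, follow the level in each, and impose those levels on the same bands of the synthesis sound. Five bands is far too few for intelligible speech - real units use many more - but it shows the structure:

import numpy as np
from scipy.signal import butter, sosfilt

def vocode(analysis, synthesis, fs, edges=(100, 300, 700, 1500, 3000, 6000)):
    n = min(len(analysis), len(synthesis))      # the signals must line up in time
    analysis, synthesis = analysis[:n], synthesis[:n]
    env_lp = butter(2, 50, btype='low', fs=fs, output='sos')   # envelope smoother
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        envelope = sosfilt(env_lp, np.abs(sosfilt(band, analysis)))
        out += sosfilt(band, synthesis) * envelope   # band level from the voice,
    return out                                       # band content from the pad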
As for getting your hands dirty with all this synthesis business, you really need to get your hands on some serious synths, and for this the Internet is a blessing. Some of these toys are free, some are cheap, a couple are seriously expensive - but here's a rough list:

Csound: big, bad, totally obscure, totally free, multi-platform, and really only for über-nerds. This is (literally) the granddaddy of all other soft synths. It does more than everything, but it's slow, clumsy, almost perversely difficult to use, and maintained by a rather cliquey group of academics who will laugh at you if you don't have a PhD. Most soft synths have oscillators, but Csound has simple oscillators, sample-playback oscillators, high-quality oscillators, click oscillators, additive synthesis and resynthesis oscillators, wavetable oscillators and an option which creates oscillator-like sounds by calculating the orbit of a planet in a binary star system. To get the best from it you need to wrap it up in some smarter high-level software. Linux weenies or Mac users will want to use something called Cecilia, which sorts out some of Csound's shortcomings. (Unfortunately there's no Windows version of Cecilia.) More information can be found at www.notam.uio.no/internt/csound/Csound/TITLE.html

Composer's Desktop Project (CDP): UK-based resynthesis package. It's command-line based (yes, that's right, you have to type in filenames and commands by hand) but it will do most resynthesis-style things. And there are attempts to create a more up-to-date Windows-y interface. While it's a bit clunky by modern standards, it's also not too expensive at just over £100. And it can be fun to dabble with.

MetaSynth: Mac-only resynthesis toy, with the unique ability to turn pictures into sounds (and vice versa). It's a fantastic product, but there's some confusion about whether it's still available. Try www.uisoftware.com for details.

Kyma system: forget expensive analogue modulars, this is currently the ultimate commercial synth system. At any price. It's a soft synth that uses its own hardware in the form of a rack full of plug-in cards, a bit like the Creamware Pulsar, only bigger and better. You can get started for around £2,000, while a fully expanded model costs around £7,000. Kyma is the thinking person's Csound. It does most things that Csound does, only it's about a million times easier, quicker and more fun to use. For details take a look at: www.symbolicsound.com/kyma.html

Going hard
Meanwhile, if you don't fancy donning the lab coat and specs, and going down the intricate software route, why not go hard and get yourself a real synth!

Kawai K5000 series: the only hardware synth to do proper additive synthesis, the K5000 rack synth is now available second-hand for around £400 or less (check our Marketplace ads starting on p128). It's capable of some unique far-side noises and well worth watching out for, as they do come up from time to time.

Yamaha VL-series synths: this is the acceptable face of physical-modelling synthesis, starting from the hugely overpriced and commercially unsuccessful VL1, via the VL1M and VL7, and eventually ending up with the VL70 and even some VL-ready options in the Yamaha budget soft synth range. You might find a lot of the sounds suffer from a kind of diminishing-returns effect: the closer you get to the sound of a real instrument, the more obvious the shortcomings and differences. But if you can put that niggle aside, there's plenty of physical 'phun' (sorry!) to be had here.

Korg Z1: another physical-modelling synth, which achieved 90% and a Platinum award in FM61. Surprisingly it's not as popular as the Trinity before it (or the Triton since) but it's still worth looking out for and trying out.

AAS Tassman: OK, so it's not a hardware synth, but it's still physical modelling even if it is software. It also has some analogue-style features. We'll actually be reviewing this relatively new synth next month, but in the meantime, find out more from www.applied-acoustics.com.

Richard Wentk
Future Music 09/00


Techno Tips: Smack My Beat Up

Time for some subterranean beat smacking with Matt Thomas

Although you might not guess it from the current music scene, techno has been one of the most influential shaping forces in today's music. Techno was to the 80s what drum n' bass is to the 90s: a breeding ground of completely new sounds and ideas. Many of the sounds and rhythms that have filtered into house and trance originated in techno, and few artists in any genre are held in the same reverential awe as the original Detroit techno innovators: Juan Atkins, Derrick May, Kevin Saunderson and Jeff Mills. Their work is still seen as the techno blueprint; an ironic situation, as their own musical agendas were always about innovation. It seems strange that such an integral part of the dance scene never made the transition to the mainstream in the way that house did, but the explanation can be found in the early 90s. As techno took the UK and Europe by storm it became a victim of its own success, and the vile Toytown sound of techno-pop hit the charts. Anyone who remembers the aural kick-in-the-goolies that was 2 Unlimited (Techno-techno-techno!!!!) and The Mad Stuntman will well appreciate why techno went scuttling back to the underground, clutching its nethers.

Going underground
Things got much healthier in the late 90s, and techno is as much about the European underground sound as it is about Detroit, with musicians like Adam Beyer, Cari Lekebusch, Laurent Garnier, Luke Slater and the multi-aliased Paul Mac all helping to re-define the genre. At the same time, Detroit is still home to many of the most respected artists, such as Octave One, Carl Craig and the electro-influenced Aux 88 and Underground Resistance. Despite over ten years of musical evolution, techno (more than most styles of music) can honestly be said to have remained true to its roots.

Top Tip 1
The use of drum loops is also fairly common. Luke Slater has been using breakbeats in a way that would make many a big-beat act drool with jealousy, while German bright-hope Thomas Schumacher uses all sorts of loops, from hip-hop to old Michael Jackson tracks. If you want to use loops yourself, try to take them in new directions - maybe try some drum'n'bass-style edits, weird filtering or trip-hop-style compression. Remember that techno isn't only about squeezing the last drop of life from an 808, but about individualism of sound.

Top Tip 2
My second tip can be summed up in three words: distort your desk. I don't mean that you should bury your track in fuzz (unless you're on a Cari Lekebusch tip of course, in which case feel free), but even the cleanest techno tunes tend to have just a hint of grit in there somewhere. Distorting the 909 kick is so common that most people are familiar with the sound, but all sorts of drum samples benefit from the extra edge that comes from overloading your mixer's input. Analogue percussion sounds like rimshot and cowbell are some of my favourites, along with ride cymbals and snare drums. Ignore the flashing red lights, and just keep turning the gain up until you like what you hear. You can always compress the sound afterwards if it's too spiky.

What is it? Techno
Where did it come from? Detroit (America's version of Manchester: violent, full of gangs and the birthplace of some of the best music of the last 20 years). The origin of the style is best defined as a fusion of George Clinton-style funk mentality with the electro synth sounds of European bands like Kraftwerk, New Order and obscure Belgian outfit Telex.


When did it start? Like house music, techno arrived gradually during the mid 80s. The earliest tracks appeared during 1985/86.
Top tracks: Dave Clarke's Red 2; Laurent Garnier's Crispy Bacon; Hood & Mills' Tranquilizer EP; Mr Fingers' Can U Feel It?; Plastikman's Spastik; Rhythim Is Rhythim's Strings Of Life; Thomas Schumacher's When I Rock.
Labels: Bush, Peacefrog, Soma, Novamute, F-Comm.
bpm range: Some of the minimal stuff clocks in at the low 90s, with the crossover into the realms of gabba and hardcore occurring at about 150bpm. (Gabba is an offshoot of techno, mainly invented for Belgians on speed.)

Matt Thomas 03/99


Ultimate MIDI guide

Shedding more light on MIDI, including a MIDI file analysing program...

Your sequencer, as well as being able to store songs using its own (proprietary) disk file arrangements, can doubtless also export the same information (namely all your notes, patch numbers, controllers, pitchbend information, and so on) in so-called MIDI file form. What's the difference? Quite simply, standardisation. Most sequencers tend, by default, to store sequence data in a fashion best suited to the way the sequencer has been designed, so these file structures tend to vary from manufacturer to manufacturer. If you create a song arrangement using some obscure Whammo MkII sequencer package and store it on disk as a Whammo MkII sequencer file, there ain't no way that Joe Bloggs is going to be able to read it using his copy of Cubase. MIDI files on the other hand, or SMFs (Standard MIDI Files) as they are sometimes called, provide a standardised way of storing sequenced music. So, if you store a song in this form and give that disk to someone else, they'll be able to load and play the arrangement, even if they have a totally different sequencer! Obviously, for some purposes, like putting MIDI song material on the Web, or creating arrangements to be sold commercially, such standardisation is important.

Now, for some musicians, buying ready-made MIDI file song arrangements is seen as the ultimate 'cheat', but let's be realistic: there are obvious benefits. First, you do not have to be able to physically play the songs. Second, the arrangements will almost certainly be as good, if not better, than you could create yourself. Third, you don't have to spend time creating them. Nowadays MIDI files can be purchased and used by anyone, regardless of whether they're a musician or not, and there are plenty of good distributors around with huge, reliable catalogues. So, whether you like the classics, country, pop/rock, or more obscure stuff, the chances are that someone, somewhere, will have produced arrangements to suit. In short, you just buy the files, load them into a sequencer or into a utility such as Microsoft Windows' Media Player, hit the start button and off you go! MIDI files, as anyone involved with MIDI will know, have in fact been about almost as long as MIDI itself. In recent years, however, the MIDI file scene has suddenly come of age, and to appreciate why we need to take a few steps backward and talk firstly about what MIDI was like a few years ago, and then about something called General MIDI...

The almighty MIDI foo bird
The benefit of MIDI has always been the communications standardisation which allows instruments from different manufacturers to be linked together easily. MIDI, however, has not been without problems, and one is the relationship between the sounds you hear on one particular synth or sound module (and the voice-memory-slots related to them) and the equivalent characteristics on another manufacturer's unit. MIDI allows synthesizer and sound card voice settings to be changed using special messages called Program Changes, but originally manufacturers were left to their own devices as far as the Program Change/voice (instrument sound) correspondences were concerned. The result was an annoying situation whereby a particular Program Change message might select a flute voice on one synthesizer, yet the same message sent to another synth might select a grand piano.
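General MIDI fixed those correspondences, which is what makes a snippet like the following predictable. This sketch assumes the third-party mido library (plus a working MIDI backend and a connected output); under GM, program 73 (counting from zero) is always Flute and program 0 is always Acoustic Grand Piano, whatever synth is on the other end:

import mido

port = mido.open_output()   # first available MIDI output (backend assumed)
port.send(mido.Message('program_change', channel=0, program=73))  # GM Flute
port.send(mido.Message('note_on', channel=0, note=60, velocity=96))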
The lack of drum-voice/drum-note standardisation made life equally awkward on the percussion front and, as any musician will tell you, re-editing sequence data so that it conforms to alternative voice/channel/drum-note arrangements can be a difficult job. For the lone musician working with his or her own sequence data, these types of snags are not the end of the world, but the difficulties do increase when sequences are written especially for use by other people. It is simply not practical to keep changing the equipment voice-configurations each time you wish to use someone else's sequence data. These particular areas of difficulty were identified some time ago, with many companies realising that the lack of standardisation was holding back the formation of a large 'pre-recorded sequenced music' market. Apart from the obvious things like 'music minus one' type songs (karaoke-like backing sequences where you just add the melody), and MIDI versions of instrumental music, there is of course computer game music, CD+MIDI media formats, music educational and business presentation software, and integrated audio-visual (AV) equipment. These types of applications mean big bucks, and so it was hardly surprising that much effort went into finding a solution. The outcome was a standard called General MIDI (GM), and it is this which has put most people in a position where they can 'load and play' MIDI file arrangements just as easily as, say, playing a tape or CD.

Formats for all

It's incredibly easy to get confused between file formats and disk formats, so here's the bottom line: MIDI files are simply computer data files which are designed specifically to hold MIDI information, and there are two different internal arrangements in common use. The first regards a 'song' as one long MIDI sequence and is called a 'type 0' or 'format 0' MIDI file. This is primarily designed for playback-only applications. The second arrangement, called a 'type 1' or 'format 1' MIDI file, effectively stores the MIDI data as a series of separate tracks, and this means that you can have the drums, bass, keyboard sequences, etc, all kept separately within the file. Obviously, keeping the data separated like this makes it easier if you ever have to load the file into a sequencer and edit it. There is, incidentally, a third type of MIDI file, known as a type 2 format file, but I'll skip the details here simply because type 2 files are not commonly seen.

Most sequencers can read both type 0 and type 1 MIDI files, and with type 1 you'll end up with the individual instrument tracks being loaded into separate sequencer tracks. What happens with type 0 MIDI files will often vary according to the sequencer you are using, but even if the whole arrangement ends up being read into a single sequencer track, it's usually possible to tidy things up with a little editing. Even oldish sequencer software (like Dr T's KCS package) is able to split up a multiple-MIDI-channel track so that data from each individual MIDI channel present gets moved to a separate sequencer track, and this sort of exercise invariably makes any subsequent editing easier.
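If you ever want to check which type a particular file is without loading it into a sequencer, the information sits in the first few bytes. Here's a short Python sketch, offered as an illustration rather than anything from the original text; 'song.mid' is a placeholder filename, and the 14-byte header layout comes from the Standard MIDI File specification.

import struct

# Read the header chunk of a Standard MIDI File: the ASCII tag 'MThd',
# a four-byte big-endian length (always 6), then the format (0, 1 or 2),
# the track-chunk count and the timing division as 16-bit big-endian values.
with open('song.mid', 'rb') as f:
    tag, length = struct.unpack('>4sI', f.read(8))
    assert tag == b'MThd' and length == 6, 'not a Standard MIDI File'
    fmt, ntracks, division = struct.unpack('>HHH', f.read(6))

print(f'type {fmt} MIDI file containing {ntracks} track chunk(s)')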
Disk formats have nothing to do with MIDI files as such; rather, they're connected with the physical characteristics of the disk being used to store the MIDI files. Although most computers, such as PCs, Apple Macs and Commodore Amigas, use three-and-a-half-inch floppy disks, they tend to use different arrangements for storing data on them. The end result is that a standard PC can't read a Mac disk, a Mac can't read an Amiga disk, and so on. This means you need to make sure, when you buy MIDI files, that you get them on a disk format that your computer/sequencer will be able to physically read. PC- or MS-DOS-formatted disks are particularly common and can be read by both PC and Atari machines. And there are, incidentally, plenty of utilities around which allow one machine to read a disk prepared on another. The Amiga, for example, contains a CrossDOS utility that allows PC disks to be read. It's also worth remembering that many commercial MIDI file libraries can provide their material in various formats other than PC disks.

MIDI file suppliers

Most companies that sell MIDI files ensure the channel/voice settings of their arrangements conform to the General MIDI standard and, depending on their intended use, provide songs as either type 0 or type 1 MIDI files. They also include either information sheets, or readme files on disk, that provide details of their use. Many companies use the first track in a MIDI file as a sort of 'global conductor track' for controlling tempo and so on, and you'll see this data if you ever need to edit a MIDI file using sequencing software.
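Jumping ahead slightly to the byte level (the 'For Techie Eyes Only' box below unpacks this properly), here's a small Python sketch, again my own illustration, decoding the standard Set Tempo meta event you'd typically find on such a conductor track. The example bytes are illustrative, not real file data.

# Set Tempo meta event: FF 51 03 followed by three bytes giving
# microseconds per quarter note (0x07A120 = 500,000us, ie 120bpm).
event = bytes([0xFF, 0x51, 0x03, 0x07, 0xA1, 0x20])

assert event[0] == 0xFF and event[1] == 0x51  # meta event, type 0x51 (Set Tempo)
usec_per_beat = int.from_bytes(event[3:6], 'big')
print(f'{usec_per_beat}us per beat = {60_000_000 / usec_per_beat:g}bpm')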

For Techie Eyes Only

At the highest level, MIDI files consist of blocks of data called chunks, each containing an identifying field followed by a number that tells you how much information is in the chunk. Two types of chunk are currently defined, header and track chunks, and it's their different arrangements that result in the different types of MIDI file. Type 0 format files contain a header chunk followed by a single track chunk, used for storing a sequence as a single stream of events. Type 1 files allow multiple-track sequences to be stored: these contain a header chunk followed by separate track chunks which represent tracks to be played simultaneously. One last type of file was developed to allow sets of independent sequences to be stored, and is known as a type 2 file.

The header chunk is always the first chunk in the file and it currently holds three bits of information: details of the file format (0, 1 or 2), a count of how many track chunks are present in the file, and some timing interpretation details. The events stored in track chunks all start with a time-delay field, the so-called delta time, which specifies the amount of time that should pass before the event is played. It's worth mentioning that delta times, and a few other MIDI file items, are stored in an efficient variable-length format that has to be unpacked before it can be turned into a sensible numerical value.

MIDI file events can actually be one of three types: MIDI events (channel messages), which are just the MIDI messages (notes, patch and controller information, pitchbend, etc) the sequencer recorded when the piece was first played; SysEx events; and a collection of non-MIDI items known as meta events. I'm not going to go into too much detail about meta events, but you might like to know that they start with a special identifier (FF hex), followed by a 'type' field, a byte count and the data itself. The type field is a number between 0 and 127, with the count field being stored in the same way that delta times are. Two meta events of particular interest are those that allow the end of a track, or a change in tempo, to be recognised, although a great many other events have been defined for embedding track names, lyrics, copyright notices and so on.

From the programming viewpoint, MIDI file reading isn't easy. Chunks have to be identified, their contents extracted and unpacked, and MIDI events have to be separated from meta events. With type 0 files these events are time-ordered by virtue of their positions in the file and their delta times. With type 1 files the situation is more complex and, in order to produce a stream of MIDI data, all the events from all of the track chunks have to be merged in time order. When you realise that running status (the use of implied status bytes) is also allowed within streams of stored MIDI events, it's not hard to see that writing MIDI file unpacking code is no small feat!
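As a flavour of what that unpacking involves, here's a small Python sketch (mine, not code from the article) that decodes the variable-length quantities used for delta times and meta-event byte counts.

# Unpacking a variable-length quantity. Each byte carries seven data
# bits; the top bit is set on every byte except the last.
def read_varlen(data: bytes, pos: int):
    value = 0
    while True:
        byte = data[pos]
        pos += 1
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:  # top bit clear, so this was the last byte
            return value, pos

print(read_varlen(bytes([0x00]), 0))        # (0, 1)
print(read_varlen(bytes([0x81, 0x48]), 0))  # (200, 2)
print(read_varlen(bytes([0xFF, 0xFF, 0xFF, 0x7F]), 0))  # (268435455, 4), the maximum

Feed it the bytes that follow a meta event's type field and you get the byte count back too, since, as noted above, counts are stored the same way as delta times.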
As for sources, there are so many companies around nowadays that it's impossible to list them all here. The details we have included represent just a cross-section of suppliers, chosen because we've seen, and used, some of their material and know it to be good. Bear in mind that most companies will only be too happy to supply catalogues and recent-addition update sheets, and you'll be able to get up-to-date details about many of the suppliers from their Web sites!

BCK Products: Of late BCK has greatly expanded its range and now has in excess of 1,000 titles available, covering pop, easy listening, Latin American and even classical. Prices of all BCK-supplied disks are £14.95.

Hands On MIDI Software: Established in 1989, Hands On was one of the first UK companies to produce commercial MIDI files. The sequence disks vary in price from £4.95 to £11.95 per song and, as with most libraries, discounts are available for quantity.

Heavenly Music: Another great collection, with rhythm patterns, groove disks and song arrangements. If you've got access to the Internet then the Heavenly Music Web site is certainly well worth a visit!

Keyfax Software: One Keyfax series of interest is Twiddly Bits. Keyfax has spent hundreds of hours recording musicians like Bill Bruford and Steve Hackett in order to build disks of neat, and often difficult-to-play, performance tricks. Disks vary from £12.95 to £24.95, with £19.95 being the average price.

MIDI Magic International: A good collection of MIDI songs, including Rob Young's ShortCuts disk (£14.95), which provides bass/lead lines, drum rolls and pitchbend effects. In general, individual songs are £6.50 each, with discounts for larger orders.

Newtronic: A stunning collection including buskers' disks, professional-quality studio loops and effects, and MIDI file construction kits. The average price of disks is £14.95 and if you're on-line you can get up-to-date info from the Web site.

Profile Music Agency: As well as almost 1,000 individual titles, there are compilation disks of pop hits, classics (Bach, Mozart, Rodrigo, etc), big band collections and many others. Single songs cost £5, with compilation disks varying from £12.95 to £14.95.


Pro MIDI BFP: This company provides another massive collection of individual songs, compilation disks, party packs and so on. Many disks will give you around a dozen songs for just £15.

Stage 1 Music International: As well as a load of good pop songs, you'll find singalong medleys, rock & roll standards and many more in this company's catalogue. Individual songs cost £4 each, with a minimum order of three songs. With your first order you automatically become a member of the Stage 1 club, which means you're then entitled to extra discounts.

Words & Music: A small library containing some good 'Classical Collection' volumes, plus Ragtime and Christmas song collections. Each disk costs £12.95 and you can get discounts for bulk orders.
