Parallel Processing

Blending processed and unprocessed sound is a classic, effective technique that can bring dramatic improvements – and it can be done in any DAW!

What is it?

The difference between processing a sound and parallel processing is simple. Both start out with an original, unaltered sound and signal path. In a processed sound, none of the original signal is left: the sound passes through the processing and is altered before continuing to the output, so all you hear is the processed sound. This is typically what we do during the mixing phase when we add EQ, compression, saturation, etc. to a sound.

Parallel processing, on the other hand, leaves the original sound unaltered but adds an amount of processed sound alongside it. It’s the blend of these two elements that constitutes the end result. Adding a reverb or delay effect is not regarded as parallel processing; parallel processing is more about generating a whole new sound by compressing, equalizing, filtering, distorting, re-amping, and generally using and abusing non-time-based audio processors.

Parallel processing is a non-destructive technique. The basic setup works like this: the original sound is on one channel. An auxiliary input track is created next to it, leaving us with two tracks. On the second track we add whatever effect we want to use, e.g. a distortion plugin. Using a bus send on the original track, send the signal to the new auxiliary track with the distortion. You can add a lot of distortion if you want! The channel fader for the aux track will most likely not be at unity (0). While playback is engaged, start with the aux fader all the way down at negative infinity and slowly bring it up until the distortion is heard, then set it where it suits you. This way the original signal goes to the main output, and the parallel distorted signal is also sent to the main output, but only in the amount we desire. Blend these two tracks to taste.
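
If it helps to see that signal flow written out, here is a minimal sketch in Python/NumPy of the routing described above. The tanh waveshaper is only a stand-in for whatever distortion plugin you would actually use, and all the numbers are made-up examples:

```python
import numpy as np

def parallel_distortion(dry, drive=8.0, wet_db=-12.0):
    """Blend a heavily distorted copy underneath the untouched dry signal.

    dry    : mono signal as a float array (values roughly in -1..1)
    drive  : input gain into the waveshaper (the 'a lot of distortion')
    wet_db : the aux fader level; start low and bring it up to taste
    """
    wet = np.tanh(dry * drive)        # stand-in for the distortion plugin
    wet_gain = 10 ** (wet_db / 20.0)  # fader dB -> linear gain
    return dry + wet_gain * wet       # the dry path is never altered

# Example: a 100 Hz synth-bass-like sine, one second at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
bass = 0.5 * np.sin(2 * np.pi * 100 * t)
mixed = parallel_distortion(bass, drive=8.0, wet_db=-12.0)
```

The last line of the function is the whole idea: the dry signal passes through untouched, and we only choose how much of the wet copy to add on top.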

In the screenshots below there is a synth bass track that wasn’t coming through the mix very well. It has a lot of energy below 100 Hz. To help bring it out in the mix better I added some distortion using parallel processing. On the original bass track I added a bus send (bus 1), routing it to a mono aux. track which has the distortion plugin.

Notice that the bus send is set to pre-fader and that the volume fader is at unity (0). It just so happens that on the aux channel with the distortion plugin, the channel fader is set quite high, -5 dB or so. Because of the type of preset I used on the distortion plugin, I could get away with such a strong mix level. Usually when doing parallel processing the channel fader is much lower. I did start with it all the way down, though, and brought it up slowly until it changed the mix to my liking.

In the next screenshot I am using a saturation plugin. This is adding some harmonics which will emulate some analog equipment.

Again, because of the type of processing I am doing, some of the settings are a little different from “normal” use. Normally I would never set the saturation as high as 1.0, but because I am using it in a parallel setup I can get away with it. Many times I will put a saturation plugin directly on a track; when I do it that way, the saturation is set no higher than 0.4. Notice, too, that the aux channel level is set to unity. Again, because of what I was trying to achieve, this was acceptable.

The next screenshot is the same thing but this time I am using a compressor. This is, of course, known as parallel compression.

All settings for I/O routing are the same. Notice on the compressor I am achieving 6 dB of gain reduction, while also adding 6 dB of makeup gain. The 6 dB gain reduction is almost an arbitrary number. I knew since I was doing parallel processing I could afford to hit the compressor a bit harder, thus 6:1 ratio with 6 dB GR. Remember, I can always “dial in” the amount of the compressed signal I desire alongside the unprocessed signal.
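
For the curious, the math being “dialed in” here is simple. This is the textbook hard-knee relationship, not necessarily the exact internals of any particular compressor, and the -10 dB threshold below is just an assumed example:

```python
def gain_reduction_db(in_db, thresh_db, ratio):
    """Static gain reduction of a hard-knee compressor."""
    over = max(0.0, in_db - thresh_db)  # dB the input exceeds the threshold
    return over * (1.0 - 1.0 / ratio)   # dB the output gets pulled down

# At 6:1, a peak 7.2 dB over the threshold is reduced by 6 dB ...
print(gain_reduction_db(-10.0 + 7.2, thresh_db=-10.0, ratio=6.0))  # -> 6.0
# ... and 6 dB of makeup gain then restores the overall level.
```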

Listening through earbuds, all three of these parallel treatments (distortion, saturation, compression) worked quite well. The goal was to get the sub-heavy synth bass to come through the mix better, and that was definitely achieved, with great results.

On a rap track I’m currently mixing I used parallel processing on the hook lead vocal. I set up a compressor on one aux channel and a doubler plugin on a second aux track, then sent two different bus sends from the original vocal track to each of these two aux tracks (pre-fader, unity send). Each aux channel fader was then set appropriately; in this case, they were not at unity. The doubler track was set somewhere near -20 or -30 dB, and the compressor track was set close to the same.

Parallel processing is a great tool and one of many in any mixer’s toolbox. You are only limited by your imagination! I read about one of the top mixers who uses parallel processing even for EQ: he prefers not to EQ the original track, doing it on a parallel track instead. Many times we don’t want to alter the original track too drastically but still want to add an effect. This is a perfect scenario for parallel processing!

I hope you find this information helpful!

And …… HEY! Make it a great day!

Tim

EQ – Different Frequency Bands [20 Hz – 20 kHz]

Learning what different frequencies sound like and the effect they have on the sound of different instruments is an invaluable skill. These are the names we use to classify the bands – the frequencies are approximate, so use your ears!

> 20 – 60 Hz – Sub-Bass: Gives boom, depth, and richness – too much sounds flabby and out of control. Small speakers don’t reproduce this.

> 60 – 150 Hz – Bass: ‘Thump’ and punch in drums, especially kick and snare, and richness in bass and guitars. Too much sounds woolly.

> 150 Hz – 1 kHz – Lower mid: Important for warmth, but too much sounds thick and congested. The 500 Hz – 1 kHz region especially is crucial for a natural vocal tone, but too much sounds boxy and nasal.

> 1 – 3 kHz – Upper mid: The most sensitive area of the ear, important for edge, clarity and bite, but too much will sound harsh and tinny.

> 3 – 8 kHz – Low Top: Provides fizz and sizzle, plus edge and aggression in guitars – too much sounds thin and brittle.

> 8 – 12 kHz – Top: Gives openness, air and clarity – too much sounds over-bright and glassy.

> 12 – 18 kHz – Very high top: These frequencies can add sheen and sparkle and sweeten things up, but too much sounds unnatural, gritty and forced. [FYI – I have the Kush Clariphonic parallel EQ hardware. I add these frequencies on my mixbuss or sometimes use it for vocals. It really opens up that top end. A little goes a long way.]

Tip #1: Don’t solo an instrument when EQ’ing. Set the EQ while playing the instrument in context with the rest of the track. You can solo to quickly check things, but be sure to come out of solo mode fairly quickly.

Tip #2: Sometimes when soloing a track or instrument, the EQ we add makes that instrument sound worse! But in the context of the whole mix it sounds great. That is what matters. Expect this to happen some of the time.

Tip #3: If there are two parts that are fighting in the mix because they occupy the same frequency range, it can sometimes help to boost the EQ on one of them and cut the other at the same frequency, then reverse the strategy and boost the second sound in a different place while cutting the first. This emphasizes the contrast between the two parts, with gentler boosts, and helps stop things sounding unnatural.

Tip #4: The problem Tip #3 addresses is called ‘masking.’ Masking is when two instruments fight for the same frequency or frequency space – for example, kick and bass guitar. If the bass is partly obscured whenever the kick hits, that is masking. Using Tip #3 above will help get rid of this problem. Make sure to ‘cross-EQ’ both ways: boost instrument 1 and cut instrument 2 in the same place, then boost instrument 2 and cut instrument 1 in another place.

Tip #5: Do a ‘boost & sweep.’ When searching for a frequency that you want to get rid of, use a bell curve EQ (band, or parametric) and boost about 12 dB with a somewhat narrow bandwidth (high Q). Sweep up and down in frequency until you find (hear) the unwanted or annoying frequency, then set that band to a cut instead of a boost. How much you cut depends on the specific situation; it might be a little or a lot.
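
If you like to see ideas in code, here is a rough sketch of the boost & sweep using the standard “cookbook” peaking (bell) biquad – not any particular plugin’s filter. The 315 Hz target, the Q of 8, and the noise stand-in for the track are all made-up example values:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """Classic RBJ-cookbook peaking (bell) filter: boost or cut around f0."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

fs = 44100
x = np.random.randn(fs)  # stand-in for the track you're sweeping

# Hunting stage: exaggerated +12 dB boost, narrow Q, moved until it hurts
sweep = peaking_eq(x, fs, f0=315.0, gain_db=+12.0, q=8.0)

# Once found, flip the gain negative at that same frequency
fixed = peaking_eq(x, fs, f0=315.0, gain_db=-6.0, q=8.0)
```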

As always – I hope this helps!

And ……. HEY! Make it a great day!

T

The Different EQ bands and What they mean (part 2)

EQ III 7-band (Avid Pro Tools free plugin)

If you look toward the bottom of the EQ pictured above, you will notice 5 different bands: 1. LF, low frequency, red; 2. LMF, low-mid frequency, orange; 3. MF, mid frequency, yellow; 4. HMF, high-mid frequency, green; 5. HF, high frequency, blue.

In today’s blog I will talk about these five bands. I want to start with bands 1 and 5, which are typically used as, and referred to as, “shelves.” Band 1, low frequencies, is the low shelf, and band 5, high frequencies, is the high shelf.

But these two bands each have two different settings. The small left icon, next to the LF and HF, is called a bell-type EQ. It kind of looks like -o-. This will either boost or cut a section of frequencies set by you with the frequency knob. The ‘Q’ knob will determine how wide or narrow the bell curve will be. A low Q setting will give you a wide band of frequencies, and a high Q will render a narrow band of frequencies. A good rule of thumb is wide when boosting and narrow when cutting.

The typical use for this is to, say, boost the lower frequencies to bring out a kick drum or synth bass. On the high end, with the HF knob, we can boost upper ‘air’ frequencies to make guitars or vocals stand out or sound brighter. Of course, we can also cut in these frequency ranges as well.

The other icon setting is called a ‘shelf.’ This is the more common use for these two bands. Typically we use a boost here (low or high); when boosted, the curve looks just like a “shelf.” If on the low shelf we set the frequency knob to 125 Hz, then everything from 125 Hz on down (to 20 Hz) is boosted by the same amount. On the high shelf, we might add a shelf for vocals starting at 6 kHz; in this case everything from 6 kHz up gets the boost. Of course, we can also cut using a shelf, but this happens less often than a boost.

The Q factor is a bit more complicated and will have to be reserved for another post.

Bands 2, 3 and 4 allow for bell curve settings only. These are the same as the bell curves on bands 1 and 5, used for low-mid, mid, and high-mid frequencies. There are only three knobs: Frequency, Gain and Q. Frequency, of course, sets the frequency that you want to work with. Gain is volume (loudness) and can be plus (positive) or minus (negative). We might boost 2 kHz by 2 dB (+2 dB), which is a positive gain, or cut 1200 Hz by 3 dB (-3 dB), which would be a negative gain.

As stated above, Q determines the range of frequencies being altered by the EQ.
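
Until that fuller post, here is the usual rule of thumb, worth having in your back pocket: Q is the center frequency divided by the bandwidth of the bell. A tiny sketch:

```python
def bandwidth_hz(center_hz, q):
    """Approximate width of a bell curve: bandwidth = center frequency / Q."""
    return center_hz / q

print(bandwidth_hz(1000, 1.0))   # -> 1000.0 Hz wide: a broad, musical bell
print(bandwidth_hz(1000, 10.0))  # -> 100.0 Hz wide: a narrow, surgical cut
```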

As always, I hope this helps!

And, HEY! Make it a Great day!

Tim

The Different EQ bands and What they mean

Introduction to an EQ

Throughout my blog series on EQs I am going to refer to the free EQ plugin that comes with Pro Tools, the DigiRack EQ III 7-band. First, let’s talk about the input/output LED meters and gain controls (top left of the plugin). These simply show the input and output signal level running through the EQ. Always check to make sure there is no clipping going on. If the signal is clipping on the input or output side, hitting red, simply turn the respective gain knob down until the clipping stops. It is normal to adjust these gain knobs. Next to the input gain knob is a symbol, Ø. This is the polarity switch “button,” which inverts the polarity of the incoming signal. If you don’t know what this is, I will cover it in a later post. It’s a little more advanced, but easy to understand and know when to use. For now, it won’t concern us.

Just beneath the input/output section are two filters. There is a high pass filter and low pass filter (HPF, LPF).

High Pass Filter
Low Pass Filter

There is also a notch filter. It looks like a line with a ‘V’ in the middle of it. (I couldn’t find a pic of one.)

-∨- (notch filter)

The high pass filter allows high frequencies to pass through the EQ while cutting low frequencies. Conversely, the low pass filter allows low frequencies to pass through and cuts high frequencies. The notch filter makes a deep cut (-12 dB or more) in a small section of frequencies; it takes a “notch” out of the audio. The frequency can be set by the user. One use for the notch filter is plosives on a vocal track: when a vocalist pops, say, a ‘p,’ set the notch filter at 100 Hz. It should diminish the pop greatly or make it go away completely. You may have to sweep the filter up or down a little to take care of it.

The two filters each have an “IN” button to engage them. They will light up blue when engaged. The frequency, of course, can be set to whatever you want.

Lastly, there is a setting for the HPF/LPF that tells the filter how steep a cutoff you want. If we are letting high frequencies pass through and cutting out low frequencies, how sharply do we want those low frequencies cut off? The slope is set per octave, and the setting choices are 6 dB per octave, 12 dB/oct, 18 dB/oct, and 24 dB/oct. As an example, let’s say I set a high pass filter at 200 Hz with a 12 dB/octave slope. This means frequencies above 200 Hz pass through the EQ (and on to anything further in the signal chain) unaffected, while frequencies one octave down (100 Hz, remember from my previous post?) will be 12 dB quieter. Another octave down, at 50 Hz, it will be another 12 dB quieter. There are times we want a steep cutoff, like 24 dB/octave, and other times when we might want 6 dB/octave.

Look on the left side of the GUI window, which shows the graphic interface. There are small numbers; on the center line is 0, which is where all EQ bands start. In our example, since we’re cutting at 200 Hz, the downward slope reaches -12 dB at 100 Hz and -24 dB at 50 Hz.
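
Here is the same arithmetic as a small sketch, using the idealized straight-line slope (real filters curve gently near the cutoff, but this is close enough for planning):

```python
import math

def hpf_attenuation_db(f_hz, cutoff_hz, slope_db_per_oct):
    """Idealized attenuation of a high pass filter below its cutoff."""
    if f_hz >= cutoff_hz:
        return 0.0                               # passband: essentially untouched
    octaves_below = math.log2(cutoff_hz / f_hz)  # 100 Hz is 1 octave below 200 Hz
    return octaves_below * slope_db_per_oct

print(hpf_attenuation_db(100, 200, 12))  # -> 12.0 dB down at 100 Hz
print(hpf_attenuation_db(50, 200, 12))   # -> 24.0 dB down at 50 Hz
```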

I hope this hasn’t been too confusing. Try experimenting with these filters on a mix you’re working on, and keep your ears open when doing so. You can even experiment on a piano or acoustic guitar track: set the HPF up higher, like 400 or 500 Hz, and try the different slope settings. You should hear what’s happening.

As always,

Make it a GREAT day!

Tim

EQ Overview and Introduction

How to use an EQ

In the next series of blog posts, I’m going to go through EQ, or equalization: why we use it, and when and how to use it. I think EQ is easier to understand than compression (my last series of blog posts), but when I see EQs added by young producers and engineers, I realize they are just as lost using an EQ as a compressor. Partly, this is because they don’t understand frequencies.

In this blog I am going to start with an overview. To understand EQ and use it properly, one must understand frequencies, our ears’ perception of frequencies, the frequency spectrum (or range), and the frequency characteristics of individual instruments.

To start with, the ear hears 20 Hz to 20,000 Hz (or 20 kHz). This is, of course, the ideal; the range starts to shrink (mostly on the high end) soon after we’re born. If you’re serious about a career in music, it would serve you well NOT to listen to loud sources for very long. Personally, I wear ear protection when using my leaf blower and shop vac!

If you think about an acoustic piano, the lowest note is A0, 27.50 Hz, and the highest note is C8, 4186 Hz. I bring this up because I think it helps us equate pitch with frequency. Next chance you get, go play specific notes on a piano (acoustic or digital), and then consult a chart as to the frequency of that note. For instance “middle” C is 262 Hz. A440 (the A just above middle C) is 440 Hz.

Do you know what an octave is? An octave, at least on a piano, is from, say, middle C up or down to the next C. This spans 8 white keys, thus ‘octave.’ Moving by octaves, the frequency either doubles (up an octave) or halves (down an octave). So middle C (C4), 262 Hz, up an octave goes to C5, 523 Hz. (Technically, C4 is 261.63 Hz and C5 is 523.25 Hz.) The A above middle C, A4, is 440 Hz. Up an octave is 880, down an octave is 220. Down another octave is 110, then 55, then 27.50, the lowest note on the piano. All instruments, of course, can go up or down octaves at a time.
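
All of this pitch math comes from one standard equal-temperament formula, easy to verify yourself. A quick sketch (using MIDI note numbers, where A4 = 69, middle C = 60, and A0 = 21):

```python
def note_freq(midi_note):
    """Equal temperament: each semitone multiplies frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)  # anchored at A4 = 440 Hz

print(note_freq(60))  # C4, middle C -> ~261.63 Hz
print(note_freq(72))  # C5, one octave up -> ~523.25 Hz (exactly double)
print(note_freq(21))  # A0, lowest piano note -> 27.5 Hz
```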

The lowest note on a guitar is E2, 82 Hz. Guess what a bass guitar’s lowest note is? One octave down, 41 Hz. This is important: when EQ’ing either of these instruments, I know there is no usable information below those frequencies, so I will use a high pass filter set just below them. This helps clean up the sound of these instruments and makes them less muddy.

The frequency spectrum (20 Hz – 20 kHz) is broken into ten octaves:

  1. 20 – 40
  2. 40 – 80
  3. 80 – 160
  4. 160 – 320
  5. 320 – 640
  6. 640 – 1,280
  7. 1,280 – 2,560
  8. 2,560 – 5,120
  9. 5,120 – 10,240
  10. 10,240 – 20,480

So, the lowest note on a bass guitar, 41 Hz, is in octave 2; the lowest guitar string is in octave 3; middle C on piano is in octave 4; A440 is in octave 5. Where do vocals sit? Fullness, for example, is 140 – 440 Hz, octaves 3 to 5.

A different and more effective way to think about the frequency range is to break it up into five broader ranges:

  1. 20 – 100         Bass (Sub Bass)
  2. 100 – 500       Mid Bass (Upper Bass)
  3. 500 – 2 kHz   Mid Range
  4. 2 k – 8 kHz    Upper Mid Range
  5. 8 k – 20 kHz  High (Treble)

Bass:                           Depth, Power, Thump

Upper Bass:              Warmth, Body, Fullness

Mid Range:               Bang, Nasality, Horn-like, Fullness of high notes

Upper Mid Range:  Presence, Edge, Punch, Brightness, Definition, Excitement

Treble:                        Brilliance, Sizzle, Treble, Crispness, Airiness, Breathiness

For example: an electric guitar has too much “edge” – cut in the upper mid region. A vocal sounds a little nasal – cut in the mid range area. The overall track needs more power and punch – boost the bass region.

————————–

To go deeper, the human ear (and mind) hears and perceives sound differently at different frequencies and levels of loudness. Generally speaking, we are most sensitive to mid range and upper mid range frequencies. The ear is less sensitive to low frequencies at lower volumes, and slightly less sensitive to high frequencies than to mid range frequencies at the same volume. Being more sensitive means we hear those frequencies more easily and readily.

Another way to say this is at low listening volume, mid range frequencies sound more prominent, while the low and high frequency ranges seem to fall into the background. Conversely, at high listening volumes, the lows and highs sound more prominent, while the mid range seems comparatively softer. Confusing? Yes. But extremely important to understand.

To illustrate – Let’s say you’re working on the EQ of a mix, and as you listen back at low levels, you think the lows and highs could use a boost. So you boost them, and it sounds great. The next day you listen back at a high volume, and notice the lows and highs are too loud, so you cut them back down some. Sound familiar? This is the Equal Loudness Contour effect.

There are two sets of charts one could consult to dig deeper into this important, albeit nerdy and technical, subject – the Fletcher-Munson curves and the equal loudness contours. There are many articles available online about them, so I will not get into the details here. BUT, it is extremely important to realize how much these effects influence your work as an audio professional!

I went a little more in depth than normal in this post, but I hope it helps you to understand what is involved when learning to become a successful producer or audio engineer.

Peace –

And, HEY! Make it a GREAT day!!

Tim

Compression Dos & Don’ts

To wrap things up regarding compressors, I will offer 3 Dos and Don’ts as my final word for now. These are things to always keep in mind when working with compressors. Some may have been previously stated in an earlier blog post.

DO          Avoid using extreme settings to begin with, if you are just trying to control the dynamics.

DON’T   Add compression to every channel by default. Start off with minimal compression, and carefully choose where to add compressors.

DO         Experiment with different types of compressors – hardware and software. Compressors can and do sound different from one another.

DON’T  Forget to bypass the compressor occasionally while dialing in settings, to check the results.

DO         Remember to balance the output gain so the level doesn’t change when engaged and bypassed. This way you can accurately compare before and after. Also, typically compression is added AFTER the mix has been balanced. So you don’t want to alter levels with either compression or EQ.

DON’T  Be afraid to experiment. Some of the greatest sounds in the history of recorded music came from misused and abused compressors.

Compressors – What is the Knee and What does it do?

What does the knee do on a compressor?

As you get better with compressors, you will start playing with other knobs and features. One of these is the knee. The knee refers to how the compression comes on as the signal approaches and crosses the threshold. A ‘hard knee’ means the compression becomes active immediately, as soon as the input signal hits the threshold. A ‘soft knee’ means the compression comes in more gradually, with gentle compression starting below the threshold. Another way to say this is that the compressor starts acting before the signal actually reaches the threshold setting.

Both hard- and soft-knee compression have their uses; two examples: if you want to squash a signal’s transients quickly, you’ll want hard knee compression. If you want to use a compressor to gently glue a mix together by tightening up transients, you’ll want a soft-knee compressor.
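
To make the hard/soft distinction concrete, here is a sketch of the static gain curve, following the common textbook formulation (a quadratic blend across the knee width) rather than any specific plugin’s math; the threshold, ratio, and knee width are example values:

```python
def compressor_out_db(in_db, thresh_db, ratio, knee_db=0.0):
    """Static curve: hard knee when knee_db is 0, soft knee otherwise."""
    over = in_db - thresh_db
    if knee_db > 0 and abs(over) <= knee_db / 2:
        # Soft knee: compression eases in, starting below the threshold.
        return in_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
    if over > 0:
        return thresh_db + over / ratio  # fully compressing
    return in_db                         # below threshold: untouched

# A signal 2 dB *below* a -20 dB threshold, at a 4:1 ratio:
print(compressor_out_db(-22, -20, 4, knee_db=0))   # -> -22.0 (hard knee: no effect yet)
print(compressor_out_db(-22, -20, 4, knee_db=12))  # -> -22.5 (soft knee: already easing in)
```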

Lastly, if you have a compressor like the Dyn3 Compressor/Limiter, which comes free with Pro Tools, look at the graph of the gain curve: the bend at the threshold actually looks like a human knee!

As always – I hope this helps!

And…. HEY! Make it a great day!

Tim

6 Recording Myths – Busted!

It is hard to learn how to record and mix music today. With so much information available on the web, sometimes it is hard to know if the information is true – whether it can be trusted or not. Here are six myths that are simply not true! Ask anyone who really knows their stuff and is experienced and successful.

Myth 1 – You can’t use ribbon mics on loud sources

This myth is a good one to start with because, like the best myths, there’s just enough of a grain of truth to it to keep it going. It’s true that the actual ribbon element can be more fragile than the diaphragm of a moving coil or condenser microphone. It’s also true that in the early days of ribbon mics, those classic RCA mics from the 1940s would fail readily if you tried to use them on a screaming guitar amp or a kick drum. However, that hasn’t been true for decades. These days, arguably the most venerated guitar cabinet mic, the Royer R-121, is a ribbon mic. Ribbon mics these days can easily withstand extremely high Sound Pressure Levels (SPL) and can be used on any source. Some ribbon mics such as the Shure KSM313/NE utilize a ribbon made of Roswellite, a substance created using carbon nanofilm technology that is virtually unbreakable and can endure levels up to 146 dB SPL.

Myth 2 – Always record as hot as you can

This is another myth that has roots in the early days of recording to tape. Back when your recordings had to stay above the noise floor of the tape, tracking too quietly could render your recording noisy and unusable. Not only that, but recording engineers realized that for rock music, slamming your recording levels produced a very pleasing tape compression and “heat” that could make things sound great. With digital recording, however, neither of these holds. With 144 dB of dynamic range (24-bit recording) you can even record peaking at -40 dBFS and still have over 100 dB of dynamic range. Early analog-to-digital converters (from decades ago) did sound better when recording near the top of their range, but that is no longer the case. In fact, with digital recording, overloading your recording levels is decidedly unpleasant, resulting in clipping distortion that is ugly and abrasive.
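
The arithmetic behind those numbers is worth knowing: each bit of resolution buys roughly 6.02 dB (that is, 20·log10(2)) of dynamic range. A back-of-the-envelope sketch:

```python
def dynamic_range_db(bits):
    """Roughly 6.02 dB of dynamic range per bit (20 * log10(2))."""
    return 6.02 * bits

print(dynamic_range_db(16))       # ~96 dB  (CD quality)
print(dynamic_range_db(24))       # ~144 dB (typical DAW session)
print(dynamic_range_db(24) - 40)  # ~104 dB left even when peaking at -40 dBFS
```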

Myth 3 – External digital clocking improves the sound of your audio interface

If you’re interconnecting a lot of digital gear you may want to use a master digital clock. Get the best clock you can afford, and make sure everything is connected properly via Word Clock cables. In many cases, the master clock won’t have a drastic influence on the sound; the uniform clocking simply makes everything work together without digital pops and ticks. Just taking your audio interface and hooking it up to an external clock isn’t going to improve the sound quality of its digital-to-analog and analog-to-digital converters unless the clock in your interface is really poor. If you really want to improve your recorded sound, get the best mics, preamps, and audio interface you can. Only buy an external digital clock after you’ve made sure the rest of your audio chain is the best it can be.

Myth 4 – Egg cartons or mattress foam are good acoustic treatments

No, not even close! And despite what you may read on the internet, they don’t sound-proof anything. Materials such as drywall, insulation, and acoustic foam can be great acoustic treatment materials. With these materials and proper construction and application methods, you can effectively tackle the two general aspects of studio construction: isolation and acoustics. First, if you’re concerned with keeping sound from getting in or out of your recording space, you’ll need to tackle isolation. This is best done with some form of mass-air-mass construction. A wall with drywall and insulation, empty space, then another identical wall with drywall and insulation will provide a great start. For controlling the acoustics inside your space, you’ll need a combination of absorption and diffusion. There are myriad ways and a long list of proper materials to implement this — egg cartons and mattress foam are NOT on the list!

Myth 5 – External hardware always sounds better than digital plug-ins

In the early days of digital, this may have been true, but definitely not today. Sure, there are hardware compressors, equalizers, and effects processors with a certain mojo that sound amazing. But there are also digital software processors that sound incredible and offer a level of precision and recall that you’ll never get with external hardware. There’s a reason that nearly every pro studio has a ton of high-quality plug-ins even if they already have and use great outboard gear. You may like the sound of a piece of hardware, but you may like, or even prefer, the sound of a digital processor. The days of digital being second best are far behind us.

Myth 6 – There’s a “correct” way to record

It might seem counter-intuitive after all these “wrong” myths to proclaim that there’s no “right” way. But it’s true! One way of doing things may not get you the results you’re after, but then there are multiple ways that will. The name of the game is experimentation! Never stop experimenting and searching to find techniques that work for you, your music, your musicians, your studio. If you wonder if something will work, even if it seems patently false, give it a go! At worst you’ll need to redo it. At best you may add another unique tool to your toolbox. And that’s what recording is all about!

These are truths that all of us can learn from. I hope this helps musicians and engineers alike get better at their craft!

Peace – and HEY! make it a great day!

T

Compressors 101 – the Basics (part 1)

Compressors seem to confuse a lot of people in the beginning; they certainly did me! Here is some helpful information on using a compressor in your mixing to help get you started. I will have other blogs on compression, so keep a lookout!

1.  Decide what you want to achieve. There are really only 4 reasons for using a compressor: to control a dynamic signal, to add punch or impact, to change the sound, or to create an unusual effect. Decide which of the four is your goal, and keep listening with that final goal always in mind. Here is a neutral starting point: 2:1 ratio; 75 ms attack; 100 ms release.

2.  Overdo it to begin with. Pull down the threshold until the compressor starts working; exaggerating the effect can help you get the settings right. If you’re having to turn the threshold way down, boost the input level instead.

3.  Listen. Fine tune settings keeping end goal in mind. Once you get close, adjust the threshold.

4.  Listen again and balance different settings against one another. Higher ratios usually need higher thresholds. Lower ratios usually need lower thresholds.

5.  Experiment. Don’t be afraid to change a setting. Just keep listening! Radical amounts are common: 15-20 dB for electric guitars, room mics, drums and even vocals.

For a smoother sound – Use faster attack and higher ratio (But don’t lose energy & excitement)

To reduce ‘bounce’ – Use shorter release time & ease off threshold, or use a lower ratio. Bounce is when you hear the level ducking as the compressor kicks in and then springs back up when it releases.

To add punch – Use a higher ratio, slightly longer attack and shorter release times, but watch out for pumping. Pumping is where the end of the note is louder than the start. Also when adding punch, be careful not to introduce any distortion.

If you add stereo buss compression – be gentle – 1.5:1 and only 2 – 3 dB of gain reduction.
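
If you want to hear (and see) how threshold, ratio, attack and release interact, below is a toy feed-forward compressor in Python/NumPy with simple one-pole attack/release smoothing. It is a teaching sketch, not a real plugin’s tuned detector, and it runs sample-by-sample so it is slow; the ratio, attack, and release defaults match the neutral starting point from step 1, and the -20 dB threshold is just an assumption:

```python
import numpy as np

def compress(x, fs, thresh_db=-20.0, ratio=2.0, attack_ms=75.0, release_ms=100.0):
    """Toy feed-forward compressor with one-pole attack/release ballistics."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))   # smoothing coefficients
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -120.0                # smoothed level-detector state, in dB
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level_db = 20 * np.log10(max(abs(s), 1e-6))
        coeff = a_att if level_db > env_db else a_rel  # rising: attack, falling: release
        env_db = coeff * env_db + (1 - coeff) * level_db
        over = max(0.0, env_db - thresh_db)
        gr_db = over * (1 - 1 / ratio)                 # gain reduction right now, in dB
        out[i] = s * 10 ** (-gr_db / 20)
    return out

# A short loud note: lengthen release_ms to smooth out the 'bounce';
# lengthen attack_ms to let more of the initial punch through.
fs = 44100
t = np.arange(fs) / fs
burst = np.sin(2 * np.pi * 110 * t) * (t < 0.25)
squashed = compress(burst, fs)
```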

Don’t be afraid of using compressors. Experiment with them until you understand them. Try this: print a bass track with heavy compression, then compare the original audio track with the compressed one. This will help you understand exactly what the compressor is doing – you will see a visual representation of what your ears are telling you.

Compressors are a vital part of making music. We use them while tracking, mixing, and many times both tracking and mixing.

I hope this helps!

Peace – and as always – make it a GREAT day!

T

10 Tips for a great vocal recording

Here are ten quick tips to think about the next time you record vocals:

1)  Warm Up:

Every vocalist needs to warm up. You wouldn’t run a marathon without stretching first, would you? Vocalists should warm up for at least 15 minutes before laying down a great performance.

2)  Don’t record vocals in the morning:

No vocalist is at their best if they’ve just rolled out of bed. If possible, schedule the vocalist for mid-afternoon or evening, and use mornings for setting up and testing ideas. Always try to give the vocalist plenty of advance notice of the recording session.

3)  Comfort:

Make it your job to ensure that the vocalist has space to move, the room is at the right temperature, and there’s nice ambient lighting to help set the mood.

4) Monitoring:

Spend time getting the balance in the headphones that the vocalist wants. Add reverb to their vocal sound if they want it, and be prepared to adjust levels as the session progresses. Watch out for the vocalist drifting out of tune; this is often because they can’t hear themselves but are too polite to mention it!

5)  Be extra kind and sensitive:

Vocalists are a very sensitive breed! A lot of pressure rides on them to really deliver – on stage and in the vocal booth. One of the greater skills we can possess is the art of encouragement and support. Being able to coax amazing performances using expert direction is a real plus. Patience and confidence building are also important. The ability to keep the vocalist focused is essential. Always use tact!

6)  Phrasing:

Spend time getting the vocal phrasing right. Subtle changes can transform an OK take into something exciting. Make sure the vocalist articulates the end of words as much as the beginning: this is vital for a sense of passion and engagement. Even if some rewriting has to take place, it’s better than compromising with an awkward line.

7)  Vocal tics:

It’s tempting to edit out breaths and other bits and pieces from the take, but these details are an essential component of any vocal performance and can make your track sound more alive, no matter what your style!

8)  Choice of microphone:

Condenser microphones are generally a better choice for vocals than dynamics. The Neumann U87 and TLM 103 are good choices if you have the budget. Experienced vocalists will have their own preferences; accommodate them if you can.

9)  Compression:

Some engineers swear by compressing a vocal on the way into the DAW. This can work, but you can’t remove compression once it has been recorded. Be sure you have tried this out with good results or you may end up ruining an otherwise perfect take. Another strategy is to set up the vocal mic with lots of headroom and just make sure to avoid any clipping if the vocalist suddenly starts getting loud. You can always add compression during mixing.

10) The room:

I saved the most important one for last! Don’t forget that your recording will only sound as good as your room. If you have any nasty resonance build-up, reflective surfaces, untreated closets, etc., then steps 1 – 9 are kind of pointless. Obviously, this needs to be taken into consideration long before any vocal tracking takes place. You can always use something like a Reflexion Filter (by sE Electronics) or something similar to improve your space.

I hope this helps and HEY!, make it a great day!

T