EQ Overview and Introduction

How to use an EQ

In this next series of blog posts, I'm going to go through EQ, or equalization: why we use it, and when and how to use it. I think EQ is easier to understand than compression (the subject of my last series of blog posts), but when I see the EQ moves young producers and engineers make, I realize they are just as lost with an EQ as with a compressor. Partly, this is because they don't understand frequencies.

In this blog I am going to start with an overview. To understand EQ and use it properly, one must understand frequencies, our ear's perception of frequencies, the frequency spectrum (or range), and the frequency characteristics of individual instruments.

To start with, the ear hears 20 Hz to 20,000 Hz (or 20 kHz). That range is, of course, the ideal; it starts to shrink (mostly on the high end) soon after we're born. If you're serious about a career in music, it would serve you well NOT to listen to loud sources for very long. Personally, I wear ear protection when using my leaf blower and shop vac!

If you think about an acoustic piano, the lowest note is A0, 27.50 Hz, and the highest note is C8, 4186 Hz. I bring this up because I think it helps us equate pitch with frequency. Next chance you get, go play specific notes on a piano (acoustic or digital), and then consult a chart as to the frequency of that note. For instance “middle” C is 262 Hz. A440 (the A just above middle C) is 440 Hz.

Do you know what an octave is? An octave, at least on a piano, is from, say, middle C up or down to the next C. That span covers eight white keys, hence "octave." Moving by octaves, the frequency either doubles (up an octave) or halves (down an octave). So middle C (C4), 262 Hz, up an octave becomes C5, 523 Hz. (Technically, C4 is 261.63 Hz and C5 is 523.25 Hz.) The A above middle C, A4, is 440 Hz. Up an octave is 880 Hz, down an octave is 220 Hz. Down another octave is 110, then 55, then 27.50, the lowest note on the piano. All instruments, of course, can go up or down octaves at a time.
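If you like seeing the math, here's a quick sketch in Python (my tool of choice here, not something you need in order to use an EQ) that computes note frequencies from A440 using equal temperament, so you can watch the doubling and halving for yourself:

    # Equal-temperament frequencies relative to A4 = 440 Hz.
    # Moving 12 semitones (one octave) doubles or halves the frequency.
    A4 = 440.0

    def note_freq(semitones_from_a4):
        """Frequency of the note this many semitones above (+) or below (-) A4."""
        return A4 * 2 ** (semitones_from_a4 / 12)

    print(round(note_freq(12), 2))   # A5 -> 880.0 (up an octave doubles)
    print(round(note_freq(-12), 2))  # A3 -> 220.0 (down an octave halves)
    print(round(note_freq(-9), 2))   # C4 (middle C) -> 261.63
    print(round(note_freq(3), 2))    # C5 -> 523.25
    print(round(note_freq(-48), 2))  # A0 -> 27.5 (lowest note on the piano)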

The lowest note on a guitar is E2, 82 Hz. Guess what a bass guitar's lowest note is? One octave down: E1, 41 Hz. This is important. When EQing either of these instruments, I know there is no usable information below those frequencies, so I will use a high pass filter set just below them. This helps clean up the sound of these instruments and makes them less muddy.
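Here's a minimal sketch of that idea, assuming you have NumPy and SciPy installed and a mono signal loaded as an array. In practice you'd simply engage the high-pass filter on your DAW's channel EQ; this is just to show the concept.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 44100       # sample rate in Hz
    cutoff = 75.0    # just below the guitar's low E (82 Hz); try around 35 Hz for bass guitar

    # 2nd-order Butterworth high-pass (roughly a 12 dB/octave slope)
    sos = butter(2, cutoff, btype="highpass", fs=fs, output="sos")

    # Placeholder audio (one second of noise) just so the example runs end to end
    audio = np.random.randn(fs)
    cleaned = sosfilt(sos, audio)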

The frequency spectrum (20 Hz – 20 kHz) is broken into ten octaves:

  1. 20 – 40
  2. 40 – 80
  3. 80 – 160
  4. 160 – 320
  5. 320 – 640
  6. 640 – 1280
  7. 1280 – 2560
  8. 2560 – 5120
  9. 5120 – 10,240
  10. 10,240 – 20,480
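Since each octave simply doubles the one below it, you can generate that whole table with a couple of lines of Python:

    low = 20
    for octave in range(1, 11):
        high = low * 2
        print(f"{octave:2d}. {low} - {high} Hz")
        low = high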

So, the lowest note on a bass guitar, 41 Hz, is in octave 2; the lowest guitar string is in octave 3; middle C on the piano is in octave 4; A440 is in octave 5. Where do vocals sit? Vocal fullness, for example, lives around 140 – 440 Hz, octaves 3 to 5.

A different and more effective way to think about the frequency range is to break it up into five broader ranges:

  1. 20 Hz – 100 Hz     Bass (Sub Bass)
  2. 100 Hz – 500 Hz    Mid Bass (Upper Bass)
  3. 500 Hz – 2 kHz     Mid Range
  4. 2 kHz – 8 kHz      Upper Mid Range
  5. 8 kHz – 20 kHz     High (Treble)

Bass: Depth, Power, Thump

Upper Bass: Warmth, Body, Fullness

Mid Range: Bang, Nasality, Horn-like, Fullness of high notes

Upper Mid Range: Presence, Edge, Punch, Brightness, Definition, Excitement

High (Treble): Brilliance, Sizzle, Crispness, Airiness, Breathiness

For example: if an electric guitar has too much "edge," cut in the upper mid range. If a vocal sounds a little nasal, cut in the mid range. If the overall track needs more power and punch, boost the bass region.

————————–

To go deeper, the human ear (and mind) hears and perceives sound differently at different frequencies and loudness levels. Generally speaking, we are most sensitive to mid range and upper mid range frequencies. The ear is less sensitive to low frequencies at lower volumes, and slightly less sensitive to high frequencies than to mid range frequencies at the same volume. Being more sensitive to a range means we hear it more easily and readily.

Another way to say this is at low listening volume, mid range frequencies sound more prominent, while the low and high frequency ranges seem to fall into the background. Conversely, at high listening volumes, the lows and highs sound more prominent, while the mid range seems comparatively softer. Confusing? Yes. But extremely important to understand.

To illustrate – Let’s say you’re working on the EQ of a mix, and as you listen back at low levels, you think the lows and highs could use a boost. So you boost them, and it sounds great. The next day you listen back at a high volume, and notice the lows and highs are too loud, so you cut them back down some. Sound familiar? This is the Equal Loudness Contour effect.

There are charts one can consult to dig deeper into this important, albeit nerdy and technical, subject – the Fletcher-Munson curves and the equal loudness contours (closely related versions of the same idea). There are many articles available online about them, so I will not get into the details here. BUT, it is extremely important to realize how much these effects shape your work as an audio professional!

I went a little more in depth than normal in this post, but I hope it helps you understand what is involved in learning to become a successful producer or audio engineer.

Peace –

And, HEY! Make it a GREAT day!!

Tim


Calculating File Sizes (How much hard drive space does it take to record a song?)

So . . .  you want to record a song and you’re running out of space on the computer or external hard drive? Wondering if you have enough room? Here’s how to figure out if you do have enough space:

The sample rate and bit depth of the audio you record are directly related to the size of the resulting files. In fact, you can calculate file sizes using these two parameters:

— Sample Rate x Bit Depth = Bits per second

Or, stated another way:

— Sample Rate x Bit Depth x 60 = Bits per minute

In the binary world of computers, 8 bits make a byte; 1,024 bytes make a kilobyte (KB); and 1,024 KB make a megabyte (MB). Therefore, this equation can be restated as follows:

— (Sample Rate x Bit Depth x 60) / (8 bits per byte x 1,024 bytes per kilobyte x 1,024 kilobytes per megabyte) = Megabytes (MB) per Minute

Reducing terms gives us the following:

— Sample Rate x Bit Depth / 139,810 = MB per Minute

A lot of folks are recording these days at 44.1/24. That's a sample rate of 44,100 Hz with a bit depth of 24 bits. Here is the calculation:

— 44,100 x 24 / 139,810 = 7.57 MB per minute.
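If you'd rather let the computer do the arithmetic, here's a small Python sketch of the same formula (the function names are just my own, for illustration). Keep in mind the figure is per mono track, so multiply by the number of tracks in your session:

    def mb_per_minute(sample_rate, bit_depth):
        """Megabytes of audio recorded per minute for one mono track."""
        bits_per_minute = sample_rate * bit_depth * 60
        return bits_per_minute / (8 * 1024 * 1024)   # bits -> bytes -> KB -> MB

    def song_size_mb(sample_rate, bit_depth, minutes):
        """Disk space for one mono track of the given length."""
        return mb_per_minute(sample_rate, bit_depth) * minutes

    print(round(mb_per_minute(44100, 24), 2))      # 7.57 MB/minute
    print(round(song_size_mb(96000, 24, 3.5), 2))  # 57.68 MB for a 3.5-minute track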

Here is a basic chart of different sample rates and bit depths:

44.1/16 bit  =   5.04 MB/minute
44.1/24 bit  =   7.57 MB/minute
48/16 bit    =   5.49 MB/minute
48/24 bit    =   8.24 MB/minute
88.2/16 bit  =  10.09 MB/minute
88.2/24 bit  =  15.14 MB/minute
96/16 bit    =  10.99 MB/minute
96/24 bit    =  16.48 MB/minute

If you figure a typical song of 3 1/2 minutes recorded at a 44.1 kHz sample rate and 24 bits, you can plan on it taking roughly 26.50 MB of disk space. I am starting to run a lot of my sessions now at 96/24. So a 3 1/2 minute song is costing me 57.68 MB of hard drive space.

Considering that terabyte hard drives are now running close to $50 these days, all this math stuff is not nearly as important as it was just a few years ago. But I know a lot of guys who still aren’t purchasing a whole lot of TB hard drives! It’s still useful information if it’s needed in a crunch!

Hope this helps!
HEY!! Make it a great day!!

T