Hello Troubadourians! For the past two months we’ve been talking about Equalization (EQ) and how we can use it to make our amplified guitars and voices sound more like they sound when unamplified. I’ve introduced some terminology that may or may not be familiar to everyone. And even if it is somewhat familiar, we may not fully understand what we’re talking about. This article will attempt to explain and clarify what we’ve been talking about so that we can put our new knowledge to good use.
Let’s start by defining what we mean by “BASS,” “MIDRANGE,” and “TREBLE” as applied to the frequencies that our EQ affects. In standard music notation, bass is defined as all frequencies below middle C (on the piano), and treble is defined as all frequencies above middle C. That’s simple enough, but can we then use this “standard” as a reference for our definition? As it turns out, no we can’t. But why not?
The audio frequency spectrum is defined by the normal range of human hearing, which is 20 Hz to 20,000 Hz (Hz = Hertz = cycles per second). This frequency range encompasses everything we are capable of hearing, not just music. In fact, music – voices, instruments, and percussion – makes up a very limited portion of what we are able to hear. For audio frequency definitions, the terms “BASS,” “MIDRANGE,” and “TREBLE” are still relevant but are applied somewhat differently. Here are the audio definitions for “BASS,” “MIDRANGE,” and “TREBLE” and their associated frequencies: Bass = 10 Hz to 100 Hz, Mid Bass = 100 Hz to 300 Hz, Low Mid = 300 Hz to 600 Hz, Midrange = 600 Hz to 1.2 kHz, High Mid = 1.2 kHz to 2.4 kHz, Low Treble = 2.4 kHz to 4.8 kHz, Mid Treble = 4.8 kHz to 9.6 kHz, High Treble = 9.6 kHz to 20 kHz.
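For you tinkerers out there, those band boundaries are easy to put into a few lines of code. This is just a minimal lookup sketch (in Python) using exactly the band names and edges listed above – nothing more:

```python
# Audio-band boundaries as listed above: (low Hz, high Hz, name).
BANDS = [
    (10, 100, "Bass"),
    (100, 300, "Mid Bass"),
    (300, 600, "Low Mid"),
    (600, 1200, "Midrange"),
    (1200, 2400, "High Mid"),
    (2400, 4800, "Low Treble"),
    (4800, 9600, "Mid Treble"),
    (9600, 20000, "High Treble"),
]

def band_name(freq_hz):
    """Return the audio-band label for a frequency in Hz."""
    for low, high, name in BANDS:
        if low <= freq_hz < high:
            return name
    return "outside the audible bands"

print(band_name(261.6))  # middle C -> Mid Bass
print(band_name(5000))   # -> Mid Treble
```

Feed it any frequency you're curious about and it will tell you which band – and therefore which region of your EQ – that frequency lives in.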
Okay, so now we have some reference for all those numbers we see on the EQ of our mixers and other gear, and some additional reference to the general “pitch” of those frequency bands. But let’s put things in an even more familiar perspective. When we think about frequencies in terms of the musical pitch of specific instruments, we find that the actual frequencies are quite different from what we might expect. For instance, middle C on the piano translates to the pitch found at the first fret of the second string of a standard-tuned guitar. The corresponding frequency is 261.6 Hz! That means that most of the notes on a guitar are in the bass clef, and that the pitches of most of the notes on a guitar are considered “Mid Bass” in the audio frequency spectrum. Wow!
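Don't take my word for that 261.6 Hz figure – you can compute it. Here's a minimal sketch assuming standard equal temperament with the usual A4 = 440 Hz reference; in the common MIDI numbering convention, A4 is note 69 and middle C is note 60, and each semitone multiplies the frequency by the twelfth root of two:

```python
A4_HZ = 440.0  # standard tuning reference (A above middle C)

def midi_to_hz(midi_note):
    # Equal temperament: each semitone step multiplies frequency
    # by 2^(1/12). MIDI note 69 is A4 (440 Hz); middle C is note 60.
    return A4_HZ * 2 ** ((midi_note - 69) / 12)

print(round(midi_to_hz(60), 1))  # middle C -> 261.6
```

Nine semitones below A4, and there's our middle C at 261.6 Hz – squarely in the “Mid Bass” audio band.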
To further blow your mind: with the range of the audio frequency spectrum being 20 Hz to 20 kHz, you might think that the middle would be around 10 kHz. But 10 kHz is an incredibly high pitch – well above the fundamental of virtually any musical instrument. The actual middle of the audio frequency spectrum is about 1 kHz. Though this is still a very high pitch – roughly the 20th fret on the first string of a standard-tuned guitar – it is a much more usable frequency than 10 kHz. In fact, 1 kHz is the industry-standard “test tone” for audio equipment calibration.
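That fret claim checks out, too. Here's a minimal sketch assuming standard tuning, where the open first string is E4 at about 329.63 Hz and each fret raises the pitch by one semitone (a factor of 2^(1/12)):

```python
def fret_hz(open_string_hz, fret):
    # Each fret raises the pitch one semitone:
    # multiply the open-string frequency by 2^(1/12) per fret.
    return open_string_hz * 2 ** (fret / 12)

HIGH_E_OPEN = 329.63  # first string, standard tuning (E4)
print(round(fret_hz(HIGH_E_OPEN, 20)))  # ~1047 Hz -- right around 1 kHz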
So why is the middle of this spectrum such a high frequency, and why are the frequency bands in the audio frequency spectrum distributed so differently from our musical reference of bass and treble clefs? The answer is that the audio frequency spectrum – and human hearing – is logarithmic rather than linear and, as stated before, it encompasses everything we are capable of hearing, which is a significantly more complex palette than musical notes and sounds. It’s further confusing that we commandeered musical terminology (BASS-MIDDLE-TREBLE) to define the audio frequency spectrum. Okay, so how can we relate this new knowledge to something musical? What am I really doing when I turn the TREBLE knob? How do I know which frequency to adjust when my monitors are feeding back? The key to answering those questions – and many other similar questions – is that the numbers we use in the audio frequency spectrum are primary (fundamental) frequencies, while musical instruments and voices create tones that consist of a fundamental frequency accompanied by a series of harmonics (also called overtones) above it. It is these harmonics that give richness to the sound of our voices and instruments – and wreak havoc with our monitors – and it is in these harmonics that we musicians have learned to discern BASS-MIDDLE-TREBLE tonalities completely separate from the audio frequency spectrum.
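To see how quickly those harmonics climb above the note you actually play, here's a short sketch. The harmonic series is simply the integer multiples of the fundamental; low E at 82.41 Hz is the standard-tuning figure for a guitar's sixth string, and the choice of twelve harmonics here is just for illustration:

```python
def harmonics(fundamental_hz, count):
    # The harmonic series: integer multiples of the fundamental.
    return [fundamental_hz * n for n in range(1, count + 1)]

# Low E on a guitar has a fundamental of ~82.41 Hz -- squarely "Bass" --
# but its harmonics climb quickly toward the midrange:
for h in harmonics(82.41, 12):
    print(round(h, 1))
```

The twelfth harmonic of that low E already sits near 989 Hz – almost at the 1 kHz “middle” of the spectrum – which is exactly why a note whose fundamental is deep in the bass band still has so much of its character, and its feedback trouble, up in the mids and treble.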
I won’t bore you with the math and physics of how harmonics work their magic, but I do think it’s important to explain some of their effect on BASS-MIDDLE-TREBLE tonalities and how to exert some control over them. For starters, you need to know that bass frequencies are omni-directional and require more energy to project over distance, while treble frequencies are very directional but require much less energy to project over the same distance. The same applies to harmonics, which means that the higher harmonics of any note – or voice, or instrument – will tend to dominate what we hear, and that’s where the “tone” lives. Upper harmonics can sometimes be so dominant that they effectively mask the fundamental tone from our perception. They are also the most likely to create problems when we attempt to amplify our voices and instruments.
So, how do we know which control affects which frequencies? The easiest thing to remember is that for voices and most instruments, the BASS control of our equipment has the most effect on the frequencies below 300 Hz, the MIDRANGE controls have the most effect on the upper note ranges and harmonics – 300 Hz to 4 kHz – and the TREBLE controls essentially govern the upper harmonics and notes above 4 kHz. We haven’t even mentioned “resonance” yet, but that’s more than enough “tech” for now, so we’ll save that discussion for another time.
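If it helps to have that rule of thumb in one place, here it is as a tiny decision function – a sketch only, using the 300 Hz and 4 kHz boundaries from the paragraph above:

```python
def which_knob(freq_hz):
    # Rule of thumb: BASS below 300 Hz, MIDRANGE from
    # 300 Hz to 4 kHz, TREBLE above 4 kHz.
    if freq_hz < 300:
        return "BASS"
    elif freq_hz < 4000:
        return "MIDRANGE"
    return "TREBLE"

print(which_knob(250))   # BASS
print(which_knob(2500))  # MIDRANGE
print(which_knob(6000))  # TREBLE
```

Next time a monitor starts to ring, estimating whether the squeal is low, mid, or high and reaching for the matching control is a good first move.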
Need to know? Just ask… Charlie (firstname.lastname@example.org)