
Equalization and Compression Question



Frequencies range from 20Hz to 20kHz

At least as far as human hearing and audio production is concerned, yes.

 

Each instrument has its own frequency

No. An instrument has a certain frequency range, defined by the lowest note it can play and the highest overtones it can produce (check out overtones and partials, that's a whole other rabbit hole to dive into). What you're referring to is a mixing guideline that says "Every mix element should have its own frequency range", which is not literally true, as you cannot just cut the vocals below 300 Hz and above 3000 Hz and say "That's yours, deal with it". What it actually means is "No two mix elements should compete over the same frequency range." That is an entirely different thing, because *many* instruments' frequency ranges largely overlap, and that is why so much mixing effort goes into getting kick drum and bass guitar not to clash, getting vocals and rhythm guitar separated, etc.

 

Frequencies from 40Hz to 60Hz are for Bass.

Well, no. The idea is that the entire frequency spectrum can be loosely divided into several sections which serve different purposes and have different problems.

20 - 60 is sub bass (as in, bass register, not bass guitar)

60 - 120 bass (register)

120 - 300 low mids

etc. ... google that too.

 

if I played an open E on the 6th string of my guitar the frequency will tend to appear near 20 Hz in my equalizer

No. The lowest note on a guitar in standard tuning is an E at about 83 Hz, like this:

[screenshot]

Note that that E has many overtones up to 10 kHz or even higher.

 

while if I played an E on the 12th fret it will appear near 20 kHz.

Again, no. The fundamental of the E an octave higher is 2 × 83 = 166 Hz. The highest note on a guitar with 24 frets is an E four octaves above the lowest E, at about 1,328 Hz, like this:

[screenshot]

Here too, the overtones of that one note go well beyond 10 kHz.
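(If you want to check these numbers yourself, they follow straight from equal temperament: every fret raises the pitch by a factor of 2^(1/12). A quick Python sketch, assuming the standard-tuning low E of roughly 82.4 Hz; the 1,328 Hz above is the same note using the rounded 83 Hz figure.)

# Fundamental of a fretted note in equal temperament:
# each fret multiplies the open-string frequency by 2**(1/12).
def fret_frequency(open_string_hz, fret):
    return open_string_hz * 2 ** (fret / 12)

LOW_E_HZ = 82.41  # open low E in standard tuning, the "83 Hz" above

print(round(fret_frequency(LOW_E_HZ, 0), 1))   # 82.4   -> open low E
print(round(fret_frequency(LOW_E_HZ, 12), 1))  # 164.8  -> E one octave up (12th fret)
print(round(fret_frequency(LOW_E_HZ, 48), 1))  # 1318.6 -> E four octaves up (24th fret, high E string)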

 

what determines where a frequency will show up in my equalizer?

If it's a musical instrument playing single notes, you will see peaks at the fundamental and at its multiples (the overtones). The fundamental usually determines the perceived pitch. However, on bass guitar the first overtone is often louder than the fundamental, so you hear it an octave higher than it actually is, and bells are notorious for fractionally tuned partials, which can lead you totally off, with one person hearing one pitch and another hearing something different.


If you pluck a string and watch it vibrate using some appropriate high-speed photography process – and such videos are out there, of course – you will see that the string does not simply "vibrate as one," sending out only one pure tone. Instead, several different waveforms can be seen, generally running over "half the string's length, repeated twice," "a third of the string's length, repeated three times," and so on. There are many, simultaneous, sine waves on that plucked string. And this is what produces the actual "flavor" of the sound. These are the partials. This is the overtone series. They're what makes the oboe sound different from the clarinet when both are playing the same note.
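If you want to hear that for yourself, here's a rough NumPy sketch: a handful of sine partials at whole-number multiples of ~83 Hz, with completely made-up amplitudes. The amplitude recipe, not the frequencies, is what gives the "flavor".

import numpy as np

SR = 44100                       # sample rate
t = np.arange(0, 1.0, 1 / SR)    # one second of time
f0 = 83.0                        # fundamental, roughly the low E again

# Arbitrary strengths for the first six partials -- change these and the
# "flavor" of the tone changes, while the pitch stays the same.
amplitudes = [1.0, 0.6, 0.4, 0.25, 0.15, 0.1]

tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(amplitudes))
tone = tone / np.max(np.abs(tone))   # normalise so it doesn't clip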

 

If you have a lot of instruments playing together, all of those complex waveforms are bouncing around the room (and, in your mixer) together. If you "simply sum them all together," many of the frequency ranges in the final mix will be full of mud – nothing really standing out, and maybe the sounds that make one instrument "distinct" are covered-up. So, this is where EQ comes in. It lets you purposely accent some frequency-ranges while attenuating others. Most instruments have one or more "frequency slots" where the truly distinctive sounds that characterize them will be heard – so, they can afford to let the other frequency ranges be attenuated to "make room for" other instruments' sounds, avoiding the "clash." Each instrument sounds clearly in its one-or-more most distinctive frequency ranges, and yields to other instruments elsewhere in the spectrum.
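As a rough sketch of "accent one range, attenuate another", here's a single peaking-EQ band in Python built from the widely used RBJ cookbook biquad formulas. The centre frequency, gain and Q below are arbitrary example values, and guitar/fs are assumed inputs, not anything Logic-specific.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    # RBJ cookbook peaking filter: boost or cut a band around f0 by gain_db
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin]
    return lfilter(b, a, x)

# e.g. pull 3 dB out of the rhythm guitar around 3 kHz to make room for the vocal:
# guitar_carved = peaking_eq(guitar, fs=44100, f0=3000, gain_db=-3.0, q=1.0)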

 

"Ducking" (quack ...) is another trick that can be used: when the bass drum hits, the bass guitar "ducks" (is reduced in volume during the hit), yielding its otherwise-conflicting frequency territory to the drum hit.

 

"In a real room," say a performance hall, your amazing ears can focus on one instrument at a time and the sounds produced by each instrument can find their own pathway around the room to reach your ear. But, when you record it, you lose that aspect. Now, the sound is coming from only one source: the loudspeaker. The one-or-more vibrating elements in that speaker are vibrating in a very complex fashion but there is still only one thing that is vibrating, and one path from the loudspeaker to your ears, now carrying a single combined sound. This is where the "mud" comes from if too many sources, now electrically combined into one source, are competing for the same frequency range. If you look at it on a frequency analyzer, the shape in those frequency ranges is indistinct – it looks like noise, and, basically, it is.


It depends entirely on what you want to achieve. The tone controls on a guitar amp help the player get the sound and colour from the guitar/amp/speaker combination he's looking for. Maybe the distortion from the amp sounds a bit muffled; good if there's a presence knob to dial up. Some amps sound the way they do because of the specific, yet limited, tone controls they have. Also, in a guitar amp the EQ sits in the preamp stage and thus affects how the power amp stage drives the speaker, which in turn also reacts to that in its own specific manner. It's all about colour here.

 

Whereas Logic's Channel EQ is designed to impart no colour at all, just direct access to a wide range of controls.


1. -11dB

 

2. It's because gain and volume each have several meanings.

 

Gain in its actual sense is "change in level". So, in the case of electrical voltage and also digital audio level, a gain of 3 dB means it's 3 dB more/louder than it was before, and a gain of -3 dB means it's 3 dB less/quieter than before.
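In plain numbers, a gain in dB is just a multiplication factor applied to the signal:

def db_to_factor(gain_db):
    # amplitude (voltage / sample value) gain, not power gain
    return 10 ** (gain_db / 20)

print(db_to_factor(3.0))    # ~1.41 -> 3 dB louder
print(db_to_factor(-3.0))   # ~0.71 -> 3 dB quieter
print(db_to_factor(0.0))    # 1.0   -> unity gain, no change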

Then, gain is also the knob on a preamp that you turn to match the incoming signal level to the system (too loud > clipping > destroyed take; too quiet > less distance from the noise floor, although that's hardly a problem today).

 

Volume in its actual sense is how loud the guitar is in the room and why you can't hear the jaw harp beneath it. Also, volume in the mix is set with the faders (although that's by far not the only means to prioritise your elements in a mix).

 

Now, here's where it comes together: to change the volume with a fader, the fader applies gain to the signal. So does the compressor. So does the output gain knob on the compressor.


1. If my ratio was 4:1, this means only 25% of the signal's dynamic range will surpass the threshold. So if I start at -12 dB, when the gain increases to -8 dB the compressor will lower it down to -11 dB. The difference with the 2:1 ratio is that a larger portion of the signal will be compressed? Because as I understand it, both of those ratios lowered the gain to the same level, i.e. -11 dB...

Using a ratio of 4:1 will result in -11 dB (a 75% reduction of the signal above the threshold); using 2:1 results in -10 dB (a 50% reduction).
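The same arithmetic spelled out, using the threshold of -12 dB and the -8 dB peak from the question:

def compressed_level(input_db, threshold_db, ratio):
    # static compressor curve: above the threshold the level only rises at 1/ratio of the rate
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

print(compressed_level(-8, -12, 4))  # -11.0 -> 4 dB over the threshold becomes 1 dB over
print(compressed_level(-8, -12, 2))  # -10.0 -> 4 dB over the threshold becomes 2 dB over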


There's also a technical consideration to keep in mind about digital recording: that the recording is captured as a great big file of numbers, not a fluctuating magnetic field on a piece of tape. These numbers have an absolute upper limit – measured in "bits" which is like "number of digits" – which varies from medium to medium. If you attempt to record a signal that is louder than this "maximum integer value," data is irretrievably lost. Your recording "gets a burr haircut" that sounds absolutely horrible: it literally turns into [almost ...] a square wave. "Digitally truncated." Everything above that absolute line-in-the-sand is gone.
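A tiny illustration of that line in the sand, assuming 16-bit integers as the target format: everything past the maximum value is simply flattened off, which is where the square-ish "burr haircut" comes from.

import numpy as np

SR = 44100
t = np.arange(0, 0.01, 1 / SR)
too_hot = 1.5 * np.sin(2 * np.pi * 440 * t)   # "recorded" 50% above full scale

MAX_16BIT = 32767
samples = np.clip(too_hot * MAX_16BIT, -32768, MAX_16BIT).astype(np.int16)
# the tops of the wave are now flat at 32767 / -32768 -- that information is gone for good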

 

Logic will fairly scream at you when this is likely to occur. :)

 

Logic expresses these things in "audio terms" of dB, but this does map to the digital medium.

 

When digitally recording, you want to make good and full use of the entire numeric range that is available to you, while never exceeding it (at any point in your "data(!) processing pipeline"). Artful use of compression will help you do that, also ensuring that the range of values used will sound good on the enormous speakers in the back of your listener's car. ;)

 

(Internally, Logic has a very, very wide internal pipeline, and it knows how to "map the numbers down" into the range of various "final production file-formats" such as CD ... which is done as the final step so that data-loss does not actually occur.)

