Aux Channel Best Practices



I understand that using aux channels for effects processing is a way to reduce CPU load and optimize signal flow when multiple channels use the same effects.

 

It seems like EQ and compression are fine on individual channel strips, but for time-based effects like reverbs, tap delays, choruses and flangers it is probably best to group these on an aux channel and send several tracks through if you can.

 

My question is: if you have multiple effects set up on an aux channel, can you vary the degree to which each effect is applied per channel using the aux? Or will every channel I send through use the combined effects in the same proportion? I am not referring to wet/dry amount, I am referring to the ability to customize each effect while using the same aux channel.


Thanks David!

 

Just curious, what's the purpose of doubling up compression on drums? I'll test it, but is it an audible effect? I'd think that would crush the life out of it...

I'm not sure what you mean by doubling up compression?

 

Parallel compression means mixing the compressed signal with the uncompressed one, resulting in less compression. But of course you can dial it in however you'd like resulting in anything from a very subtle barely detectable result to a squashed sound.


I'm not sure what you mean by doubling up compression?

thinking he means using two comps w/ diff settings on a channel :|

Yes, this is what I imagined. In practice, I assign EQ and compression on individual channels, and time-based effects to aux channels.

 

Is it common practice to compress a signal using a channel strip plus a second pass on an aux channel? That is what I thought was meant by parallel compression. Thus compressing a signal 2X.


No, parallel compression means that you have two audio streams (auxes) to the output: the compressed signal, and the uncompressed signal - those two are mixed "to taste".

You can even do it on one aux, if the compressor has a wet/dry parameter (Logic's has, see bottom of pic).

 

[Attached screenshot: Logic's Compressor, with the wet/dry (Mix) control at the bottom.]

 

The advantage of two auxes is that each can have separate processing (like EQ), it's a more flexible setup than with the wet/dry slider 'method'.


Is it common practice to compress a signal using a channel strip plus a second pass on an aux channel? That is what I thought was meant by parallel compression. Thus compressing a signal 2X.

Sure, it's often done. Sometimes even 3X or more.

 

Here's the kind of thinking a sound engineer might go through when treating a guitar for example:

 

that guitar melody is nice, but some of the notes are a bit louder than others making the guitar distracting instead of blending into the mix as should be its role for this particular part of this particular song. I'm going to use a compressor (or maybe volume automation, or maybe EQ, or whatever tool works for the job) to make the loudness of the different notes more consistent. Oh I don't like the way the initial attack of the pick sounds on those notes here. I'm going to dial in a compressor to smooth out those attacks. Now I'm sub-mixing that guitar along with its double onto an Aux and they would benefit from being "glued" together, I'm going to put a compressor on the Aux to help them be more coherent.

 

If done right, there's no reason the guitar should sound over-compressed at all. In fact if done right nobody should be able to hear any compression at all... except maybe another sound engineer. On the other hand, you can completely suck the life out of that same guitar using a single compressor.

 

Hope that helped.


If done right, there's no reason the guitar should sound over-compressed at all. In fact if done right nobody should be able to hear any compression at all... except maybe another sound engineer. On the other hand, you can completely suck the life out of that same guitar using a single compressor.

+1

ears ears ears, therein again...

only if you crush the life out of it... can be done with one.

Another way to think of it is basically mixing thinner spiky sounds in with thicker more even sounds to take advantage of the dynamics of the first and the fullness of the second.

 

Say you have a guitar and it's fairly even, but every now and then some notes leap out about 6 dB above the average. On the parallel compression aux, you could set the comp so it's pulling down the peaks by 6 dB or so. You listen to that by itself and you notice that it sounds kind of blunted, thick and "soft" sounding. Now you can mix in the uncompressed signal so that its peaks come up level to the compressed peaks of the comp channel. Result: you still have peaks in your signal, but the comp channel's lower-level material is brought up in the mix, filling in around the peaks of the uncompressed channel. So the whole thing still has edges but it's fuller sounding overall.
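To put hypothetical numbers on that scenario, here's a minimal numpy sketch. This is not a real compressor (just a static per-sample gain computer with no attack/release envelope), and all levels and blend amounts are made up for illustration:

```python
import numpy as np

def compress(x, threshold_db=-9.0, ratio=10.0):
    # Static downward compressor: gain computed per sample from the
    # instantaneous level, no attack/release (illustration only).
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def crest(s):
    # Peak level relative to the "average" (median) level
    return s.max() / np.median(s)

# A "guitar": steady at 0.25, with occasional notes leaping 6 dB above (0.5)
dry = np.full(1000, 0.25)
dry[::100] = 0.5

wet = compress(dry)      # comp channel: peaks pulled down, sounds "blunted"
blend = dry + 0.7 * wet  # mix the comp channel in under the uncompressed one

# dry keeps its full 6 dB peaks, wet is squashed, and the blend sits in
# between: crest(wet) < crest(blend) < crest(dry)
```

The point of the sketch is only the relationship between the three signals: the blend keeps the dry track's transient edges while the comp channel fills in underneath.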

 

Of course there is so much variation you could apply in this scenario: how much you vary the balance between the two signals, how much compression and what ratio to use, whether you want to squish the comp channel a lot and only mix in a tiny bit, or compress lightly and mix them more evenly. As mentioned, use your ears!


What a GREAT set of responses! To hear different techniques and understand the thought processes behind why certain applications are used is really helpful. Thanks! In recording, I see there are a thousand ways to skin a cat. I should really get some education as to why certain tasks are solved the way they are.

 

In David and camillo's scenarios, it seems like solving isolated issues in recording boils down to personal preference (or inexperienced choices) rather than a set path of "best practices." More common tasks, on the other hand, may have a more traditional approach set for a solution. Am I fairly safe to assume this?


In David and camillo's scenarios, it seems like solving isolated issues in recording boils down to personal preference (or inexperienced choices) rather than a set path of "best practices."

I wouldn't say 'personal preferences' as much as what tool does the job. You don't think "tonight I'm going to use a hammer to fix the kitchen door" without ever having looked at the door, just because you heard that a professional fixed your neighbor's door with a hammer. If the kitchen door has a problem, you look at it and determine what the problem is: maybe that screw here is a bit loose. You look at the screw, it's a Phillips head, so you reach for a Phillips screwdriver and tighten the screw. If the door works now, fine, you're done. If it's still wobbly, you look for another problem to fix. Etc....

 

More common tasks, on the other hand, may have a more traditional approach set for a solution. Am I fairly safe to assume this?

I don't believe in "a traditional approach". If something sounds like it needs EQing you EQ it. If it sounds like it needs compressing you compress it. Etc. Even something as basic as "Vocal tracks need EQ, compression and reverb" doesn't make much sense to me. Have you considered that maybe those vocals don't need any EQing? Don't need any compression? Don't need any reverb? Those situations arise.

 

So get your mind out of the "best practices" mindset, learn your tools, learn to use your ears, and learn to make your own decisions. That's the only way you'll progress. I've never seen a successful sound engineer turn a knob without knowing why. They don't turn a knob because they've seen someone else turn that knob before. They heard something wrong, and they're trying to fix it. That's the mindset you should have.


Good advice from David. One unfortunate thing about Logic is that when people load up e.g. 32 tracks of Channel Strip Settings, they load in all these plug-ins that do more harm than good.

 

I had a very well-known L.A. engineer over here last week teaching me why he loves the UAD Ocean Way and API Vision plug-ins, and he told me that frequently, when he is called in to "improve" a mix by a Logic-based composer, the first thing he does is bypass all those plug-ins, and the composer is generally shocked at how much better it already sounds.

 

I told him that I have often commented that when I see somebody's Logic project with tons of EQ and Compressor plug-ins on every track, I think "well, here is someone who does not know what the hell he is doing." He laughed and said he agreed.

 

Especially with EQ, take the Hippocratic Oath: "First do no harm."


So it does not need to be complicated beyond using the right tool for the job. Cool, thanks Dave. Combining that advice with Asher's assertion that the least amount of filtering to get the sound is correct, it would probably mean that at the recording stage, mic placement serves as the primary EQ. In theory I knew this was right, but I am just now learning it in practice.

 

On an aux channel, is it possible to trace backward to see which input channels have been assigned to it, as opposed to checking each input channel strip for its bus destination?


Parallel compression means mixing the compressed signal with the uncompressed one, resulting in less compression.

Less compression compared to what? Compared to an inserted downward compressor? That would depend entirely on the settings in each case.

 

Since parallel compression is done in parallel there's no compression of the original signal, keeping the original peaks intact. As you blend, you'll in effect compress the summed signal as well as raise the level of the sum. With traditional parallel compression settings (i.e. NY style) you'll get upward compression.

 

Only in rare cases where the parallel signal is compressed using a very slow attack, low threshold and high ratio, could the result be less compression relatively speaking (i.e. expansion).

 

Is it common practice to compress a signal using a channel strip plus a second pass on an aux channel? That is what I thought was meant by parallel compression. Thus compressing a signal 2X.

Compressing a signal several times is normally referred to as serial compression, i.e. several compressors in series.

 

If you compress a signal during recording, then on the channel strip during the mix, and then route the output of that channel strip to a bus and compress it on the aux - that's serial compression.

 

If you (pre fader) send the channel strip to a bus and compress it on the aux - that's parallel compression. The compression no longer takes place on the original signal, but in a parallel chain.

 

Serial compression, or any type of compression, is rarely necessary for levelling purposes in most electronic music, only for shaping. viewtopic.php?f=10&t=103235

 

Serial compression is common with real instruments and vocals, especially when mixing digitally, since there is no self-compression. The stages could look like this for a vocal:

 

1) Compression during recording, e.g. an 1176 LN to control the peaks, plus some overall levelling and coloration

2) A low-level non-linear compressor pulling up details, e.g. Waves Renaissance in Electro mode

3) An averaging (usually RMS-type) compressor smoothing out overall levels

4) Sum compression on a bus with other vocals for group control

Edited by lagerfeldt

I currently use templates for this very reason.

It's an even split between cool spatial effects and parallel tracks.

 

As for the question, I've got them all on tracks in the arrangement, with quick automation via touch tracks. It works!


Parallel compression means mixing the compressed signal with the uncompressed one, resulting in less compression.

Less compression compared to what? Compared to an inserted downward compressor? That would depend entirely on the settings in each case.

Compared to the same compressor inserted in the signal chain (and obviously with the same exact settings).

 

The OP was under the impression that parallel compression would "double up compression and crush the life out of the drums". But it's the opposite: parallel compression allows you to mix in some of the uncompressed signal along with the compressed one, resulting in less compression than if you just had the compressed signal alone.

 

Only in rare cases where the parallel signal is compressed using a very slow attack, low threshold and high ratio, could the result be less compression relatively speaking (i.e. expansion).

Take one compressor with fixed settings. Insert it serially and you'll have a certain amount of compression. Use it in parallel, mixing the compressed signal with some of the uncompressed signal, and you'll have less compression. It doesn't matter what the compressor settings are. Adding uncompressed signal to the compressed signal always results in less compression than keeping only the compressed signal.
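A quick arithmetic check of that claim, with made-up amplitudes (none of these numbers come from any real compressor; they only stand in for a loud and a quiet moment of the same signal):

```python
# Hypothetical linear amplitudes for two moments in the signal:
# a loud peak and a quiet passage, about 12 dB apart
dry_peak, dry_quiet = 1.0, 0.25

# The same two moments after the compressor, whatever its settings:
# the range has been narrowed (here to about 8 dB)
comp_peak, comp_quiet = 0.5, 0.2

# Serial (inserted): you hear only the compressed signal
serial_range = comp_peak / comp_quiet        # 2.5

# Parallel: the dry signal is added to the compressed one at blend level g
g = 1.0
parallel_range = (comp_peak + g * dry_peak) / (comp_quiet + g * dry_quiet)

# The blend's peak-to-quiet range always lands between the compressed
# range and the dry range (a mediant inequality), i.e. less compression
# than the compressed signal alone, for any g > 0
```

Whatever blend level you pick, the ratio of the summed peak to the summed quiet passage sits strictly between the two original ratios, which is the "always less compression" result in the quote above.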


Take one compressor with fixed settings. Insert it serially and you'll have a certain amount of compression. Use it in parallel, mixing the compressed signal with some of the uncompressed signal, and you'll have less compression. It doesn't matter what the compressor settings are. Adding uncompressed signal to the compressed signal always results in less compression than keeping only the compressed signal.

I have heard and seen many tutorials that utilize similar techniques. Honestly, I can't hear the difference. Aghhh! I feel sooo inar-inar-inarticulate... :x


Take one compressor with fixed settings. Insert it serially and you'll have a certain amount of compression. Use it in parallel, mixing the compressed signal with some of the uncompressed signal, and you'll have less compression. It doesn't matter what the compressor settings are. Adding uncompressed signal to the compressed signal always results in less compression than keeping only the compressed signal.

Ah, with identical settings, yes.

 

It doesn't matter what the compressor settings are.

As long as they're the same.

 

Parallel compression is often used for upward compression, which requires very different settings from what a regular inserted compressor would be set for. But you are correct - I took the upward compression scenario for granted, since blending regular downward compression settings is more commonly done internally with a crossfade wet/dry knob and compensated make-up gain than with the always-add aux scenario.

