
[Noob] When I bounce my project, it doesn't sound the same as when I play it in Logic Pro



Hi,

 

I don't have a lot of experience in music. Before trying to make my first song in Logic Pro X, I used GarageBand for 2 songs. In GarageBand the export process was perfect, but in Logic Pro, when I try to bounce the song, it sounds really different. I read some threads on this forum about the Normalize function and the output format (I used PCM).

With Normalize on, the song is OK, but the volume is VERY low; it's not acceptable.

With Normalize off, the song is OK until some parts where the master goes over 0 dB; there are saturation glitches, which is totally not acceptable.

 

Maybe the problem is that the master shouldn't go over 0 dB? It never goes over 2 dB, and it's always for a very short time.
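A quick sketch of why those over-0 dB peaks turn into glitches on the bounce (illustrative Python, not Logic's actual internals): a DAW mixing in floating point can carry peaks above full scale without trouble, but a fixed-point PCM file cannot, so the tops of the waveform get flattened.

```python
import math

# A 440 Hz sine boosted 2 dB above full scale, like a master peaking at
# +2 dB. Illustrative sketch only: Logic mixes in floating point, and the
# clipping happens when the bounce is written to a fixed-point PCM file.
gain = 10 ** (2 / 20)   # +2 dB ~ 1.26x linear amplitude
signal = [math.sin(2 * math.pi * 440 * n / 44100) * gain for n in range(44100)]

# A fixed-point PCM file cannot store values beyond full scale (+/-1.0), so
# every sample outside that range is flattened: the "saturation glitches."
bounced = [max(-1.0, min(1.0, s)) for s in signal]

print(round(max(signal), 2))   # 1.26 -- fine inside the float mixer
print(max(bounced))            # 1.0  -- clipped flat in the exported file
```

The flat-topped waveform is what the distortion on the bounced file sounds like.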

But there's something I don't understand: why doesn't the bounce export exactly what I hear in Logic Pro? I could capture the audio with third-party software while I play it in Logic Pro, and it would give a better result than the Logic Pro bounce. That sounds really silly to me; there must be a way to do that in Logic Pro. Could someone explain to me what I'm doing wrong?

 

Thx a lot.


I just tried bouncing with the "real time" option, and also with "real time bounce 2nd pass," and it produces the same result, even though I can hear the song is OK when playing it in Logic Pro. For my tests I increased the master volume on purpose to go over 5 dB; I want to understand why I hear it fine in Logic Pro and not in the bounced file.

OK, after trying many things, it seems that adding a compressor to the master track helped. You need to find the right settings, but the result is OK and loud enough. For those curious, I used the compressor settings shown in the attached screenshot.

[Attachment: Screen Shot 2017-07-26 at 17.35.45.png]


Use the Adaptive Limiter on the stereo output for the final volume push; it works better.

Make sure it's the last plugin.

For quick mastering, I usually use both: 

  1. A compressor with a soft ratio – I often use the Platinum Analog Tape preset. 
  2. An adaptive limiter.

After that it's a matter of using proper gain staging to hit the compressor just right and the ad-lim just right with the just-right amount of gain. 
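As a rough numeric sketch of what that compressor-then-limiter chain does to a peak (the threshold, ratio, ceiling, and makeup-gain values here are made up for illustration, not the settings of Logic's plugins):

```python
# Static (no attack/release) sketch of a compressor followed by a brickwall
# limiter. All parameter values are hypothetical, chosen for illustration.

def compress_db(level_db, threshold_db=-10.0, ratio=3.0):
    """Above the threshold, the excess level is divided by the ratio."""
    if level_db <= threshold_db:
        return level_db          # quiet material passes through untouched
    return threshold_db + (level_db - threshold_db) / ratio

def limit_db(level_db, ceiling_db=-0.1):
    """Brickwall limiter: nothing gets past the ceiling."""
    return min(level_db, ceiling_db)

# A peak hitting the chain at +6 dB:
after_comp = compress_db(6.0)           # -10 + 16/3 ~ -4.7 dB
after_lim = limit_db(after_comp + 6.0)  # +6 dB makeup gain; ceiling catches it

print(round(after_comp, 1), after_lim)  # -4.7 -0.1
```

This is what "gain staging" buys you: the level going in (and the makeup gain between stages) decides how hard the signal hits each processor.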


David Nahmani:

After that it's a matter of using proper gain staging to hit the compressor just right and the ad-lim just right with the just-right amount of gain. 

 

Some questions about this: when you say "gain staging," are you referring to the master track volume, each track's volume, or something else? And when you say to hit the compressor or ad-lim just right, what do you mean exactly? I understand that if the volume is really too high, even the compressor + ad-lim won't sound OK; is that what you mean?


  • 2 weeks later...

OK, try this, so you can maybe understand what's happening:

Remove all plugins from your Output channel. Play your song start to finish and watch the level on that channel. My guess is that you have a lot of peaks going above 0 dB, so when you export with Normalize on, Logic finds the loudest peak and turns the WHOLE song down by the number of dB it exceeds 0. Say the loudest peak reaches +6 dB: with Normalize, the whole song is turned down 6 dB, which is why it sounds quieter than it does playing inside Logic.

Now that you've played the whole song and know how many dB above 0 the Output reaches, select all your tracks and turn all the faders down by that same amount, plus maybe 1 or 2 dB more: if it's going 3 dB above 0, turn all the faders (except the Auxes) down 4 or 5 dB. Play the song again, then export without Normalize and check whether the volume matches. It will, if the problem you're facing is the one I'm describing.
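The arithmetic above can be sketched like this (using the hypothetical +6 dB overshoot from the example):

```python
# Hypothetical numbers matching the +6 dB example in the post above.
peak_overshoot_db = 6.0   # loudest master peak: 6 dB over 0 dBFS

# Normalize ON: Logic applies ONE gain change to the whole file so that
# the loudest peak lands exactly at 0 dBFS.
normalize_gain_db = -peak_overshoot_db          # -6 dB everywhere
linear_scale = 10 ** (normalize_gain_db / 20)   # ~0.5x amplitude
print(normalize_gain_db, round(linear_scale, 2))  # -6.0 0.5

# Fader fix: trim every track by the overshoot plus a small safety margin,
# then bounce with Normalize OFF; no peak exceeds 0, so nothing is rescaled.
fader_trim_db = -(peak_overshoot_db + 1.0)      # -7 dB per fader
print(fader_trim_db)                            # -7.0
```

Either way the file ends up quieter than unclipped playback; the fader trim just puts that decision in your hands before the bounce instead of after it.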

 

Hope it makes sense :)

 

And as "triplets" said, it all starts with fixing the individual tracks by EQing, compressing, and limiting, so you don't have one single compressor and/or limiter doing all the work at the end.

 

Good luck :)


If you're sitting in the audience during a musical performance, "your amazing ears" can easily accommodate differences in sound within the hall – you can hear your sweetie whispering sweet-nothings to you during a KISS concert – because all of the sounds are bouncing through the hall separately and you can zero-in on each one.  The same thing happens visually when you "look around," and you can see detail in both the brightly-lit stage and in the very-deepest shadows at your feet, because "your amazing eyes" adapt instantly.

 

But, when you set out to record that sound, or that visual image, "everything changes."  Now, the sound is coming from one loudspeaker-bank.  The visual image is coming from one piece of film, or from one digital screen.  There is only a single range of sound-levels, being captured and played back from one source, or, only so-many colors, being displayed from one source.

 

So, in either case, you are obliged to coax the various pieces to "play well together," and to "appear realistic and believable" when compared to the real-life experience of actually being in that concert hall (or looking at that scene). The medium in question, unlike "our very-amazing senses," cannot represent more than one independent stream of information at the same time, and it cannot even begin to match our human senses' ability to discern vast ranges of sound (or light) levels. Our eyes and our ears can instantly adapt, but the media that we use (in both cases) cannot "adapt" at all.

 

In the realm of photography, Ansel Adams made himself famous for articulating his so-called Zone System, which in some ways was a simplification of the considerations faced every day (and, to this day ...) by printers.  The essential principles are very much the same for sound recording:  "we have to shove this thing through an extremely-limited medium, and still make it fantastic."  (And, "the audience must never suspect.")


  • 5 weeks later...

@3ple, yes, I think it makes sense. I understand the Normalize problem. I have to think about your "limiting so you don't have one single compressor and/or limiter doing all the work at the end," because I'm not sure I understand that part. Now that I have an AdLimiter on the master, the bounces sound OK to me, but I'm not an expert; maybe I should use the AdLimiter on individual tracks and see if it makes a difference. So many things to learn here.

 

@MikeRobinson ahah yes, thx, very cool answer. It has been 1 month now and my understanding of the whole thing has improved a bit, but I'm still far from the sound I hear on professional records; there's so much to learn.

Edited by jp44

If you compress and limit each individual track, then you don't have one single compressor and/or limiter doing all the work, compressing/limiting the whole mix at once.

Yes, I think I get it: in the end, each track will have its output level "controlled" by its individual compressor/limiter, and won't affect the other tracks. And the song will render better than with one single global compressor/limiter, right?


Yes. If you tweak each individual track, the final output will be "cleaner" and whatever processing you use there won't have to be as hard. As a simple example with EQ: let's say you have a snare that has a lot of low end that you don't need, but you don't tweak it. When you get to the output and you want to remove those frequencies, whenever you apply an EQ you will be affecting the other instruments. It's the same with compressing/limiting. The more you can tweak individually, the more control you will have at the end of the chain (output).
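A toy numeric sketch of that EQ point (made-up frequency bands and dB levels, not real audio processing): cutting the low end on the summed output guts the bass instrument along with the snare rumble, while a per-track cut leaves the bass intact.

```python
import math

# Toy frequency-domain bookkeeping: each "instrument" is a dict mapping a
# frequency band to a level in dB. All numbers are made up for illustration.
bass  = {"low": -6, "mid": -30}
snare = {"low": -18, "mid": -8}   # unwanted low-end rumble on the snare

def low_cut(track, amount_db=24):
    """Crude 'low cut' EQ: pull the low band down by amount_db."""
    return {band: lvl - amount_db if band == "low" else lvl
            for band, lvl in track.items()}

def mix(*tracks):
    """Sum tracks band by band in linear power, then convert back to dB."""
    power = {}
    for t in tracks:
        for band, lvl in t.items():
            power[band] = power.get(band, 0.0) + 10 ** (lvl / 10)
    return {band: 10 * math.log10(p) for band, p in power.items()}

# Fix at the source: only the snare's rumble is cut; the bass survives.
per_track = mix(bass, low_cut(snare))

# Fix on the output bus: the same cut also guts the bass instrument.
on_bus = low_cut(mix(bass, snare))

print(round(per_track["low"], 1))  # ~ -6.0 dB: bass low end intact
print(round(on_bus["low"], 1))     # ~ -29.7 dB: bass low end gone too
```

The mid band comes out the same either way; only the shared low band shows the damage, which is the whole argument for tweaking tracks individually.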

Remember what I said about "one speaker."  At the end of the day, (probably ...) one "Stereo Out" is what you finally wind up with, from a "production line" of mixing steps that deal with the various inputs.  You need to arrange things, up and down that production-line, so that the levels of each thing that is "going into" a particular processing-stage are compatible with one another.  Otherwise, that processing-stage is going to appear to affect one or more of those inputs "too much."  (The amount of volume-reduction needed to tame a "loud" component causes a "quiet" component to disappear.)

 

I drew an analogy with photography because the situation is actually quite similar: photographic paper and film are much less sensitive than the eye (video is much worse), and sound recordings less so than the ear. In both cases you have to "squeeze" the information into the constraints of the medium, usually through a multi-step process, while making the final product appear the way you want it to (and without making it appear unrealistic). In both cases the saving grace is your human brain, which mostly perceives (relative) contrast rather than the absolute levels a meter would report. That "quiet" component is relatively(!) quieter than the "loud" one, and your ears accept the recording as believable. But if you measured the sound levels in a concert hall, they would cover a wider range of absolute values than what you would find on a good recording made in that hall. The difference comes from what your ears can do versus what recording can do. The engineers knew how to work within the physical constraints and characteristics of the medium and of the process, "making it look easy."


Compression and Limiting aside, another reason why the bounced file sounded different could be that Logic is playing out of your audio interface, while the bounced file is being played through the Finder/Built-in audio hardware which uses a different gain structure.

Only if Built-in audio is selected in System Preferences as sound output.


  • 3 weeks later...
Compression and Limiting aside, another reason why the bounced file sounded different could be that Logic is playing out of your audio interface, while the bounced file is being played through the Finder/Built-in audio hardware which uses a different gain structure.

Good one :D

I'm imagining right now how I sit in front of my Neumann speakers in my treated room, listening to my new song, then go home, listen back on my crappy mono boombox, and wonder why it sounds different :D

 

No offense meant to the original poster. He seems to have real issues, and they seem to have been identified. 

