Audio not in sync


Kim Prima


Hi,

I'm having a massive problem with my audio recordings in Logic Express 7 (.3, I think). I play guitar, for example, and it's perfectly in time; I press record and all is well until I play it back, at which point it's slightly out of time, which obviously makes it sound awful. Thanks for any help!


NOTE: this procedure was originally posted during the Logic 7 era and updated for Logic 8. The procedure is still perfectly valid for Logic 9.

 

===========

 

SUMMARY: with this procedure you'll be playing back a track from Logic and recording it right back into Logic on another track. This "loopback recording" will likely be late* with respect to the original due to latencies inherent in your interface, driver software, etc. Note that Logic's I/O buffer and process buffer settings have no influence on this procedure.

 

This procedure lets you figure out, to the sample, how far off your looped-back recording is with respect to the original. Once you determine that number, you'll enter it into Logic's recording delay setting in the audio prefs per the instructions below. From that point on, your live-recorded tracks will be perfectly in time with your playing.

 

The procedure uses phase cancellation of the original and looped-back track to certify that you've found the right recording delay value (you'll see reference to "null point" below, and that's what this is about).

 

* Note: audio recorded by most audio interfaces ends up being late. But on some systems the recorded audio can actually end up early! And in some cases a recording delay setting of zero will suffice. The procedure outlined below addresses the more common scenario -- late audio.

 

HOW TO DETERMINE AND SET THE RECORDING DELAY

 

STEP 1 -- very important!

 

• Start with a totally blank song and NOT an existing project. Easy way to do this: use the Explore > Empty template.

• Turn software monitoring off

• Turn the metronome off

• Set the recording delay value to zero

• Set Plugin Delay Compensation (PDC) to off

 

2. Create two audio tracks:

 

• Track 1 (channel 1) -- add a CD track or any stereo track of your own, preferably something with sharp transients at the top, like drums or percussion. I'm going to refer to this audio region as "X". Align it to start at bar 2.

 

• Track 2 (channel 2) -- set this channel to record from INPUTS 1/2. This is the track you're going to record your looped-back audio on.

 

3. Use patch cables to physically connect outputs 1/2 of your audio interface to inputs 1/2 of the same interface.

 

4. Start playback at bar 1 and go into record a little before bar 2 (punch on the fly should be enabled to make this operation as smooth as possible; you can also use auto-punch if you want). The recording you're making on Track 2 -- the "loopback recording" -- is going to be called "Y". You only need to record about 10 - 20 seconds of material, max.

 

5. After you've made the loopback recording, take Track 2 out of record and insert the Logic > Helper > Gain plug on this channel. Set the L & R channels to be out of phase.

 

What's going to happen next: you're going to play back both "X" (the original) and "Y" (the loopback recording). Because of the out-of-phase setting on the gain plug, if Logic recorded a perfectly in-time copy of X, playback at this point will result in silence. This is what happens when two identical waveforms are played together with the only difference being that one of them is 180º out of phase with the other.
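The cancellation principle is easy to verify numerically. Here's a minimal sketch (assuming a noisy, drum-like test signal and a hypothetical 100-sample interface latency; the numbers are illustrative, not from Logic):

```python
import random

SR = 44100                                            # assumed session sample rate
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(SR)]    # "X": noisy, drum-like original

# The Gain plug's phase flip makes "Y" an inverted copy of "X".
# Perfectly time-aligned, the two sum to dead silence:
y = [-s for s in x]
mix = [a + b for a, b in zip(x, y)]
print(max(abs(s) for s in mix))                       # 0.0 -- total cancellation

# But a loopback recording that came back 100 samples late does NOT cancel:
y_late = [0.0] * 100 + [-s for s in x[:-100]]
mix_late = [a + b for a, b in zip(x, y_late)]
print(max(abs(s) for s in mix_late) > 0.5)            # True -- a loud residual remains
```

The residual in the late case is what you hear as the flamming or thin, flanged sound.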

 

However, chances are that X won't be time-aligned with Y due to the latency inherent in your audio interface and its driver software. You'll likely hear flamming (slapback echo), or a thin, flanger-like sound. If you hear this, it's a clear indication that your recording delay setting needs to be adjusted.

 

NOTE: the proper recording delay setting for some systems is indeed zero! So if at this stage you do achieve complete dead silence (or very close to it) you can conclude the test. If you don't achieve silence, continue to the next step...

 

6. Open "Y" in the sample editor. Zoom ALLLLLLLLLLL the way in to the anchor point as far as you can go. Set the sample editor's "view" to "samples".

 

7. Click/hold on the anchor point, being careful not to move it. You will now see two numbers in the sample editor window. Write down the bottom number.

 

On most systems "Y" will have been recorded late. This means that the top of "Y" contains a little bit of dead air as compared to the original, "X". We're going to move the anchor point to the right -- one sample at a time -- to shift the starting point of "Y" past the dead air, to the sample position that will cause "Y" to be aligned with "X", as follows...

 

8. Hit play and move Y's anchor point to the right one sample at a time until you start to hear the sound thin out. Start/stop Logic as needed. As you move the anchor further to the right the sound will thin out more and more. As you get closer to the null point a steady, flanger-like "pitch" will start to form in the sound. If the pitch gets increasingly higher you know you're moving in the right direction.

 

You will reach a point where the sound is extremely thin and almost silent, and then, moving one more sample to the right, it will cancel. When this happens, click and hold on the anchor and write down the bottom number.
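As an aside, the null point you find by ear corresponds to a simple alignment problem, which can be sketched as a brute-force cross-correlation. This is only an illustration of the underlying math with made-up numbers, not a Logic feature:

```python
import random

random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]   # original "X"
true_delay = 137                                        # hypothetical interface latency
y = [0.0] * true_delay + x[:-true_delay]                # loopback "Y", recorded late

def best_offset(x, y, max_lag=512):
    # Slide "Y" earlier one sample at a time (like moving the anchor right)
    # and keep the lag where it lines up best with "X".
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag):
        score = sum(a * b for a, b in zip(x, y[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

print(best_offset(x, y))   # 137 -- the offset you'd otherwise find by ear
```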

 

 

9. Subtract the first number from the second number. Then put a "-" in front of it. THAT's your recording delay value; set it in your audio prefs.
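A quick worked example of step 9, using hypothetical anchor numbers (yours will differ):

```python
# Bottom number from step 7, before moving the anchor (hypothetical):
first_number = 88200
# Bottom number from step 8, at the null point (hypothetical):
second_number = 88327

# Step 9: subtract the first number from the second and negate the result.
recording_delay = -(second_number - first_number)
print(recording_delay)   # -127 -- the value to enter in the audio prefs
```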

 

To confirm that this is the correct number

 

10. Delete "Y" and make a new loopback recording on track 2. This is going to be called "Z".

 

11. If the number you calculated is correct, and the Gain plug is still active on track 2 (putting "Z" out of phase with the original "X"), when you play back both tracks now you will achieve silence. To confirm, bypass the plug and you should hear your original track 2x as loud.

 

If, upon playing back X and Z, the sound is still not perfectly canceling, adjust your recording delay +1 or -1 from the value you calculated and repeat steps 10 and 11. However, note that sometimes it's not possible to achieve complete silence. You might hear a little bit of residual sound, and that will likely be due to the A/D converter in your interface re-interpreting the signal in a slightly different way from the original. So while the goal is to achieve complete silence, you may only be able to get 99.95% of the way there. If you accomplish that, no worries -- you're doing everything right and you'll be in good shape!

Edited by ski

"Turn software monitoring off, set recording delay to zero, set PDC off, and have zero plugins anywhere in your song."

 

Software monitoring, PDC, and recording delay are parameters in your audio preferences. Zero plugins means just that -- no plugins anywhere in the song.

 

You're best off starting with a blank template for this too, adding two audio tracks, and following the procedure above.

 

If you find that the procedure is too technical or seems too complicated, you may need someone more experienced on Logic to help you. This procedure is the only way to calculate recording delay and set it properly. And unless you set it correctly, your recording will always be out of time.

 

BTW, what audio interface do you have?


Logic's default setting for recording delay is zero. The manual suggests that it's not necessary to change this, but that's got to be one of the most blatantly incorrect pieces of information in the entire manual. Unlike Cubase, Logic does not do an automatic calibration of the recording delay for your audio interface. The only way to get it to be right is to set it manually.

 

DP is the same way. The figure must be calculated by hand and entered in their recording prefs. At least that's the way it is with their very own interfaces, such as the 2408.

 

In most DAW situations, most of the time, recorded audio will be late due to audio driver (and audio interface) latencies. I believe that the Hammerfall drivers are the only ones that attempt to calculate a recording delay on their own, but AFAIA the numbers are never right. If I'm not mistaken, most people I've helped out on this subject who experienced the opposite effect -- recorded parts playing back early -- were using Hammerfall gear.

 

So I'll put this out to everyone --- if your recording delay is set to zero, you should do the above test and see how things line up in your system. I can almost guarantee that you won't achieve perfect cancellation if you were to hit PLAY at the end of step 7. If you do, then great! If you don't, you have a problem: your recorded tracks will be out of time on playback.


OK, great, thanks, but I have a new problem. I can't plug my audio interface's outs straight into the ins -- it just makes a crazy "about to blow up" sound. Any ideas?

 

That's probably because you have software monitoring turned on. It has to be off. That's why all the things I listed in the first step are so important! 8)


Hi ski,

 

Very interesting and important thing you figured out!

I can follow all your steps to calculate the recording delay compensation.

But I have one question about step 1: what about the I/O buffer size -- how should it be set? To the lowest (32), or to what I use in general for mixing, like 512 or 1024?

 

Thank You

 

Milan



Hi Milan,

 

Strange as it may seem, neither the I/O buffer size nor the process buffer range setting affects the recording delay calculation. In fact, after reading your post I did some tests to double-check this, and I can confirm it's the case -- at least on my system.

 

As far as setting the I/O buffer... as you probably know, the object of the game is to set the I/O buffer to the lowest possible number to reduce latency for live performance of virtual instruments while allowing them to actually function! For example, I was able to set my buffer to 32 when I first got my computer. But once I installed Garritan Personal Orchestra I was getting clicks and pops when I played those sounds, so I had to raise the buffer to 64. That cured the problem.

 

More recently I bought some VSL stuff and I had to raise the buffer yet again to 128.

 

But there are other tradeoffs that aren't always discussed... I'll relate how things work on my Logic 7 + MOTU system, though YMMV with Logic 8 and your particular audio interface...

 

My system doesn't work very well at a buffer setting of higher than 256... at settings of 512 and 1024, Logic complains. A lot LOL!!

 

But at a buffer setting of 256, everything sounds better! That's too high a number to track with, though, so I'll track at 64 or 128 (if I'm using VSL). It would seem logical, then, for me to get the best-sounding mix by upping the buffer to 256, but that's not the case... at 256, the MIDI timing of any virtual or MIDI parts I have playing goes to s#!+.

 

Tradeoffs, tradeoffs... :roll:

 

-=sKi=-


Hi ski again,

 

Did the calculation... and it worked! ...but... I had to try several times till I got it. My trouble was, I imported a normalized stereo interleaved track, and after I recorded that track onto audio track 2 I missed that its level was a little bit lower than the original... hmmm... then I thought to normalize that recorded track, and then it worked.

Strange thing: the original has this double-zero sign and the recorded one does not... and after normalization it also had that double-zero sign... wtf

 

And about that I/O buffer thing: I know that for recording VIs, 32 is best, but when I want to use stuff like BFD 2 I have to increase it up to 512 or even 1024... that's how it goes... But, well, thanks for this calculation thing!



There are definitely a couple of WTF's in there LOL! But you did run across one thing that I didn't mention in my procedure --- that sometimes you won't get a perfect match of level when you do this kind of loopback test via analog out--->analog in; in such cases you have to jockey the level of track 2 a little bit. On systems where you can route a signal digitally out--->digitally in to Logic, level mismatches are usually not an issue.

 

Anyway, I'm very pleased that this has worked out for you and Kim.

 

Best,

 

-=sKi=-


One more WTF... I routed the signal digitally like this:

Logic out --> FireWire --> Lightbridge --> Lightbridge ADAT out --> 01V96 ADAT in -->

fader set to 0 dB --> routing out ADAT --> Lightbridge ADAT in --> FireWire --> Logic in... = lower level!? Don't know why; must be something with the double-zero or normalization... anyway 8)


:shock:

 

How much of a level difference was there?

 

(BTW, if you want to adjust level in really fine increments -- like 0.1 dB at a time -- use the Gain plug's gain slider, holding down Shift before you click on it; that gives you 0.1 dB adjustments. You can't get this kind of fine resolution from the channel strip's fader.)
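To put those fine increments in perspective, here's the standard dB-to-amplitude conversion (the specific values are just examples): a 0.1 dB step changes the amplitude by only about 1.2%, which is why a coarser control isn't good enough for nulling.

```python
def db_to_ratio(db):
    # Convert a dB gain change to a linear amplitude multiplier.
    return 10 ** (db / 20.0)

print(round(db_to_ratio(0.1), 4))   # 1.0116 -- a ~1.2% amplitude step
print(round(db_to_ratio(3.0), 4))   # 1.4125 -- far too coarse for null tests
```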



I am not sure; I just saw a difference in the waveforms. It must have been between 3 - 6 dB -- enough that the phase cancellation didn't work properly. The sound was getting thinner, but no cancellation.


  • 2 weeks later...

I am trying to do this. First of all, I am using the Duet, so I have a variety of in/out options (output to -10 or instrument amp), and inputs of mic XLR / +4 XLR / -10 XLR / instrument.

 

I thought I'd better not use the XLR ones, as they go through the preamp and I want to avoid coloration for this test, right? So I chose the instrument input, used some TS or TRS cables to patch the output to the instrument input, and in the Apogee software I set the output to -10.

 

I recorded and got a replication, but it was quieter, so I had to mess with gain (added 4 dB of gain on both instrument input channels to get something SIMILAR to the original in amplitude). I can't get them to cancel completely, but I get close, and it's only one sample space to the right. Is this normal? Why can't I get a perfect cancellation?


I am not sure; I just saw a difference in the waveforms. It must have been between 3 - 6 dB -- enough that the phase cancellation didn't work properly. The sound was getting thinner, but no cancellation.

 

It's hard to tell without being there to know exactly why you had such a difference in level and why you were only getting partial cancellation. BUT... if the sound was getting thinner, that's a good sign. Chances are that you're better off having set your recording delay based on however many samples of offset it took to achieve the thinning of the sound than what you had before.
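For the curious, the effect of a level mismatch can be quantified. If two otherwise identical, perfectly aligned signals are summed with one inverted, a gain mismatch of g leaves a residual of |1 - g| times the original amplitude. A rough sketch, with illustrative values only:

```python
import math

def residual_db(mismatch_db):
    # Residual level (in dB relative to the original) left over when an
    # inverted, perfectly aligned copy is `mismatch_db` hotter/quieter.
    g = 10 ** (mismatch_db / 20.0)
    return 20 * math.log10(abs(1.0 - g))

print(round(residual_db(3.0), 1))   # -7.7 -- clearly audible: thinner, but no null
print(round(residual_db(0.1), 1))   # -38.7 -- much closer, still not dead silence
```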


I can't get them to cancel completely, but I get close, and it's only one sample space to the right. Is this normal? Why can't I get a perfect cancellation?

 

Can you explain in a little more detail what you meant by "it's only one sample space to the right"?


Hi Swan,

 

Would there be increased delay on analogue audio OUTs vs. digital (e.g. S/PDIF)? ...or can I do this test with either?

 

Maybe it would be good to clarify what the recording delay parameter does before I take a stab...

 

Let's look at analog recording for a second... if you were to play a cowbell with a stick (sharp transient) into a mic and record that on tape, here is the chain of events:

 

• sound wave travels through the air and hits the microphone, causing its diaphragm to vibrate

• microphone signal is amplified by preamp (analog electronics)

• signal travels to tape recorder's analog electronics

• tape heads get energized

• magnetic pattern gets imparted to moving tape

 

From the time the sound wave hits the mic, there is virtually no delay between the moment the mic's diaphragm starts moving and the moment the magnetic particles start moving on tape. This is because electrical signals in analog electronics travel at near the speed of light. For all intents and purposes, there is zero delay introduced by the electronics.

 

The only place any kind of delay would be introduced has to do with your distance from the mic. If you are 2" away, it will take the sound of the cowbell less time to hit the mic than if you are 22" away. If the cowbell part sounded better the further away you were from the mic, your human musicianship would cause you to instinctively play ahead of the beat. Human Delay Compensation!

 

Digital recording is just soooo different. Once the signal hits the audio interface, it has to be sampled (time sliced) at the sample rate of your session. It will always take -- at minimum -- one sample's worth of time for the computer within the audio interface to calculate the proper value for each time slice it samples. But it will usually take many more samples' worth of time to do this calculation.

 

Once the calculation is done for each sample, a digital value for that sample passes to your computer (either via FW or PCI card). Both of these portals are governed by software which may introduce its own delay of at least one sample. But more realistically, this delay can often be more significant, on the order of tens or even hundreds of samples.

 

To summarize, the analog-to-digital conversion introduces a delay and the driver software introduces a delay. Let's say the total accumulated delay is 100 samples. When you record audio, you do it in real time. By the time it gets actually recorded in your DAW, it's 100 samples late! The recording delay lets you compensate for this.
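To put that hypothetical 100-sample figure in real-world terms (assuming a 44.1 kHz session; the conversion is just samples divided by sample rate):

```python
def samples_to_ms(samples, sample_rate=44100):
    # How late the recorded audio lands, in milliseconds.
    return samples * 1000.0 / sample_rate

print(round(samples_to_ms(100), 2))          # 2.27 -- ms of lateness at 44.1 kHz
print(round(samples_to_ms(100, 96000), 2))   # 1.04 -- the same delay at 96 kHz
```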

 

Now, to your question!

 

If you record digital ---> digital via your audio interface, no time-consuming A/D conversion has to take place. But you'll likely still have a delay of some amount, because all digital audio recording systems use a clock (word clock or similar) to keep the audio streams flowing synchronously. So there may still be a delay; it just won't be as great as when doing analog recording.


  • 1 month later...
What if I do this and I don't have any delay?

 

Not sure what you mean... do what?

 

Aside from that, in any digital recording system there will be some delay. It's inevitable and unavoidable, even if it's only a few samples' worth of delay (though it's usually much more than just a few...)


hey ski ...

 

Could I do this in mono?? My audio interface has 2 ins, but one is a Hi-Z 1/4" jack while the other is a female XLR Low-Z. Anyway, I'd be worried that things wouldn't match up too well if I tried to loop back like this. I read quickly through your instructions and didn't see any reason why it needed to be stereo... or did I miss something??

 

I was thinking that I could just record a mono MIDI track, and then just bounce it mono and import that into the Arrange window -- or would I be better off using stereo tracks as you listed but only using one side... help me out here, can't you see my head is spinning!!!!

 

 

[deep breaths] ..... [/deep breaths] ..... I'm OK now!

