
Audio not in sync


Kim Prima


This took a lot longer than I had initially expected. It seemed that the Duet and Logic were compensating automatically and that the correct Recording Delay should be 0, but much to my dismay I still couldn't reach a complete null. The output volume was lowered by a good 22 dB, so I figured it had to be either 1) the AD/DA conversion slightly changing the waveform, or 2) it being off by sub-samples, which Ski said was possible, or, worst case scenario, 3) I didn't know what the f*%@ I was doing!?! :shock:

I didn't want to post back until I could somehow confirm either case, so I kept agonizing over Samples and WaveForm Alignment, when the Gods showed their Graces and I stumbled upon

http://www.artificialaudio.com/

at this blessed forum of all places! At the very least it corroborates my previously stated belief that the Duet and/or Logic are indeed doing automatic compensation together. The other two screenshots were posted because I thought it interesting and strange that the I/O plug-in would have some kind of inherent latency.


Hey Amnestic and All,

 

I'm having a bout of "real life" myself and haven't been (and won't be) on the forum much for the next few weeks. [caution: shameless plug ahead!] Among other things I'm gearing up for a trip to LA where I'll be teaching classes on the EXS24 and analog synthesis as part of David's ongoing series of workshops and master classes. I'll also be giving a talk at the Logic User Group meeting, so there's much to do.

 

As far as -177 samples goes, that simply means that your audio interface/driver combination is not reporting to Logic how much processing time it takes to do its thing, and so your audio is showing up late in Logic. (I take it that the negative value of 177 is what you plugged into the recording delay parameter?)

 

As long as you plug in the correct value, all of your subsequent recordings should be properly aligned in Logic.
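To make the idea concrete, here's a tiny, purely hypothetical sketch (Python, nothing built into Logic) of what the recording delay setting is compensating for, using the numbers from this thread:

```python
# Hypothetical illustration only: how an unreported driver latency and the
# recording delay setting interact. Logic's actual internals are not shown here.
unreported_latency = 177   # samples the looped-back audio arrived late (measured)
recording_delay = -177     # value entered in Logic's "Recording Delay" preference

# Where a new recording lands relative to where it was actually played:
offset = unreported_latency + recording_delay
print(offset)  # 0 -> recordings line up on the grid again
```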

 

As far as MIDI latency goes, it's hard to say. I don't know whether or not you should expect a correlation between audio hardware/software latency and MIDI timing. That's a whole other subject that I've been meaning to address myself but just haven't had the time.

 

The glitchy waveform display -- I think that might be kinda normal.

 

ski-- thanks so much for the reply! I'll see how the external MIDI devices work out (finally took my microKorg and Electribe EA-1 out of storage today!), but I'm sure I can always rely on quantization to fix most problems, right? I mainly use these synths for bass-lines and such.

 

As for the negative sign, yes... I had to move my "Y" recording 177 samples to the left. Hence, -177 is what I plugged into the recording delay compensation.

 

And regarding your bout of "real life" syndrome-- hey, at least it's music related. As an actor, primarily, it's rare that I can mix Logic with business. And you wouldn't believe how rusty you get with even just 2 months away from Logic. I feel like I'm re-learning everything. Mind you, I was a novice to begin with. Anyways, congrats on the travels and classes! If I was anywhere near L.A., I'd definitely be going to these workshops.

 

Now, I wonder if Paris, France might have Logic workshops and studio sessions, etc. (I'm moving there in September!)


29 Euros (about $45 US) to try to get complete null? Wow, you're REALLY determined. :lol:
yes that is true, but I'm starting to get into a lot more Audio Recording lately, and this is something that you SHOULD be really anal about. Also, I've been looking at some moogerfooger effects boxes lately. They sound so nice!

29 Euros (about $45 US) to try to get complete null? Wow, you're REALLY determined. :lol:
yes that is true, but I'm starting to get into a lot more Audio Recording lately, and this is something that you SHOULD be really anal about. Also, I've been looking at some moogerfooger effects boxes lately. They sound so nice!

 

Oh, I totally agree. I just think that if you didn't get complete silence but were pretty close to it, you're probably already better off with that number (and probably incredibly close). Nothing will ever be perfectly in time. I mean, even if you do the test correctly, if you zoom in close enough you'll always find some difference. It's unavoidable, in my opinion, and it's so minuscule that it's not something you pick up on when listening.

 

Or... maybe I'm wrong?


Depends on your system. If, like me, you're running audio in/out through a digital board (connected via lightpipe to an audio interface), then it's possible to get perfect nullification. On an analog setup (i.e., per the test, where you're running analog out---->analog in of your interface), you will probably encounter a slight discrepancy in the level and/or bandwidth of the looped-back signal ("Y"). But that's OK. As amnestic said, as long as you're getting as close to perfect cancellation as you possibly can, you will be MUCH better off than not doing anything about the situation at all.

 

I believe that people can hear/feel timing discrepancies as small as 0.5 milliseconds, but they can't hear 1 or 2 samples. So if, as a result of your tests, the best you can get is a tolerance of, say, 1 sample either way, I'd say you're good to go.

 

Just to review the math real quick: 1 sample at 44.1 kHz ≈ 0.023 milliseconds. When you get up to around 25 samples, that's approx. 0.5 milliseconds. The key is not so much to get perfect cancellation for its own sake, but rather perfect alignment of the X and Y waveforms. A little level discrepancy won't affect that. Still, if your alignment is off by 1 sample either way, it's no big deal. No one will ever hear that.
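For anyone who wants to plug in their own numbers, here's the same arithmetic as a throwaway Python helper (my own sketch, nothing to do with Logic itself):

```python
def samples_to_ms(samples, sample_rate=44100):
    """Convert a sample count to milliseconds at the given sample rate."""
    return samples / sample_rate * 1000.0

print(round(samples_to_ms(1), 3))    # ~0.023 ms -- one sample at 44.1 kHz
print(round(samples_to_ms(25), 3))   # ~0.567 ms -- roughly the 0.5 ms threshold mentioned above
print(round(samples_to_ms(177), 3))  # ~4.014 ms -- the 177-sample delay discussed earlier
```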


Hey Ski,

 

Great instructions! Very clear. I spent a coupla hours yesterday stuffing around with this. However I couldn't for the life of me get anything to completely null. There would still be very faintly audible audio. I guess according to your last post that's not a real problem.

 

(The instructions for Logic 8, at least the version I have, are a little different I think. In the sample editor for example the values are on the right, side by side.)

 

It seems my system at the moment works best at 0 compensation! Even if I zoom right in to sample level in the waveform view after doing the recording, the samples visibly appear to be very, very slightly offset. Basically, I can see how the waveforms (at sample level in the Arrange) move one or two samples in and out of alignment depending on the value I set in the recording delay. Still, the best I could get was with a value of 0, and it still wouldn't null perfectly.

 

This test, and the result of 0, was done with a pre-recorded drum OH track from another session. Interestingly, when I used a drum loop from the Apple Loops library as the source, I had to set the delay compensation to either -3 or -4 (neither was perfect) to get the desired result! How odd...

 

Maybe the Metric Halo just works really well with Logic, but there does seem to be kind of a "half-sample" discrepancy here, though I wouldn't have thought that was possible.

 

Ski, bottom line as far as I can tell after all that, is that my system is fine and I don't need to do anything!?! :? (I've never had an issue with latency, and use hardware monitoring whenever possible anyway.)

 

Cheers.

 

:)


Ski, bottom line as far as I can tell after all that, is that my system is fine and I don't need to do anything!?! :? (I've never had an issue with latency, and use hardware monitoring whenever possible anyway.)

 

If I understand it all correctly, hardware monitoring is a separate issue (it helps you avoid the latency caused by plug-in effects on channel strips while you're tracking). What this test does is help you record your audio tracks without the delay caused by your audio interface transmitting its signal to Logic. You would hear everything fine using your interface's monitoring, but Logic would still record it a few samples late, no matter what.

 

By doing this test, you've avoided that problem. And actually, it seems you've never had the problem, as your test has shown that "0" is the accurate number.


If you've done the loopback test and found that your recording delay setting should be at zero, you've done two things:

 

1) confirmed that your recording delay parameter is properly set

 

2) shown that the drivers for your audio system report the latency to Logic (which is a good thing for yourself and others to know about)

 

I know that in my procedure (several pages back) I stressed getting perfect cancellation; yes, in an ideal situation you would want to get cancellation (it's the most obvious sign that you've discovered the optimal setting for the recording delay). But when it's not possible, it doesn't necessarily mean that your "Y" is out of alignment with "X". If the signal gets louder when you move one sample to either side of the near-perfect cancellation point, then that near-perfect point is the right one.

 

You can also confirm this by looking at the waveforms for "X" and "Z" when zoooooomed waaaaaaay in. If the waveforms look aligned as well as cancel as much as possible, you're on the money.
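If anyone prefers to let the computer hunt for the alignment point instead of nudging anchors by eye, here's a rough NumPy sketch of the same cancellation idea (my own, not part of ski's procedure; it assumes you've exported the original "X" and the looped-back "Y" as mono float arrays):

```python
import numpy as np

def best_offset(x, y, max_shift=2048):
    """Slide the looped-back recording y against the original x and return the
    shift (in samples) where subtracting the two cancels most deeply."""
    n = min(len(x), len(y)) - max_shift
    best_shift, best_rms = 0, np.inf
    for shift in range(max_shift):
        residual = x[:n] - y[shift:shift + n]      # "X" minus the delayed "Y"
        rms = np.sqrt(np.mean(residual ** 2))
        if rms < best_rms:
            best_shift, best_rms = shift, rms
    return best_shift, best_rms

# offset, leftover = best_offset(x, y)
# print(offset)   # enter the negative of this as Logic's recording delay,
#                 # then re-record "Z" to confirm, as described above
```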


  • 3 weeks later...

Bringing this thread back from the dead again...

 

I just looked at my Tascam US-122's Interface Manager, and it appears that it has an option I've never considered touching: Audio Safety Buffer (see pic below).

 

I've mentioned in the past in this thread that I had to set my recording delay to "-177". Could this 2 ms setting be the culprit behind most of that delay? And why does the "1ms" option look so red and evil? I've tried searching online to figure out what this "buffer" actually does, and why anyone would want to set it to anything but 1 ms, but I couldn't find anything conclusive.

 

Any ideas?

 

Addendum: it just clicked that this is probably similar to Logic's "I/O Safety Buffer" option, which itself is not really useful. I guess it'd be best to set the Tascam to 1 ms. I don't see why this would affect anything negatively.

[Attached screenshot: Tascam US-122 control panel showing the Audio Safety Buffer setting]


  • 2 weeks later...

I think the 1ms is in red because it represents a value that's too low for most systems to operate at.

 

I often wondered the same thing (about exactly the kind of parameter you showed in your screen grab): why would anyone want to deliberately add latency to their system and throw their timing off? Well, I don't think that's the purpose. Rather, I think this kind of parameter is present on audio interfaces, stand-alone versions of plug-ins, and so on, specifically so that you can find the lowest possible value that works on your particular system. In other words, it's up to the user to take the time to experiment with this kind of latency parameter.

 

It's kind of like Logic and this whole recording delay parameter. In many cases it's not set by the audio driver of the interface you're using, in which case you have to set it yourself. Notable exceptions are the Apogee and RME systems. Speaking of which...

 

RME Latency Compensation: I've been reading up on how the RME systems work, and indeed they DO report driver latency to Logic. And as we've seen over the course of this post, it's not something that all audio interface drivers do. I've attached a .pdf to this post -- a page taken from their Multiface manual (which is the same text as in their Fireface manual). It explains why it's theoretically not necessary to manually set a recording delay value in Logic (though I still think it's something that should be checked manually, per the procedure described in this thread). Anyway, I think this RME info will prove an interesting read.

[Attached: Multiface Latency.pdf]


FURTHER THOUGHTS...

 

On the subject of "safety buffers"... my feeling (though just a feeling) is that this is a term that's being bandied about a bit too freely, and that clarification is needed. For example, there's a requirement for Apple's FireWire audio drivers that a "safety buffer" of 32 samples be in place in the code. OK, then there are things like the I/O Safety Buffer in Logic's recording prefs, and the "Audio Safety Buffer" shown in amnestic's pic above. It's all a bit confusing as to what these terms mean depending on where they appear and how they're applied.

 

But the 32-sample buffer requirement is for FireWire. The Tascam unit is USB. So I wonder if the kind of safety buffer seen in the Tascam control panel is made available to provide a (necessary) amount of buffering over USB. (All analog-to-digital converted audio needs to be buffered before it actually hits Logic or any DAW -- more on this below.) In other words, I don't know if there's a standard amount of buffering required for USB audio drivers as there is for FW drivers, so they leave it up to you to find the lowest value that works. That's a guess.

 

But here's where I'm coming from with this guess... as far as the 1ms being in red... I just did some math, and here's what I came up with:

 

A buffer size of 1 ms at a sample rate of 44.1 kHz = approx. 44 samples. Now, maybe this is comparing apples to orangutans, but that figure is awfully close to the 32-sample "safety buffer" that's required to be written into FireWire drivers. Looking at it another way, notice that Logic doesn't offer a buffer size smaller than 32 samples. And on most systems -- particularly smaller systems -- a buffer size of 32 simply won't work. But looking beyond any difference between audio systems and what protocol (FW, PCI, USB) they use to transfer data, samples are samples. And as mentioned above, any computer needs to buffer (delay) an incoming audio stream in order to give itself "breathing room" to read the data, apply any DSP processing, and then actually record/play back the audio. It doesn't matter what the source of the audio is (USB, FireWire, PCI): all incoming audio needs to be buffered somewhere before it hits the DAW. So my thinking is that 1 ms is in red because it represents a value that's very small and won't work on most systems.
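Spelling that buffer math out (my own back-of-the-envelope figures, not anything from Tascam's or Apple's documentation):

```python
def ms_to_samples(ms, sample_rate=44100):
    """How many samples a buffer of the given length holds at a given sample rate."""
    return round(ms * sample_rate / 1000.0)

print(ms_to_samples(1))         # 44  -- the "red" 1 ms setting at 44.1 kHz
print(ms_to_samples(2))         # 88  -- the 2 ms setting amnestic was using
print(ms_to_samples(1, 96000))  # 96  -- the same 1 ms is a bigger buffer at 96 kHz
# For comparison: FireWire audio drivers are required to keep a 32-sample safety buffer.
```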

 

Maybe others can chime in to clarify (or correct) what I'm saying here.


ski-- many thanks for investigating this further!

 

I had continued my research beyond what I wrote in my last post, and haven't really found any answers to how this is affecting the Tascam, but I think what you mention above, comparing this to Firewire drivers, is probably accurate.

 

I played around with the setting (1 ms, 2 ms, etc.) while doing your recording delay test, and I didn't find too much of a difference in my results between 1 ms and 2 ms. I did see that my recording delay worked better with a shorter number, but not by much (from "-177" under 2 ms, to "-174" at best under 1 ms).

 

I'm assuming my machine is able to work with the 1ms buffer, but I just kept it at 2ms, since your test already helped my recording dramatically!

 

I'm guessing when the US-122 came out, most computers at the time might have had trouble with the 1ms setting?


  • 3 weeks later...

Ski, in step 9 of your loopback procedure, I don't see the sample numbers you refer to. I'm using LP 8.0.2, and it looks like there are some numbers in the upper right, but I'm not sure which ones to use.

 

Also, in step 10, when I grab the anchor point and slide it, the whole timeline moves with it. In other words, the anchor doesn't move along the sample timeline -- the timeline moves along with the anchor.

 

Any suggestions would be appreciated.

 

Thanks,

 

ERO


Ski, in step 9 of your loopback procedure, I don't see the sample numbers you refer to. I'm using LP 8.0.2, and it looks like there are some numbers in the upper right, but I'm not sure which ones to use.

 

Also, in step 10, when I grab the anchor point and slide it, the whole timeline moves with it. In other words, the anchor doesn't move along the sample timeline -- the timeline moves along with the anchor.

 

Any suggestions would be appreciated.

 

Thanks,

 

ERO

 

Yes, you follow the numbers on the right. I think ski was referring to LP7 when he made the steps. Basically, you follow the number that changes, but I can't remember which one that is. Just keep holding the mouse as you move the anchor, and whatever number changes, you'll know that's what you should follow.

 

And yes, the timeline will move with the anchor. But the audio region itself will be moving to the left ever so slightly. To prove it, don't zoom in all the way; just move the anchor to the right while watching the Arrange window. You will see your audio region move to the left. When you're zoomed in, you're making such minuscule adjustments that it doesn't look like anything is happening. And yet, the adjustments are enough to throw your whole performance off if they're not properly set.


Ski’s loopback procedure is great, and I finally got it to work in LP 8.0.2; however, I had to make a few changes in steps 9 & 10 to get it to work. Apparently, Apple has made some changes to the Sample Editor in 8.0.2, and there is also a bug that affects the way the anchor point works if a region is selected in the Arrange. So here’s how to make Ski’s procedure work in LP8.0.2:

 

Step 9:

(a) After you open “Y” in the Sample Editor, go back to the Arrange and de-select all regions by clicking in any grey space. With “Y’s” track still selected, but the region in the track de-selected, return to the Sample Editor.

(b) You will see two boxes in the upper right corner of the screen. Click in the region area of the Sample Editor and hit Command-A to Select All. The left box should now read zero, while the right box will show the length of the region in samples.

(c) Click and hold the anchor point, being careful not to move it. The number in the right box, while click-holding the anchor point, is the number of samples we're after.

 

Step 10:

(a) Complete this step as described by Ski in his Step 10. However, once you find the point where cancellation occurs, write down the number in the right box while click-holding the anchor point. This is the number to use in Step 11. Finish the remaining steps in Ski's Loopback procedure.

 

Hope this helps someone...

 

ERO


I find it easier to just create a high-pitched "pop" and align it in the Arrange.

The signals cancel out after about 3 tries. :)

It's just "record > delete > record > delete".
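If you want a ready-made pop to loop back, here's a quick sketch (Python standard library only; the file name and tone settings are arbitrary assumptions) that writes a few-millisecond 4 kHz blip to a WAV you can drop into the Arrange:

```python
import math
import struct
import wave

sr, freq, length_ms = 44100, 4000, 5   # sample rate, tone pitch, pop length
num = int(sr * length_ms / 1000)
samples = [math.sin(2 * math.pi * freq * n / sr) for n in range(num)]

with wave.open("pop.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit
    wav.setframerate(sr)
    wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```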

 

Anyway, does anyone think (or better, know) whether I could fix this ever-changing latency if I got a 3rd-party FireWire card?


Apparently, Apple has made some changes to the Sample Editor in 8.0.2, and there is also a bug that affects the way the anchor point works if a region is selected in the Arrange. So here’s how to make Ski’s procedure work in LP8.0.2:

 

Step 9:

(a) After you open “Y” in the Sample Editor, go back to the Arrange and de-select all regions by clicking in any grey space. With “Y’s” track still selected, but the region in the track de-selected, return to the Sample Editor.

(b) You will see two boxes in the upper right corner of the screen. Click in the region area of the Sample Editor and hit Command-A to Select All. The left box should now read zero, while the right box will show the length of the region in samples.

(c) Click and hold the anchor point, being careful not to move it. The number in the right box, while click-holding the anchor point, is the number of samples we're after.

 

I work with 8.0.2 and have never had to deal with the above steps you wrote out. In fact, I just did the test two days ago without having to deselect any of the regions.

 

What exactly were the "changes" you speak of in the Sample Editor?


  • 1 month later...

Hi Ski & everybody,

 

I tried your method, but with an extra leg in the chain: soundcard out to my TRS patch bay, and from the TRS patch bay back in (to measure the Logic-to-hardware-processing-back-to-Logic latency). I use only high-quality cables. Soundcard: Metric Halo 2882, buffer 512, 44.1 kHz.

 

I measured 130 samples with the anchor point.

 

So, converting 130 samples to milliseconds gives about 3 ms.

 

Once I entered 3 ms in the recording delay and recorded back in, the recorded part was still late!

 

I adjusted the recording delay manually, and when I got the right result I was at 40 ms = 1764 samples.

 

Now that's something I don't understand. Although I'm glad the latency is gone, why such different figures from what I carefully measured?

 

Thanks

 

Saxophonick


I measured 130 samples with the anchor point.

 

So, converting 130 samples to milliseconds gives about 3 ms.

 

That's the problem right there. No need to convert samples to milliseconds: the recording delay field wants you to enter samples, not milliseconds. So, assuming that your value of 130 samples was correct to begin with, enter that number in the recording delay field and do the loopback test for "Z" (per the procedure). You should be able to achieve perfect cancellation (or very close to it). If you don't, post back with your results.


Hi Ski

 

thanks for answering this older topic :D

 

As I said, entering 130 was way too much (the recording ends up early), so I manually tried different values in the recording delay preferences, and the right value is "-40".

 

So, glad I found out about this, but I don't get the 130-vs-40 difference, unless I miscalculated the moved anchor (the first value was 88200, the second value was 88070, which seems to be 130).

 

For anyone interested: although the MIO 2882 has very low noise, and the patch bay I use is exclusively Neutrik connectors with high-grade custom cable, doing a loopback (MIO out > patch in > patch out > MIO in) adds some low-level noise to the recorded waveform, and the waveform itself is altered (without any processing, just the cables). It's not really audible, except that it makes complete cancellation impossible, even when normalizing both the source and the result.

 

Luckily, IMHO, not having microscopic, sample-accurate placement of

1) recorded audio

2) loops

3) virtual instruments

4) hardware-processed audio re-recorded into the computer

in the same project doesn't stop you from making quality recordings and mixes; even with latency, if you adjust things the old-school-AKAI-sampler way (by ear), it'll still sound good.

BTW, I don't think I've ever seen/heard a 100% latency-accurate, phase-free mix. Does that even exist?

Do some of you really place everything sample-accurately, and if so, does it have a big impact on the sound/result? I mean, with even reasonably good ears you'll notice immediately if something is out of phase, but what about those micro details we can't hear?

 

I'd appreciate some further thoughts, opinions and experiences

 

Best


Hi,

 

I did the test but look how it turned out on my motu 828mk3.

 

At first:

 

mic input 1 had a recording delay of 446 samples

mic input 2 had a recording delay of 002 samples

 

After adjusting the recording delay compensation of input 1 to -446, I recorded a stereo signal, and mic input 2 starts 446 samples before mic input 1.

 

Look at how the signal turned out after doing this.

 

thanks

 

help me please

 

 

 

Jorge

[Attached screenshot: the recorded stereo signal showing the offset between inputs 1 and 2]


Hi

When the patch leads are plugged in (outputs to inputs), SHFan, but only on input 1.

I have tried outputting both 1 & 2 to input 2 and that's OK; it's just input 1, i.e. the levels scream and peak (nearly fried my headphones).

Input 1 works under normal conditions, it just won't accept signal from the outs.

I have done as said in "IMPORTANT" and set the tracks up OK.

To recap: interface input 1 won't take the interface outputs.

It's a MOTU 828mkII.

Thanks, Ronny

