
Considering changing from 44.1 to 48 (implications?)


Squidge1

Recommended Posts

Hi All 

 

I have been a long-time user of 44.1 kHz but am considering the step up to 48 kHz. My system is based around Logic Pro X 10.3.2 on a Mac Pro 2012. I'm using 2 x Antelope Orions in a MADI system (RME MADIface).

 

My setup is a hybrid system using a lot of MIDI/external instruments and hardware, so latency is something I'm always working on.

 

My question: if I change to 48 kHz, will it impact any other areas besides the obvious audio quality (where I don't expect much noticeable difference), such as latency etc.?

 

What are the key things to change in Logic if I go to 48 kHz, bit depth etc.? Any suggestions?

 

Thanks

Link to comment
Share on other sites

The advantage in latency is exactly that: your latency will be "only" 44.1/48 ≈ 11/12 of what it was, meaning about 8% less. If you had 12 milliseconds it will become about 11 milliseconds - in other words, you will not notice any difference.

However, if you were to go to 88.2 kHz or 96 kHz, then you would notice a difference, as it would (slightly more than) halve the latency. Downside: it doubles the CPU load. Other upside: slightly better recordings, fit for HD audio.

If the recording latency is all that matters, then record at 96 kHz (or even 192 kHz, for only about a quarter of the latency, at quadruple the CPU load - it all depends on what your CPU can handle), and once you're done recording, downsample it all to 44.1 (unless your CPU can handle mixing the project at 96 kHz - that'll depend on the project).
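To put rough numbers on it, here's a quick back-of-the-envelope sketch in Python (the 12 ms figure is just the example from above; real values depend on your interface and buffer setting):

# Latency at a fixed buffer size is proportional to 1 / sample_rate.
example_latency_ms = 12.0                     # example round-trip figure at 44.1 kHz

print(example_latency_ms * 44100 / 48000)     # ~11.0 ms at 48 kHz, i.e. only ~8% less
print(example_latency_ms * 44100 / 96000)     # ~5.5 ms at 96 kHz, roughly halved
print(example_latency_ms * 44100 / 192000)    # ~2.8 ms at 192 kHz, roughly a quarter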

Link to comment
Share on other sites

The only advantage to higher sample rates is in your ability to time and pitch stretch recorded audio, while maintaining higher fidelity in the stretched audio. 

 

I record sound effects and Foley at 96 kHz, because I often manipulate the time and pitch of a file creatively for SFX design.

 

However, if you don't intend to stretch the files, then it doesn't give you any advantage. 

 

I record everything at 48 kHz, since I do quite a bit of video production, and I will often use my music in the video.

 

44.1/24 is perfectly acceptable for recorded audio that won't be manipulated in the time/pitch domain. 

Link to comment
Share on other sites

The only advantage to higher sample rates is in your ability to time and pitch stretch recorded audio, while maintaining higher fidelity in the stretched audio.

have any sources on that?

Link to comment
Share on other sites

Thanks guys. I don't have a problem with latency tbh, but I do have a guitarist who needs a bit of flex timing. Well, quite a lot. Would I then have better results with Flex Time if I recorded at a higher sample rate than 48 kHz, i.e. fewer artifacts?

Don't know. I guess the benefit will be just a little better. As SRF_Audio wrote, it would be better to have it at a higher sample rate, though I'm not sure if this is entirely true. I've heard about it, but never tried it myself. I've never encountered heavy quality issues with time stretching, or the quality loss didn't really matter.

Maybe you could first upsample your audio file, let's say to 176.4 or 192 kHz, stretch it, then bring it back down to the session sample rate. I must try that next time; the idea just came to me.
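For what it's worth, a minimal sketch of that idea in Python, assuming the librosa and soundfile libraries are available (the file names, rates and stretch amount are placeholders, and whether the intermediate upsample actually improves quality is exactly the open question here):

import librosa
import soundfile as sf

SESSION_SR = 48000      # project sample rate (placeholder)
WORK_SR = 192000        # temporary high rate for the stretch (placeholder)

# Load the take at its native sample rate (hypothetical file name).
audio, native_sr = librosa.load("guitar_take.wav", sr=None, mono=True)

# 1. Upsample to the working rate.
hi = librosa.resample(audio, orig_sr=native_sr, target_sr=WORK_SR)

# 2. Stretch: with librosa's phase-vocoder stretch, rate < 1.0 slows the audio down.
stretched = librosa.effects.time_stretch(hi, rate=0.9)

# 3. Bring it back down to the session sample rate and write it out.
out = librosa.resample(stretched, orig_sr=WORK_SR, target_sr=SESSION_SR)
sf.write("guitar_take_stretched.wav", out, SESSION_SR)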

Link to comment
Share on other sites

However, if you were to go to 88.2 kHz or 96 kHz, then you would notice a difference, as it would (slightly more than) halve the latency. Downside: it doubles the CPU load.

 

One thing to be aware of is that this is often not a practical solution.

Most drivers that work perfectly at a 32-sample I/O buffer and a sample rate of 48 kHz will drop samples when working at 96 kHz. The obvious solution is then to raise the buffer to 64 samples… and we're back where we started. :)
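In numbers (hypothetical buffer sizes, just to show why raising the buffer cancels the gain):

# Latency per buffer = buffer_samples / sample_rate, in milliseconds.
print(1000 * 32 / 48000)   # ~0.67 ms: 32 samples at 48 kHz
print(1000 * 32 / 96000)   # ~0.33 ms: 32 samples at 96 kHz
print(1000 * 64 / 96000)   # ~0.67 ms: 64 samples at 96 kHz - back where we started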

Link to comment
Share on other sites

The only advantage to higher sample rates is in your ability to time and pitch stretch recorded audio, while maintaining higher fidelity in the stretched audio.

Yeah, I wonder too if you have any references, or could you elaborate in more detail on why this should be the case? I also heard this once, a long time ago. At that time I wasn't able to record at higher sample rates anyway, so I just forgot about it.

 

 

44.1/24 is perfectly acceptable for recorded audio that won't be manipulated in the time/pitch domain.

I'd say that also goes for 44.1 kHz, up to a certain degree. From there on the artifacts get ugly very fast. But those artifacts, which are present even with small changes, may not be an issue in the mix.

Link to comment
Share on other sites


have any sources on that?

Sure thing. It's a bit technical, but incredibly fascinating reading if you're interested in this topic. (Also Stardust Media, for you as well)

 

Source: http://lavryengineering.com/pdfs/lavry-sampling-theory.pdf 

Link to comment
Share on other sites

Would I then have better results with Flex Time if I recorded at a higher sample rate than 48 kHz, i.e. fewer artifacts?

A good way to think about it is like this:

 

A sample rate really just defines an interval of time between samples.

 

Have you ever accidentally played back a 44.1 kHz file in a 48 kHz project, without converting it?

 

What happens? It plays back slightly faster, and at a slightly higher pitch...and the overall length of the file shortens. 

 

So you could think of it as the computer just playing back samples at a fixed rate. It doesn't care what the input format is...it's just gonna play back X number of samples per second.
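To put numbers on that mismatch, a quick check in Python:

import math

# Playing a 44.1 kHz file back at 48 kHz, sample for sample:
speed_factor = 48000 / 44100
print(speed_factor)                      # ~1.088 -> about 8.8% faster and shorter

# The pitch rises by the same factor; in cents (100 cents = 1 semitone):
print(1200 * math.log2(speed_factor))    # ~147 cents, roughly a semitone and a half sharp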

 

Next, remember the Nyquist theorem (in plain English): you need a sample rate of at least twice the highest frequency you want to capture.

 

So, 44,100 samples per second will accurately capture all audio frequencies up to 22,050 Hz, which is just above what most people can hear. 

 

If you squeeze a file, or pitch shift it up, it's not really a problem, because all frequencies are getting faster, and you already have enough fidelity for lower frequency waves. 

 

As you stretch a file or pitch shift it downward, you begin to deal with a loss of fidelity in the high end. Because remember, at the very highest frequencies, you only captured a few samples per cycle of the waveform (barely more than two near the Nyquist frequency).
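A quick way to see the samples-per-cycle point (the frequencies below are just illustrative):

# Samples captured per cycle of a sine wave = sample_rate / frequency.
for sr in (44100, 96000):
    for freq in (1000, 10000, 20000):
        print(f"{freq:>5} Hz at {sr} Hz: {sr / freq:5.1f} samples per cycle")

# At 44.1 kHz a 20 kHz tone gets only ~2.2 samples per cycle; at 96 kHz the same
# tone gets ~4.8, which leaves more detail to work with when a stretch algorithm
# shifts that content downward.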

 

That's fine when the wave stays at that frequency. But now you want to make those vibrations slower, lowering the pitch.

 

That's where a higher sample rate helps to maintain a smooth detailed waveform for extreme stretching.

 

A crude analogy would be video frame rates. If you watch a 60 frame per second video, and a 240 frame per second video at normal speed, it is very difficult to tell the difference. 

 

But slow them both down to 25%, and the difference in smoothness of motion becomes immediately obvious. The 240fps video will look insanely smooth, while the 60 will have a bit of motion choppiness. 

Link to comment
Share on other sites

That's where a higher sample rate helps to maintain a smooth detailed waveform for extreme stretching.

That kinda makes sense. Might rethink using 88.2 kHz again.

Link to comment
Share on other sites

That kinda makes sense. Might rethink using 88.2 kHz again.

I split it up by task. 

 

For sound design, Foley, SFX, etc., I record everything at 96 kHz.

 

For music projects and instruments, I record at 48 kHz, and even that is just for video compatibility. Otherwise, 44.1 is fine.

 

The big disadvantage to higher sample rates is, of course, increased file size.
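For a rough sense of scale (uncompressed 24-bit stereo WAV/AIFF; the arithmetic is simply sample rate x bytes per sample x channels):

# Bytes per second = sample_rate * (bit_depth / 8) * channels.
def mb_per_minute(sample_rate, bit_depth=24, channels=2):
    return sample_rate * (bit_depth / 8) * channels * 60 / 1e6

print(round(mb_per_minute(44100), 1))   # ~15.9 MB per stereo minute at 44.1 kHz
print(round(mb_per_minute(96000), 1))   # ~34.6 MB per stereo minute at 96 kHz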

Link to comment
Share on other sites

Thanks for the links and explanations.

 

It would make sense for me to record vocals and instruments at 176.4 kHz, downsample a copy of the final edited version, and keep working in the song session at 44.1 kHz.

 

If I need to change the tempo afterwards, I can create a new copy of the final take, stretch it, and again create a new downsampled version.

Link to comment
Share on other sites
