Squidge1 Posted October 26, 2017

Hi All,

I have been a long-time user of 44.1k but am considering the step up to 48k. My system is based around Logic Pro X 10.3.2 on a Mac Pro 2012, using 2 x Antelope Orions in a MADI system (RME MADIface). My setup is a hybrid using a lot of MIDI/external instruments and hardware, so latency is something I'm always working on.

My question: if I change to 48k, will it impact any areas other than the obvious audio one (where I don't expect much noticeable difference), such as latency? What are the key things to change in Logic if I go to 48k, bit depth etc.? Any suggestions?

Thanks
David Nahmani Posted October 26, 2017

There is nothing to change to work at 48k or 44.1k; the differences, whether in audio quality or latency, will be very minimal. I wouldn't overthink it: whether you change to 48k or stay at 44.1k won't make much difference.
stardustmedia Posted October 26, 2017

The most important point is what sample rate your delivery media will be. I stay at 44.1 kHz because I want to avoid a sample rate conversion afterwards; in the worst cases, the conversion can do some real harm. If your end product is going to be 48 kHz anyway (e.g. for movies, etc.), then it's better to start at that sample rate.
fusbur Posted October 26, 2017

Hi guys. Is there no advantage to recording at 48k if you predominantly work with audio? The dithering process always concerned me, but I stuck with 48k anyway and dithered on bouncing, based on perhaps-outdated thinking.
Eriksimon Posted October 26, 2017

The advantage in latency is exactly that: your latency will be "only" 44.1/48 = 11/12 of what it was, meaning about 8% less. If you had 12 milliseconds it becomes 11 milliseconds; in other words, you will not notice any difference.

HOWEVER, if you were to go to 88.2 kHz or 96 kHz, then you would notice a difference, as it would (slightly more than) halve the latency. Downside: it doubles the CPU load. Other upside: slightly better recordings, fit for HD audio. If recording latency is all that matters, record at 96 kHz (or even 192 kHz, for only a quarter of the latency at quadruple the CPU load; it all depends on what your CPU can handle), and once you're done recording, downsample it all to 44.1 (unless your CPU can handle mixing a 96 kHz project; that'll depend on the project).
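Eriksimon's arithmetic is easy to check in a few lines of Python. The 512-sample buffer is just an illustrative value, not anything specific to the hardware in this thread:

```python
# Buffer latency in milliseconds: samples in the buffer divided by the
# samples consumed per second. Raising the rate at a fixed buffer size
# shrinks latency proportionally.

def latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way buffer latency in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate:>6} Hz, 512-sample buffer: {latency_ms(512, rate):5.2f} ms")
# 44.1k -> ~11.61 ms, 48k -> ~10.67 ms (about 8% less), 96k -> ~5.33 ms
```

Going from 44.1k to 48k shaves barely a millisecond, while doubling the rate exactly halves the buffer latency, which is the difference Eriksimon says you would actually notice.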
fisherking Posted October 26, 2017

44.1/24 is pretty great; i only work at 48/24 if i'm working specifically with video. what's the end product? streaming? cds? or..?
SRF_Audio Posted October 26, 2017

The only advantage to higher sample rates is in your ability to time- and pitch-stretch recorded audio while maintaining higher fidelity in the stretched result. I record sound effects and foley at 96 kHz, because I often manipulate the time and pitch of a file creatively for SFX design. However, if you don't intend to stretch the files, it doesn't give you any advantage. I record everything at 48 kHz, since I do quite a bit of video production and will often use my music in the video. 44.1/24 is perfectly acceptable for recorded audio that won't be manipulated in the time/pitch domain.
Ploki Posted October 26, 2017

(quoting SRF_Audio) "The only advantage to higher sample rates is in your ability to time and pitch stretch recorded audio, while maintaining higher fidelity in the stretched audio..."

have any sources on that?
fusbur Posted October 26, 2017

Thanks guys. I don't have a problem with latency tbh, but I do have a guitarist who needs a bit of flex timing. Well, quite a lot. Would I then have better results with Flex Time if I recorded at a higher sample rate than 48k, i.e. fewer artifacts?
stardustmedia Posted October 26, 2017

(quoting fusbur) "Would I then have better results with flex time if I recorded at a higher sample rate than 48k, ie, less artifacts?"

Don't know. I guess the benefit will be only a little. As SRF_Audio wrote, it would be better to have it at a higher sample rate, though I'm not sure if this is entirely true; I've heard about it but never tried it myself. I've never encountered heavy quality issues with time stretching, or the quality loss didn't really matter.

Maybe you could first upsample your audio file, say to 176.4 or 192 kHz, stretch it, then bring it back down to the session sample rate. I must try that next time; the idea just came to me.
Eric Cardenas Posted October 26, 2017

(quoting Eriksimon) "HOWEVER, if you were to go to 88.2 kHz or 96 kHz, then you would notice a difference, as it would (slightly more than) halve the latency. Downside: it doubles the CPU load."

One thing to be aware of is that this is often not a practical solution. Many drivers that work perfectly at a 32-sample I/O buffer and a sample rate of 48 kHz will drop samples when working at 96 kHz. The obvious solution is then to raise the buffer to 64 samples… and we're back where we started.
stardustmedia Posted October 26, 2017

(quoting SRF_Audio) "The only advantage to higher sample rates is in your ability to time and pitch stretch recorded audio, while maintaining higher fidelity in the stretched audio..."

Yeah, I wonder too if you have any references, or can you elaborate in more detail why this should be the case? I also heard this once, a long time ago. At the time I wasn't able to record at higher sample rates anyway, so I just forgot about it.

(quoting SRF_Audio) "44.1/24 is perfectly acceptable for recorded audio that won't be manipulated in the time/pitch domain."

I'd say that also goes for 44.1 kHz up to a certain degree; beyond that, the artifacts get ugly very fast. But those artifacts, present even with small changes, may not be an issue in the mix.
praecox Posted October 28, 2017

I record everything at 88.2/24 because of an analog console; I believe that recreates its sound very well. I use a 24-channel Mytek AD/DA. It just sounds better.
SRF_Audio Posted October 28, 2017

(quoting Ploki) "have any sources on that?"

Sure thing. It's a bit technical, but incredibly fascinating reading if you're interested in this topic. (Also stardustmedia, for you as well.)

Source: http://lavryengineering.com/pdfs/lavry-sampling-theory.pdf
SRF_Audio Posted October 28, 2017

And then for a little bit more digestible reading on the subject: http://www.trustmeimascientist.com/2013/02/04/the-science-of-sample-rates-when-higher-is-better-and-when-it-isnt/
SRF_Audio Posted October 28, 2017

(quoting fusbur) "Would I then have better results with flex time if I recorded at a higher sample rate than 48k, ie, less artifacts?"

A good way to think about it is this: a sample rate is really an interval of time. Have you ever accidentally played back a 44.1 kHz file in a 48 kHz project without converting it? What happens? It plays back slightly faster and at a slightly higher pitch, and the overall length of the file shortens. So you could think of it as the computer just playing back samples at a fixed rate: it doesn't care what the input format is, it's just going to play back X samples per second.

Next, remember the Nyquist theorem (in plain English): the sample rate must be at least twice the highest frequency you want to capture. So 44,100 samples per second will accurately capture all audio frequencies up to 22,050 Hz, which is just above what most people can hear.

If you squeeze a file, or pitch shift it up, it's not really a problem, because all frequencies get faster and you already have enough fidelity for the lower-frequency waves. As you stretch a file or pitch shift it downward, you begin to lose fidelity in the high end. Remember, at the very highest frequencies you only took 3-4 samples per cycle of the waveform. That's fine while the wave stays at that frequency, but now you want to make those vibrations slower, lowering the pitch. That's where a higher sample rate helps to maintain a smooth, detailed waveform for extreme stretching.

A crude analogy would be video frame rates. If you watch a 60 frame-per-second video and a 240 frame-per-second video at normal speed, it is very difficult to tell the difference. But slow them both down to 25%, and the difference in smoothness of motion becomes immediately obvious: the 240 fps video will look insanely smooth, while the 60 will have a bit of motion choppiness.
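The mismatched-rate playback effect described above can be worked out numerically; a small sketch, where the 10-second file length is just a made-up example:

```python
import math

# Playing a 44.1 kHz file in a 48 kHz project without conversion: the same
# samples are consumed faster, so the file gets shorter and the pitch rises.
original_rate = 44_100
playback_rate = 48_000
n_samples = original_rate * 10                 # a hypothetical 10 s file

new_duration = n_samples / playback_rate       # 9.1875 s instead of 10 s
pitch_ratio = playback_rate / original_rate    # ~1.088, i.e. sharper
semitones_up = 12 * math.log2(pitch_ratio)     # ~1.47 semitones higher

# Nyquist: a rate of 44,100 Hz captures content up to 44,100 / 2 = 22,050 Hz.
nyquist = original_rate / 2
```

So the un-converted file comes out about 0.8 seconds shorter and nearly a semitone and a half sharp, which matches the "faster and higher" symptom described above.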
fusbur Posted October 28, 2017

I'll run some tests on Flex Time etc. at a higher sample rate, at 48, and at 44.1 when I get a chance.
SRF_Audio Posted October 28, 2017

(quoting fusbur) "I'll run some tests on flextime etc at a higher sampling rate and 48 and 44.1 when I get a chance."

Important to note, though: the material must actually be recorded at the higher sample rate. Upconverting a file originally recorded at 44.1 kHz gives no benefit.
Ploki Posted October 28, 2017

(quoting SRF_Audio) "...That's where a higher sample rate helps to maintain a smooth detailed waveform for extreme stretching."

That kinda makes sense. Might rethink using 88.2k again.
SRF_Audio Posted October 28, 2017

(quoting Ploki) "That kinda makes sense. might rethink using 88.2k again."

I split it up by task. For sound design, foley, SFX, etc., I record everything at 96 kHz. For music projects and instruments, I record at 48 kHz, and even that is just for video compatibility; otherwise, 44.1 is fine. The big disadvantage of higher sample rates is, of course, increased file size.
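The file-size penalty follows directly from the uncompressed-PCM math; a quick sketch, where the 4-minute stereo take is just an example:

```python
# Uncompressed PCM size: rate * bit depth * channels * seconds, in bits,
# converted to megabytes. Doubling the sample rate doubles the file.
def pcm_size_mb(rate_hz: int, bits: int, channels: int, seconds: float) -> float:
    return rate_hz * bits * channels * seconds / 8 / 1_000_000

# A 4-minute (240 s) stereo take at 24-bit:
for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate:>6} Hz: {pcm_size_mb(rate, 24, 2, 240):6.1f} MB")
# 44.1k -> ~63.5 MB, 96k -> ~138.2 MB per song, before any extra takes
```

Multiply by a session's worth of tracks and takes and the difference between 44.1k and 96k becomes a real storage consideration.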
stardustmedia Posted October 28, 2017

Thanks for the links and explanations. It would make sense for me to record vocals and instruments at 176.4 kHz, downsample a copy of the final edited version, and keep working in the song session at 44.1. If I need to change the tempo afterwards, I can make a new copy of the final take, stretch it, and again create a new downsampled version.
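The mechanics of that upsample → stretch → downsample idea (whose actual benefit is debated earlier in the thread) can be mocked up with plain linear interpolation. This is a toy sketch only; real sample rate converters use proper filtering, and none of the numbers here come from the thread:

```python
# Toy resampler: linear interpolation only, to show the mechanics of
# upsample -> stretch -> downsample. NOT production-quality conversion.

def resample_linear(samples, factor):
    """Return int(len(samples) * factor) points by linear interpolation."""
    n_out = int(len(samples) * factor)
    out = []
    for i in range(n_out):
        pos = i / factor                      # fractional position in input
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] + (nxt - samples[j]) * frac)
    return out

tone = [0.0, 1.0, 0.0, -1.0] * 100            # crude 4-samples-per-cycle wave
up = resample_linear(tone, 4.0)               # e.g. 44.1k -> 176.4k
stretched = resample_linear(up, 1.1)          # 10% longer / slower
down = resample_linear(stretched, 44_100 / 176_400)   # back to session rate
```

The point of the detour through the higher rate is that the stretch step has 16 samples per cycle to interpolate between instead of 4, though as noted above, this only genuinely helps if the material was captured at the higher rate in the first place.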