why do ppl record at higher sample rates?


just curious about this. for example, why are some people recording at 96k? what is the benefit, considering what the final file would be?

 

if you're recording for film/tv, everything winds up 48. and most commercial music winds up streamed. or are people here making commercial cds?

 

anyway, just curious. i record at 44/24, and, when mastering, bounce down to 44/16 (with dither). most of my work is (streamed) pop music; film work (of course) winds up 48/24 wav.

 

thoughts?
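(Aside: since the workflow above ends with a dithered 44/16 bounce, here is a minimal sketch of what dither buys you at that final truncation step. The TPDF approach, the function name, and the integer framing are illustrative assumptions, not anyone's actual mastering chain.)

```python
import random

def bounce_to_16(sample_24: int) -> int:
    """Reduce a signed 24-bit sample value to 16 bits with TPDF dither:
    add two uniform random offsets of +/- half a 16-bit LSB each
    (triangular PDF overall), then drop the 8 least significant bits."""
    half_lsb = 1 << 7  # half of one 16-bit LSB, expressed in 24-bit counts
    dither = random.randint(-half_lsb, half_lsb) + random.randint(-half_lsb, half_lsb)
    dithered = sample_24 + dither
    # clamp to the signed 24-bit range before truncating
    dithered = max(-(1 << 23), min((1 << 23) - 1, dithered))
    return dithered >> 8  # keep the top 16 bits

quiet = 100  # a very quiet 24-bit sample, smaller than one 16-bit LSB
print(bounce_to_16(quiet))  # usually 0, occasionally +/-1: the quiet signal
                            # survives as noise-shaped probability instead of
                            # being truncated to hard silence
```

The point of the dither is the last comment: without it, everything below one 16-bit LSB would truncate to exactly zero with correlated (audible) distortion; with it, the error becomes benign noise.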


My thought is that it's 100 times more important to choose the right microphone for the task. Or the right microphone position. Or the type of compression of that instrument in that mix. Or fresh strings on your guitar. Or good acoustical separation between the ukulele and the hihat. Or good vibes during the recording session. Or a proper monitoring situation for the engineer. Or a decent headphone mix. Or even just a lava lamp on your desk. And a good song to start with.

 

That's what I think.


amen, really. (i mean, the lava lamp makes sense; a good song less so). just wondering where 96k is useful... meanwhile, i find that 44/24 sounds better than 44/16, and my (former) mastering engineer preferred that (then he bounced down to 44/16).

 

anyone else?


If latency is a concern in a multi-track recording environment involving a band (with a drummer) and headphone mixes, a higher sample rate reduces it.

Which is negated by the fact that it doubles the load on your system, forcing you to double the I/O buffer for the same crackle-free system performance, resulting in exactly the same latency as before.

 

In other words - unless you have a supercomputer which doesn't care if you record 64 tracks with heavy plugins at a 32-sample buffer at *any* sample rate, you will find that you buy shorter latency with higher latency...
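The trade-off described above is plain arithmetic: one I/O buffer of latency lasts buffer_size / sample_rate seconds, so doubling the rate halves it, and doubling the buffer to regain CPU headroom puts you right back where you started. A minimal sketch:

```python
def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """One I/O buffer's worth of latency, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

# 64 samples at 44.1 kHz:
base = buffer_latency_ms(64, 44100)   # ~1.45 ms

# Double the sample rate, same buffer: latency halves...
fast = buffer_latency_ms(64, 88200)   # ~0.73 ms

# ...but double the buffer to keep the same CPU headroom,
# and you are back to exactly the original latency.
safe = buffer_latency_ms(128, 88200)  # ~1.45 ms

assert abs(base - safe) < 1e-9
```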


I guess for commercial recordings it makes sense, as you can go back to those masters and produce high-quality mixdowns at a later date as the 'final' format progresses for the mainstream. Everyone is much more aware now that a digital ceiling exists that never did in the analog world.

 

A great example is film/tv where original analog film was used: it can be returned to and re-scanned at 1080p/4K etc. Whereas if a 1080p digital master was created, that's the limit - you ain't grabbing 4K from it. So if you're in the business of commercial productions you need to make sure there's headroom for ever-evolving digital platforms, and that goes right back to the source and how you capture it.

 

Not sure what Tidal is right now, but that's one of the higher-format platforms out there, I believe? I'm sure you can already get 24/96 from their catalogue... The funny thing, of course, is that most people are playing it through gear that is converting it anyway - I'm sure the average phone/computer is only putting out 16/44 to 24/48 at the hardware level. (Not checked, though.)


and how many tidal users wind up listening on apple earbuds, or bluetooth speakers? but i DO want my audio to be as good as it can be, when it (at least) hits tidal, etc.


You're thinking 'now' though, mate. Imagine if, many years ago, you had a load of tracks in 128k MP3 format - how would that sound today? Technology will evolve, and it will far exceed 16/44 to the point where the difference DOES become noticeable. 24/96 has been pretty much standard on audio interfaces for decades now, and platforms like Tidal show there's a demand for that to creep through to the musos.

 

I think music will step up as they find ways of placing us deeper within that sound, so that quality becomes more critical - just as VR is doing with video/gaming. These technologies get you closer and deeper into the digital content, so the core quality needs to ramp up to support it.

 

On top of that you have the whole 360/spatial sound formats. From a business viewpoint there needs to be a premium platform, as right now the mainstream consumption of music is totally throwaway: you practically give it away and make your money on the success/popularity of listener counts. So yeah, that's one reason why people may want to be high quality at source.


It doesn't happen at our studio with 20-24 simultaneous tracks, using a 2010 Mac Pro with a PCIe interface and up to 5 headphone mixes. Sorry.

At 88k we have less than 3ms of roundtrip latency reported by Logic with a 64-sample buffer. No direct monitoring, all going thru Logic.

It all comes down to the particular setup. And no crazy latency-inducing plugins running, since it's a live recording with real people.
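For what it's worth, that figure checks out on paper: two 64-sample buffers at 88.2 kHz come to about 1.45 ms, leaving room for converter delay inside the reported 3 ms. A rough sketch (the 1 ms AD/DA allowance is an assumption for illustration, not a measured spec of any interface):

```python
def roundtrip_ms(buffer_size: int, sample_rate: int, converter_ms: float = 1.0) -> float:
    """Rough roundtrip latency: one input buffer + one output buffer,
    plus an assumed fixed AD/DA converter delay (converter_ms)."""
    buffers_ms = 2 * buffer_size / sample_rate * 1000.0
    return buffers_ms + converter_ms

print(roundtrip_ms(64, 88200))  # ~2.45 ms -- consistent with "under 3 ms"
print(roundtrip_ms(64, 44100))  # ~3.9 ms at the same buffer, half the rate
```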


good to hear that. but why do you record at 88k? (just curious). also, all the people on my projects are 'real people'.... :wink:


can you elaborate? what is 'more info'? and how is it more stable? just asking - i'd like to understand this...

 

Ever done video?

The norm nowadays is pretty much to film at a much higher resolution than you're going to put out in the final video.

Why? Because when you zoom in on a shot, the lower resolution will pixelate.

So film at 4k, put out at 1080.

 

Why do we record at 24 bit? More information for later editing. Are you recording at 16 bits?

These days there's a lot of stretching and compressing of files.

On our setup, when we record at 88k and leave the buffer at 32, it tends to overload for obvious reasons.

So we increase the buffer to make it more stable to stay at 88k.


but what format do these things end up in? streaming on spotify? CD releases?

 

i understand the idea of scaling down from a 'better place', why i record 24bit: it just sounds better. so... is that it? it sounds better, and then you scale it down, and retain some of that better quality?

 

anyway, think i get it. so, thanx for that. 8-)


Good question and a nice thread. Learning that an increased rate reduces latency - wow, that trick was new to me, and of course it makes sense. This is why I love lurking on this forum, and why I love it when people ask questions like this: learning from others' observations and processes.

 

Here's my experience and why I record at 96/24: I work as a media artist with music and sound design (film, tv, theatre, art, games), both in a traditional sense (multitrack mixes) and the weird art sense (complex interactive installations with responsive live sound). I prefer to sample and record at 96/24 primarily for that "capture as many details as the technology can at the moment" philosophy, and I always save the original files as a kind of "raw" input, so I can go back to a higher-quality version if needed. My recording is an even mix of foley, field, voice talent, and instruments, but 95% of it is recorded "solo", in the field or the studio, patiently one source at a time. That way I can focus on source, mic, and position - getting the sound right and making sure there are details in the first place to justify the high rate. I totally agree with Fuzzfilth: this is way more important than sample rate. Being ready at the right MOMENT is also critical. Some of my most cherished and valuable sounds were grabbed with the iPhone's regular camera app on a whim. It's there, and it can always record within 2 seconds.

 

One thing I have found is that primary edits (noise removal, EQ, etc.) are often much better done at 96 kHz before resampling to the production rate.

 

I never shift to 16-bit; I stopped doing that years ago. Everywhere and everything I deal with accepts 24-bit.

 

If the 96 kHz source material is going directly into a large linear mix, a la "orchestra/band recording", it is usually downsampled to 48 or 44 depending on the production platform - either by myself in a batch ahead of time, or automatically upon import by the app. Just like skijumptoes mentioned above: in film, video and graphical assets I shoot or create at 4K and then edit and publish in HD. This is mostly to save CPU, and based on the experience that the increase in quality from 44/24 to 96/24 (or HD to 4K) is lost in 95% of the performance and delivery settings. It's just more practical and efficient to work and deliver in 44 (or 48 for film/TV).

 

I think, if my computers could handle my rather large arrangements at 96 kHz in realtime, I probably would stay at 96 kHz as "far" as possible down the production line, perhaps even for performance. I'd much rather have the processing power than the hertz right now. But whenever that general shift to 96k and 4K platforms happens, and my systems can chew it, I can just flick a switch and all my stuff is good to go ;)

 

Some experience with 96k in the real world: I have a few times made complex performances (live, responsive sample-based arrangements) running in realtime at 96 kHz for huge theatre setups, with high-end sound systems very well tuned in large halls or venues. It's so beautiful! There is, to me, a slight, nuanced improvement: everything is more glossy and sparkly, particularly in the transient details of complex parts - you can hear everything, it never turns to mush - and in the bass, tones are clear and body punches are "laserlike", with less mess there too. It's like everything has more FOCUS (*). But this is highly subjective and woo-ish, I know, and it's only in a few selected places I could hear it. I love this "superfine" sound, but I'm also aware that of an audience of 1000, only a couple will notice, they'll each notice something different, and zero will really care. More importantly, my heart rate watching the CPU meters during the show is inversely related to the hertz rate, so back to 44 we go. (* How do I know the difference? For some of these shows I had to give up and resample everything down to 44 kHz and run the project at 44 to avoid buffer glitches, and then we all felt something "disappear" in the sound - it had less "sparkle". So I have A/B'ed it, with my own material, and for me there was a difference. YMMV.)

 

But, a second usage scenario: a lot of these 96 kHz samples are used as source material for my own multisampled sampler instruments, and there I ALWAYS prefer using the 96 kHz versions, all the way - particularly since a lot of my sound and instrument design works with slowing down or speeding up material in realtime, and to me there is quite a difference in sound between a 96 kHz sample slowed down 2 octaves and a 44 kHz sample doing the same. These instruments with 96k source material I can use as-is in projects running at 44 kHz (through Kontakt in LPX or ABX). 96k and 44k samples have different tonal textures when pitched in realtime (also dependent on the processing software). And PS, neither of them is really BETTER, they are just DIFFERENT - sometimes I prefer the 44 kHz "sound", or even the glassy rough texture of a pitched 22 kHz sound, but mostly I use the 96 kHz originals. So for sampled instruments I prefer to stay at 96 for as long as possible. I can always go "down", but it's a lot more effort and hassle to go "up" in quality.

 

Works that are published are almost always produced in 44 kHz. When publishing for streaming I still deliver 44/24, but my aggregator now also accepts 96/24 for delivery to "HD" services. I'm not considering it ATM, but I'm aware that it exists. My audience is about 80% streaming (of which 80% is Spotify), 10% downloads (Bandcamp) and 10% vinyl. Vinyl is mastered separately and delivered as 44/24 WAV.

 

Looking forward to hearing others' thoughts on 96 and 44!


It's not just that. Having your source audio at the highest quality possible means that (destructive) edits will be performed at the highest quality possible, and if you start to manipulate the audio via time algorithms you'll be doing it at better quality too. What I mean is: if you ever timestretch or move transients for timing, the resolution exists to provide better results.

 

i.e., to be extreme: if you half-speed an 88k audio loop it's going to be closer to 44k quality, whereas a 44k source is hitting on par with 22k. ...In theory. :)
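That half-speed arithmetic can be sketched directly - the effective sample rate of the source material scales with the playback speed, and its Nyquist limit with it:

```python
def effective_rate(source_rate: float, speed: float) -> float:
    """Effective sample rate of the source material after a realtime
    speed change (speed=0.5 means half-speed playback)."""
    return source_rate * speed

def top_frequency_hz(source_rate: float, speed: float) -> float:
    """Highest audio frequency the slowed material can still carry
    (Nyquist limit of the effective rate)."""
    return effective_rate(source_rate, speed) / 2

# Half-speed an 88.2 kHz loop: it still covers the full audible band.
print(top_frequency_hz(88200, 0.5))  # 22050.0 Hz

# Half-speed a 44.1 kHz loop: the content now tops out around 11 kHz.
print(top_frequency_hz(44100, 0.5))  # 11025.0 Hz
```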

 

Plus, if you're doing this professionally, you archive for the future, not by today's final-format standards. That's why photographers archive Raw capture data vs jpeg etc. Again, you never see that Raw image data as a final print/production, but it's captured at the highest digital rate possible, processed as such, and then deployed to the final format.

 

The standard for end-user listening quality WILL step up; it has to, to generate money for the music and tech industries.


because they don't know how digital sound works.

i just don't get the value of 88k

 

We compared 44k and 88k on drums, and the low-end and high-end information was slightly better at 88k.

We are talking 12-16 tracks of mic'ed drums.

dude, you're like 60, your hf roll off is at 14k already - if you're lucky.


The difference between 24-bit and 16-bit is the NOISE FLOOR.

24 bits allows about 144 dB of dynamic range (a noise floor around -144 dBFS), and 16 bits about 96.

Think of it in binary.

Let's take an arbitrary sample:

24 bits: 0101 1101 1011 1001 1010 1111

16 bits: 0101 1101 1011 1001 xxxx xxxx

The top portion is identical. There's no difference except the 8 least significant bits, which get dropped.

It doesn't help with stretching, and it doesn't help with compression, because more than likely you record at healthy levels and your microphone's SNR is 60-80 dB anyway.

It's still safer to record at 24-bit, because it's cheap.
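The noise-floor numbers and the bit-truncation picture above are easy to verify in a few lines (stdlib only; the ~6 dB-per-bit figure is the standard linear-PCM approximation):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of linear PCM: 20*log10(2^bits),
    i.e. about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(24), 1))  # 144.5
print(round(dynamic_range_db(16), 1))  # 96.3

# Truncating the example 24-bit sample to 16 bits drops only the
# 8 least significant bits; the top 16 bits are untouched.
sample_24 = 0b0101_1101_1011_1001_1010_1111
sample_16 = sample_24 >> 8
assert sample_16 == 0b0101_1101_1011_1001
```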

 

As far as pixel size vs. sampling rate goes, it's not the same either.

Sampling in video is not resolution and works completely differently: https://www.studio1productions.com/Articles/411samp.htm

It's also not frame rate.


VR can be done with a 480p picture and completely s#!+ bit depth. There are completely different parameters for VR.

The future of music/audio won't be in 96kHz/48bit.

The future will be in self-adaptive smart mixes and multitrack deliveries (kinda like Dolby Atmos' object-oriented works). If your future-proofing depends on wasting storage space, be my guest tho. :)

 

So unless OUR EARS evolve, we're doing nothing. Keeping multitrack archives imo is smarter than some high-resolution nonsense. Video until today was not at the point where it exceeded our own senses... audio was, much earlier.

Because our ears are s#!+, and because audio is simply easier to do than video.

 

And a 128k mp3 still carries more information than a very noisy and smeared vinyl (which I loved, and a lot of people apparently still do).


To put video into perspective:

96K, in video terms, means you're sampling and delivering (deep into) the IR and UV light spectrums...

24 bits in terms of video means you're trying to capture more contrast between shadows and highlights than your sensor's self-noise is able to reproduce.

 

in video, this means a crazy increase in storage and resources, so people simply don't do it.

 

in audio, people will always find some excuse to do it.

 

David's answer is literally the ONLY legitimate reason in this thread, everything else is misinformation and myths.

 

realistic reasons: more relaxed anti-aliasing filters, which means easier work for drastic saturation and waveshaping (mostly solved with oversampling, and negated by downsampling for delivery anyway).
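A concrete illustration of that anti-aliasing point, using arithmetic only: waveshaping creates harmonics, and any harmonic above Nyquist folds back into the band. The cubic-saturator scenario below is a hypothetical example, not a claim about any particular plugin:

```python
def alias_frequency(f_hz: float, sample_rate: float) -> float:
    """Apparent frequency of a tone at f_hz after sampling at
    sample_rate, folding across Nyquist (sample_rate / 2)."""
    f = f_hz % sample_rate
    return sample_rate - f if f > sample_rate / 2 else f

# Cubic waveshaping of a 15 kHz tone creates a 3rd harmonic at 45 kHz.
harmonic = 3 * 15000  # 45000 Hz

print(alias_frequency(harmonic, 44100))  # 900.0 -- folds into the midrange, audibly
print(alias_frequency(harmonic, 96000))  # 45000.0 -- stays ultrasonic, and is
                                         # simply filtered out on downsampling
```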

 

it sounds better below -96dB, because down there 24-bit still resolves signal and 16-bit doesn't. Otherwise it's identical. That's just how digital works: above the -96dB floor there's no extra resolution to be had.

Mastering engineers are some of the worst shamans in audio, in my experience, despite the fact that they should be the opposite...

 

I need to get back to my coffee, this made me tired

