
Why do people record at higher sample rates?



ok last one before back to coffee

It's not just that: having your source audio at the highest quality possible means that (destructive) edits are performed at the highest quality possible, and if you start manipulating the audio with time-based algorithms you'll be doing that at better quality too. What I mean is, if you ever timestretch or move transients for timing, the resolution exists to provide better results.

 

i.e., to be extreme about it: if you half-speed an 88k audio loop it's going to end up closer to 44k quality, whereas a 44k source ends up on par with 22k. ...In theory. :)
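To put rough numbers on that half-speed example, here's a quick back-of-the-envelope sketch (it assumes a naive varispeed-style slowdown, not any particular stretch algorithm, which will behave differently):

```python
def bandwidth_after_slowdown(sample_rate_hz: float, speed_factor: float) -> float:
    """Upper frequency limit of the material after a varispeed-style slowdown.

    The source can hold content up to Nyquist (sample_rate / 2); slowing playback
    shifts everything down by the same factor.
    """
    return (sample_rate_hz / 2) * speed_factor

# Half-speed playback of loops captured at different rates:
print(bandwidth_after_slowdown(88_200, 0.5))  # 22050.0 Hz -- still covers the audible band
print(bandwidth_after_slowdown(44_100, 0.5))  # 11025.0 Hz -- the top octave of hearing is gone
```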

 

Plus, if you're doing this professionally then you archive for the future, not for today's final format standards. That's why photographers archive RAW capture data rather than JPEGs, etc. Again, you never see that RAW image data as a final print/production, but it's captured at the highest digital rate possible, processed as such, and then deployed to the final format.

 

The standard for end-user listening quality WILL step up; it has to, to generate money for the music and tech industries.

1st paragraph: most plugins/algos today oversample internally anyway (see the sketch below). That extra information is high-frequency info only.

2nd: yes, assuming your microphone, preamp, interface, and AD converter (which can use a leaner anti-alias filter at 96k - the roll-off can start above the audible band and be far less steep) actually captured anything above 20 kHz in the first place.

So yes, in theory. In practice... dubious.

Some things (mostly synthetic) will have some extra info, most won't.
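To make the oversampling point concrete, here's a minimal sketch of the idea in Python (assumes NumPy/SciPy; the 4x factor and the tanh saturator are just stand-ins for whatever a real plugin does internally):

```python
import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x: np.ndarray, factor: int = 4) -> np.ndarray:
    """Run a nonlinearity at a higher internal rate so its new harmonics don't
    alias back into the audible band, then return to the host sample rate."""
    up = resample_poly(x, factor, 1)          # band-limited upsampling
    shaped = np.tanh(2.0 * up)                # the nonlinear stage creates harmonics
    return resample_poly(shaped, 1, factor)   # low-pass + decimate back down

# Example: a hot 10 kHz tone at 44.1 kHz -- its distortion products land above
# Nyquist and would fold back down without the internal up/down conversion.
sr = 44_100
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 10_000 * t)
out = saturate_oversampled(tone)
```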

 

I've already addressed these photo/video analogies elsewhere, but yes.

We are already at "RAW" and OVER the limits of human perception with 24-bit/48k.

Photos are not there yet because of tech limits. There's RAW at 8-bit, 10-bit, 12-bit (my camera does 12-bit), and at most I think it's 16-bit.

But those bits aren't a 1:1 analogue of audio bit depth...

One thing is true though: using 14 bits (as opposed to, say, 8) to capture more dynamic range means you can push exposure in post-production further than you could with 8- or 10-bit capture.

Kinda like recording at 24-bit with an unhealthy level (peaks at -40 dBFS): you still have roughly 100 dB of dynamic range to spare, as opposed to recording at 16-bit at -40 dBFS and having only about 60 dB of DR left.

It's all about the noise floor here, just like boosting exposure on low-bit-depth photos, where you end up having to denoise in order to push exposure in post.
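For reference, the textbook arithmetic behind those ballpark figures, ignoring dither and real-world converter noise (real 24-bit converters manage closer to 120 dB than the ideal number):

```python
def ideal_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: ~6.02 dB per bit plus 1.76 dB."""
    return 6.02 * bits + 1.76

def range_below_signal_db(bits: int, level_dbfs: float) -> float:
    """Distance from a given recording level down to the quantization floor."""
    return ideal_dynamic_range_db(bits) + level_dbfs  # level_dbfs is negative

print(ideal_dynamic_range_db(16), ideal_dynamic_range_db(24))  # ~98 dB vs ~146 dB
print(range_below_signal_db(24, -40))  # ~106 dB left under a -40 dBFS signal
print(range_below_signal_db(16, -40))  # ~58 dB left
```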


So unless OUR EARS evolve, we're doing nothing.

Who's to say that it will come through our ears in the future? Technology could push audio far deeper into our brains via implants that we simply 'plug into'... There's no way of telling where this evolves.

 

But the most important aspect is this: if you're a professional recording studio today and your rig can support 24/96 comfortably, then you're covered if the artist wants to release at 24/96 in the future - ready for when a demand/market exists for their music in a premium format.

 

Whether it's truly any better can be discussed till the cows come home, but if the demand is there you need to be compliant. Studios cannot remain stuck in the 16/44k world if they want to serve their client base.


Who's to say that it will come through our ears in the future? ... if your rig can support 24/96 comfortably, then you're covered if the artist wants to release at 24/96 in the future - ready for when a demand/market exists for their music in a premium format.

It won't happen for 20 years or more. And we still listen to the Beatles with all the beautiful shortcomings of the recording systems of their era. It's part of the music, just as much as today's tech is.

 

I couldn't give two fucks whether people in 15,000 years (when ears might have evolved) listen to my stuff at 48k (and hear what I actually heard) or at 96k, which has a bunch of ultrasonic s#!+ I couldn't even monitor because my ears were s#!+.


It won't happen for 20 years or more. And we still listen to the Beatles with all the beautiful shortcomings of the recording systems of their era. It's part of the music, just as much as today's tech is.

It happened two years ago, my friend:

https://www.whathifi.com/news/tidal-adds-hi-res-audio-streams-tidal-masters

https://www.techradar.com/news/tidals-hi-res-mqa-music-library-crosses-the-1-million-mark

 

It will be the phone/portable market that drives this, as they always need a gimmick to push the latest tech, and the music platforms will do anything to squeeze out a fee for 'premium' listening experiences, digital headphones to match, etc.

 

And it doesn't matter how much better it sounds; the point remains that if you're taking money from an artist, you want to ensure you're doing everything possible to preserve their work at the highest quality possible for future deployments.

 

Edit: This great review LOL! ->

"The masters stream I find very open and revealing. Talking to my friend, he asked “what’s the difference” to which I said listen to the bits between the notes."

 

So there it is, Ploki: it's the *bits* between the notes that matter... All hail 24/96!!


Again, I think it's important to recognize what we're aiming for; I mostly make pop music, and Spotify is the goal; so 44/24 is already the '8K' of the streaming world.

 

I too don't care about the far future; plus, if I want to remix a 2020 song in 2040, I'm sure the tools will be there to make it sound 'awesome'. Anyway, the great thing is... we all have our methods, our needs, and we can work in whatever format suits those needs.

 

At the end of it all, I run all my mixes through an Ensoniq Mirage, then upscale to 96/48 or higher. And it all sounds killer on my 2015 Apple earbuds....


It happened two years ago, my friend (Tidal Masters / MQA)... It will be the phone/portable market that drives this... So there it is, Ploki, it's the *bits* between the notes that matter. All hail 24/96!!

https://musically.com/2019/12/09/report-spotify-has-36-market-share-of-music-streaming-subs/

Tidal doesn't even blip.

 

And anyway, by your logic, why stop at 96/24? Record at 384k/32-bit integer and go all the way - that's the highest quality possible right now; switch to Cubase and an AXR4.

Otherwise you've just drawn a convenient, arbitrary line at 96/24 for no particular reason, because it's neither the most practical nor the highest quality possible. :)

 

Anyway, I kinda hear what you're saying, but I don't see any indication or trend that audio/music consumption and selling points will move toward 96/24 or higher.

 

Take a look at Dolby Atmos. The third layer, the "object" layer, has a special way of working: it carries audio plus metadata that defines where the sound should come from.

In general that means it can be scaled to a weird stereo system such as an iPhone (and could take advantage of vertical "stereo", i.e. bottom/top positioning for vertical videos) or a huge-ass 24.4 surround system.

In the same manner you could have multitrack stems with some form of metadata that adapts itself to the system it's played back on, from headphones to huge speakers.
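A toy illustration of that idea - this is not the actual Atmos data model, just a sketch of "audio plus position metadata, rendered per playback system":

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str
    samples: list[float]   # mono audio for this stem/object
    azimuth: float         # intended position, -1.0 (hard left) .. +1.0 (hard right)

def render_to_stereo(obj: AudioObject) -> tuple[list[float], list[float]]:
    """Crude linear pan: the *renderer* turns metadata into speaker feeds, so the
    same object could instead be rendered for a 7.1.4 room or for headphones."""
    left_gain = (1.0 - obj.azimuth) / 2.0
    right_gain = (1.0 + obj.azimuth) / 2.0
    return ([s * left_gain for s in obj.samples],
            [s * right_gain for s in obj.samples])

guitar = AudioObject("guitar", samples=[0.1, 0.2, -0.1], azimuth=0.5)
left, right = render_to_stereo(guitar)   # only audio + metadata travel to the player
```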

I think that's much more likely than 96/24, which gives absolutely zero benefit to end users - and the studies that show a benefit are heavily criticised - so it's going to be very hard to market something you can't really sell to anyone except true believers.

 

You'd be wiser to print multitracks with all the plugins you won't be able to launch anymore in 4 years (let alone 10) than to use 96k, if you want future manoeuvring space.

 

But all this marketing talk doesn't change the fact that the majority of preconceptions about bit depth and sample rate in this thread, and in most amateur/prosumer audio groups, are wrong. :/


Again, I think it's important to recognize what we're aiming for; I mostly make pop music, and Spotify is the goal; so 44/24 is already the '8K' of the streaming world.

I too don't care about the far future; plus, if I want to remix a 2020 song in 2040...

Right, I get that of course - but you're asking why people *do* record at higher sample rates; you don't need to validate it against your own needs to justify those people doing it.

 

Studios that want to future-proof themselves are not looking at 2040; they're looking at today.

 

No matter how daft we may think services like Amazon HD, Tidal and Qobuz are - it's happening right now, and I know many people who seek out these types of services.... I've argued with friends who use them, explaining that their hardware, speakers and headphones aren't up to the job of recreating the fidelity - but they still tell me they can hear a massive difference.

 

And they probably can, because what are they comparing it to? A 128-256 kbps Spotify stream was probably the last thing they heard - but we know that CD 16/44 alone is a jump way beyond MP3 at those rates. I just think the majority of people have forgotten how good CD quality is, to be honest. So by default the musos will jump to 24/96 digital streaming because it's 'premium'... but in reality, is it any better than CD playback?


In the same manner you could have multitrack stems with some form of metadata that adapts itself to the system it's played back on, from headphones to huge speakers.

I think that's much more likely than 96/24, which gives absolutely zero benefit to end users - and the studies that show a benefit are heavily criticised...

This is somewhere I think music will go - I've thought it for a long time. I can see it becoming a far more interactive experience for the end user, where you can stream an artist's material as we do today, or buy the premium version, which is a higher-quality, multitrack format.

 

The kicker is that you could drop out elements such as guitars, vocals, drums etc., and also bring in alternate elements left out of the final cut, to really explore each song as a fan. People could remix the levels/pans, so the original artist's material could easily be sung over, or used for learning to play drums/guitar along with - or you may just fancy listening to the album as an instrumental.

 

Just think how reactive/focused EQ could be if it could affect the different elements based on your listening setup. As you alluded to, it would be a truer studio listening experience than throwing out a 96k master to the likes of TIDAL/Amazon etc. I think the stumbling block would be getting artists to agree to have their content released in such a manner... Particularly as people would use the multitracks to over-criticise top artists.


Dude, you're like 60; your HF roll-off is at 14k already - if you're lucky.

 

Dude, you don't know anything about me, stop being condescending, will ya?

I'm not being condescending, it's a joke, because you have grey hair in your avatar. :?

And on top of that, it's a joke that's slowly on all of us.

I could hear up to 21,000 Hz ten years ago, and I'm topping out at 18-19k right now. In the next ten years, 16k if I'm lucky. Nyquist at 22k seems generous.

 

Here's one that's not a joke:

Although I don't necessarily disagree with everything you've written in this thread (mostly the latency-related stuff), you've also written a lot of half-truths, misinformation and false video analogies about sampling rates and bit depth here.

It's a public forum, so you can write anything you like even if it's technically false, but that also means I can correct anything I want.

 

 

Just think how reactive/focused EQ could be if it could affect the different elements based on your listening setup. As you alluded to, it would be a truer studio listening experience than throwing out a 96k master to the likes of TIDAL/Amazon etc. I think the stumbling block would be getting artists to agree to have their content released in such a manner... Particularly as people would use the multitracks to over-criticise top artists.

DRM and encoding - it's already pretty hard to break. (I tried a lot of things to rip Apple Music downloads to drop into Logic as references - good luck with that.)

So in the end you can control as much or as little as the artist allows/intends.

Right, I get that of course - but you're asking why people *do* record at higher sample rates; you don't need to validate it against your own needs to justify those people doing it.

Studios that want to future-proof themselves are not looking at 2040; they're looking at today.

Personally, for subjective reasons, I really don't give a s#!+.

But when people start with shamanistic mumbo jumbo that's technically false and spread misinformation to justify their own choices, it's potentially detrimental to the community and to common knowledge.

Idk why, but audio myths tend to spread like wildfire.


Again, I think it's important to recognize what we're aiming for; I mostly make pop music, and Spotify is the goal; so 44/24 is already the '8K' of the streaming world.

I too don't care about the far future; plus, if I want to remix a 2020 song in 2040...

Right, I get that of course - but you're asking why people *do* record at higher sample rates; you don't need to validate it against your own needs to justify those people doing it.

 

Honestly, I'm not trying to 'validate it to my needs'; I'm just trying to get the point. And I hear what people are saying: whatever works for them works... for them. I'm fine with my workflow, just interested in other points of view; hence this thread...


but you've also written a lot of half-truths, misinformation and false video analogies about sampling rates and bit depth here.

 

So just because I made an analogy to video to make a point about resolution, that's false now?

I know about the difference in bit depth and the 96 vs 144 dB of dynamic range, and that's why we don't record at 16-bit now, even though we could.

And comparing 88k drums to 44k drums and liking 88 better is not valid either?

And don't include me in the phrase "people don't know how digital works"; there's no need to be passive-aggressive and make all this personal.

 

A lot of the things we do and decide upon are subjective, so let's leave it at that.


So just because I made an analogy to video to make a point about resolution, that's false now? ... that's why we don't record at 16-bit now, even though we could. ... And comparing 88k drums to 44k drums and liking 88 better is not valid either?

 

An analogy to video isn't inherently false by itself; just the one you made is. :? And I think I already clarified why...

Comparing 44k to 88k drums is valid, but some converters (especially older ones) tend to work differently at different sampling rates.

Meaning one may sound better at 44k, another at 88k. (that's also a legitimate subjective position based on your particular equipment)

We don't record at 16-bit because the dynamic range of the analog path (down to its thermal noise floor) exceeds what 16 bits can capture.

But it doesn't exceed 24 bits, and that's why nobody records at 32-bit integer: there's absolutely no need for extra bits below the thermal noise floor of the converter's analogue path.

So we could also record at 384 kHz/32-bit integer, but we don't do that either.
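Rough numbers behind that - the analog-path figure is a ballpark assumption for a good converter rather than any particular spec:

```python
def quantization_floor_dbfs(bits: int) -> float:
    """Ideal PCM quantization noise floor relative to full scale."""
    return -(6.02 * bits + 1.76)

ANALOG_PATH_FLOOR_DBFS = -120.0   # ballpark assumption for a good converter's analog stage

for bits in (16, 24, 32):
    q_floor = quantization_floor_dbfs(bits)
    limit = "the analog noise" if ANALOG_PATH_FLOOR_DBFS > q_floor else "quantization"
    print(f"{bits}-bit: quantization floor {q_floor:.0f} dBFS -> limited by {limit}")

# 16-bit: the ~-98 dBFS quantization floor sits above the analog noise, so it costs you range.
# 24-bit / 32-bit int: the analog path is already the limit; extra bits just record its hiss.
```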

 

There's also the fact that you'd have to split the same source (after the preamp) in parallel to two DAWs running identical converters to make a really unbiased comparison.

So it's valid for that particular scenario, maybe, but it says absolutely nothing about higher sampling rates in general.

 

Sorry for being passive-aggressive though. My bad :oops:


Listen, I'm amazed at how much we argue about bits and sample rates and 32 bit float and oh my god I'm not using all the bits and blah blah blah.

 

George Martin was using 4-track tape machines, losing fidelity on each summing bounce and ending up with two or more generations. Wonder how much high end he added with those Abbey Road EQs.

 

It's easy to get lost in the details and forget about the music you're trying to write or record.


To all, please keep the discussion free of any personal attacks. You can discuss the subject, and argue about the subject, but not comment on the person who is discussing the subject. I've now added some policies and guidelines in the FAQ (link at the top left of the forum) to clarify that. I have found that when things go into peanut mode (sorry, it's a French expression for when things start to go wrong), that's the best way to stay on course.

 

Thanks guys. ;)


Listen, I'm amazed at how much we argue about bits and sample rates and 32-bit float... It's easy to get lost in the details and forget about the music you're trying to write or record.

FWIW I agree with this.

Also, so many things have historically been done wrong in a technical sense but just worked artistically.


Yep - unless the engineering and noise are totally horrible (check out Bobby Darin's classic "Mack the Knife" for an example of the engineer having problems dealing with the volume spike), it's never about the engineering - it's ALWAYS about the song & performance.

 

I think ABBA did a boatload of bouncing/layering on their stuff (8 track tape?), which didn't seem to make their music less successful?


VERY interesting thread, and educational too. So the bottom line for recording at higher rates is lower latency - is that right?

 

And it's true that most folks are listening to music on less-than-amazing systems (earbuds). But to compare things to video again, here it is: filming in 4K vs. HD means much more information recorded per frame (more flexibility adjusting contrast/colors, etc.); and because 4K has four times the pixels of HD, you can crop/zoom a lot more before the image starts getting fuzzy.

 

Yes, 4K eats up way more drive and storage space (and needs more processing speed too). A common trick is to use lower-resolution aliases to edit, and when it's time for the final, replace the aliases with the real 4K footage and render that.

 

So with 4k you can always knock things down to play on HD or SD, but you can't go the other way so... not sure if that's a good argument for a similar approach with audio or not but as for video...


I think with all the flexing and pitch shifting we do, the more (usable) resolution a file has, the better.

 

This sample rate argument has been beaten to death at Gearslutz, and Dan Lavry said in one of those never-ending threads that the ideal sample rate is around 60k, but no manufacturer is doing that.

And for him, extremely high sample rates like 192k don't bring any benefits (law of diminishing returns).

 

Don't ask me to find that thread, but I know I read it.


So with 4k you can always knock things down to play on HD or SD, but you can't go the other way so... not sure if that's a good argument for a similar approach with audio or not but as for video...

You mean proxies, right?

TBH, I've already thought about exactly that while doing something in FCPX a while ago.

 

Reaper, I think, has the ability to run a session at a different sample rate than the source files and resamples everything on the fly - so you can use 96k sources in a 44.1k session to conserve CPU, then switch to 96k before exporting (or don't, and just keep it at 96k for all the "future proofing"/whatever reasons mentioned before).

 

IMO it's a great argument; a shame it's not implemented on a wider scale.

IF Logic were smarter at relinking files you could even do it manually (mix the session with 44.1k proxies and replace them with the 96k files at a later stage), but it's not, and it screws up timing badly when you try.
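For what it's worth, here's a rough sketch of that proxy idea done outside the DAW (assumes the soundfile and scipy packages; the file names are made up):

```python
from math import gcd

import soundfile as sf
from scipy.signal import resample_poly

def make_proxy(src_path: str, proxy_path: str, target_rate: int = 44_100) -> None:
    """Write a lower-rate working copy of a hi-res file; keep the original for the final bounce."""
    audio, src_rate = sf.read(src_path)
    if src_rate != target_rate:
        g = gcd(target_rate, src_rate)          # 96000 -> 44100 reduces to 147/320
        audio = resample_poly(audio, target_rate // g, src_rate // g, axis=0)
    sf.write(proxy_path, audio, target_rate)

make_proxy("drums_96k.wav", "drums_44k_proxy.wav")
```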

 

I think with all the flexing and pitch shifting we do, the more (usable) resolution a file has, the better. ... Dan Lavry said the ideal sample rate is around 60k, but no manufacturer is doing that.

60k is not dumb, tbh. You get the usable frequency range up to 20k while having Nyquist at 30k, allowing for a much gentler filter and more headroom for various harmonic-generator plugins.
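That's essentially a transition-band argument; assuming you want everything up to 20 kHz kept intact, the room the anti-alias filter gets looks like this:

```python
def transition_band_hz(sample_rate_hz: int, audible_top_hz: int = 20_000) -> float:
    """Room the anti-alias/reconstruction filter has between 'keep' and 'reject'."""
    return sample_rate_hz / 2 - audible_top_hz

for sr in (44_100, 60_000, 96_000):
    print(f"{sr:>6} Hz: {transition_band_hz(sr):>7.0f} Hz of transition band")

# 44100 Hz:  2050 Hz -> a very steep filter crammed right above the audible range
# 60000 Hz: 10000 Hz -> plenty of room for a gentle roll-off
# 96000 Hz: 28000 Hz -> even more, but well into diminishing returns
```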

As far as flexing and pitching go, Logic's Flex Pitch will be worse at 192 kHz than Melodyne is at 44.1k any day, so there's a bit more to it than just sampling rates. And again, the question is how much you actually gain from higher-resolution sources as opposed to oversampling algorithms.

