The Ultimate DAW Test


zangiamit91


 

In this test, I did the same mix on various major DAWs and then compared the results by ear and by null test.

As far as we know, all DAWs sound the same, and that holds until you start to use the summing engine; then you can hear the differences. The DAWs I tested are:

Pro Tools 2021

Logic Pro X

Ableton Live 11

Studio One 5 Artist

Cubase 11 Elements

Universal Audio LUNA

You can hear that Logic has the biggest difference. I don't know why it is so different, but I suspect the pan law compensation that Logic uses.

I tried printing in different pan law modes, but there was no difference.

Also, if you look very closely, you can see that the transients of each track are slightly different, especially on Logic...

If someone has an idea, or wants to take the test and share their results with us, let me know and I'll add them to the download folder. Here is the link to the media files and projects: https://www.dropbox.com/work/Amit%20Zangi/Daw%20Test


 


Seriously, put the same project in six different engineers' hands using the same sequencer, let's say Logic.

Now compare the results with this null test. :roll:


As far as we know, all DAWs sound the same, and that holds until you start to use the summing engine; then you can hear the differences.

 

All DAWs can add and multiply floating-point numbers just fine (which is what a digital mixer is). DAW tests have been done for years, and it's been proven time and again that there are no summing differences.
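To make that concrete, here's a rough sketch (Python with NumPy, all names illustrative) of what a summing bus actually computes: one gain multiply and one add per sample. Two engines doing the same operations in the same order produce bit-identical output, which is why static summing nulls across DAWs.

```python
# A toy summing bus: per-track gain multiply, then accumulate.
import numpy as np

def sum_bus(tracks, gains):
    """Mix float-sample tracks after applying one gain multiplier each."""
    mix = np.zeros_like(tracks[0])
    for track, gain in zip(tracks, gains):
        mix += gain * track  # one multiply and one add per sample
    return mix

rng = np.random.default_rng(0)
tracks = [rng.standard_normal(48000).astype(np.float32) for _ in range(4)]
gains = [0.5, 0.8, 1.0, 0.25]

# Same operations, same order -> bit-identical results.
print(np.array_equal(sum_bus(tracks, gains), sum_bus(tracks, gains)))  # True
```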

 

Once you start to mix and use the various features of DAWs, plugins and so on, then of course the end results will start to differ, for a myriad of reasons, but it has nothing to do with some kind of sound problem in the summing.

 

You can hear that Logic has the biggest difference. I don't know why it is so different, but I suspect the pan law compensation that Logic uses.

 

That is one difference, of course. I haven't looked at your test (and I'm not particularly interested in doing so; I was interested in this back when the Awesome DAWsum test was being done, and that satisfied me enough to close the book).

 

You will ultimately get different results in different DAWs (hence null testing will show differences) for a whole bunch of reasons: different workflows, different interfaces and familiarity levels leading to different choices, and a myriad of other factors. People get so obsessed with null tests (even the ones who do them right). In some circumstances a null test can be a useful tool to learn or investigate something, but a lot of the time it's just like all the photographers who shoot focus tests all the time: at a certain point, the tools are plenty good enough to make great art, so it's probably more worthwhile investing the time there instead...


interesting, and am also curious about logic's differences. if i were starting out, and looking for a DAW, i'd put logic far down the list; but, using it for a million years (ok, maybe 16 years), am perfectly happy with it.

 

the null tests are really interesting...

I would be curious to know which one would be at the top of your list, and why, fisherking? Let me guess... Studio One, maybe?


 

in the real world, logic is my DAW; i have no serious issue with it, and no interest in switching up (i've worked in pro tools, and live some time ago; i prefer logic). just a momentary observation based on what i'd heard in the youtube video...


Thanks for the test. It is interesting.

At first glance I can see that you used the "normal" balance pan in Logic on two tracks, which really differs from the stereo pan you used in Pro Tools. That's the first major difference I can think of. But you can change that behavior in Logic by using stereo panning.


I just did another small test with a different panning mode in Logic, and I got complete silence in a null test between them. I also ran the same test without any plugins at all, the same as I did in the video; in the result you can still hear some guitar parts from Logic. I added the file to the link in the video description.

This is kind of a pointless test. If you were mixing down a project in each case, it would be pretty much impossible to exactly match the mix settings in each DAW such that you could expect anything to null, especially when there are plugins involved. What exactly do you think could be proven? I promise you that whatever you think you proved, someone will debunk it.

 

All of these products are designed to be transparent in sound and not impart any particular sonic signature, as would be the case with analog gear. They are transparent. If you can't make good sound with them, the problem is you.


 

thanks for clearing things up for everybody. so... no tests, no experimentation; no exploration needed. and who said any of these DAWs 'can't make good sound'?


Honestly, the only way that any two computer programs could be expected to produce exactly the same output, given the same input, would be if the algorithms selected by each software design team were the same ... or sufficiently so. That strikes me as a rather unrealistic expectation, and one that has little to do with "the human ear."

This article may shed light.

 

https://www.admiralbumblebee.com/music/2019/02/17/Daw-V-Daw-Differ.html

 

A null test in general will only tell you whether two audio files are bit-for-bit exactly the same. It reveals nothing about why they are different (if they are different), or whether different is good or bad.
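For anyone who wants to try it, a null test is easy to script. A minimal sketch in Python, assuming two equal-length renders and the soundfile library for reading WAVs (the file names at the bottom are placeholders):

```python
# Minimal null test: invert one render, sum, inspect the residual.
import numpy as np
import soundfile as sf  # assumed available; any WAV reader would do

def null_test(path_a, path_b):
    a, sr_a = sf.read(path_a)
    b, sr_b = sf.read(path_b)
    assert sr_a == sr_b and a.shape == b.shape, "renders must match in format"
    residual = a - b  # equivalent to phase-inverting one file and summing
    peak = float(np.max(np.abs(residual)))
    print("bit-identical" if peak == 0.0 else f"residual peak: {peak:.2e}")
    return residual

# null_test("mix_logic.wav", "mix_protools.wav")  # placeholder file names
```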

 

If you have a test where human choice is involved in setting up multiple channels with any kind of DSP processing, such as plugins or EQ, then there could be any number of unknowable reasons why one mix of tracks in one DAW could produce some differences in the data compared to the other DAW. You have too many variables there to draw any conclusion other than that one was mixed here and one was mixed there and they don't null out.

 

Admiral Bumblebee has produced hundreds or thousands of hours of test results, and basically a simple null test is pointless. That doesn't mean all the DAWs use the same DSP for all things. Basic summing of levels, and even pan laws when properly configured, should be very close or exactly the same, because that is dead-simple DSP. DAWs, by design, do not impart anything to the sound while summing the signals. They are transparent by design.
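The pan law point deserves a concrete example. Here's a hypothetical sketch of two common laws; when two DAWs (or two settings in the same DAW) apply different center compensation, every centered track comes out roughly 3 dB apart, so a null test fails even though both laws are "correct":

```python
# Two common pan laws (illustrative): pos is the pan position in [-1, 1].
import numpy as np

def pan_constant_power(pos):
    """-3 dB center (cos/sin law): constant perceived level across the arc."""
    theta = (pos + 1) * np.pi / 4  # map [-1, 1] to [0, pi/2]
    return np.cos(theta), np.sin(theta)

def pan_0db_center(pos):
    """0 dB center: unity gain in the middle, attenuate the far side only."""
    return min(1.0, 1.0 - pos), min(1.0, 1.0 + pos)

for law in (pan_constant_power, pan_0db_center):
    left, right = law(0.0)  # a track panned dead center
    print(f"{law.__name__}: L={left:.3f} R={right:.3f}")
# Constant power gives ~0.707 (-3 dB) per side; 0 dB center gives 1.0,
# so the same "centered" track sums about 3 dB apart between the two laws.
```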

 

However, he did one test comparing more sophisticated behavior, such as moving faders while the music plays, where the DSP is likely to be unique in each case, and indeed he measured some differences. There are plenty of ways for DAWs to handle more elaborate situations such as fades or plugin DSP differently, such that we could expect a null test to fail for nearly any real-world mix comparison. No duh. What do we learn from a failed null test? Nothing.
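As a sketch of why moving faders rarely null (the two interpolation schemes here are hypothetical, not what any particular DAW does): one engine might apply the automation ramp per sample, another might hold a stepped gain per processing block. Both are reasonable designs, and their outputs differ.

```python
# Per-sample vs per-block gain ramps on the same one-second 440 Hz tone.
import numpy as np

sr, block = 48000, 256
n = sr
x = np.sin(2 * np.pi * 440 * np.arange(n) / sr)
ramp = np.linspace(1.0, 0.0, n)  # fade from unity to silence

per_sample = x * ramp  # smooth: gain updated every sample

stepped = np.empty_like(x)
for start in range(0, n, block):  # gain held constant within each block
    stepped[start:start + block] = x[start:start + block] * ramp[start]

residual = per_sample - stepped
print(f"residual peak: {np.max(np.abs(residual)):.4f}")  # clearly nonzero
```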

 

Even the simple mixing of tracks, without any plugins at all, would be difficult to compare unless you make sure they are all using exactly the same data in each track. Say you set up a track in one DAW with the level at some fader value, let's say -5.2 dB or whatever, and then you set all the DAWs to put that track at -5.2 dB. You don't actually know that each DAW treats that fader with exactly the same multiplier internally; that is just a label. Internally, who knows whether that fader uses a slightly lower or slightly higher digital multiplier to represent what it thinks should be -5.2 dB? If the multiplier is off even by a little, they won't produce null-able audio, and it doesn't matter at all, because it doesn't represent any kind of flaw per se.
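To illustrate the "-5.2 dB is just a label" point: even the precision at which two engines happen to store the same dB-to-linear conversion is enough to break a bit-for-bit null. A small sketch:

```python
# The same fader label can map to slightly different internal multipliers.
import numpy as np

db = -5.2
gain64 = 10.0 ** (db / 20.0)        # conversion computed in double precision
gain32 = float(np.float32(gain64))  # same value rounded to single precision

x = np.linspace(-1.0, 1.0, 1000)
residual = x * gain64 - x * gain32
print(f"multipliers differ by {abs(gain64 - gain32):.2e}")
print(f"residual peak: {np.max(np.abs(residual)):.2e}")  # tiny, but not zero
```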

 

The only thing you could try to test would be to remove all human variables: set all faders to 0 dB, make sure there are no plugins, make sure there is no dithering, normalization, or other possible DSP anywhere, run the audio through, and null test, so that you can confirm that yes, the basic engines are equally transparent. This has been done before, and everyone agrees they are all transparent.



 

I've seen many variants of comparative DAW tests. When plugins were used, they were only third-party plugins whose cross-DAW preset files could be loaded into each DAW (no tweaks allowed). But usually there was no panning, no level changes, no automation, no sidechains, nothing as complicated as a "real" mix. Some comparisons indicate that once aux or FX busses are in use, applying PDC (plugin delay compensation) changes what is heard (even with no plugins). Differences in PDC between DAWs can make them sound different, even with projects otherwise set up to match as closely as possible, including reproducing the peak and RMS levels displayed in the mixers.

 

The effects of (incorrect) PDC depend upon the sample delay added to the audio. The effects may or may not be subtle: bass can get muddy; midrange/high end can change; the stereo image can change. There are many threads going back for years about PDC in DAWs, e.g., on Gearspace, with complaints about Logic. Some note that PDC in Cubase is the best of any DAW now. Others mention that PDC does not work the same way at bounce/render time compared to what happens during playback, so comparing files bounced/rendered to disk via null tests may not necessarily be comparable to what you actually *hear* during real-time playback.

PDC is affected by the DSP a plugin performs, and I believe some plugins incur internal delay from processing that is not reported to the DAW. In that case, no DAW will be able to entirely compensate for bus/plugin latency, and it's not exactly documented which plugins work that way, i.e., will change the sound of the mix more than they should. Certainly we can all try and use what sounds best to our ears. If you have more than one DAW program, then it's all relative, depending upon how reliably you can make comparisons. But "sounds transparent" does not necessarily mean "sounds the same".
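A quick sketch of why even a few samples of uncompensated bus latency changes the tone rather than just the timing: summing a signal with a slightly delayed copy of itself comb-filters the result. (The 16-sample figure below is arbitrary, chosen only for illustration.)

```python
# Comb filtering from summing a dry path with a late parallel bus.
import numpy as np

sr, delay = 48000, 16  # 16 samples of uncompensated latency (illustrative)
rng = np.random.default_rng(1)
x = rng.standard_normal(sr)  # one second of white noise as a test signal

y = x.copy()
y[delay:] += x[:-delay]  # dry path plus the same signal, 16 samples late

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / sr)
mask = (freqs > 1000) & (freqs < 2000)
notch_hz = freqs[mask][np.argmin(spectrum[mask])]
print(f"deepest notch near {notch_hz:.0f} Hz")  # ~ sr / (2 * delay) = 1500 Hz
# Notches repeat every sr / delay = 3000 Hz: audible coloration, not a delay.
```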

 

Apple is aware of these kinds of long-standing issues with Logic. I've filed issues with Apple about PDC and, in the follow-up, sent them Logic and Cubase projects for reference. A change to PDC in future Logic versions could mean every old Logic project you open will sound different compared to running it in previous versions of Logic. Software companies often save non-backwards-compatible changes for the next major release.


I'm a little skeptical about PDC differences affecting the audio as much as you are implying. DAWs theoretically line up the audio according to the latency reported by each plugin, sample-accurately, and Logic does line up its audio. The bugs that exist in Logic Pro have more to do with automation not being lined up with the PDC-adjusted audio. There are also some sidechaining quirks; no argument there, but I don't think that is what is being presented in this thread.
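For reference, here's a simplified sketch (hypothetical, not Logic's actual implementation) of what a host does with reported latency: it pads every parallel path up to the worst-case reported delay so the outputs line up sample-accurately. The failure mode described above is a plugin that under-reports, so the host pads by the wrong amount.

```python
# Simplified plugin delay compensation on a set of parallel paths.
import numpy as np

def delay(x, samples):
    """Prepend zeros: a pure latency of the given number of samples."""
    return np.concatenate([np.zeros(samples), x])[: len(x)]

def mix_with_pdc(paths):
    """paths: list of (already-processed signal, latency the plugin reported)."""
    worst = max(latency for _, latency in paths)
    out = np.zeros(len(paths[0][0]))
    for signal, latency in paths:
        out += delay(signal, worst - latency)  # pad every path up to worst
    return out

# If each plugin reports its latency honestly, the paths line up exactly; if
# one under-reports, the host's padding comes up short and the paths smear.
```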

 

The thing about Admiral Bumblebee is that in his tests he focuses on one very specific thing at a time, then seeks to design tests that will actually reveal something potentially useful. Throwing together a mix and then saying they don't null... well... we learn nothing.

 

Compared to analog consoles, all DAWs are universally so incredibly transparent as to be a non-issue. People buy all kinds of plugins to color the sound for exactly that reason. Trying to infer that one DAW is somehow coloring the sound substantially compared to another is a fool's errand. The DAWs are not coloring the sound; we are. We use plugins, we mix differently, we do our gain staging in good ways or bad ways. It's on us to make it sound good.

 

The only area deserving some scrutiny is the kind of thing ABB has been addressing, like how fades are handled. Apparently Logic Pro does introduce a fair bit of noise with fades compared to, say, Pro Tools. Some of the other DAWs are noisy too, BTW. But we are talking noisy as measured... you probably can't hear it anyway... but who knows, maybe you can when you have 23 tracks fading at once? I don't know, but that is a useful measurement: focused, to the point, identifying a particular thing.



 

Yes, I'm still reading his tests; they are really interesting.

But as for your suggestion of removing all human variables: I did that test, and I still got sound in the null test.

