
loopsinner
Member · 150 posts


Community Answers

  1. You are right about this: you can reduce the artifacts, but not totally eliminate them. Long answer: every limiter produces different artifacts; no two limiters sound the same. Often the same limiter even has 3-4 algorithms you can choose from; iZotope Ozone, for example, has several IRC modes and styles. What you're looking for is a more transparent limiter, and every limiter comes with a manual explaining how to make it sound more transparent. For example, I use ToneBoosters Barricade as my main limiter (because it's the cheapest I could afford), and it has a transparent mode. Within that mode I can choose auto attack and release, or set my own. With auto I get a more transparent-sounding master, but at the price of losing some loudness. Remember that loudness is psychoacoustic: a brighter master will sound louder; we just don't want to overdo the artifacts, and we need to find the right balance. Assuming it's the limiter that produces most of the artifacts in your case, here is what I think may help:
  • Read your limiter's manual and find out how to make it more transparent.
  • Choose the most transparent-sounding algorithm in your limiter's settings.
  • Use manual attack/release to control the artifacts/distortion. Auto attack/release can sound more transparent, but it can also sound 'weaker', lacking punch and loudness.
  • Dithering: with so many dithers to choose from, pick one suitable for your bit depth (16-bit/24-bit) and see whether noise shaping or a less audible dither helps the master sound more transparent.
That is all assuming it's the limiter that produces most of the artifacts/distortion in your master. It could also be other things.
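To make the attack/release trade-off concrete, here is a toy feed-forward peak limiter. This is not any particular product's algorithm (Barricade, Ozone, etc.), just a textbook gain computer with separate attack and release smoothing; the ceiling and time constants are illustrative:

```python
import math

def limit(samples, sr=48000, ceiling=0.9, attack_ms=1.0, release_ms=100.0):
    """Toy peak limiter: compute a target gain per sample and smooth it
    with separate attack (gain falling) and release (gain recovering)
    time constants. Faster release recovers loudness but modulates the
    gain quickly (more audible artifacts); slower release is more
    transparent at the cost of loudness."""
    atk = math.exp(-1.0 / (sr * attack_ms * 1e-3))
    rel = math.exp(-1.0 / (sr * release_ms * 1e-3))
    gain, out = 1.0, []
    for x in samples:
        target = min(1.0, ceiling / abs(x)) if x != 0.0 else 1.0
        coef = atk if target < gain else rel  # attack when reducing gain
        gain = coef * gain + (1.0 - coef) * target
        out.append(x * gain)
    return out

# A tone 3 dB or so over the ceiling gets held near the ceiling
# once the gain settles (measure the tail, past the start-up transient):
sr = 48000
tone = [1.4 * math.sin(2 * math.pi * 1000 * i / sr) for i in range(sr // 5)]
peak = max(abs(v) for v in limit(tone)[sr // 10:])
```

Note that because the gain is smoothed, this is not a true brick wall: fast transients can still sneak slightly over the ceiling, which is why real limiters add lookahead on top of this scheme.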
  2. When you said the Duet has a high-end sheen, did you mean the playback/output or the recording/line in/preamp? If it's the output, I'd prefer mine to be flat/neutral/uncoloured. But awesome if it's the line in/preamp.
  3. Thanks for your contribution, David. I've gone through this, oh dear I have, and replaced my instruments before. Sometimes I even took two sounds out and left my composition minimal but better sounding. I always ask myself this question, though: does the change I made to my mix make it sound better, or just different? If it sounds better, I commit to the change. If it just sounds different, but not better, I revert it. Looking at this list, though, https://www.logicprohelp.com/vip-users/ , with so many Grammy winners using Logic for composing or mixing, I bet they have some tips to share with us. We just need to lure them to this forum. A few more tips:
  • My iPhone/iPad speakers are what I use to check the top end. If it sounds harsh there, and I find myself pulling my head back or turning the volume down, it means I have too much top end.
  • Blast my monitors a little louder than usual and move 5-6 meters away from my regular sitting position. Listen to the rumble. More often than not, my kick is too loud.
  • Lead instruments/vocals are king. I notice that when listening to FM radio, even when I'm almost losing the signal and everything is noisy, I'm still singing along to the current hit song they're playing, and it still sounds decent. How? Their vocals are a little louder than the background instrumentation in the midrange. As soon as you listen on a decent setup with bottom end, you'll notice that top-10 songs have vocals sitting just about right, not any louder. This suggests they balanced the bottom with the mids, taking into consideration that their songs will be played on small speakers (vocals will sound boosted) and on regular-sized studio monitors/headphones (vocals will sit just right with the low end added).
I'm speculating, but I think top producers mix the mids first and make the main vocal sit a tad louder, then slowly bring up the kick/bass until everything sounds about right and the main vocal no longer sounds louder. Can anyone confirm?
  4. It’s not useless. It does protect your speakers from playing anything too loud. To oversimplify, there are two types of peaks: clipped and non-clipped. A peak gets clipped automatically when you go over 0 dBFS, or intentionally when you use a clipper. A limiter doesn’t do digital clipping; it turns the peak down. A clipped peak gets reconstructed into a regular peak in your D/A converter, so after reconstruction you end up with a higher peak than the digital one. A limited peak is just that: in general, it doesn’t shoot any higher after going through your D/A converter. But I’m oversimplifying, so to get real: there’s a thing called true peak. True peak (what your speaker actually plays) stays closer to the height of a limited peak (when you use a limiter) than to a clipped peak (when you go over 0 dBFS or use a digital clipper), if that makes sense to you.
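The inter-sample ("true") peak effect described above is easy to demonstrate. The sketch below is my own illustration, not any meter's implementation: it hard-clips a sine at 0 dBFS, then approximates D/A reconstruction with 8x FFT-based oversampling. The sample values never exceed 1.0, but the reconstructed waveform overshoots between them:

```python
import numpy as np

def true_peak(sig, factor=8):
    """Estimate the inter-sample peak via zero-padded FFT oversampling,
    a rough stand-in for the D/A reconstruction filter."""
    spec = np.fft.rfft(sig)
    padded = np.zeros(len(sig) * factor // 2 + 1, dtype=complex)
    padded[: len(spec)] = spec
    up = np.fft.irfft(padded, n=len(sig) * factor) * factor
    return float(np.max(np.abs(up)))

sr = 48000
t = np.arange(sr // 10) / sr
# Drive a 997 Hz tone ~3 dB over full scale, then clip at 0 dBFS:
x = np.clip(1.4 * np.sin(2 * np.pi * 997 * t), -1.0, 1.0)

sample_peak = float(np.max(np.abs(x)))  # exactly 1.0 in the digital domain
tp = true_peak(x)                       # rises above 1.0 after "reconstruction"
```

This is why true-peak meters oversample before measuring, and why a limited peak is better behaved through the converter than a clipped one.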
  5. Yeah, that's my take too. I agree with you on this 100%, although I'm still team EnVerb. Imagine telling a mastering engineer with 40 years of experience and countless Grammys, "I do not want a single percent of intermodulation distortion in my master, I heard it's bad." He would just tell me to go somewhere else, because I listen to the internet too much and don't understand musical context and its enhancement in audio mastering. Mastering in general introduces a bunch of nonlinear processes that result in more intermodulation distortion. The mastering engineers just do it tastefully and musically. I dare you to use EnVerb in your next mix.
  6. I have golden ears and can hear even the tiniest distortion in a compressor. Kidding! Obviously I pushed the compressor to the max when I ran the test myself =P Anyway, as team EnVerb from your EnVerb vs. Space Designer comparison with David, I still think intermodulation distortion is unavoidable in the context of mixing, given the sound-enhancing, distortion-producing plugins on the summing/master busses. It can be made less noticeable, but it can't be avoided. Obviously I would love to be proven wrong; that's what I'm here for: to learn more about this.
  7. Like this? Test.zip (distortion on a track vs. the master bus) Because I can hear the difference, this is what I wrote:
  8. Okay, I've tried it myself. The difference between putting a compressor on a track vs. on the summing/master bus is HUGE; the sound it produces is not even subtle. But, but.. I personally like the sound of a compressor on the summing/master bus, although the EQ graph clearly shows I'm adding extra harmonics below the fundamental frequencies. Maybe I do have terrible taste in tone generators =( Anyway, if this is a normal occurrence in mixing and can't be avoided (I looked around for ways to avoid it; nothing came up), I think the best a mixer can do is listen to whether he likes the sound of a distortion-producing plugin (compressor, limiter, saturator, etc.) on the summing/master bus and adjust the distortion accordingly. Because the more I look into this, the more I tend to think this is a normal occurrence in mixing and there's not much we can do about it. Or is there?
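For anyone who wants to see those extra components in numbers, here is a small sketch. The frequencies and drive amount are illustrative, and tanh stands in generically for a driven compressor/saturator stage. Two tones summed on a "bus" and pushed through the nonlinearity grow intermodulation products at combination frequencies such as f1 - 2·f2 and f1 + 2·f2, frequencies neither tone contains on its own:

```python
import numpy as np

sr, n = 48000, 48000            # 1 second at 48 kHz -> 1 Hz per FFT bin
t = np.arange(n) / sr
f1, f2 = 2200.0, 300.0
bus = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.8 * np.sin(2 * np.pi * f2 * t)
driven = np.tanh(3.0 * bus)     # generic odd nonlinearity, heavily driven

win = np.hanning(n)
clean_spec = np.abs(np.fft.rfft(bus * win))
driven_spec = np.abs(np.fft.rfft(driven * win))

# Third-order IMD products at f1 - 2*f2 (1600 Hz) and f1 + 2*f2 (2800 Hz)
# appear only in the driven bus, not in the clean sum of the two tones:
imd_lo = driven_spec[int(f1 - 2 * f2)]
imd_hi = driven_spec[int(f1 + 2 * f2)]
```

Process each tone through the same nonlinearity on its own track and you only get harmonics of that tone; it is the *sum* hitting the nonlinearity that creates the combination tones, which is exactly why bus processing behaves differently from track processing.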
  9. Gonna be honest: I know about aliasing and how to avoid it (oversampling / a higher sampling rate), but this is the first time I've actually heard about intermodulation distortion. So I'm interested to learn more about this as well.
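Since aliasing came up: the two problems are cousins. A nonlinearity creates new frequencies, and any that land above Nyquist fold back down as aliases. A quick sketch (my own toy numbers, cubing as the nonlinearity) shows a 7 kHz tone cubed at 32 kHz producing an aliased component at 11 kHz, which the oversampled path avoids:

```python
import numpy as np

sr, n = 32000, 32000            # 1 second at 32 kHz -> 1 Hz per FFT bin
t = np.arange(n) / sr
x = np.sin(2 * np.pi * 7000 * t)

def cubed_oversampled(sig, factor=4):
    """Upsample (zero-padded FFT), apply the nonlinearity at the high
    rate, then band-limit back down: the 21 kHz harmonic of sin^3 is
    simply filtered out instead of folding to 11 kHz."""
    spec = np.fft.rfft(sig)
    up_spec = np.zeros(len(sig) * factor // 2 + 1, dtype=complex)
    up_spec[: len(spec)] = spec
    up = np.fft.irfft(up_spec, n=len(sig) * factor) * factor
    down_spec = np.fft.rfft(up ** 3)[: len(sig) // 2 + 1]
    return np.fft.irfft(down_spec, n=len(sig)) / factor

win = np.hanning(n)
naive = np.abs(np.fft.rfft((x ** 3) * win))          # cubed at 32 kHz
clean = np.abs(np.fft.rfft(cubed_oversampled(x) * win))

alias_bin = sr - 21000   # sin^3 contains a 21 kHz term; it folds to 11 kHz
```

The key difference from IMD: oversampling fixes aliasing because the fold-back is a sampling artifact, but the intermodulation products themselves sit *inside* the audio band, so no amount of oversampling removes them.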
  10. I don't think so. I think big music labels are going to use two different mixing services and treat it the way the movie industry treats a film. A video editor is capable of doing special effects and color grading himself, but the movie industry often hires separate companies for editing, color grading, and special effects, because they aim for the highest quality and only trust people/companies with a specific skill set. Plus, there are only a handful of certified Dolby mixing studios in the world (https://professional.dolby.com/cinema/industry/content-services/studios-with-dolby-premier-studio-certification/). But mixing for indie artists on a tight budget, maybe that will be a different story.
  11. Lol, no worries. Apple tricked us by calling it Spatial Audio when it's really just a Dolby Atmos implementation. With the current trend of augmented reality and Snapchat filters, it's just a matter of time before even TikTok adopts Dolby Atmos, for an extra-immersive experience of teenagers dancing to spatial audio with a unicorn head.
  12. and you need to hang some monitors from the ceiling as well, lol.
  13. I don't think it's a gimmick; it's Dolby Atmos, a standard that has been used widely in cinemas for I dunno how many years already. It's not some gimmicky technology that Apple developed; it's an adoption of Dolby Atmos, and Apple just gave it a new name. Dolby Atmos has been used in games, VR, and movies, and now in music. I have no doubt we're heading in a direction where most consumers will want an immersive experience. And not just on Apple Music; I hope the rest of the music streaming services will soon follow.
  14. Well, I haven't won a single Grammy, but I can tell that you are a good singer/songwriter who may find success one day if you work hard enough at it. To add a thing or two about song arrangement: imagine this song stripped down to just an acoustic guitar and you as the vocalist. How would you perform it? What arrangement works before you or the audience get tired? That's the arrangement you should shoot for, in my opinion.