
Matt Mayfield

Member
  • Posts: 1,755
  • Joined

  • Last visited

1 Follower


Matt Mayfield's Achievements

Newbie (1/14)

0 Reputation

  1. Hi everyone, If you're familiar with the Loudness War (which is of course not really about loudness - but about sound quality), you might find this petition of interest: https://www.change.org/p/music-streaming-services-bring-peace-to-the-loudness-war The idea is that, if all streaming services adopt the AES recommendations for normalization of perceived loudness (not peak normalization, of course), then artists will feel much less pressure to use more dynamic compression than their artistic vision requires. They'll have the freedom to choose exactly the dynamic range they want, without worrying about being too quiet or too loud. It's picking up quite a bit of steam - as of this posting it's been live for under 24 hours and already has over 1,000 signatures! Please consider signing and sharing the petition if you agree. Thanks! Matt
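In case anyone wants to see what loudness normalization actually does under the hood, here's a minimal sketch using the third-party Python libraries pyloudnorm (an ITU-R BS.1770 loudness meter) and soundfile. The -16 LUFS target below is just an illustrative placeholder, not a quote of the AES recommendation - substitute whatever target a given service uses:

```python
import soundfile as sf       # pip install soundfile
import pyloudnorm as pyln    # pip install pyloudnorm

data, rate = sf.read("mix.wav")             # audio as a float array
meter = pyln.Meter(rate)                    # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # measured LUFS

# Apply a simple static gain so the integrated loudness hits the
# target - note this is pure gain, no compression or peak limiting
target_lufs = -16.0  # placeholder target, not an official AES number
normalized = pyln.normalize.loudness(data, loudness, target_lufs)
sf.write("mix_normalized.wav", normalized, rate)
```

The key point is that this is just a volume knob applied per track or album - it doesn't squash anything, which is exactly why it removes the incentive to hyper-compress.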
  2. The folks at iZotope have some technologies in RX5 that are kinda sorta tangentially related to this, with their Reverb Removal stuff, but as far as I know there's no easy existing way to capture an IR from an existing recording. (You'd have to know what was the direct sound and what was the reverb.)
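To show why, here's a bare-bones sketch of frequency-domain deconvolution in Python/NumPy: it can recover an impulse response, but only if you have both the dry (direct) signal and the wet (reverberant) one - which is exactly the information you don't have with a finished recording:

```python
import numpy as np

def estimate_ir(dry, wet, eps=1e-8):
    """Recover an approximate impulse response by dividing the wet
    signal by the dry one in the frequency domain (a regularized
    deconvolution). Both inputs are 1-D float arrays at the same
    sample rate."""
    n = len(dry) + len(wet) - 1
    Dry = np.fft.rfft(dry, n)
    Wet = np.fft.rfft(wet, n)
    # eps keeps the division sane at frequencies where the dry
    # signal has essentially no energy
    return np.fft.irfft(Wet * np.conj(Dry) / (np.abs(Dry) ** 2 + eps), n)
```

Studio IR capture works because you play a known test signal (like a sine sweep) through the space, so the "dry" side is known exactly.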
  3. Also, if you leave the Gain at 0.0dB, and only use Input Scale to manage the pre-limiting input level, you can get at least some idea of the max amount of GR by observing the level on the Input side. If it's Over, the positive dBFS value will be the maximum amount of limiting (though that's not accounting for the Out Ceiling).
  4. I think there's been some discussion at cross purposes, or confusion over specific terms, here. If you add together the values to which the Gain and Input Scale are set, you should arrive at the total gain (+ or -) which is applied to the signal before it's processed by the actual limiting itself. That signal (post-Gain and Input Scale) is what's constantly analyzed by the limiter section. If its level goes above the Out Ceiling (settable between -2.0 and 0.0 dBFS), then it will reduce the level of that signal before letting it pass through. As far as I've ever seen in hardware or software, "Gain Reduction" is the standard term in English to label a meter that shows how much this is happening. HugeLongjohns, what you seem to be asking is whether there's a Gain Reduction meter on the Adaptive Limiter. The answer is still No, but it's a sensible question, because that's a very common meter to have in a compressor/limiter.
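To put numbers on it, here's a back-of-the-envelope sketch (my own arithmetic, not Logic's actual algorithm) of the most gain reduction you'd expect given those settings:

```python
def max_gain_reduction_db(input_peak_dbfs, gain_db=0.0,
                          input_scale_db=0.0, out_ceiling_dbfs=0.0):
    """Worst-case limiting: total pre-limiter gain is
    Gain + Input Scale, and anything above the Out Ceiling
    gets pulled down to the ceiling."""
    pre_limiter_peak = input_peak_dbfs + gain_db + input_scale_db
    return max(0.0, pre_limiter_peak - out_ceiling_dbfs)

# A track peaking at -3 dBFS, driven with +6 dB of Gain, into a
# -0.3 dBFS ceiling: about 3.3 dB of limiting at the loudest peaks
print(max_gain_reduction_db(-3.0, gain_db=6.0, out_ceiling_dbfs=-0.3))
```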
  5. Note that if you Freeze tracks, that uses **more** disk bandwidth, not less. Try un-freezing as many tracks as you can. Freezing frees up CPU power by pre-rendering tracks to disk, so the work moves from the CPU to the hard disk. So if your error message says the CPU can't do any more, then freezing will help, but if the error says the disk is too slow, freezing will only make things worse. Edit - also... depending on how deep you want to go down the rabbit hole, you may want to check out the Activity Monitor utility (in the Applications/Utilities folder) to see if anything unusual is using up CPU.
  6. In that case, it probably depends on the size of your projects. If you have many samples loaded *and* record 100 audio tracks, one USB hard disk will probably not be fast enough. But if the projects are smaller, it might be OK. The only way to know for sure is to try it.
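For what it's worth, the raw streaming bandwidth is easy to estimate (rough arithmetic below, ignoring file overhead); the catch is that a busy project also makes the drive seek between many files, and seeks are what kill a spinning USB disk long before its sequential speed runs out:

```python
def track_bandwidth_mb_per_s(tracks, sample_rate=44100, bit_depth=24):
    """Sustained bandwidth needed to stream this many mono audio
    tracks, ignoring seeks and filesystem overhead."""
    return tracks * sample_rate * (bit_depth // 8) / 1e6

print(track_bandwidth_mb_per_s(100))  # ~13.2 MB/s at 44.1 kHz / 24-bit
```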
  7. I would suggest putting samples on the internal SSD, if they fit, and audio files on the external. This is because sample reading is seek time intensive (easy for an SSD), and reading audio files is bandwidth intensive (easy for HD). (As for the system drive, once Logic is launched, the disk access to run the OS and application is fairly minimal.) It's not *always* necessary to use two different drives for audio and samples, but it is a very large amount of work for one drive to keep switching back and forth. Separating the two tasks spreads out the workload.
  8. I agree with the comments so far. That sounds like a spring reverb on the drums to my ear - i.e. lots of individual "chunks" in it. I would probably expect a plate reverb (with a very diffuse "wash") instead. Nice going so far!
  9. While David is correct and I would hate to disagree, I feel strongly that I should add a point to the quantization discussion that hasn't come up yet. At 24-bit, the quality loss is theoretically there, but insignificant for any practical purpose. The noise would be 144dB down from full scale. No recording or playback equipment that I know of even has a 144dB dynamic range; if I recall, the best 24-bit converters only manage around 120dB. Even if the equipment did, our *ears* don't have that much usable range - that's more than the range between "inaudible" and "lifetime deafness in seconds." So personally, if it's more convenient workflow-wise to bounce, I'll do it and not worry about any quality issues. Turning one of your faders up or down by 0.1dB will make more of a difference to the sound than the effect of a bounce. After all, Bohemian Rhapsody was bounced numerous times in the analog domain and it still became a hit. Or another analogy: real studios use analog patchbays even though the increased cable length increases noise, because it's worth the workflow benefits.
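For anyone who wants to check the arithmetic: the theoretical dynamic range of linear PCM is roughly 6.02 dB per bit, plus about 1.76 dB for a full-scale sine relative to the quantization noise floor - the commonly quoted "144 dB" is just the 6.02 x 24 part:

```python
def dynamic_range_db(bits):
    """Theoretical SNR of an N-bit linear PCM sine wave:
    6.02 dB per bit plus a 1.76 dB constant."""
    return 6.02 * bits + 1.76

print(dynamic_range_db(16))  # ~98 dB
print(dynamic_range_db(24))  # ~146 dB
```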
  10. Personally I would recommend against the small "PA package" systems, and instead suggest a Class D powered speaker or two combined with your passive mixer. I use and really like the QSC K series. I've gotten away with a single K8 for small acoustic sets, and otherwise use two K10s and either a QSC KW181 or York LS200P subwoofer for larger one- and two-man band shows with backing tracks. The 2xK10 + KW181 setup was even enough for a 200-person indoor event with a full band including drums - though it was a quietish Beatles tribute, not a huge metal show with 100-watt backline. Electro-Voice also has the ELX112P, and there are several other manufacturers who make a comparable active speaker at a similar price. In my experience, even a small kit of 'high-end amateur/low-end pro' quality like the K-series or ELX usually outperforms something like a Fender Passport (even the biggest ones) in terms of clarity and loudness. I hope that helps - Matt
  11. One of the most dramatic advantages of an SSD over a spinning hard drive is the SSD's ability to access many pieces of information from different places on disk very quickly. To read from two different files stored in different places on the magnetic platters, a spinning hard disk drive (HDD) has to physically move a magnetic head on a miniature actuator arm. That means that its seek time, the time needed to find a different piece of information, is longer (measured in milliseconds). An SSD has no moving parts - it just has to change which electrical lines it reads from, so its seek time is nearly instantaneous (on the order of microseconds). If by "samples" you're referring to the long, continuous audio files recorded into a project on Audio tracks, then I agree that - up to a certain number of tracks depending on the drive speed, sample rate, and bit depth - HDDs work just fine, because most of the data is coming from the same "neighborhood" of the disk. While that's technically a correct use of the word "samples," most of the time I'd expect "samples" in this context to mean the individual tiny snippets of audio (samples) used by a sampling synthesizer, like EXS24 or Kontakt, on a Software Instrument track - the individual notes of a piano, or individual hits of drums. For those kinds of samples, which involve a ton of seeking to various places on disk (especially when using many different Software Instrument tracks at once), an SSD typically performs dramatically better, because that use case plays to its strengths. So what you describe is very unusual. I suspect that, if you're repeatably getting better performance for sampling instruments out of your HDD than your SSD, you probably have a top-notch HDD and a bad (defective) SSD. Edit: something I forgot to mention. This only applies to samples that need to be streamed from disk and don't fit in RAM. If all the samples fit in RAM, the only difference is the initial loading time.
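If you want to test your specific drives, here's a crude random-read timing sketch (the file paths are placeholders; also note the OS will cache reads, so the first run against a large file is the most honest):

```python
import os
import random
import time

def avg_random_read_ms(path, reads=200, block=4096):
    """Time small reads at random offsets in a file - a rough
    stand-in for the seek pattern of streaming many samples.
    On a healthy SSD this should be dramatically faster than
    on a spinning disk."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(0, max(1, size - block)))
            f.read(block)
        elapsed = time.perf_counter() - start
    return elapsed / reads * 1000.0

# e.g. point it at a big file on each drive and compare:
# print(avg_random_read_ms("/Volumes/SSD/big_sample_file.bin"))
# print(avg_random_read_ms("/Volumes/HDD/big_sample_file.bin"))
```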
  12. Bet you $100,000,000.00 of Monopoly money that's the issue right there. When "Include Audio Tail" is checked, Logic assumes there's a reverb or something whose decay you want to include in the bounce, past the end of the actual notes and audio shown in your project. Occasionally, either Logic or a plugin gets into a state where it thinks that the reverberation (tail) goes on forever, so it keeps bouncing long after anything actually audible is done.
  13. In case it's useful (for the original poster, or others reading this thread later), here's a technical explanation. As I understand it, a typical pro DAW's mix engine (in simplified form, and leaving out plugins) does this: For each sample: 1) Set up a temporary "sample storage" place (let's call it S) and reset it to 0 2) For each track: ----a) Note that track's current sample value (let's call it T) ----b) Multiply T by whatever the track's fader is set to (i.e. 0dB = x1, -6dB = x0.5, -infinity = x0) ----c) Add the result to whatever is in S, and store the sum in S ----d) Repeat a-c for the next track until done 3) Record what ended up in S as that sample for the mixdown 4) Repeat 1-3 for the next sample until done At least in the circumstance of only one track and all relevant faders at 0dB, even samples that DO pass through the audio engine come out with the same values as went in (multiply by 1, add 0) and therefore the exact same audio, even after countless generations. There are so many things that make hugely more of a difference to the sound of a track than any digital generation loss or mix engine coloration (if those things are even audible at all in your specific circumstances), that it's generally not worth worrying about - unless you're clipping during the bounce process (to fixed-point), using lossy compression per pass, or something like that. If you do run into situations where you or someone else suspects digital generation loss as a source of quality loss, I'd be inclined to be extra thorough testing other causes first. It would take an awful lot to convince me that generation loss was the culprit, as opposed to some other thing that someone overlooked.
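And since pseudocode is easy to misread, here's the same loop as runnable Python - a toy model, to be clear, not Logic's actual engine (real engines work on buffers of 32-bit or 64-bit floats, not one Python list element at a time), but the arithmetic is the same:

```python
def mix(tracks, faders_db):
    """Sum tracks sample-by-sample, scaling each by its fader
    gain, exactly as in steps 1-4 above."""
    gains = [10 ** (db / 20.0) for db in faders_db]  # 0 dB -> x1.0, -6 dB -> ~x0.5
    length = max(len(t) for t in tracks)
    mixdown = []
    for i in range(length):                 # 4) repeat for each sample
        s = 0.0                             # 1) reset the accumulator S
        for t, g in zip(tracks, gains):     # 2) for each track...
            if i < len(t):
                s += t[i] * g               #    b/c) scale and accumulate
        mixdown.append(s)                   # 3) record S in the mixdown
    return mixdown

# One track with its fader at 0 dB comes out bit-identical:
# multiply by 1, add 0, so no generation loss at all
track = [0.25, -0.5, 0.99]
assert mix([track], [0.0]) == track
```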
  14. I'm not familiar with the TD-9, but I have a TD-6 and a dual-trigger snare that can recognize rim shots, so I can tell you the general idea and you can look up how to do it specifically on the TD-9. Please excuse me if I say things that you already know - I don't know whether you're a beginner or an expert or somewhere in between, so I'll just describe it in basic terms. 1 - You need a physical drum pad with two sensors so it can differentiate between a normal hit and a rim shot. 2 - It needs to connect via a TRS (stereo) cable to the drum brain (TD-9 in your case). 3 - The drum brain needs to be able to accept a dual-trigger snare, and trigger a rimshot when appropriate. It sounds like these first 3 are already in place, since you said you can trigger rim shots with just the TD-9 alone. Although: is this actually a dual-trigger snare? Or is it just triggering rim shots for the hardest hits, and the pad is actually a single-trigger? If it's a single-trigger pad, and it does rim shots on the hardest hits (no matter *where* or *how* you hit it), then the way to do this would be to record MIDI as usual (ignoring the lack of rim shots in Logic), then select the notes with the highest velocity and move them to the rim shot trigger note. If it's dual-trigger (where you play rim-shots or cross-sticks like on an acoustic kit) then we move on... 4 - The drum brain's "kit" preset has to be set up to trigger the correct MIDI note on a rim shot that Logic will recognize. 5 - On the other side, in Logic, you have to be playing a kit that actually has a rim-shot sample with the trigger note set up in #4. Note - I won't be on this forum again for a little while (working on a project this weekend) so if you have followup questions, it might be some time; I hope others can chime in if so. But I hope that helps. Thanks, Matt
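For the single-trigger workaround in particular: inside Logic you'd select the loudest notes (e.g. with the MIDI Transform window) and reassign them, but in case it helps to see the logic spelled out, here's a sketch using the third-party Python library mido. The note numbers and threshold are hypothetical placeholders - check your actual drum map:

```python
import mido  # pip install mido

SNARE_NOTE = 38           # placeholder: the note your pad sends
RIM_SHOT_NOTE = 40        # placeholder: the note your kit's rim shot expects
VELOCITY_THRESHOLD = 110  # placeholder: tune to your playing

mid = mido.MidiFile("drum_take.mid")
for track in mid.tracks:
    pending_off = False
    for msg in track:
        if (msg.type == "note_on" and msg.note == SNARE_NOTE
                and msg.velocity >= VELOCITY_THRESHOLD):
            msg.note = RIM_SHOT_NOTE   # reassign the hardest hits
            pending_off = True
        elif msg.type == "note_off" and msg.note == SNARE_NOTE and pending_off:
            msg.note = RIM_SHOT_NOTE   # keep the note-off paired
            pending_off = False
mid.save("drum_take_rimshots.mid")
```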
  15. I think you probably know this and were referring to it, but I'll spell it out in case others see this thread: bouncing per se has no artifacts; that's what Logic does anyway when it plays back. Common sense tells us that if you Flex-Pitch a syllable, bounce the track, and then re-Flex the same syllable on the new bounced version, artifacts will accumulate. It's up to the engineer/artist to decide how much is too much. The remaining question is whether the bounced tracks acquire more artifacts in places where you don't apply further Flex-ing. I don't think so, but that probably needs more experimentation to be sure.
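One way to settle it empirically is a null test: bounce the track once with no Flex edits at all, line the bounce up sample-exactly with the original, and subtract. Here's a rough sketch using the Python soundfile library (the file names are placeholders, and this assumes the bounce isn't time-shifted relative to the original):

```python
import numpy as np
import soundfile as sf  # pip install soundfile

a, rate_a = sf.read("original.wav")
b, rate_b = sf.read("bounced.wav")
assert rate_a == rate_b, "sample rates must match"

n = min(len(a), len(b))
residual = a[:n] - b[:n]        # what's left after cancellation
peak = float(np.max(np.abs(residual)))

# A peak of 0 (or down around the 24-bit noise floor) means the
# bounce is effectively identical; anything well above that is
# a real difference worth investigating
print("residual peak:", peak,
      "(", 20 * np.log10(peak) if peak > 0 else float("-inf"), "dBFS )")
```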