
breeze

Member
  • Posts

    24
  • Joined

  • Last visited


  1. Probably a really good idea, but FWIW, I didn't; I just installed Logic 8 and there were no problems. It uses separate prefs, etc., so the Logic 7 prefs are safe, but Logic 8 does use all my custom EXS instruments from the App Support folder, and it stepped on the existing WaveBurner and Soundtrack apps with updated versions. It all installed fairly quickly and was up and running PDQ. JB
  2. I use Space Designer as a sound design tool for SFX all the time. You can load in any soundfile as the impulse, so convolving an input signal with something other than a reverb impulse response will give you all kinds of different things. Anything with the same basic shape as a reverb IR, such as a metallic crunch or a glass impact, will still feel reverb-like, but more of an SFX bed than actual reverb. Put in something like a glassy impact, and you can make a crystalline SFX bed, perhaps. Anything that's long, without a lot of transients, will yield SFX-bed type results, while something with transients and spaces in between will yield layered (delay-like) results. It's a matter of experimenting, and sending the same input signal through Space Designer while you keep loading in new IRs (use any kind of samples) will tell you a lot.
Then you can start manipulating the amplitude envelope of the IR, and this can yield all kinds of wooshes and similar gestures. Reversing the IR will give all kinds of variations on that approach - right there is a ton of new sounds to create. Then, add the filter to this, as well as the filter envelope, and you can make all kinds of strange and unusual sounds, depending on the input signal and the IR you choose. Between the amplitude and filter envelopes, you can make a whole lot of cool sounds.
If you work with a synthesized IR rather than a sampled IR, you can also manipulate the density. A high density feels like reverb, but setting a low density yields all kinds of wonderful stutters and so forth. Watch the IR display change when you set a low density and you'll get a sense of what's happening there. Along with the amplitude and filter envelopes, there's a whole world of cool sounds you can make with low density settings, and enveloping those settings.
If you change the IR start time so that you're into the IR a bit, and set a short duration, you're now convolving with a very short section of the IR (1-100 ms) and it's more like a filter (resonator). There are all kinds of timbral changes you can impart on a signal this way - great for anything from drum loop variations, to SFX on dialog, to delay returns in a pop tune, etc. You can use almost anything for the IR in this case, although different IRs (vocal note, guitar note, car-by, metallic crunch, etc.) will yield all kinds of variations on the basic idea of a resonator. With any of these ideas, you can also set up a feedback loop (you have to use auxes for this) and send the output of Space Designer back into its own input again. Plus, you can process your input signal either pre- or post-Space Designer (flange it, Spectral Gate it, etc.) and get even more interesting sounds. It's all trial and error, but once you get the hang of it, Space Designer is a really fun sound design tool. When you hit on something you like, bounce it and add it to your SFX library or whatever.
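If you want to see what's actually happening under the hood, the basic trick is just convolution with an arbitrary soundfile. Here's a rough numpy sketch I put together - not Space Designer's actual implementation, and the signal and IR are synthetic stand-ins - including the "short IR slice as resonator" idea:

```python
import numpy as np

def convolve_with_ir(signal, ir, wet=1.0, dry=0.0):
    """Convolve a dry signal with an arbitrary impulse response (IR).

    Any audio can serve as the IR, not just a sampled reverb:
    a glass impact, a metallic crunch, a reversed sample, etc.
    """
    wet_sig = np.convolve(signal, ir)
    # pad the dry signal to match the convolution tail length
    dry_sig = np.pad(signal, (0, len(ir) - 1))
    out = wet * wet_sig + dry * dry_sig
    # normalize to avoid clipping
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

sr = 44100
t = np.arange(sr) / sr
signal = np.random.randn(sr) * np.exp(-t * 8)       # a percussive noise burst
ir = np.sin(2 * np.pi * 440 * t) * np.exp(-t * 30)  # a decaying 440 Hz "note" as IR

# the "resonator" trick: convolve with only a short 50 ms slice of the IR
short_ir = ir[:int(0.05 * sr)]
out = convolve_with_ir(signal, short_ir)
```

Swapping in different IRs (or different slices of the same IR) and re-running is the offline equivalent of loading new samples into the plug-in and listening.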
  3. This isn't necessarily true. Probability would dictate that it's unlikely you'll have much of an issue, but it *is* possible that those two recordings might have a good bit of phasing (destructive interference) if the performance is the same and nothing changes about the recording setup. Anyone who's ever tried to layer two kick drum samples, only to have the resulting sound be *thinner* than either sample, has come across this situation. In fact, this is something to consider when double-tracking a guitar or a vocal. As David points out, you want to make the performance as dead-on accurate as you can for each doubling, but the magic in doubling comes from the *differences* between the two parts, not the similarities. Sometimes, folks doubling a vocal, for example, record the two vocal takes, put them up, and there *is* a lot of phasing, because the performance was very close and the recorded sound is exactly the same. The solution is to move the singer for the doubled part. Take a step 6" back and 6" to the side, and the vocalist's relationship to the mic is different enough that you'll minimize phasing issues and get a better result from doubling. For a third part, move them the other way, etc. On electric guitar doubling, move the mic, switch a pickup, swap out the mic, etc. You want enough differences in the basic timbre to avoid phasing issues. For acoustic guitar doubling, again push the player back 6", or have them twist their body position slightly in relationship to the mic, move some reflective baffles, and so forth. I had to mix a tune earlier this year where the lead vocal doubling was so dead-on, it all sounded like hard comb filtering. Unusable - so I re-recorded the double, fed it through a gently modulated delay into the studio space through a loudspeaker, brought that back into the mix, and it worked pretty well.
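The comb-filtering problem with near-identical doubles is easy to demonstrate numerically. A minimal numpy sketch (a synthetic sine standing in for the real takes): sum a signal with a copy delayed by half the sine's period, and it cancels almost completely:

```python
import numpy as np

sr = 44100
delay_samples = 50                    # ~1.13 ms offset between "takes"
f_null = sr / (2 * delay_samples)     # first comb-filter null: 441 Hz

t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * f_null * t)

# "doubling" the identical take with a tiny time offset
# (np.roll wraps around, which is fine for an exactly periodic sine)
delayed = np.roll(tone, delay_samples)
doubled = tone + delayed
# at the null frequency the delayed copy is inverted, so the sum
# is (numerically) silence - total destructive interference
```

A real double has many frequencies, so instead of silence you get a comb of deep notches across the spectrum - that thin, hollow sound.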
  4. In reference to the Logic plug-in, Phase Distortion is a synthesis technique using a delay line whose delay time is modulated by an audio (rather than sub-audio) signal, thus modulating the input signal's phase position. This distorts the signal, adding upper partials as a result of the waveform distortion. In the Logic PD plug-in, the input signal serves as its own modulator, and you can filter the modulator to control the behavior of the phase distortion. PD was the synthesis technique used in the old Casio synths from the '80s (CZ-101, etc.) and it's related to FM synthesis as well. Some delays, like the old Lexicon PCM-41 & 42, would do this as well - use the input signal as the modulation source, via an envelope follower. Not exactly the same thing (the envelope follower was slower), but it's the basic idea.
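As a rough illustration of the idea (my own minimal numpy sketch, not the Logic plug-in's actual algorithm): a fractional delay line whose delay time is driven at audio rate by the input itself warps the waveform and adds upper partials:

```python
import numpy as np

def phase_distort(x, depth_samples=8.0):
    """Crude phase distortion: a delay line whose delay time is
    modulated at audio rate by the input signal itself.

    Modulating the delay shifts each sample's phase position,
    warping the waveform and adding upper partials.
    """
    n = np.arange(len(x))
    # read position = current sample minus a signal-dependent delay;
    # x in [-1, 1] maps to a delay in [0, depth_samples]
    read = np.clip(n - depth_samples * (0.5 + 0.5 * x), 0, len(x) - 1)
    i0 = np.floor(read).astype(int)
    i1 = np.minimum(i0 + 1, len(x) - 1)
    frac = read - i0
    return (1 - frac) * x[i0] + frac * x[i1]   # linear-interp fractional delay

sr = 44100
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 220 * t)   # a pure 220 Hz input
out = phase_distort(sine)            # same pitch, but now with upper harmonics
```

An FFT of `out` shows energy appearing at harmonics of 220 Hz that the input sine didn't have - that's the "added upper partials" part.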
  5. It works - I had it running for years. If you can change banks, then you have the bank change messages configured correctly (Logic even has a "preset" of sorts for this). So if I remember correctly, the first expander card shows up in bank 7 - did you try that, or were you trying bank 5? I think 5 and/or 6 were for the RAM cards, then 7 & 8 together work for expander card #1, 9 & 10 for #2, and so forth. The hard part is getting the names. In the old days, you used Sounddiver, but I'm a little fuzzy on the current disposition of Sounddiver. I have a multi with names for the World, Orchestra, Techno, plus another with Session and Pop if you need 'em.
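For reference, a bank change on the wire is just two controller messages plus a program change. A minimal sketch of the raw bytes (plain Python; how a given expander maps bank numbers to its cards is device-specific, as above - this only shows the standard MIDI message format):

```python
def bank_program_change(channel, bank, program):
    """Return the raw MIDI bytes for a bank select + program change.

    Bank select is CC#0 (bank MSB) and CC#32 (bank LSB), followed by
    a Program Change message on the same channel.
    """
    status_cc = 0xB0 | (channel & 0x0F)   # Control Change status byte
    status_pc = 0xC0 | (channel & 0x0F)   # Program Change status byte
    msb, lsb = (bank >> 7) & 0x7F, bank & 0x7F
    return bytes([status_cc, 0x00, msb,    # CC#0  = bank MSB
                  status_cc, 0x20, lsb,    # CC#32 = bank LSB
                  status_pc, program & 0x7F])

# e.g. bank 7, program 12, MIDI channel 1:
msg = bank_program_change(channel=0, bank=7, program=12)
```

Logic's bank-change "preset" builds essentially these messages for you; the device's manual tells you which bank numbers correspond to which cards.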
  6. What I'll often do is set up the expander on a separate aux track, with the track output routed to the reverb track's input (I use aux tracks for everything). So, if I want to send a signal to the reverb with this expanded processing, I'll send it to the aux track with the expander on it. If I want to send a different signal to the same reverb without this expander, I'll route it directly to the aux track with the reverb. This way, you have the choice.
  7. Cher Effect

    Here's an SOS article about this production and the famous vocal effect. Check out the interesting "disclaimer" by SOS pointing out that these guys initially claimed it was the Digitech Talker, but then the truth came out. http://www.soundonsound.com/sos/feb99/articles/tracks661.htm As pointed out, the Pitch Correction plug-in can do this somewhat, but Auto-Tune is probably better, since between MIDI targeting and Graphical Mode you'll have more control over the results.
  8. Oh, right... I think in Ye Olde Days, the Logic oscillator was usable for simple ring modulation. It's been a while... I can't find any old manuals from 5.5. If you post a screenshot of the 5.5 test oscillator, I or someone here might remember what it looks like, and therefore how you make it happen. Off the top of my head, I can't remember... The MDA pluggies include a ring modulator, if you need a 3rd-party pluggie to try out some ideas with ring modulation. JB
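Ring modulation itself is simple enough to prototype outside any plug-in: it's just sample-by-sample multiplication of two signals, which replaces the original frequencies with their sum and difference. A numpy sketch with synthetic tones (stand-ins I made up, not tied to the Logic oscillator):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

carrier = np.sin(2 * np.pi * 440 * t)    # the input signal
modulator = np.sin(2 * np.pi * 100 * t)  # a "test oscillator" stand-in

# Ring modulation is sample-by-sample multiplication: the output
# contains the sum and difference frequencies (540 Hz and 340 Hz)
# instead of the original 440 Hz and 100 Hz tones.
ring = carrier * modulator
```

That disappearing-carrier behavior is what gives ring mod its clangorous, inharmonic character on complex inputs.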
  9. I get this a lot with my M-Audio stuff and Logic - I use ProjectMix, AudioPhile, and Ozonic. All have weirdnesses, all somewhat different. Usually, opening the M-Audio software panel and hitting the buttons does it. For example, sometimes I get exactly what you get - no audio. In the M-Audio software panel, there's audio coming in via the software return meters, but there's nothing going out the M-Audio interface outputs 1-2, even though the software return is assigned to outs 1-2. So, on the software returns I click outputs 1-2 to unassign them, then again to assign, and the audio appears at the output as it should. Other times, the ProjectMix boots up with outs 1-2 muted, but I still get audio coming out 1 (left). So just unmuting the outputs fixes it, and the next time I mute, it mutes both channels. And so forth - sometimes none of this works, and I have to change the clock on the M-Audio to external then back again, and the interface "comes back" working, etc. Just a little "smack in the head" to the interface often fixes it. If not, then it's time to reboot Logic... Make sure you have the proper M-Audio drivers for your OS, although even so, all of the above bugs do happen to me.
  10. If it's the same mix, they do. Take a mix that peaks at -3dBFS and add 3dB of gain, and it will now peak at 0dBFS and sound 3dB louder. Well, of course - it's obvious that changing the gain of a signal changes its perceived loudness at the same monitoring level. That wasn't my point. But I thought your comment referred to a mastered mix. In the mastering community, the idea of the "loudness" of a mix is an issue of the RMS level of the mix, not its peaks. I was just trying to point that out. Mastering engineers don't manipulate the "loudness" of a mix by changing the mix gain (I thought your post suggested otherwise - perhaps I misunderstood); they do it by manipulating the relationship between the peak and RMS levels. A typical pop mastering gig will peak within 0.5dB of full scale, with the mix "loudness" controlled by raising the RMS level against that peak. Just wanted to point that out. Clearly, I meant that they were the same mix before applying limiting. The statement relates to the same issue - I was pointing out the difference between peak and RMS level. That was the point of the original post, and the thrust of my example of two treatments applied to the same mix.
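The peak-vs-RMS distinction is easy to verify numerically. A rough numpy sketch (a sine standing in for a mix, and a hard clip standing in for a proper limiter - a crude stand-in, but it shows the mechanism):

```python
import numpy as np

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

sr = 44100
t = np.arange(sr) / sr
mix = 0.7 * np.sin(2 * np.pi * 100 * t)   # "mix" peaking around -3.1 dBFS

# Changing the mix gain moves peak and RMS together:
louder = mix * 10 ** (3 / 20)             # +3 dB gain: both readings rise 3 dB

# A limiter raises RMS *against* the peak (hard clip as a crude stand-in):
limited = np.clip(mix * 4, -0.7, 0.7)     # drive +12 dB, ceiling at the old peak
# roughly the same peak as `mix`, but a much higher RMS -> "louder"
```

Same peak reading, very different loudness - which is why peak meters alone say nothing about how loud a mastered mix is.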
  11. I think there's a Yahoo Group dedicated to the Ozonic - they might have files and templates there? I write my own for my Ozonic (LOVE this thing!), but they wouldn't mean much to anyone else as they're for particular tasks. I have a set of them for Logic, and just keep swapping them for what I'm doing. JB
  12. I tend to use strip silence more than this method for chopping loops, since most loops aren't really locked to a grid. But I like using this for pads and effect beds - chop it up quickly, then delete every other slice, set a little fade in/out on each slice, and it's a good rhythmic bed for the tune. Then I'll use "Audio to MIDI Groove Template" to extract the groove from whatever loops or performance I'm using, and requantize those slices from the extracted groove, so they stutter along with the same groove as the rest of the tune.
  13. Just to clarify: if those are peak levels, then I think they have no direct relationship to the "loudness" of a mix. As a mastering engineer, I can make a mix hit peaks at -3dBFS yet make it much louder than the same mix hitting peaks at -0.3dBFS. It depends on what I do with a limiter. The RMS level is the issue in the perceived loudness of the mix, so it's important to look at both in Logic's meters (nice that Logic offers this!!). If you aren't going to have the mix mastered professionally, then I'd mix it without a 2-mix limiter and aim for between -6 and -3dBFS, getting the mix sounding as good as you can, the way you want it. That's *plenty* of level, and gives you or someone else plenty of headroom for mastering. Then I'd insert the AdLimiter, set the output ceiling to -0.5dBFS, and change the gain until the output level starts to hit -0.5dBFS as well. From there, you can crank it a little harder if you want it "louder". The good thing about this approach is that later on, if you *do* want it mastered properly, you just bypass the AdLimiter and you've got your mix, the way you wanted it, before you made it "louder". So you could bounce that, with its nice headroom, and let the mastering engineer have at it.
  14. If you're looking to augment the Logic pluggies, check out the Wave Arts pluggies. They're pretty affordable, and sound quite good! Also, if you need sound design/ear candy stuff, Cycling '74's Pluggo is still a good package - it used to be "74 plugins for 74 bucks", but I think it's more like $130 these days? Still a lot of weird, fun toys for not too much $$.
  15. I use folders, which was already mentioned here. Much easier than trying to manage hundreds of tracks, which would be really cumbersome. I also use the "two Arrange windows" method for editing, and it works fine. Sometimes I'll do your method and just record them in a linear manner. Perfectly workable, especially if you're looking for a single, intact take that you'll build from, rather than creating a heavily edited foundation from all the takes. The one thing I do differently is that I'd never cut 10 takes of the basics of the tune. When I record a band, I try to make sure they're rehearsed and "on" when it's time to record, and try to hold it to three takes (more or less). If they're prepped to record, in three takes you'll have it. Plus, I'm not hesitant to commit to scrapping a take - if the take was generally not happening, I won't keep it around just because the transition to the second chorus might be a keeper. Not worth keeping - if they can play it, then it's going to be in a better take. If they can't play it, I'll call the session, and it's back to the rehearsal space with 'em!