IMRECS
  1. Hi, my project is a cross between live instruments and electronics, and we are getting our productions ready for live performance. We will be using backing tracks of synth bass, various synth pads, drones, lots of drum samples (both real and machine), some backing guitars, etc. I'm starting to prepare the productions for bouncing backing tracks for the live show, as well as for the mixing phase of the studio versions.

     My plan is to mix everything in mono for the live tracks and get things sounding as good as possible in mono before moving to stereo for the final studio mixes. So I will start with everything centered and a mono plugin on the master bus, do my balances, EQ, saturation, compression, and mono reverbs/delays, and get things sounding as good as possible. At that point I will bounce down the backing tracks for the live show. Once that's done, I'll remove the mono plugin on the master, start panning (I tend to like LCR-style mixes), and apply stereo reverbs, delays, and any other stereo effects I want to use to enhance the soundstage (in some cases replacing mono effects).

     This brings me to a couple of questions about pan laws and stereo tracks. I'm using the -3 dB compensated pan law, though I'm not sure whether I should check the option to apply it to pan knobs on stereo tracks. Continuing with that, I already have a number of stereo tracks and have to figure out how best to handle them. Most of them are drum samples that were recorded with stereo overheads or have stereo ambience embedded in the track; in some cases there are synth patches with lots of stereo information. For the drum tracks with stereo info, is it better to just pick a side (L or R) and treat it like a mono track, or to sum the L and R sides? As long as they aren't badly out of phase, is there a recommended way to optimize stereo tracks for mono mixes? A lot of my snare samples are from BFD, where I have access to recorded stereo overheads, room mics, and ambient mics, so I can either choose L or R, or sum them. Same question for the synth patches: choose a side, or sum? Is the right idea simply to compare the two methods and choose the stronger-sounding version, if there is one?

     Once the live mix is done and I go back to stereo, I can take the samples where I chose L or R, or that I summed, and make them stereo again, or just leave them in mono. But if I go back to the stereo version and want to pan them, should I use Logic's Direction Mixer to pan? For instance, a snare with stereo information panned slightly left, or two percussion samples with stereo information panned hard left and right? What's the general idea here? I assume that if I hard-pan a stereo track with the Direction Mixer, it's the same as summing that track to mono and panning it hard with the regular pan knob? Does the Direction Mixer follow a pan law? I just want to make sure my mixes move as seamlessly as possible from mono (for the live backing tracks) back into stereo (for the recordings).
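(In case the arithmetic helps: below is a small numpy sketch of the general equal-power, -3 dB-at-center pan idea and of summing a stereo pair to mono with a rough phase-correlation check. It is my own illustration only — the names, the toy signal, and the choice of -3 dB summing compensation are assumptions, not how Logic necessarily implements its compensated pan law or the Direction Mixer.)

```python
import numpy as np

def equal_power_pan(mono, position):
    """Equal-power pan: position -1.0 (hard L) .. 0.0 (center) .. +1.0 (hard R).
    At center both gains are cos(pi/4) ~= 0.707, i.e. -3 dB per side."""
    theta = (position + 1.0) * np.pi / 4.0          # map [-1, 1] -> [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono

def sum_to_mono(left, right):
    """Sum a stereo pair to mono with -3 dB of compensation (one common choice;
    -6 dB, i.e. (L+R)/2, is the other) so the center image stays at a sane level."""
    return (left + right) / np.sqrt(2.0)

def correlation(left, right):
    """Rough phase check: +1 = identical, 0 = unrelated, -1 = badly out of phase.
    Strongly negative values mean summing L+R will hollow out the sound, and
    picking one side is probably the safer choice."""
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    return float(np.sum(left * right) / denom) if denom > 0 else 0.0

# toy example: a 1 kHz "overhead" pair with the right side offset by 5 samples
sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 1000 * t)
right = np.roll(left, 5)
print("correlation:", round(correlation(left, right), 3))
mono = sum_to_mono(left, right)
L, R = equal_power_pan(mono, position=-0.3)         # summed mono panned slightly left
```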
  2. I also need to know the best way to loop something that is just a drone, with no obvious start or end point. If I record a drone (let's say an EBow guitar) for 16 bars but need it to run for 64 bars, how do I best cut/loop/edit it so it's seamless? Is this possible?
  3. I need some help accomplishing a few tasks. There are a handful of tracks we are recording where we want to take one cycle of a riff/phrase while tracking and copy and paste it over and over for any number of bars. What we do is track the part making sure to get one cycle that starts from the beginning (so no notes are ringing out or carrying over), one that is in the middle, and one that includes the ending of the entire loop cycle. So: Beginning + Middle (copied and pasted over and over) + End, three regions arranged as such.

     What we're wondering is how best to go about this to avoid clicks while connecting the ends to the middle loops and seamlessly looping those middle regions. What I've been trying to do is cut them roughly at the loop length, then shorten or lengthen the regions to try to land them on some sort of zero crossing, or on a point where the waveforms look lined up at maximum zoom, then apply an equal-power crossfade with a value of 1 to make sure there is no click. This is proving to be very time consuming, and since the middle section needs to be copied and pasted over and over, and our attempts to find the zero crossing/best connection point involve cuts that change the length of the loop so it no longer lands exactly on a tick (snap) mark, the copy/pasting ends up messy.

     Does my dilemma and goal make sense? If so, could someone explain the most efficient method to take single phrases/riffs of audio and loop them seamlessly? Again, since the riffs have notes ringing out once they've started, I need three versions: a beginning, a middle (looped indefinitely), and an ending (usually the final note held out). THANKS!
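(To make the goal in the last two posts concrete, here is a minimal numpy sketch of joining repeated copies of a region with an equal-power crossfade at every seam. It is my own illustration — the function names, the toy drone, and the 50 ms fade length are made up, and it is not a claim about how Logic renders its own crossfades.)

```python
import numpy as np

def equal_power_crossfade_loop(region, repeats, fade_samples):
    """Tile `region` end-to-end `repeats` times, overlapping each seam by
    `fade_samples` and blending with an equal-power (sin/cos) crossfade so
    there is no click and no level dip at the junction."""
    fade_in = np.sin(np.linspace(0, np.pi / 2, fade_samples))   # 0 -> 1
    fade_out = np.cos(np.linspace(0, np.pi / 2, fade_samples))  # 1 -> 0

    out = region.copy()
    for _ in range(repeats - 1):
        head = region.copy()
        # blend the tail of what we have so far with the head of the next copy
        out[-fade_samples:] = out[-fade_samples:] * fade_out + head[:fade_samples] * fade_in
        out = np.concatenate([out, head[fade_samples:]])
    return out

# toy example: a 2-second "drone" repeated 4x with a 50 ms crossfade at each seam
sr = 44100
t = np.arange(2 * sr) / sr
drone = 0.5 * np.sin(2 * np.pi * 110 * t)
looped = equal_power_crossfade_loop(drone, repeats=4, fade_samples=int(0.05 * sr))

# note: each join overlaps by fade_samples, so the result is slightly shorter than
# four butt-joined copies — the same reason copies stop landing exactly on the grid
# once the region boundaries have been nudged to zero crossings.
```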
  4. Was this bug fixed in the new Logic X update? Just checking. Thanks!
  5. Thanks. The only statuses available are Note and Rel Vel. Safe to assume there's no CC information, then?
  6. I don't see anything labeled CC2, or any CC for that matter. Only Position | Status | Ch | Num | Val | Length/Info. Everything is Ch 1, and nothing shows up under the Controller button. Does that mean there is no CC information stored in the MIDI?
  7. Where would I see CC2 information? I don't see anything like that. Thank you!
  8. I am having an issue with my drum sequencer program. When I re-open a session in Logic, many of the sequencer's samples/pads are randomly muted by default. The sequencer runs as a multi-output software instrument: I load in the samples, route them to their respective outputs, and then trigger them via a MIDI file in the Arrange window. I asked the developer about the problem and he suggested that CC2 on various MIDI channels might be stored in my MIDI. I'm guessing CC2 controls mute on/off in the sequencer? Anyway, I didn't find any CC2 in the MIDI, though I'm not sure I checked this thoroughly enough. Also, I do not have any hardware controllers connected. How would I go about checking this? Any ideas what the problem could be if that's not it?
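(One way to double-check outside the Event List: export the MIDI region to a .mid file and scan it with a script. The sketch below uses the third-party mido library — an assumption on my part, any MIDI-file reader would do — and the file name is a placeholder.)

```python
# pip install mido
import mido
from collections import Counter

def find_cc2(path):
    """Scan a Standard MIDI File and report every CC2 (breath controller)
    message, grouped by track and MIDI channel."""
    found = Counter()
    for i, track in enumerate(mido.MidiFile(path).tracks):
        for msg in track:
            if msg.type == "control_change" and msg.control == 2:
                found[(i, msg.channel)] += 1
                print(f"track {i}  ch {msg.channel + 1}  CC2 value {msg.value}")
    if not found:
        print("no CC2 messages found")
    return found

find_cc2("drum_sequencer_part.mid")   # placeholder file name
```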
  9. Can you explain this cheat? I'm not sure how to send to an "aux" from the audio track. Solo-safing Aux 2 doesn't work, because I have this problem with multiple auxes and I can't solo-safe everything. Any other suggestions?
  10. If I have a track that I send to a bus which is returned on more than one aux, and I solo the track, only the track and the first aux that the bus returns on will play. For example, I send the snare to Bus 1, Bus 1 is returned on Aux 1 and Aux 2, and when I solo the snare, Aux 2 does not play. Is there a way to have all of the auxes returning that bus play when the signal I am sending via that bus is soloed? Thanks.
  11. Yes, that's what I am doing. The issue is: let's say my bass has a linear-phase EQ somewhere in its chain. The linear-phase EQ has fairly substantial latency, so while PDC aligns everything at the outputs, the bus feeding the side-chain input does not get PDC-aligned. So if I put a side-chain compressor or dynamic EQ after the linear-phase EQ (or after another plugin with substantial latency), the trigger is not aligned with the audible audio coming off the bus's aux channel outputs. Most typical plugins only cause small latencies, but if any chain or plugin causes a lot of latency, the side-chain starts to sound like it's happening way off. For small latencies, pulling the ghost trigger back a millisecond or two would be a fine solution, just listening to the transients to find a sweet spot.
  12. Can anyone clarify this? To sum up my question: it is suggested that when doing parallel processing such as compression or saturation, where the parallel signal is very similar to the original signal, a linear-phase EQ should be used on the parallel chain for any EQ moves. A kick, for example, is sent via Bus 1 to Aux 1 (kick bus with a regular EQ) and Aux 2 (parallel compression plus a linear-phase EQ). Since Aux 1 is being EQ'd with a regular EQ, there will be phase distortion against Aux 2 regardless of whether you use a linear-phase EQ on Aux 2. The only way a linear-phase EQ would truly prevent phase distortion would be if you did not EQ or process Aux 1 at all, or if you used a send to feed the already EQ'd and processed signal from Aux 1 to Aux 2 via a second bus. Is that correct?
  13. It seems the only solution is to side-chain from a signal before it hits any plugins with substantial latency (FabFilter Pro-Q in natural or linear phase mode, DMG Equilibrium in linear, minimum, or analogue phase mode, etc.). With standard plugins that introduce very small latencies, the side-chain can be accurate enough, and probably slightly more accurate still by using a ghost trigger and pulling it back a few milliseconds to compensate for any small latency in the plugin chain. So I guess the lesson is to side-chain early on and put plugins with longer latencies as late in the chain as possible, though I would still love to know if there is any other way to work around this. Also, is there any way to see the reported latency of a plugin, or the total latency of a specific chain of plugins, in Logic? For instance, if side-chaining bass to kick, it would be nice to know how many milliseconds of PDC are being applied to the bass by the time I drop in my compressor, so that I could pull the ghost trigger back by the same amount.
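(The conversion itself is just samples divided by sample rate. Here is a tiny sketch of the arithmetic — the 2048-sample latency figure and the 960 PPQ tick resolution are only example assumptions, not a claim about what any particular plugin or Logic actually reports.)

```python
def latency_ms(samples, sample_rate=44100):
    """Convert a plugin chain's reported latency in samples to milliseconds."""
    return samples / sample_rate * 1000.0

def latency_ticks(samples, sample_rate=44100, bpm=120, ppq=960):
    """The same latency expressed in sequencer ticks, handy for nudging a
    ghost-trigger region earlier by exactly the chain's delay."""
    seconds = samples / sample_rate
    return seconds * (bpm / 60.0) * ppq

# example: a hypothetical linear-phase EQ reporting 2048 samples of latency
print(round(latency_ms(2048), 1), "ms")        # ~46.4 ms at 44.1 kHz
print(round(latency_ticks(2048)), "ticks")     # ~89 ticks at 120 BPM, 960 PPQ

# to line the side-chain trigger up, the ghost region would be pulled earlier
# by that amount (or the audible chain delayed by the same amount).
```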
  14. The kick is already going through a bus/aux channel. I don't send anything directly to the master output other than four submix aux channels (drums, bass, guitars, synths). Everything is latency-compensated at the outputs, and ping tests don't reveal any audible comb filtering or latency from any bus or routing scheme, only when trying to side-chain via a bus.