ct1

Member · Posts: 108 · Joined · Last visited


  1. Thank you for this extended answer, the details of which I cannot always understand because of my limited knowledge. The varied specs arbitrarily applied by streaming audio/video platforms make the process a bit of a headache, though.
  2. I know this. The problem is that, as we've all seen on YT and elsewhere, audio files are generally uploaded way louder than −13 to −15. Go to YT and check the stats of any "professional" or "big name" video (including the most recent ones): you will see a content loudness value of +4, +5, sometimes +6 dB, which shows that professional studios DON'T WANT to upload within YT's guidelines. Until now, doing what they do seemed to be a winning loudness war. Not so much now, given the experiment mentioned above? Hence my question...
  3. Hello. I was with a friend of mine last week and he ran a curious experiment in front of me. He mastered a track and compressed/limited it enough to get it as loud as −6/−7 LUFS. Nothing unusual. He uploaded it to YT and his video was ready in no time. Right-clicking the video showed the usual value as "content loudness = 8 dB", meaning his track was indeed −14 + 8 = −6. He then bounced the same song (compressed the same way) but at −14 LUFS and uploaded it too. This time YT said "content loudness = 0", which again is normal. I was amazed to hear NO DIFFERENCE WHATSOEVER in volume between the two tracks as played back by YT. Whether on loud parts or quiet ones: the exact same level from the speakers. The −14 track was of slightly better sonic quality, probably because the audio file had been left "more untouched" than the −6 one. Then my friend downloaded the compressed files from YT, and each was its own: YT's −6 was indeed a −6 LUFS file and YT's −14 was a −14 LUFS file. Many of you may be able to explain this, but I can't. If YT doesn't touch the LUFS level of the file, does YT's player do the trick and apply −x dB of constant offset (with no compression to speak of, since louds and quiets sound the same between −6 and −14?) to any track YT analyzes as having content loudness = x? Of course, the consequential question is: does uploading files at −14 LUFS or below make any sense now? Thank you for reading. Waiting for your answers, since I believe I may be missing something.
  4. I wish this could help me, but where is this: "Under controller, 'Snap To' now defaults to a time value, whereas before I am sure it used to be 'none'"? Thank you.
  5. Hello guys. Simple problem: a guy playing the piano and singing while being filmed. No loudspeakers allowed. I used to use headphones for this, but the cable and the earcups themselves just look ungainly on the video. I'm looking for something like earbuds, if that's possible. The audio interface is Universal Audio hardware. Thank you very much.
  6. Thanks to both of you for responding. Yeah, cinningbao, that's what I mean, and thank you for this workaround, which I wish we could avoid, because I find myself wandering on the Legacy side of things more often than not...
  7. As an example: Legacy > Jam Pack 1 > Guitars > Delicate Echoes. If I click on the channel strip's plug-in, I can only set Volume, Release and Cutoff. Say I want to change the pitch bend range: how can I do that? Thank you very much. (I know that this Legacy content can also be found in the non-Legacy content, but the channel strips are sometimes different.)
  8. @Vallisoftware: thank you for this lesson. But pardon my ignorance: since Logic's online user guide only covers External Instrument tracks when it comes to external hardware, I have no idea how you connect the said modulator to a given track's pan knob. If I try, "Learn plug-in parameter" tells me I'm not trying to connect a plug-in parameter. I don't even know what to set in the External Instrument plug-in, since I have no external hardware whatsoever, and it doesn't seem to be any kind of internal router... Or do you mean that any Logic instrument plug-in will recognize CC 10 (pan) anyway, with no need for an External Instrument plug-in? So, can you tell me where I'm wrong? Thank you very much.
  9. And it... works. Thank you David. Though, what did my previous setup do exactly? Apply, at any moment, the current pan value from the modulator to the MIDI note being played at that moment (which would explain why the sustain stuck to it, since sustain = no new note event)?
  10. Hello. I had yet to use a MIDI plug-in in Logic and can't seem to pull it off. Simple task: assign an LFO modulator to a track's pan (so it alternates between L and R). I choose the waveform, sync the rate (1/16 note), set To to "10 Panorama" and play. If notes are placed on whole notes, I can understand hearing every note-on at the same pan position, but I should at least hear the pan travel during the sustain. I don't... What am I doing wrong? Thank you very much.
  11. Thanks for this answer. That's what I was guessing. But it worked for the bass drum, and it worked fine for all the drum parts of another song. Is there any workaround for when it doesn't?
  12. Hello. I know Flex Pitch has a reputation for being buggy. I'm currently working with .wav files, most of which make "Create MIDI track from flex pitch data" crash Logic. Not all of them. I decided to quit my session, create another one, import those dubious files and try again, and bam: still the same, i.e. the files that work still do and those that don't still don't. Tinkering with the buffer size doesn't help. Even joining an audio file that successfully translates to MIDI with one that doesn't creates a MIDI track containing all the notes from the former and none from the latter. So my question is: does this particular function fail when Logic doesn't recognize the pitches to be converted to MIDI? (My files are all drum files with one instrument each, which makes this assumption a bit weird, though.) Thank you for any help, since I don't know what to do now.
  13. Nice one. Thank you fuzzfilth.
  14. Hello. This may sound like a newbie question, but how can I visualize the names/durations of the audio regions of one particular track? Because of a transfer of audio files between two sessions with different tempo maps, I need the exact absolute time stamps of the audio regions of a track in the first session to place them correctly in the second one. But I can only view the names/durations of ALL the tracks in the session. I thought I could filter by track number but can't find how, and clicking on the Track column doesn't reorder the events, which seem stuck in time order... So yes, I can do what I need to, but it is quite inconvenient... Is there a better method? Thank you very much.
  15. Yes I have. But there seems to be a problem here. When I convert track volume data to region volume data, not all volume control points are erased from the track: all points that fall between regions remain, as well as control points reflecting the current value at each region's beginning. That draws horizontal lines at non-zero values inside regions, which I assume add up with the new region volume points. And if I undo, I see that, for instance, +2.4 and −1.3 track volume points were converted to +2.4 and −1.3 region volume points, which means those new, unchanged values are now tampered with by track volume points that remain and that I expected to be completely cleared. This is manageable with large regions, not with a string of short ones that I first have to join into a single file. But even then, after erasing the remaining track volume points, delayed playback doesn't seem to perfectly follow the new region volume points. What's the truth in all this, and the correct way to do things? Thank you David.
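The arithmetic behind the YouTube experiment in post 3 can be sketched as follows. This is a minimal sketch of how the normalization appears to behave, inferred from that post and from YouTube's stated −14 LUFS target; YouTube's actual pipeline is not public, and the function names are my own.

```python
# Sketch of YouTube-style loudness normalization, as inferred from the
# experiment in post 3. Assumptions: the player applies a constant gain
# offset (no compression), turns hot uploads down to -14 LUFS, and does
# not turn quiet uploads up.

YOUTUBE_TARGET_LUFS = -14.0  # YouTube's published reference level

def content_loudness(track_lufs):
    """'Content loudness' as shown in Stats for nerds:
    how far above the -14 LUFS target the upload measures."""
    return track_lufs - YOUTUBE_TARGET_LUFS

def playback_gain_db(track_lufs):
    """Constant offset the player appears to apply: louder-than-target
    uploads are attenuated; quieter ones are left untouched."""
    return min(0.0, YOUTUBE_TARGET_LUFS - track_lufs)

# The experiment: a -6 LUFS master and a -14 LUFS master of the same song.
print(content_loudness(-6.0))   # 8.0  -> shown as "content loudness 8 dB"
print(playback_gain_db(-6.0))   # -8.0 -> player turns it down by 8 dB
print(content_loudness(-14.0))  # 0.0  -> "content loudness 0 dB"
print(playback_gain_db(-14.0))  # 0.0  -> played as-is
```

Both masters therefore land at −14 LUFS on playback (−6 − 8 and −14 − 0), which would explain why the two uploads sounded identical in level while the downloaded files kept their original loudness.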