fenglich (Member, 15 posts)

  1. Thanks. I first changed the ceiling to -1.0, which gave two clippings, and then to -1.5, which gave zero. I found this article, which discusses exactly this: https://www.productionmusiclive.com/blogs/news/mastering-tip-what-are-inter-sample-peaks-why-they-matter
  2. For a project I have the Adaptive Limiter on the Stereo Out, with an Out Ceiling of -0.3 dB. This ceiling is hit during playback. I export to M4A (AAC) with normalisation off. When running afclip on the file, however, I get plenty of reported clippings:

     afclip: "TheCreatorHasaMasterPlan.m4a" 2 ch, 48000 Hz, 'aac ' (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
     CLIPPED SAMPLES:
     SECONDS      SAMPLE        CHAN  VALUE     DECIBELS
     37.565896    1803163.00    2     1.002880  0.024976
     37.565901    1803163.25    2     1.009195  0.079504
     (…)
     total clipped samples Left   on-sample: 8    inter-sample: 18
     total clipped samples Right  on-sample: 49   inter-sample: 126

     Info on afclip is here: https://www.apple.com/apple-music/apple-digital-masters/docs/apple-digital-masters.pdf
     This is one source on clipping: https://proaudioland.com/news/the-difference-between-clipping-and-limiting/
     What I suspect is that the limiter does “soft clipping” and that afclip flags this. Still, the material I’ve consumed promotes limiting/compressing as a solution to clipping. What is wrong? Is afclip’s reporting ignorable? Are limiters a solution to clipping? (A rough oversampling sketch illustrating what “inter-sample” means follows after this list of posts.)
  3. These two features have overlapping functionality. For multitrack recordings, I’ve found takes more convenient. Yes, you can record multiple tracks with alternative tracks (AT), but with takes you can select, say, take 3 and all tracks change together. This is not the case with ATs, where you have to manually select, say, take C for each track. On the other hand, with ATs you can edit each track of the multitrack recording individually, in the manner AT allows. AT allows editing, but takes have some functionality for that as well; select the scissors symbol instead of comping and you can cut, crop, and move regions. For some scenarios this is sufficient; for others, AT has more power. So when to use what? If you’re recording multitrack, probably go with takes. If you’re recording one track and don’t need strong editing functionality, go with either. If you’re working on one track in a creative manner and need much flexibility, go with AT. That’s my conclusion. What have I missed?
  4. I record a grand piano in stereo using a matched pair of condenser microphones. In Logic I have it as two mono channels, arranged in a group (so that my takes are organised, for instance), and both sent to a bus for easy mixing. To me this works fine; I don't need to edit or do anything else per channel. However, why not use a stereo track? It seems it would simplify things a bit, but I don't know if two mono tracks have an advantage I can't spot right now. Ideas?
  5. Yes, let me try to summarise and shed light on what we've stated. Logic has various features for these problems, and we try to apply them to three areas: recording, editing/working on the material, and bouncing. As is clear, it's hard to find a perfect solution that suits all three simultaneously. With a song that has an upbeat, when recording one can:
     A. Not let the upbeat start on beat 1, but record silence before the upbeat/song starts.
     B. Record directly on the upbeat. This may not be suitable in, for instance, jazz, where there might be rhythmic variation.
     C. Possibly combine either of the two above with Logic's Count-in feature.
     Once recorded, one can:
     A. Remove any initial silence.
     B. If using a musical grid for this, model it with time signature changes. This has limitations, and it's probably better to switch to a score if this is important.
     Bouncing:
     A. If you use empty bars rather than Logic's Count-in feature, the most convenient way to bounce is to select all regions and then bounce.
     B. If your regions start at the first position, for example because you've left-trimmed them or recorded directly on beat one, bounce as usual.
     The problem, though, is that in my experience the workflow isn't as described above. Maybe one records some takes, goes through mixing/light mastering, and then goes back to recording again. In my case, I have some instruments recorded, while the main instrument has the upbeat, and it has several takes. This means I cannot edit and move the regions to adapt for bouncing, because then I can’t record more takes. Maybe this is unavoidable.
  6. Yes, that gets me a lot further. The problem now is that the bar count is off by +1; upbeats are never counted. Maybe I have the wrong expectations or am using it incorrectly, but on the other hand, bar counting makes sense if it's done correctly. Can I make my upbeat bar start at "0" somehow? It's also discussed here: viewtopic.php?t=101255
  7. I have a song in 4/4 with a musical grid, and it has an upbeat of a half note (two beats). Currently I have silence at the beginning of my song, and this makes bouncing cumbersome (yes, setting locators, for instance, is a workaround). I imagine a good solution would be to change the first measure to consist of only two beats, but I can't find a way to do that. Like the post here: viewtopic.php?t=105669, I have the problem that I can't find the start marker; it's gone. I have the end marker, though. David Nahmani writes "Sometimes the song start marker does not show up." in that thread, which would preferably be solved somehow :/ If I start a new project, it's not there either. (I have also read "Set the start and end points for a Logic Pro project" in the documentation.) What is the best solution? How does one deal with an upbeat, and how does one get a start marker? Using Logic Pro 10.6.3.
  8. I recorded two audio tracks, replayed and saved, one region on each track. Afterwards I found two files in the Trash, “Piano L#02.aif” and the corresponding R file. When opened in Apple Music or VLC, they are reported as being 4:28:25 hours long, and the beginning sounds like part of the regions I recorded. I recorded about 7 minutes; the files in the Trash are about 3.3 MB. What has happened? I see no use for these files, so I think they are safe to delete, and I see no problem with my recording. But what did I press/do wrong? Why are these in the Trash? Where did they come from?
  9. The Save As problem is allegedly a bug in Logic. See this thread: https://discussions.apple.com/thread/253007468
  10. I think I've found the answer to this. See this video: In short, you first have to delete the file references in Logic's file browser, by choosing Edit -> Select Unused in the file browser and then Delete, to remove these ("bogus") references in Logic. After that, Clean Up in Project Management will delete the actual files. So Logic doesn't consider a file unused until its reference has been deleted in the file browser.
  11. Further: in my new project I deleted all regions and copied in the regions for my tracks for one particular song from the original project. Now I want to delete the files for the regions I removed, so I select Project Management -> Clean Up... and check Delete Unused Files. I get no dialog after that, and it hasn't deleted any files. However, if I afterwards select Edit -> Select Unused in the file browser, it selects 176 files. Contradictory. Maybe this is related? Are they in some sense considered "used" because the original project is using them? That wouldn't make sense, since they're copies: new files. This makes me wonder whether maybe I shouldn't delete them. In other words, I'm having the same problem brought up 4 years ago here: viewtopic.php?t=130353
  12. The original project is a folder, and I'm saving as a folder.
  13. Sorry, I don't get this. Still today, when I choose Save As and uncheck copying of audio files, why does Logic still copy the audio files? And how do I prevent it?
  14. I mix multiple songs for jazz ensembles. The musicians and instruments stay the same, but the songs differ. The current project (16 channels) contains four songs, recorded in the same Logic project one after the other. The question is how to proceed now in the most practical manner. One approach is to mix all songs in one project; another is to split the project into one per song and mix them individually. The first alternative means the mixing is uniform for all songs, which makes individual song adjustment complicated at best. The latter alternative means all mixing/setup must be done individually for each song (which is messy; say the chain for the bass is the same across all songs). I don't know which alternative is best. One way is to do as much as possible of what is common among the songs (setup and organisation, those things) *before* copying/splitting into individual projects. After the split, one will have to resort to tedious copy/paste and manual syncing between the projects. I've also run into this problem with a solo piano project: I have multiple songs, which feel most natural to keep as one project per song (with takes), but then you get the problem that the setup is the same across projects/songs. I have a template for it, but that doesn't solve syncing afterwards. Any thoughts?
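
To illustrate what "inter-sample" means in post 2: the on-sample ceiling set by the limiter does not bound the waveform reconstructed between samples, and lossy AAC encoding can push those reconstructed peaks above 0 dBFS. Below is a minimal sketch, not what afclip or Logic actually does, that estimates the peak between samples by oversampling. The file name, the 4x oversampling factor, and the numpy/scipy/soundfile libraries are my assumptions, and it presumes the AAC bounce has first been decoded to a WAV/AIFF copy, since soundfile cannot read AAC directly.

    # Minimal sketch: compare the on-sample peak with an oversampled
    # (approximate "true"/inter-sample) peak, per channel.
    # Assumes numpy, scipy, and soundfile are installed.
    import numpy as np
    import soundfile as sf
    from scipy.signal import resample_poly

    def report_peaks(path, oversample=4):
        audio, rate = sf.read(path, always_2d=True)   # shape: (frames, channels)
        for ch in range(audio.shape[1]):
            x = audio[:, ch]
            sample_peak = np.max(np.abs(x))
            # Upsampling approximates the reconstructed waveform between samples;
            # its peak can exceed the on-sample peak even after limiting, which
            # is the kind of event afclip reports as an inter-sample clip.
            inter_peak = np.max(np.abs(resample_poly(x, oversample, 1)))
            print(f"ch {ch + 1}: sample peak {20 * np.log10(sample_peak):.2f} dBFS, "
                  f"approx. true peak {20 * np.log10(inter_peak):.2f} dBFS")

    report_peaks("TheCreatorHasaMasterPlan.wav")  # hypothetical decoded copy of the M4A

If the approximate true peak lands above 0 dBFS while the sample peak stays at the limiter ceiling, that is consistent with the inter-sample counts afclip printed; lowering the ceiling (as in post 1) leaves more headroom for the reconstructed waveform.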