mrwheet

  1. Yup, agree 100%. Actually, I had the realization a little while ago that human music always involves an action, even in the most tech-based processes. Somebody has to perform some kind of action, that action takes time, and that time alters the mode of listening and evaluating what's being created. So the process, which is grounded in action, influences the product in a continuous way. This is completely unlike generative music models. EDIT: Mind you, I think there's still too much emphasis on form here, which again suggests that the music is somehow explained by its form. But if that were true, we wouldn't need the seemingly infinite variety of forms we have; there would be a constant progression toward a singular "perfect" form (or at least a perfect form for each musical idea), and then we'd stop. Haha... Instead, we've seen that conventional musical "sophistication" ebbs and flows over time, adapting to the expressive interests and creative processes of the societies that produce it. But generative AI can only do form, and nothing more. Many years ago I went down a rabbit hole of wondering whether the particular character of human music could be explained by thinking of composition as a "lossy" codec, where active/embedded processes form the compressor, and the decompressor exists not in an external system but in the minds of human listeners. The idea was that generative AI may not be able to reproduce human music because too much information has been lost in the compression, and the decompressor is a piece of embedded, biological human "software". I didn't pursue the idea very far, but it's kind of intriguing to think about... (there's a toy sketch of this codec idea after the list below)
  2. Not a hobbyist here; professional composer for 20+ years... though granted, perhaps not a "media composer" in the normal sense. Vocaloids are just more disembodied horseshit. Nobody's going to google what a vocaloid thinks of anything, what they did for Christmas, or whether they love or hate the latest single of <Artist X>. All I'm saying is that people connect to artists because they're human, not strictly because of the music. The music opens a door, and the listener steps through it and tries to understand the person who made the music... THEN there's a connection between the form and the content... basically, they want to understand why the music came to be what it is. Sampling from a probability distribution may replicate the form with ever-increasing accuracy, but it can't explain it... arguably nobody can except the "fan" who builds a narrative for themselves around the song, the artist, and the connection they've made between the two. Okay, fair. I was being off-hand about it... But in that case it may just be a matter of getting very good at using the AI tools. After all, it's still going to be a gig that execs pawn off to others either way—they're certainly not going to do it themselves. And if it takes you 10 minutes with an AI tool instead of 90, then all the better for you. I guess my point is just that I see a lot of doom and gloom spread about the death of the arts by the hand of AI and I think it's wildly hyperbolic. The arts are the domain of humans not because of the form (which AI can replicate—though I think on a fine-grained level they'll continue to "miss" on that as well) but because of the social function (which AI can't touch).
  3. While I don't think generative music AI is going to go away, the legal questions are likely to get ugly pretty soon (they already are, to be honest). What will likely happen is a system for managing and brokering "legit" training-data deals between labels/publishers/rights holders and AI companies, along with a system for attribution. Having a background in generative AI myself, I know that datasets are the bread and butter of the industry: no data, no gen AI. So rights holders should technically be in charge. But regulation has to get behind them. That said, I also agree with all the statements to the effect that the human element is still essential to music. These models are necessarily "behind", in the sense that they can do nothing without human data, and their "job" is only to replicate the underlying probabilities of that data. Sure, they can interpolate in that space (see the little interpolation sketch after this list), but interpolation is a far cry from the creativity of a human composer. We'll be in the driver's seat for a very long time, imho, because we're still the ones defining what there is to be in charge of: what music is in the first place. Of course, as others have said, cookie-cutter music gigs will go away. But although those may be "a living", they're no way to live. 🙂 I also tend to think the prompt-to-music generation stuff will become very boring for people very soon. It's impressive, and can be pretty funny at times (in the case of models like Suno), but since there's no cultural connection to a living creator, no "bio", no performer, no tours, no broader social relevance/zeitgeist, there's really very little to keep people's interest long-term. My take is that ultimately people care about music because it's human, not because of what it sounds like.
  4. I suppose the argument against a left-align is that it would seldom be obvious (to the software, at least) which region should be the "target" of the alignment. So it becomes a question of which region to target, and then it cascades into a series of "what ifs" and "buts"... heh... (having done a fair bit of software dev, I know this kind of thing can become an endless rabbit hole!) There's a toy left-align sketch after this list that just picks one simple convention.
  5. @wonshu Yeah, I agree about selecting the wrong event; there are bound to be plenty of situations in which it would select something other than what was intended. This pair of key commands works pretty well, though. The funny thing is that I pretty much never use the "home" key, so it's very much outside my touch-typing repertoire!
  6. Just out of curiosity, are you on "smart" snapping and "snap regions to relative value"?
  7. Well... the more I do it the better it gets... 🤣 I think maybe it's partly my setup. Too much lateral motion to reach my trackpad, so my arm sometimes drifts a tiny bit between seeing the yellow and releasing... maybe...?? Not totally sure, but it's been a problem in Logic long enough for me to have noticed. Generally I just deal with it, though. It's obviously not too terribly awful or I would have made the leap to another pointing device by now. Heh... I think maybe it's just a small source of fatigue (however minimal) that I'd rather avoid, and key commands remove the need for precision altogether.
  8. Sorry, I wasn't clear. Yes, I do see those, and they do work pretty well. I'll definitely keep that enabled. I just find it a bit finicky with the three-finger drag (i.e., sometimes on release the region moves off the snapped point). However, my "hit rate" does improve a lot if I just one-finger click-drag for precise dragging. So that's an option.
  9. Really, wow? Maybe I'm "holding it wrong"... 🤣 The problem I have is the delay on releasing: oftentimes the region moves slightly during the release. I can release one finger before the others to "solve" that, but it's far from second nature to me. I wonder if I have some dodgy trackpad settings? EDIT: Obviously that delay-on-release is helpful, since it means you can do long drags as a series of shorter ones, but I find it can mess with precision.
  10. I think one of the problems I have with dragging is that I use the trackpad ("magic" or MacBook Pro built-in) with three-finger dragging, which is kinda crap for precise positioning. I know I should use a mouse or trackball, or just get used to click-dragging (again), but I'm working on my laptop a lot, so this use of the trackpad has become second nature to me.
  11. Yeah, I turned that on earlier, but hadn't realized what it was supposed to do... haha... It's not bad, but still requires more care than selecting regions and hitting keys.
  12. Hmm... respectfully disagree. I'm *very* regularly in situations where I'm dragging and dropping audio from various places, and it would be super helpful to be able to quickly align a bunch of new regions to the same start point. In fact, I do a *ton* of work that isn't on a normal "grid" at all (clock time, maybe, but that's not necessarily very helpful). It's by no means the majority of the time that I dig down, define the tempo and metre, and start snapping everything to absolute positions. So, 95% of the time for you, sure, but maybe not for everybody.
  13. Ah... actually, left-align would definitely be better, as this doesn't seem to work for multiple regions.
  14. ...okay, kind of roundabout, but on my keyboard I can do ctl-home to move the playhead to the start of a selected region, then select the region to align and hit the semicolon... basically, "navigate" to the start of the selected region, then "move to playhead"... I think this is equivalent to what @mbiaso was suggesting, just a different Logic version (and keyboard)...? Anyway, the old "left align" from graphics software would be more straightforward.
  15. I found your question by googling the same thing. It's actually super weird that a simple left-align isn't available. It would be super handy.
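
A toy sketch of the "lossy codec" analogy from post 1, purely illustrative and not from the original post: the Note representation, the beat grid, and the "listener style" parameters are all hypothetical stand-ins. The point it tries to make concrete is that the compressor throws away performance detail, and reconstruction depends on a decoder (the listener) that isn't stored in the compressed data.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int      # MIDI pitch number
    onset: float    # onset time in seconds
    velocity: int   # loudness, 1-127

def compress(performance, beat=0.5):
    """Lossy "compressor": keep only pitch and nearest beat index.
    Micro-timing and dynamics are discarded and cannot be recovered."""
    return [(n.pitch, round(n.onset / beat)) for n in performance]

def decompress(score, listener, beat=0.5):
    """"Decompressor": the lost detail has to be re-supplied by the
    listener/performer model; it is not in the score itself."""
    return [
        Note(pitch, idx * beat + (listener["swing"] if idx % 2 else 0.0),
             listener["velocity"])
        for pitch, idx in score
    ]

performance = [Note(60, 0.02, 96), Note(62, 0.51, 70), Note(64, 1.07, 110)]
score = compress(performance)          # [(60, 0), (62, 1), (64, 2)]

# Two different "listeners" decode the same score into different music:
print(decompress(score, {"swing": 0.00, "velocity": 64}))
print(decompress(score, {"swing": 0.08, "velocity": 100}))
```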
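A minimal sketch of what "interpolate in that space" means in post 3, assuming the usual latent-variable picture of a generative model; the vectors and the linear interpolation are illustrative only, and the decoder network is omitted.

```python
import numpy as np

def lerp(z_a, z_b, t):
    """Linear interpolation between two latent vectors."""
    return (1.0 - t) * z_a + t * z_b

# Hypothetical latent codes the model learned for two training pieces:
z_a = np.array([0.2, -1.3, 0.7])
z_b = np.array([1.1, 0.4, -0.5])

# Every interpolated point lies between codes derived from human data; the
# model fills in the space its training data spans but never steps outside it.
for t in np.linspace(0.0, 1.0, 5):
    print(f"t={t:.2f}", lerp(z_a, z_b, t))
```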
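Finally, a sketch of the "left align" feature debated in posts 4 and 12-15. The Region type and left_align function are hypothetical (this is not Logic's API); the target ambiguity raised in post 4 is resolved here by one simple convention, aligning everything to the earliest selected region.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    start: float   # start position (seconds or ticks; units don't matter here)
    length: float

def left_align(selection):
    """Move every selected region so its start matches the earliest start
    in the selection (one possible answer to "which region is the target?")."""
    if not selection:
        return
    target = min(r.start for r in selection)
    for r in selection:
        r.start = target

regions = [Region("vox", 12.0, 4.0), Region("gtr", 12.7, 3.5), Region("fx", 11.9, 2.0)]
left_align(regions)
print([(r.name, r.start) for r in regions])   # all three now start at 11.9
```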