kiotozane · Member · 114 posts

  1. Is there a trick or clever setup that would allow preparing a batch of multiple Autosampler grabs and letting them run unsupervised overnight? I'm about to sample a lot of material, each instrument taking about 20 minutes, and multiply that by round robins, it's gonna take days... It would be superb to set it all up, hit "go", and let it work through everything overnight without me having to activate the next one. Or, since the sampling isn't particularly CPU costly, some way to run them in parallel so I could grab handfuls at a time. I'm 95% guessing there isn't a way to do this, but thought I'd check with you brilliant, beautiful minds...
  2. Hey, I've had the same problem with Vital. It's annoying, but you can quickly "fix" it by double-clicking on the MIDI *OUT* display, not MIDI In. That might be what you meant, but in case you've been clicking on MIDI In, try double-clicking on MIDI Out instead; that worked for me for killing stuck notes. (I assume that sends the panic message paulcristo also mentions, so you could possibly also map that to a key command if it works.)
  3. I'll also just add a super mild me-too on this. I'm on a Mac Mini / Big Sur, latest LPX, running under Rosetta. It is rare, but it happens every now and then, maybe once per 8-hour workday? Enough that I recognized it when I saw your post. Suddenly the region/automation/fader/whatever I try to work with in the arrange area is unresponsive; I have to click and do something ELSE in the GUI, undo that, and then I can do it. I'm using BetterTouchTool to assist me in mousing. That has sometimes created problems with other apps, so I'm noting it here in case it's something we have in common. I don't think I would be able to make a video of this, because the video would just show the mouse gliding or hovering over something while I'm clicking in reality. It's like the mouse click does not register; it does not select/grab/focus on the thing I click on. But clicking and operating something ELSE wakes up the thing that didn't respond. (The problem is so microscopic for me I haven't bothered to report it, it feels like I have too little to go on.)
  4. Under Logic's own Preferences -> Display you can set this to whatever you want. LPX can follow the system mode or you can override system mode to always light or always dark. When you are either in system dark mode or you have set LPX to dark mode by preference, then all windows and dialogues are dark. Just in time for Halloween.
  5. Of course! Don't know why I came up with that extra rename step
  6. Drag all the files onto a track in LPX and apply an Inspector fade-out to all the regions, whatever works for you. Export all these regions as new files, with each filename being the original filename (the region name) plus "FADED" added to the end. Select all the original files in Finder, right-click, choose Rename, and add UNFADED or whatever to the filename. Then select all the new LPX-exported files in Finder, use the Finder Rename tool again, and remove the FADED text from the filenames. Voila, you now have all the files with fade-outs under the same filenames as before.
  7. Backwards compatibility: I was wondering about this because I have to keep some systems on Mojave, so I tested it; it could be useful for others to know. If you save a project with 10.7, it opens seemingly okay in 10.6.3, but with a warning alert upon loading that "this project was created by a newer Logic Pro, this may cause problems, please update Logic Pro". For this test I did not do anything except open the project and hit Save on a system with Big Sur / 10.7, then opened the same project on a system with Mojave / 10.6.3. I don't know what happens if you work with new features and save the project.
  8. Hello! I would say this is "expected"; did you have wildly lower CPU use before? I've always seen a strong increase in CPU use with Flex. I just got a maxed-out Mac Mini and tried whether they had optimized it there, loading a single looped sample into Quick Sampler at max polyphony 64, with all notes playing: regular mode, CPU use approx 10% of one core; Flex mode, approx 50% of one core, so a ~5x increase. Btw, I also quickly tried Alchemy in granular mode and got the same result as Flex: 25% core use at the max 32-voice polyphony, which would be 50% at 64. So to me, Flex uses the same CPU as Alchemy granular, which is also what I would expect. My experience from many, many samplers over the years is that shifting to any kind of realtime timestretching algorithm always comes with a sizable CPU cost over traditional faster/slower playback. Sampler and Quick Sampler are, in my experience, kinda modest here (a 5x increase); most other samplers show around a 10x and sometimes up to a 20x increase (hello Kontakt!) in CPU use when shifting from regular faster/slower playback, and they also quite often restrict the polyphony hard, which LPX doesn't. PS: these were highly informal tests, just eyeballing the meters.
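The back-of-the-envelope numbers in that comparison work out like this (all figures are the post's rough, eyeballed meter readings, not measurements):

```python
# Informal per-core CPU figures from the test (eyeballed meters)
regular_pct = 10        # Quick Sampler, regular playback, 64 voices
flex_pct = 50           # Quick Sampler, Flex mode, 64 voices
alchemy_pct_at_32 = 25  # Alchemy granular, 32-voice polyphony

flex_cost = flex_pct / regular_pct               # ~5x increase with Flex
alchemy_pct_at_64 = alchemy_pct_at_32 * 64 / 32  # linear extrapolation to 64 voices
print(flex_cost, alchemy_pct_at_64)              # 5.0 50.0
```

The 64-voice Alchemy figure is extrapolated, assuming CPU use scales roughly linearly with voice count.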
  9. THE GODS ARE LISTENING! And benevolently providing in 10.5. Thank you team Logic, I am super grateful, this is so helpful in huge arrangements! Arigatou gozaimasu!
  10. Good question and nice thread. Learning that an increased rate improves latency, wow, this trick was new to me, and of course it makes sense. Great, this is why I love lurking on this forum, and I love it when people ask questions like this and we learn from others' observations and processes. Here's my experience and why I record at 96/24: I work as a media artist with music and sound design (film, TV, theatre, art, games), both in a traditional sense (multitrack mixes) and the weird art sense (complex interactive installations with responsive live sound). I prefer to sample and record at 96/24 primarily for that "capture as many details as the technology can at the moment" philosophy, and I always save the original files as a kind of "raw" input, so I can always go back to a higher-quality version if needed. My recording is an even mix of foley, field, voice talent, and instruments, but 95% of it is recorded "solo", in the field or the studio, patiently one source at a time. So I can focus on source, mic, position, getting the sound right, making sure there are details in the first place to justify the high rate. I totally agree with Fuzzfilth, this is way more important than sample rate. And being ready at the right MOMENT is also critical; some of my most cherished and valuable sounds were grabbed with the iPhone's regular camera app on a whim. It's there, it can always record within 2 seconds. One thing I have found is that primary edits (noise removal, EQ, etc.) are often way better done at 96 kHz, before resampling to production rate. I never shift to 16 bit, stopped doing that years ago; everywhere and everything I deal with accepts 24 bit. If the 96 kHz source material is going directly into a large linear mix, a la "orchestra/band recording", it is usually (batch) downsampled to 48 or 44 depending on production platform, either by myself in a batch ahead of time or automatically upon import by the app...
Just like skijumptoes mentioned above, in film, video, and for graphical assets I shoot or create at 4K and then edit and publish in HD. This is mostly to save CPU, and based on the experience that the increase in quality from 44/24 to 96/24 (or HD to 4K) is lost in 95% of performance and delivery settings. It's just more practical and efficient to work and deliver in 44 (or 48 for film/TV). I think, if my computers could handle my rather large arrangements at 96 kHz in realtime, I probably would stay at 96 kHz as "far" down the production line as possible, perhaps even for performance. I'd much rather have the processing power than the hertz right now. But whenever that general shift to 96k and 4K platforms happens, and my systems can chew it, I can just flick a switch and all my stuff is good to go. Some experience with 96k in the real world: I have a few times made complex (live, responsive, sample-based) performances running in realtime at 96 kHz for huge theatre setups, with high-end sound systems very well tuned in large halls or venues. It's so beautiful!! There is, to me, a slight, nuanced improvement: everything is more glossy and sparkly, particularly in the transient details in complex parts, you can hear everything, it never goes to mush, and in the bass, tones are clear and body punches are "laserlike", less mess there too. It's like everything has more FOCUS (*). But this is highly subjective and woo-ish, I know, and it's only in a few selected places I could hear this. I love this "superfine" sound, but I'm also aware that of an audience of 1000, only a couple will notice, they will both notice something different, and zero will really care. More importantly, my heart rate watching the CPU meters during the show is inversely related to the hertz rate, so back to 44 we go.
(* = I only know the difference because, for some of these, I had to give up and resample everything down to 44 kHz and run the project at 44 to avoid buffer glitches, and then we all felt something "disappear" in the sound, it had less "sparkle". So I have A/B'ed it, with my own material, and for me there was a difference. YMMV.) But, a second usage scenario: a lot of these 96 kHz samples are used as source material for my own multisampled sampler instruments, and then I ALWAYS prefer using the 96 kHz versions, all the way, particularly since a lot of my sound and instrument design works with slowing down or speeding up material in realtime, and to me there is quite a difference in sound between a 96 kHz sample slowed down 2 octaves and a 44 kHz sample doing the same. These instruments with 96k source material I can use as-is in projects running at 44 kHz (through Kontakt in LPX or ABX). Both 96k and 44k samples have a different tonal texture when pitched in realtime (also depending on the processing software). And PS, none of them are really BETTER, they are just DIFFERENT; sometimes I prefer the 44 kHz "sound", or even the glassy rough texture of a pitched 22 kHz sound, but mostly I use the 96 kHz originals. So for sampled instruments, I prefer to stay at 96 for as long as possible: I can always go "down", but it's a lot more effort and hassle to go "up" in quality. Works that are published are almost always produced at 44 kHz. When publishing for streaming I still deliver 44/24, but my aggregator now also accepts 96/24 for delivery to "HD" services. I'm not considering it ATM, but I'm aware it exists. My audience is about 80% streaming (of which 80% is Spotify), 10% downloads (Bandcamp), and 10% vinyl. Vinyl is mastered separately and delivered as 44/24 WAV. Looking forward to hearing others' thoughts on 96 and 44!
  11. Just a data point: I've had this happen. Extremely rarely, maybe once or twice a year, not enough to start looking for details or report it, it feels like a freak accident, but it happens, and more than once, so I recognize this. The last time was maybe half a year ago. When it happens it's a super cruel, deafening blast of übermax white noise, out of nowhere. It happened twice on headphones while working with small details at low levels, and that was a massive shock, I can still recall it. Not that it isn't gruesome in the room too. It would indeed be great for this never, ever to happen. FWIW, I'm also working on RME hardware, multiple models of Firefaces or Babyfaces over the years. No idea if RME is the culprit, I find that hard to believe, but I also find it hard to believe it could be Logic, yet that's the data... I tend to use mostly NI Kontakt, u-he plugins, and the internal plugins, with a lot of intricate Flex audio region editing. It never happened in Ableton Live or other audio programs I regularly use, but I'm working in LPX probably 80% of the time, and the other audio apps are distributed at maybe 3-4% each, so statistically if it happens, it would most likely happen "while working in LPX". PS: Using a more descriptive topic subject title would probably get more relevant replies, and we could possibly see a pattern between everyone who experiences this? At first I thought this was a topic on app security; only when it got multiple replies did I wonder what it could be.
  12. Does anyone know why we can't use the Transpose and Fine Tune settings in this mode to simply set an exact pitch change for the varispeed? (I can adjust Transpose, but it doesn't have any effect. A bug?) (Or do you know a trick to set an exact tune by dragging? Octave shifts are easy (double/halve the time), but others are too hit-and-miss for me.)
  13. It's a general quality measure made by AI algorithms based on Apple's new big-data cloud intelligences, sourced by comparing your regions with equally similar pattern-recognition material in listener behaviour captured through their billions of iTunes listeners. If your tracks are blue they are «okay», but not great, just okay. If they are green, they really need some work. Green as in «newbie», kind of. If you get red regions, that means «potential hit»; all regions in the red and warm spectrum are good, all in the colder colours are problematic. The ultimate goal is to get regions in black or white; nobody knows exactly what those colours mean, but they are the holy grail of Logic region colours.
  14. Wouldn't it make more sense to bounce TWO times in each version, then first compare each internal bounce to the other, to detect any randomized DSP processing per version, and THEN look for the «difference of the difference» between versions? (Only if multiple bounces within the same version always null would I expect the same between versions.) I mean, I would never ever in a million years get a null test nulling out of the same project on the same version; I use so much DSP that introduces slight randomization and «life». But the null test would indicate to me which elements contain randomization, and then I could figure out by ear whether there were any noticeable software changes between versions. (For my part, I don't notice anything irregular in 10.4.3 yet.)
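The bounce-vs-bounce comparison in that suggestion can be sketched in a few lines. This is pure Python over lists of sample values; real bounces would first be loaded from the exported WAV files (and must be sample-aligned):

```python
def null_test(a, b, tol=1e-9):
    """Subtract two bounces sample by sample and report the peak residual.
    If the peak is ~0 the bounces null; randomized DSP between two bounces
    of the *same* project and version shows up here as a nonzero floor."""
    if len(a) != len(b):
        raise ValueError("bounces must be sample-aligned and equal length")
    peak = max(abs(x - y) for x, y in zip(a, b))
    return peak <= tol, peak
```

Per the post's logic: first null two bounces from the same version; only if that residual is ~0 does a nonzero cross-version difference actually indicate the software changed, rather than randomized DSP.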
  15. I agree 100% with David's assessment and thought process, and I'm just going to add another "no, don't do it!" but from a slightly different perspective. I tour and perform extensively, and Apple laptops are almost "consumables" from my perspective: I get the new top model for production/showrunning every third year, and maybe a supporting Air or lighter model for business halfway between. Since 2000 I've had (and still have most of) about 10-12 laptops. Before 2010 some of them needed smaller repairs, most did not. I think the biggest problems were a screen that died, which Apple replaced under general warranty, and a faulty motherboard (not AppleCare, just general warranty). Since 2010, no repairs, but again, fixes due to warranty, outside of AppleCare. Many of these laptops still work; those that don't are simply "used up" from years on the road, and I'm amazed they last as long as they do. It has never made any economic sense to get AppleCare: in the few situations where they needed repair, the overall cost of those repairs has been VERY VERY FAR from the total cost of AppleCare for all these laptops. I think David's wild guess of about a 10% chance is about right. Also... I don't know about US warranty rules, but I'm in Europe. If there is a technical problem with a product that is a manufacturer error, the manufacturer has to fix it. Period. You don't have to get any extended warranty for this to be valid, and the time limit for most electronic devices is 2 years. So if there IS a problem with the device within 2 years that is not caused by wear and tear or accident, the repair or exchange will be covered by Apple. They've never failed to honor this. I also have extensive musician's traveller's insurance that generally covers my equipment against accidents or theft, and this WILL happen, has happened, and is totally worth it. I've gotten two iPads replaced for minimal cost when s#!+ happened.
With all this combined, it makes zero sense for me to get or recommend AppleCare. Maybe, like someone posted, it makes sense for phones or something, but again, I would cover those with my overall professional insurance; they are "business tools" just like my laptops. Please note this is of course only MY situation and YMMV!!! But still, if I were to have only ONE computer, I would not get AppleCare for it. In the rare chance that it might need repair after two to three years, I'd take that chance and pay for the repair directly IF it happened; in my experience this happens to about 1 in 10 computers, so in a lifetime you'd never come out ahead with AppleCare. However, this is my world/worldview, and maybe for some people it actually makes sense. I'd be very happy to hear opposing experiences!!