
Xigwon

Member
  • Posts: 72
  • Joined
  • Last visited

Xigwon's Achievements: Newbie (1/14)
Reputation: 0

Community Answers: 2

  1. Ah, you just have to right-click the fade and change to "Crossfade" instead of "Equal Power Crossfade". If you do that then the bump goes away.
  2. Hello! Noob question. Why does the built-in X-Fade (in the "Drag" settings, alongside "Overlap", "No Overlap", etc.) not provide an even fade from one region to another? See the attached image. The top track contains the original X-Faded regions (each a sine wave at the same frequency), and the bottom track contains the bounce of the two regions. As you can see, the result is louder during the fade (and you can hear it in the top track, too). It goes up 3 dB. Now I have to go in with the pencil tool to hand-draw an automation curve to make the fade sound even, which isn't difficult, but shouldn't that be the purpose of the X-Fade? Does anyone know why this occurs, and whether there are any better fade tools or easier workarounds? (The arithmetic behind the 3 dB bump is sketched after this post list.)
  3. Thank you so much. Alas, I have a Mac, and am unwilling (at least right now, just to do this assignment) to download extra software to run .exe on my computer. But thank you, that would have been exactly what I was looking for.
  4. I mean, you can write a utility to do whatever you want - that's kind of what programming is. If you want a utility that converts a text-format file (of "instructions") into a MIDI file, you can of course implement that. Your program would open your source text file, create a destination binary output file for the MIDI file, then read the text input and write the appropriate binary MIDI data into the MIDI file. And yes, you could write this yourself, but if you already have Python code that does this, it seems it would be far less work to modify that to work on your system than to reimplement it yourself from scratch (handy programmer tip - programmers don't reinvent wheels!). But like any skill, programming is a journey, so it's something you have to embark upon yourself and kind of figure these things out as you go - programming is largely about solving problems much of the time. Luckily, there is a wealth of resources to draw upon these days, which is a massive help.

     But you'll need to understand the differences between text and binary files. You'll need to learn how to load and parse text files, and you'll need to learn how to create and write to binary files. Then you'll need code to output the header data that a MIDI file must always contain to be valid, and then to parse whatever text commands are in your input text file into the necessary MIDI data to write to the MIDI file. You'll also probably need to understand how timing and event timestamps work, as you'll need to write events with the correct time positions on them too. None of this is hard, but it does involve a learning curve if you're a novice programmer, so take it one step at a time, solve each problem as you go, and implement each stage until you end up with the result you want - a program that parses your arbitrary text file of "instructions" into a valid MIDI file that can be loaded into any program that supports standard MIDI files.

     Thank you for the detailed response, I appreciate it! I am fluent only in creating text-based programs in Python to run in Terminal, not at all in writing files. But I will definitely look into that (see the minimal MIDI-writer sketch after this post list). I appreciate the help!
  5. Thank you for your reply and for clarifying. You're right that it is off-topic, so I will answer your question as to my goal, and if you would like, you can provide some pointers. I am in an Algorithmic and Generative Composition in Python class. We were given a module (from the open-source library "pyknon") that allegedly can take user input (note lengths, pitches) and generate a MIDI file, but the module is too old for my version of Python. I figured that such a module was just doing some sort of conversion, so I wanted to program my own converter. If MIDI files were (or could be converted from) text files, I could just take a day or two to figure out the conversions. But since, as you say, MIDI files are not text files, I have no idea how to generate a MIDI file and no idea how pyknon could do such a thing (I dug through the library and did not find any Python file that appeared to do the conversion work). (A sketch of the same conversion with a maintained library is included after this post list.)
  6. Thank you for the recommendation. I read it, but I'm not exactly sure what to do. I understand that I need to type hex characters for the track, channel, and events. But once I'm done with that, I just have a text file filled with hex characters. How do I convert it to a MIDI file that Logic can read? Just changing the extension from .txt to .midi does not make the file readable. If someone could just show me a MIDI-to-hex converter, I could drop one of Logic's own MIDI files into that, look at the hex, and then figure out what's going on. (A hex-conversion sketch is included after this post list.)
  7. Thank you so much for your response. I have looked around online but have not been able to find any programs or websites that could help me do the conversion or the creation of a MIDI file. Did you have any sources in mind that you might be able to point me towards? Thank you.
  8. Hello, I am trying to write a program to generate a MIDI file for Logic to read. I've taken a look at the MIDI files that Logic generates, and they are littered with control characters (gremlins) that cannot be read by any other program I know of. Converting to ASCII and back made the file unreadable for Logic. In addition, Logic cannot read hexadecimal MIDI files. What can I do? (See the binary-file sketch after this post list.)
  9. I have little hope this will be fixed, but it's worth a shot. I've been recording using Flex Pitch for almost 5 years now. About 4 days ago, a problem started happening that I have never seen before, and that I have no idea how to fix. Logic now doesn't recognize the beginnings of some notes. Attached is an example image. You'll see that the Flex note doesn't start at the beginning of the actual note. As a result, when I drop the note way down (to get a super-low bassline effect), only the latter portion of the note is dropped. So when I play it back, you can easily hear the note snap really quickly from the high portion to the low portion. Obviously this result sounds ridiculous. I want the entire note to be dropped, not just the latter portion. But Logic only drops the latter portion, since it only recognizes the latter portion.

     This seems to happen when I sing lower notes with a "bop" or a "dmm" sound. Sometimes it is really extreme, recognizing only the second half of the note, when to me there is very clear pitch material in the first half. I know some people might write this off as "Logic doesn't work like the human ear." I realize that. The reason I am surprised is that this seems to have JUST started happening in the last few days. Like I said, I've recorded stuff like this for years without having such problems.

     Here's what I've tried:
     • Singing with clear versus breathy vocals
     • Singing with different mic trim levels
     • Recording in different projects
     • Recording with different versions of Logic (10.4.4 and 10.5.1)
     • Restarting the computer
     • Turning my interface off and on
     • Penciling in the missing section (still doesn't recognize it, the resulting note block doesn't have a pitch line, and it sounds terrible)
     • Deleting the entire note and re-penciling the whole thing in
     • Using Time and Pitch Machine instead (doesn't work because the formant is way too low)
     • Downloading the free trial of Melodyne (Melodyne recognizes the pitch, but its pitch-shifting doesn't sound as good as Flex Pitch)

     Nothing works. Logic, in the last week, has decided to just stop recognizing the beginnings of certain notes, and the work I do on a daily basis has ground to a halt. Does anyone have any ideas? Thanks.

     MacBook Pro Mid 2012, OS Catalina 10.15.5, 1.02 TB SATA drive (438.5 GB available), 16 GB RAM, AT4040, Scarlett 18i20
  10. Have you tried option-K to open up the list of key commands? Then you can do a word search. In addition, I personally keep a list of all my LPX keyboard commands in Notes.
  11. Okay thanks. In that case, from now on, I will only put plug-ins on stacks that are receiving a signal that is not super hot. Lol. I'm not calmed either way. I just want the final product to sound as good as possible! Thanks to you, my risk of being shouted out of a studio is now lower!
  12. Lol. He teaches intro to music tech, so I was surprised he got such a simple thing wrong.
  13. Okay thanks. Unfortunately I learned it from a young PhD electronic music professor just one year ago!

      Okay thanks. I will do some research. Couldn't I also give myself headroom afterwards, by just fading down the track? It really seems to me that the only disadvantage to recording loud is that I'll have to turn it down. Is that it? Granted, I'll take your advice and stop recording so loud. But I would just like to make sure that the only disadvantage is the wasted time of attenuating every track?
  14. Thank you. So would it be advisable for me to go back through the mixing session and make sure that every track never redlines?
  15. Apologies if I've asked in the wrong place... Let's say I record 60 vocal tracks, each as loud as possible without redlining. I put them in two stacks; each stack has 30 vocal tracks in it. When I play back, both stacks will be redlining because the sums are so loud. I then bus both stacks to a "Control" stack, which I fade way down so it is not redlining. The "Control" stack is then bussed to the Stereo Output, which is obviously not going to redline. For Logic Pro X in particular, is this a good or bad method? My goals are maximum sound quality and absolutely no clipping or mixing-produced artifacts. Similarly, when I pan certain tracks, they redline, because more than 0 dB is being transferred to either the L or R channel. But again, as long as the resulting bounced audio AIFF doesn't get above 0 dB, is it okay to have these inner tracks/stacks redlining? (The level-summing arithmetic is sketched after this post list.)
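
Crossfade sketch (posts 1 and 2): the snippet below is not Logic's code, just the arithmetic behind the 3 dB bump. An equal-power crossfade uses square-root gain curves so that unrelated material keeps constant loudness; two copies of the same waveform add by amplitude instead, so the midpoint comes out about 3 dB hot, while a plain equal-gain "Crossfade" keeps the summed gain at exactly 1.

    import math

    x = 0.5  # position within the crossfade: 0.0 = start, 1.0 = end; 0.5 = midpoint

    # Equal-gain "Crossfade": the two gains sum to 1, so identical signals stay flat.
    equal_gain_sum = (1 - x) + x
    print("equal gain:  %+.2f dB" % (20 * math.log10(equal_gain_sum)))   # +0.00 dB

    # "Equal Power Crossfade": each gain is a square root, so the *powers* sum to 1.
    # Two copies of the same sine wave add coherently by amplitude, hence the bump.
    equal_power_sum = math.sqrt(1 - x) + math.sqrt(x)
    print("equal power: %+.2f dB" % (20 * math.log10(equal_power_sum)))  # +3.01 dB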
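
Minimal MIDI-writer sketch (post 4): one possible implementation of the stages that reply describes, writing the fixed header chunk and then one track chunk of delta-timed events to a binary file. It is not the code the reply had in mind; the function names and the 480-tick resolution are arbitrary choices.

    import struct

    TICKS = 480  # ticks per quarter note (arbitrary but common resolution)

    def vlq(n):
        """Encode an integer as a MIDI variable-length quantity (delta time)."""
        out = bytearray([n & 0x7F])
        n >>= 7
        while n:
            out.insert(0, 0x80 | (n & 0x7F))
            n >>= 7
        return bytes(out)

    def write_midi(path, pitches, dur=TICKS, velocity=96, channel=0):
        """Write a format-0 Standard MIDI File playing each pitch for `dur` ticks."""
        track = bytearray()
        for p in pitches:
            track += vlq(0)   + bytes([0x90 | channel, p, velocity])  # note on
            track += vlq(dur) + bytes([0x80 | channel, p, 0])         # note off
        track += vlq(0) + b"\xFF\x2F\x00"                             # end-of-track meta event
        header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, TICKS)       # length 6, format 0, 1 track
        with open(path, "wb") as f:                                   # binary mode, not text
            f.write(header + b"MTrk" + struct.pack(">I", len(track)) + bytes(track))

    write_midi("scale.mid", [60, 62, 64, 65, 67])  # C D E F G as quarter notes

A text-to-MIDI converter would simply parse the "instructions" file into a pitch/duration list first and then hand it to a writer like this.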
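
Library sketch (post 5): for illustration only, a maintained third-party library such as midiutil (pip install MIDIUtil) performs the same note-list-to-MIDI conversion that pyknon was meant to provide; the note list below is invented.

    from midiutil import MIDIFile  # third-party: pip install MIDIUtil

    track, channel, volume = 0, 0, 100
    notes = [(60, 0, 1), (64, 1, 1), (67, 2, 2)]  # (pitch, start beat, length in beats)

    mf = MIDIFile(1)            # one track
    mf.addTempo(track, 0, 120)  # 120 BPM from beat 0
    for pitch, start, length in notes:
        mf.addNote(track, channel, pitch, start, length, volume)

    with open("generated.mid", "wb") as f:  # binary mode, as in the other sketches
        mf.writeFile(f)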
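
Hex-conversion sketch (post 6): the Python standard library covers both directions, MIDI to hex and hex back to a binary .mid file; the file names below are placeholders.

    from pathlib import Path

    # MIDI -> hex: dump a file exported from Logic as readable hex (Python 3.8+ for the separator).
    raw = Path("from_logic.mid").read_bytes()       # read raw bytes, not text
    print(raw[:16].hex(" "))                        # starts "4d 54 68 64 ...", i.e. "MThd"

    # Hex -> MIDI: turn a text file of hex digits into a real binary .mid file.
    # Renaming .txt to .midi only changes the name; this actually converts the data.
    hex_text = Path("my_track.txt").read_text()     # hex digits; whitespace is ignored
    Path("my_track.mid").write_bytes(bytes.fromhex(hex_text))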
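
Binary-file sketch (post 8): a Standard MIDI File is binary, so the "gremlins" are raw bytes that are not printable characters, and any ASCII round trip replaces them. The file name below is a placeholder.

    with open("exported_from_logic.mid", "rb") as f:  # "rb" = read raw bytes, never text
        data = f.read()

    print(data[:4])             # b'MThd': every Standard MIDI File starts with this chunk ID

    # Why converting to ASCII and back broke the file: status bytes such as 0x90
    # (note on) are not ASCII, so they get replaced and Logic can no longer parse the chunks.
    mangled = data.decode("ascii", errors="replace").encode("ascii", errors="replace")
    print(mangled == data)      # False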
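
Level-summing sketch (post 15): the arithmetic below, with a made-up per-track level, shows why a stack fed by 30 hot tracks meters far above 0 even though no single track does; summing raises the level by roughly 10 to 20 times log10(N) dB depending on how correlated the tracks are.

    import math

    n_tracks = 30            # tracks feeding one stack, as in the question
    per_track_dbfs = -1.0    # "as loud as possible without redlining" (illustrative value)

    # Worst case: fully correlated signals (e.g. tight doubles of the same line)
    # add by amplitude, raising the level by 20*log10(N).
    coherent = per_track_dbfs + 20 * math.log10(n_tracks)

    # Typical case: uncorrelated signals add by power, i.e. 10*log10(N).
    uncorrelated = per_track_dbfs + 10 * math.log10(n_tracks)

    print("coherent sum:     %+.1f dBFS" % coherent)      # about +28.5 dBFS
    print("uncorrelated sum: %+.1f dBFS" % uncorrelated)  # about +13.8 dBFS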