Nublu wrote:involver wrote:Perhaps a hardware controller like the Euphonix or Mackie Control would help?
Or even an AlphaTrack. All of its functions are within reach once your hand is on it.
If there's a store nearby where you could demo one, you might be pleasantly surprised.
kevjazz wrote:As a blind person myself, there is one important thing that some of you may not realize. Certain things can be assigned to a control surface. But beyond the obvious, like hearing the volume or pan change as you move a fader, certain parameter information will not be relayed to the blind user even if he or she presses a button or moves a fader. What's needed here, along with the key shortcuts the original poster requested, is for VoiceOver to work seamlessly with Logic. Since Apple owns Logic, I'm sure this will happen some day. I'm actually joining this list to learn more about Logic in advance of that day.
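To make that concrete, here is a minimal sketch of the kind of feedback being asked for, assuming an AppKit app. Everything in it is illustrative: faderMoved is a made-up callback, not part of any real control-surface API. It posts a VoiceOver announcement so the new value is spoken aloud even though no on-screen control took focus.

Code:
import AppKit

// Hypothetical callback fired when a surface fader moves; the name and
// parameters are illustrative, not part of any real control-surface API.
func faderMoved(track: String, gainDb: Double) {
    guard let window = NSApp.mainWindow else { return }
    let message = String(format: "%@ volume %.1f dB", track, gainDb)
    // Ask VoiceOver to speak the new value even though no on-screen
    // control gained keyboard focus.
    NSAccessibility.post(
        element: window,
        notification: .announcementRequested,
        userInfo: [
            .announcement: message,
            .priority: NSAccessibilityPriorityLevel.high.rawValue
        ]
    )
}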
kevjazz wrote:Respectfully, I think you may be misunderstanding what we're talking about here. What you want is a speech-driven system. What we need is a system where text and, almost more importantly, objects can be spoken to the user. This is the reverse of what you describe.
The original poster never even discussed things like song position, screen location, track name, mute, solo and arm status and so on. We need all that and more.
I know that a lot of this can be inferred, but you shouldn't have to guess what's going on by figuring out what's missing.
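For illustration, here is a rough sketch of what "objects spoken to the user" could look like, using a made-up TrackStatus type standing in for a Logic track and AppKit's NSSpeechSynthesizer. The point is the explicit readout: the user hears what is true instead of deducing it from what's absent.

Code:
import AppKit

// Hypothetical snapshot of a track's state; the fields are illustrative,
// not Logic's actual object model.
struct TrackStatus {
    let name: String
    let muted: Bool
    let soloed: Bool
    let armed: Bool
}

let synth = NSSpeechSynthesizer()

// Read the object's state out loud so the user hears what is true
// instead of inferring it from what is absent.
func announce(_ track: TrackStatus) {
    var parts = [track.name]
    parts.append(track.muted ? "muted" : "unmuted")
    if track.soloed { parts.append("soloed") }
    if track.armed { parts.append("record armed") }
    _ = synth.startSpeaking(parts.joined(separator: ", "))
}

announce(TrackStatus(name: "Lead vocal", muted: false, soloed: true, armed: true))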