On Mon, Apr 28, 2014 at 8:37 PM, Rick Bolen(gm) <[email protected]> wrote:
> Hi all,
>
> I've been playing around with Non-sequencer for a while now. I find it
> to hold the best promise for a MIDI-pattern-based live performance
> sequencer of anything I've experimented with. Of course, I've played
> with Seq24 and assumed I'd use that tool, but that was before I knew
> of Non-sequencer (NS).
>
> Both visually and performance-wise, I find NS preferable. While I
> haven't tested it much for recording/editing MIDI, I have imported
> MIDI files with more than 120 (drum pattern) tracks into NS and, so
> far, it hasn't choked. IIRC, Seq24 supports 32 patterns and 32
> "sequences". While I don't anticipate needing 128 patterns, I do
> expect to exceed 32 in what I'm trying to do. Having 128 allows
> significant flexibility for a variety of granularity in pattern
> complexity and variety.
>
> I've tested remote control of patterns, and I have that working fine
> with a MIDI foot controller via mididings, but I don't really want to
> attempt to manage NS pattern "state" within mididings. At this point
> it appears that patterns are the only "performance component" remotely
> controllable within NS. I need remote access to triggering/queuing
> sequences.
>
> In studying the code, the simplest solution I could come up with
> revolves around binding sequence::insert() (sequencer.c) to either
> MIDI (via a controller, as with patterns) or OSC (which is relatively
> immature within NS currently, and specific to NSM).
>
> While I haven't examined how the behavior is implemented in code, I
> basically want the GUI's "Insert Phrase -> [phrase-name]" button
> clicks mapped to some kind of remote control.
>
> So I'm assuming there are future plans for this kind of feature, and
> before I put more time into this I wanted to check for guidance,
> thinking that perhaps a solution is nigh, or an architectural
> preference is recommended.
>
> Thanks for your efforts toward great software!
>
> rickbol

I think you need to start by clearly defining what you would like to
control and how you would like it to behave. Theoretically, everything
could be made OSC-controllable. But having an OSC message to create a
named object might not be practically useful (how do you specify the
name, remember the slot, etc.).

Also, I'm not sure I understand how you intend to use 'live' sequence
programming. The sequence plays out along with the JACK transport, so
rearranging it 'live' would be unlikely to produce predictable or
musical results.

Or are you just saying that you'd like to be able to queue/trigger
phrases the way you can patterns? If that's what you're after, it's
going to require some deeper architectural changes, as currently
there's no way to play the same pattern simultaneously from different
offsets (as would be necessary to play two phrases simultaneously if
they each referred to the same pattern).
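For what it's worth, if phrase queuing/triggering were ever exposed
over OSC, the client side could be very small. The sketch below
hand-encodes an OSC message over UDP so it needs nothing beyond the
Python standard library. Everything NS-facing here is hypothetical:
the /non/sequencer/phrase/trigger address, the port number, and the
single-integer argument are placeholders for discussion, not anything
Non-sequencer currently implements.

```python
import socket
import struct


def osc_message(address, *ints):
    """Encode a minimal OSC message whose arguments are all int32."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b'\0' * (4 - len(b) % 4)
    msg = pad(address.encode('ascii'))          # address pattern
    msg += pad(b',' + b'i' * len(ints))         # type tag string, e.g. ",i"
    for v in ints:
        msg += struct.pack('>i', v)             # int32, big-endian
    return msg


# Hypothetical address -- NS does not expose this; it is only a
# strawman for what a phrase-trigger message could look like.
TRIGGER = '/non/sequencer/phrase/trigger'


def send_trigger(phrase_index, host='127.0.0.1', port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message(TRIGGER, phrase_index), (host, port))
```

A foot controller could then drive phrase changes by having mididings
call send_trigger() from a Process()/Call() unit, which sidesteps the
"managing NS state inside mididings" problem: mididings only forwards
an index, and NS would own the queue/trigger semantics.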
