I've managed to implement dynamic control (via MIDI CC) of inserting a
phrase from the phrase list at the end of the sequence playlist. I'm
also pondering adding the ability to insert a phrase after the
currently playing phrase (this could be useful to me in performance,
but I think I could live without it). I haven't yet checked whether I
can toggle the currently playing phrase's patterns (via CC20)
dynamically, although they are toggle-able via the UI, so that should
work and could be interesting during live performance.
The general idea is that I'd have a consistent "list-of-phrases", like:
S1-verse
variant(s)
S1-chorus
variant(s)
S1-Bridge
variant(s)
S1-Groove
variant(s)
Transition
S2-verse(s)
variant(s)
S2-chorus
variant(s)
S2-Bridge
variant(s)
S2-Groove
variant(s)
End
Tag
and then be able to start the sequence playlist off with, say,
S1-verse & S1-chorus, and afterwards add the others to the playlist
dynamically during performance. My phrases are typically 64 (perhaps 32)
bars long, so I'm not that pressed for decisions about "what's next".
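To make the two operations I keep referring to concrete, here's a
minimal sketch (plain Python, purely illustrative -- the class and
method names are my own, not anything from the NS code):

```python
# Illustrative model of the playlist operations described above.
# Not Non-sequencer code; names are invented for the sketch.

class Playlist:
    def __init__(self, phrases):
        self.phrases = list(phrases)   # queued phrase names, in play order
        self.current = 0               # index of the currently playing phrase

    def insert_at_end(self, phrase):
        """Append a phrase after everything already queued."""
        self.phrases.append(phrase)

    def insert_after_current(self, phrase):
        """Queue a phrase immediately after the one playing now."""
        self.phrases.insert(self.current + 1, phrase)


pl = Playlist(["S1-verse", "S1-chorus"])
pl.insert_at_end("S1-Bridge")
pl.insert_after_current("S1-Groove")
# playlist is now: S1-verse, S1-Groove, S1-chorus, S1-Bridge
```

The difference only matters mid-song: insert-at-end lets you build the
whole playlist ahead, while insert-after-current lets you interrupt the
planned order.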
So I'm using MIDI CC for this, and that's fine, but since I'm only
"inserting at the end" (or maybe "inserting after the current phrase"),
it does seem that it would be OSC-able. I don't know whether that's any
better or desirable; processing incoming MIDI CC is probably more
efficient than processing OSC. Maybe I'll consider looking into that later.
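For reference, the CC handling I have in mind is nothing fancier than
the following sketch. The CC number (21) and the value-to-phrase-slot
mapping are just choices I made for illustration, not anything NS
defines:

```python
# Sketch of MIDI-CC-driven phrase insertion. CC 21 and the mapping of
# CC value -> phrase slot are my own conventions, not NS behavior.

PHRASE_LIST = ["S1-verse", "S1-chorus", "S1-Bridge", "S1-Groove", "Transition"]

CC_INSERT_AT_END = 21   # hypothetical: CC value selects the phrase to append

def handle_cc(playlist, cc_number, cc_value):
    """Append PHRASE_LIST[cc_value] to the playlist when CC 21 arrives."""
    if cc_number == CC_INSERT_AT_END and cc_value < len(PHRASE_LIST):
        playlist.append(PHRASE_LIST[cc_value])


playlist = ["S1-verse"]
handle_cc(playlist, 21, 1)   # appends "S1-chorus"
handle_cc(playlist, 20, 3)   # different CC -> ignored here
```

An OSC version would carry the same information, just as a path and an
argument instead of a controller number and value.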
What would be nice is for the UI playlist to track and somehow highlight
the currently playing phrase. I'm looking in FLTK\fluid and ui.fl, but
I'm a little lost there at the moment. It might also be cool if those
pieces of info could be broadcast via OSC when they change, as that
could be forwarded to other 'visual clients' (I'm thinking
mididings\webdings).
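What I mean by broadcasting is just a change notification, roughly like
this sketch. The OSC path here is made up, and in a real setup the
callback would send an actual OSC message (e.g. via liblo) rather than
append to a list:

```python
# Sketch of broadcasting "current phrase changed" to visual clients.
# The path "/non/sequencer/playing-phrase" is hypothetical; a real
# implementation would emit an OSC message instead of a callback.

class PhraseTracker:
    def __init__(self):
        self.playing = None
        self.listeners = []        # callables taking (path, phrase_name)

    def subscribe(self, fn):
        self.listeners.append(fn)

    def set_playing(self, phrase):
        """Notify listeners only when the playing phrase actually changes."""
        if phrase != self.playing:
            self.playing = phrase
            for fn in self.listeners:
                fn("/non/sequencer/playing-phrase", phrase)


sent = []
tracker = PhraseTracker()
tracker.subscribe(lambda path, name: sent.append((path, name)))
tracker.set_playing("S1-verse")
tracker.set_playing("S1-verse")   # no change -> no duplicate message
tracker.set_playing("S1-chorus")
```

Only emitting on change keeps the message rate trivial, since a phrase
switch happens at most every 32 or 64 bars in my case.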
Should it be possible to get the playlist to track & highlight
(scroll\centered\different-color&&flashing) the currently playing phrase
in the UI?
Thanks,
rickbol
On 04/29/2014 02:48 PM, J. Liles wrote:
On Mon, Apr 28, 2014 at 8:37 PM, Rick Bolen(gm) <[email protected]
<mailto:[email protected]>> wrote:
Hi all,
I've been playing around with Non-sequencer for awhile now. I find
it to hold the best promise for a midi pattern based sequencer live
performance tool of anything I've experimented with. Of course, I've
played with Seq24 and assumed I'd use that tool, but that was before
I knew of Non-sequencer (NS).
Both visually and performance-wise, I find NS preferable.
While I haven't tested it much for recording\editing MIDI, I have
imported MIDI files with > 120 (drum pattern) tracks into NS and, so
far, it hasn't choked. IIRC, Seq24 supports 32 patterns and 32
"sequences". While I don't anticipate needing 128 patterns, I do
expect to exceed 32 in what I'm trying to do. Having 128 allows
significant flexibility in the granularity, complexity, and variety
of patterns.
I've tested remote control of patterns, and I have that working fine
with a midi foot controller via mididings, but I don't really want
to attempt to manage NS pattern "state" within mididings. At this
point it appears that patterns are the only "performance component"
remotely controllable within NS. I need remote access to
triggering\queuing sequences.
In studying the code, the simplest solution I could come up with
revolves around binding sequence::insert() (sequencer.c) to either
MIDI (via a controller, as with patterns) or OSC (which is relatively
immature within NS currently, and specific to NSM).
While I haven't examined how the behavior is implemented in code, I
basically want to have the GUI "Insert Phrase->[phrase-name]" button
clicks mapped to some kind of remote control.
So I'm assuming there are some future plans for this kind of
feature, and before I put more time into this, I wanted to check for
guidance, thinking that perhaps a solution is nigh, or an
architectural preference is recommended.
Thanks for your efforts toward great software!
rickbol
I think you need to start by clearly defining what you would like to
control and how you would like it to behave. Theoretically, everything
could be made OSC controllable. But having an OSC message to create a
named object might not be practically useful (how do you specify the
name, remember the slot, etc.). Also I'm not sure I understand how you
would intend to use 'live' sequence programming. The sequence plays out
along with the JACK transport, so rearranging it 'live' would not be
likely to produce predictable or musical results. Or are you just saying
that you'd like to be able to queue/trigger phrases like you can
patterns? If that's what you're after then it's going to require some
deeper architectural changes, as currently there's no way to play the
same pattern simultaneously from different offsets (as would be
necessary to play two phrases simultaneously if they each referred to
the same pattern).