I was pondering how we could make controls a bit more efficient and
also more intuitive to use. Here's a brain dump of what I've come
up with so far:

Terminology
-----------

- control = the thing that you physically interact with, e.g., a
  piano key, a knob, a fader, a joystick, a button, etc.
- controller = a device that has a number of controls. It
  communicates with the M1 via MIDI, DMX, OSC/Ethernet, etc.
- variables = registers used by the graphics FPU
- control variable = currently, a variable holding a value that
  reflects the state of a control
- patch = the program that runs on the graphics FPU and that defines
  the effect being produced


Efficiency
----------

Right now, controls produce values from 0.0 to 1.0 in variables like
midi1, idmx2, ..., which are then processed by the graphics subsystem.
Most patches will want to scale the value or do other simple
calculations with it. Doing this in the graphics hardware is wasteful,
eating precious cycles and registers.

At least in the case of MIDI, which has only 128 distinct values per
control, it would be better to pre-calculate these simple expressions
and just feed the graphics hardware from a lookup table. This could
even be extended to functions of two parameters, provided there aren't
too many of them.
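
To make this a bit more concrete, here's a rough sketch in plain C
(names invented, nothing of this exists yet) of pre-calculating a
one-parameter expression - say, scaling a control to some range - into
a table the graphics side only has to index:

    static float ctrl_table[128];    /* one entry per 7-bit MIDI value */

    /*
     * Pre-calculate out = min+raw/127*(max-min) for all 128 possible
     * control values, instead of doing the math on the graphics FPU.
     * Re-run this only when the expression itself changes.
     */
    static void update_ctrl_table(float min, float max)
    {
        int i;

        for (i = 0; i != 128; i++)
            ctrl_table[i] = min+i/127.0*(max-min);
    }

A function of two parameters would need a 128x128 table (16384 entries,
64 kB as single-precision floats), which is why this only scales if
there aren't too many of them.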


Instrumentation
---------------

Right now, instrumenting a patch with control variables is messy. You have
to remember what all those midiN do (e.g., whether they're assigned
to a potentiometer, a slider, a button, an axis of a joystick, etc.)
and you may need to design your patch such that it takes into account
physical properties of the controls.

For example, if you want a slider to cover a large range but still
have fine resolution, that may be perfectly possible if the slider is
quite big. But if it's something small, you may want to be able to
control the sensitivity.

Thus, a single control input to the patch, like the zoom speed, could
become the result of two controls, e.g.,

    zoom = 1+slider*(sensitivity*9+1)

If you leave the sensitivity setting at 0, your slider would cover the
range [0, 1]. If you dial the sensitivity up to the maximum, the slider
will cover the range [0, 10].
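
Spelling out the two extremes of that formula:

    sensitivity = 0:  zoom = 1+slider*(0*9+1) = 1+slider     (slider term spans [0, 1])
    sensitivity = 1:  zoom = 1+slider*(1*9+1) = 1+10*slider  (slider term spans [0, 10])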

Of course, since the control variables don't have "nice" names, this
would currently be written as something like

    zoom = 1+midi1*(midi2*9+1);


Abstracting control values
--------------------------

This could be made more flexible by separating the "pre-processing"
from the control variables used by the patch. In the example above, all
the patch cares about is getting a speed input within a certain range.
How you implement it shouldn't matter to the algorithm producing the
visual effect.

Thus, the above could become

    speed = midi1*(midi2*9+1);
    zoom = 1+speed;

where at least "speed" could be pre-calculated, thus just needing a
table lookup. In this simple example, we could directly pre-calculate
"zoom", but if zoom itself is also modulated somehow, it may make sense
to keep "speed" as an abstraction.


Abstracting control inputs
--------------------------

The next problem is how to assign variables to physical controls. E.g.,
I may decide that midi1 is always my slider and midi2 is always my
potentiometer, but other people may make different choices, and would
need to adapt patches to their layout.

A useful abstraction may therefore be to declare what characteristics
are expected from a control. I can think of a few:

- range: something that moves from a minimum (0) to a maximum value (1),
  with a stop at both ends, e.g., a slider.
- cyclic: something that wraps around the ends of the range, e.g., a
  rotary encoder.
- pulse: a button that sends 1 when pushed and returns to 0 when
  released.
- toggle: a button that toggles between 0 and 1 on each push.

In the above example both inputs would be "range", so we could write
it as something like this, borrowing from C syntax:

    speed(range slider, range sensitivity) = slider*(sensitivity*9+1);

Now the connection between (midi1, midi2) and (slider, sensitivity)
could be made completely outside the patch. The associations should
also be maintained on a per-patch level, not per performance, as is
currently the case.
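
Just to give an idea of what such a per-patch association could look
like, here's a sketch (invented names and fields, nothing like this
exists yet):

    enum control_type { RANGE, CYCLIC, PULSE, TOGGLE };

    struct binding {
        const char *name;        /* name declared by the patch        */
        enum control_type type;  /* characteristic the patch expects  */
        int midi_ctrl;           /* physical source, e.g. a MIDI
                                    controller number                 */
    };

    /* bindings for the "speed" example above */
    static const struct binding speed_bindings[] = {
        { "slider",      RANGE, 1 },    /* was midi1 */
        { "sensitivity", RANGE, 2 },    /* was midi2 */
    };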

Even better, if we have some description of the structure of the
controller - typically called a "profile" - we could already guide the
user towards suitable controls. And we could emulate controls that
don't exist in the desired form, e.g., turn a pulse button into a
toggle or a rotary encoder into a potentiometer.
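
Emulating such controls should be cheap. For instance, a pulse button
could be turned into a toggle with a bit of edge detection in the input
layer (again just a sketch, with made-up names):

    static int toggle_state;    /* emulated toggle value, 0 or 1 */
    static int last_pulse;      /* previous raw button state     */

    /* Called with the raw 0/1 button state on each event. */
    static int emulate_toggle(int pulse)
    {
        if (pulse && !last_pulse)    /* rising edge: button pressed */
            toggle_state = !toggle_state;
        last_pulse = pulse;
        return toggle_state;
    }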


Visualization
-------------

Once the controls a patch desires are known, the GUI could present a
virtual control surface containing them. The user could then map them
to physical controls by arranging them and using a "learn" feature to
establish associations.

The same system could also be used to show and modify profiles and the
associations derived from them.


Limitations
-----------

A design like the one I described has a few limitations, though. First
of all, it increases overall complexity by introducing intermediate
variables. This has to be weighed against the benefit of reducing
complexity at each step of the data processing.

From an implementation point of view, separating calculations performed
on controls and the actual patch isn't necessary - a decent compiler
could find subexpressions that depend only on control inputs on its own
and treat them differently.

Second, the separation between the "pure math" of the patch and the
physical world of controls isn't perfect. The section of a patch that
contains functions like the speed(...) above would have to know about
both worlds.

One could resolve this by adding smarts to the layer that matches the
profile with a patch's requirements, so that it could be told to
combine, say, slider plus pot into a value plus sensitivity combo,
and expose only the single resulting value to the patch.

Opinions?

- Werner