[linux-audio-dev] XAP status

2002-12-11 Thread David Olofson

What's going on with headers, docs, names and stuff?


I've ripped the event system and the FX API (the one with the state() 
callback) from Audiality, and I'm shaping it up into my own XAP 
proposal. There are headers for plugins and hosts, as well as the 
beginnings of a host SDK lib. It's mostly the event system I'm 
dealing with so far.

The modified event struct:

typedef struct XAP_event
{
	struct XAP_event *next;
	XAP_timestamp    when;    /* When to process */
	XAP_ui32         action;  /* What to do */
	XAP_ui32         target;  /* Target Cookie */
	XAP_f32          value;   /* (Begin) Value */
	XAP_f32          value2;  /* End Value */
	XAP_ui32         count;   /* Duration */
	XAP_ui32         id;      /* VVID */
} XAP_event;


The "global" event pool has now moved into the host struct, and each 
event queue knows which host it belongs to. (So you don't have to 
pass *both* queue and host pointers to the macros. For host side 
code, that means you can't accidentally send events belonging to one 
host to ports belonging to another.)
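To illustrate how that hangs together, here's a rough sketch of the 
host/queue relation (names beyond XAP_event are placeholders, not the 
actual header contents):

typedef struct XAP_host
{
	XAP_event *pool;   /* Free events, shared by this host's queues */
	/* ... */
} XAP_host;

typedef struct XAP_queue
{
	XAP_host  *host;   /* Owning host - so macros need only the queue */
	XAP_event *first;  /* Timestamp ordered list of pending events */
} XAP_queue;

/* Grab a free event from the pool of the queue's own host. */
static inline XAP_event *xap_event_new(XAP_queue *q)
{
	XAP_event *e = q->host->pool;
	if(e)
		q->host->pool = e->next;
	return e;
}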


Oh, well. Time for some sleep...


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



[linux-audio-dev] Temporary XAP website

2002-12-11 Thread David Olofson

Well, this might be early, but I needed to do something slightly less 
demanding for a while. So I hacked a small presentation:

http://olofson.net/xap/


Please, check facts and language (not my native tongue), and suggest 
changes or additions.


(Oops! Clicked on dat doggy-like animal in da process... ;-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Thursday 12 December 2002 03.10, David Gerard Matthews wrote:
> David Olofson wrote:
> >That's not rude - I don't think anyone is *totally* sure about
> > this...
> >
> >Though, you might want to note (pun not intended) that I'm really
> >talking about "continuous pitch" - not note numbers, as in
> > "integer, MIDI style". You could think of the relation as
> >
> > linear_pitch = f(note_pitch)
> >
> >where f() is a function of your choice. You could write it as
> >
> > pitch = f(pitch)
> >
> >but how long would it take before the first user wonders why you
> > get 1 tone/octave if you connect a sequencer directly to a synth?
> > :-)
>
> Right, but would the *user* actually see any of this?

Yes - unless you make it possible for hosts to tell note and linear 
pitch ports apart.


> I was under
> the impression that any
> details of pitch control would be handled by the plugins
> themselves, and any mapping of
> the user's preferred frequency-based frame of reference to
> note_pitch or linear_pitch
> would be handled transparently.

It can't be *entirely* transparent, since users might actually want 
to use something other than 12tET, or even different scales in the 
same net.

However, if you just say that note_pitch and linear_pitch are 
incompatible, hosts can deal with it automatically, or simply refuse 
direct connections.

12tET-only hosts can simply insert something that does linear_pitch = 
note_pitch / 12.0, and be done with it. More sophisticated hosts 
would allow the user to select other scales.

*Any* host will be able to host a plugin that takes note_pitch and 
generates linear_pitch (that is, a scale converter plugin), as that 
will just result in connections between 100% compatible controls. 
The host doesn't need to understand what the plugin is doing.
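For the 12tET case, the conversion itself is a one-liner (a sketch 
with a made-up name; only the units matter - 1.0/note in, 1.0/octave 
out):

static float note_to_linear_12tet(float note_pitch)
{
	/* 12 equal notes per octave */
	return note_pitch / 12.0f;
}

A converter for other tunings would do a table lookup (plus 
interpolation for fractional notes) instead of the division, but the 
port types on either side stay the same.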


>  The only people who might need to
> worry about this
> would be coders,

Yes, host coders - and the few that hack pitch converters.


> and as I think someone pointed out, anyone who can
> write DSP
> code can do the conversion from whatever pitch system they wish to
> use (if any)
> to whatever pitch system XAP eventually ends up using internally.

Yes, but if you are supposed to apply some traditional music theory, 
and get 1.0/octave, what do you do? Who tells you what scale to use 
to "reverse engineer" each pitch change? (Note that you may have 
several senders on each Channel, each one controlling its own set of 
Voices. Assume the same scale?)

And suppose you're a sequencer, thinking in notes. How do you know 
what scale the user wants you to use for the note -> 1.0/octave 
conversion?


[...]
> >If you want to do the Right Thing (IMHO), you could consider
> > coding universal harmonizers and other event and/or audio
> > processors that think entirely in linear pitch. :-) (This is what
> > the VST guys told me would be very, very hard, next to
> > impossible, no one would ever want to do that, etc etc. Ok, ok!
> > Have notes, then...)
>
> I'm pretty sure I don't understand enough about DSP coding to even
> think about this.  :)  I'll leave
> that one to Steve (who seems to have no shortage of projects at the
> moment anyway.)

Well, I was thinking about doing it with controls; not audio - which 
would mean that basic physics and music theory apply. Say, you have 
fundamental notes and a melody, and you want to add suitable chords; 
that kind of stuff. (Which I normally do manually by ear, but 
anyway... :-)


> >The need is there, because the API is supposed to support
> > sequencers and event processors that are based on traditional (or
> > other note oriented) music theory. Preferably, this should be
> > possible without confusing users too much, which is why a
> > separation of pitch "before scale converter" and "after scale
> > conerter" is needed.
>
> Right, but couldn't you just have a function that maps arbitrary
> pitch systems to whatever
> ends up being the internal unit?

But that's what this *is*. It's just that instead of calling a host 
function to convert your internal note_pitch values, you send them as 
is, in the form of 1.0/note. That way, the host or the user can 
insert a plugin (or specialized host object) that performs the 
conversion. The user gets the benefits of not having scale support 
hardcoded into the host, and the plugin developer doesn't have to 
figure out what scale to ask for when converting.


> I'm pretty sure this was thrown
> around already anyway...

A few times, I think. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---

Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Gerard Matthews
David Olofson wrote:


That's not rude - I don't think anyone is *totally* sure about this...

Though, you might want to note (pun not intended) that I'm really 
talking about "continous pitch" - not note numbers, as in "integer, 
MIDI style". You could think of the relation as

	linear_pitch = f(note_pitch)

where f() is a function of your choice. You could write it as

	pitch = f(pitch)

but how long would it take before the first user wonders why you get 
1 tone/octave if you connect a sequencer directly to a synth? :-)

Right, but would the *user* actually see any of this?  I was under the 
impression that any
details of pitch control would be handled by the plugins themselves, and 
any mapping of
the user's preferred frequency-based frame of reference to note_pitch or 
linear_pitch
would be handled transparently.  The only people who might need to worry 
about this
would be coders, and as I think someone pointed out, anyone who can 
write DSP
code can do the conversion from whatever pitch system they wish to use 
(if any)
to whatever pitch system XAP eventually ends up using internally.

That said, continuous note_pitch is no more bound to notes than 
linear_pitch is to octaves. Both are *continuous*, and 1.0/note, 
1.0/octave or whatever are little more than units.

Fair enough.


2) my coding skills are still pretty rudimentary.


...but if your math and music theory are strong, you can work with 
notes (rounding...), continuous pitch or directly with linear pitch. 
If you're lazy, notes are easy, and work with traditional theory.

If you want to do the Right Thing (IMHO), you could consider coding 
universal harmonizers and other event and/or audio processors that 
think entirely in linear pitch. :-) (This is what the VST guys told 
me would be very, very hard, next to impossible, no one would ever 
want to do that, etc etc. Ok, ok! Have notes, then...)

I'm pretty sure I don't understand enough about DSP coding to even think 
about this.  :)  I'll leave
that one to Steve (who seems to have no shortage of projects at the 
moment anyway.)


The need is there, because the API is supposed to support sequencers 
and event processors that are based on traditional (or other note 
oriented) music theory. Preferably, this should be possible without 
confusing users too much, which is why a separation of pitch "before 
scale converter" and "after scale conerter" is needed.

Right, but couldn't you just have a function that maps arbitrary pitch 
systems to whatever
ends up being the internal unit?  I'm pretty sure this was thrown around 
already anyway...

You *could* do only 1.0/octave - but how logical is that when you 
have a scale converter ahead of you? 

Not very, I suppose.


Should that scale converter 
assume you're feeding it 12tET ((1/12)/octave), 1.0/note, or what? 
The whole point with note_pitch is to answer that question once and 
for all: "It's 1.0/note before, and 1.0/octave after, period."

Right
(Off to ponder.)



//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
  --- http://olofson.net --- http://www.reologica.se ---








Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
Tim Hockin wrote:

>> i'm becoming tired of discussing this matter. fine by me if 
>> you can live with a plugin system that goes only half the way 
>> towards usable event handling. 
>
>I haven't been following this issue too closely, rather waiting for some
>decision.  I have been busy incorporating other ideas.  What do you suggest
>as an alternative to an unsigned 32 bit sample-counter?

i'm using event structures with a timestamp measured in
'ticks' for all plugins. the 'tick rate' is defined for 
any point in time through a tempo map in my implementation. 
the 'tick' type is floating-point.

yes, all plugins need to issue 'host' calls if they want
to map 'tick' to 'time' or 'frame' or reverse. however,
the overhead is not palpable in terms of performance.
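
for illustration, the tick->frame mapping such a host call performs 
could look roughly like this (my own sketch of a piecewise-constant 
tempo map, not the actual implementation):

typedef struct tempo_entry
{
	double tick;   /* musical time where this segment starts */
	double frame;  /* corresponding audio frame */
	double tpf;    /* ticks per audio frame in this segment */
} tempo_entry;

/* map a tick to an audio frame, given a sorted tempo map. */
static double tick_to_frame(const tempo_entry *map, int len, double tick)
{
	int i = 0;
	while(i + 1 < len && map[i + 1].tick <= tick)
		++i;
	return map[i].frame + (tick - map[i].tick) / map[i].tpf;
}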

allow me to digress somewhat beyond the scope of the question:

event outputs are implemented as lock-free fifos. 1-n outputs 
can connect to 1 input. because events remain in the 
outbound fifos until fetched, sorting is simple as long as 
individual fifos are filled in correct order -- which hasn't 
yet proved problematic.

two strategies for block-based processors are possible:
 
* fixed blocks -- calculate 'tick' at the end of the 
  block and process all events from all inbound fifos
  that are stamped <= 'tick'.

note that in this case, only one 'tick mapping' is needed,
the rest is simply comparison. of course dividing the cycle
into subcycles for better time resolution is possible too.

* sample-accurate -- determine the next event from all
  inbound connections, map this tick to audio frames,
  process until this frame, process the event(s) found, 
  repeat until the block is complete.

yes, this introduces some overhead if lots of events are
hurled at a plugin implementing sample-accuracy. however,
this is less problematic i think, having come to believe
that good interpolation methods should be preferred over
massive event usage.
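
a sketch of the sample-accurate strategy (hypothetical helper and 
struct names; 'plugin' and 'event' stand in for whatever the real 
types are, and tick_to_frame is as above):

/* process one block: render audio up to each event's frame, apply
   the event, repeat until the block is complete. */
static void run_block(plugin *p, unsigned frames)
{
	unsigned done = 0;
	while(done < frames)
	{
		unsigned when = frames;
		event *e = peek_earliest_event(p); /* across all fifos */
		if(e)
		{
			double f = tick_to_frame(p->map, p->map_len,
					e->tick) - p->block_start;
			if(f < (double)done)
				when = done;   /* late event: handle now */
			else if(f < (double)frames)
				when = (unsigned)f;
		}
		if(when > done)
		{
			render_audio(p, done, when - done);
			done = when;
		}
		if(e && when < frames)
			apply_event(p, pop_earliest_event(p));
	}
}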

please let me go into yet more depth:

another, quite substantial, benefit of the design is that 
the fifos can be filled in one thread (midi-in for
example) and fetched from in another (audio for example).
it also allows for least-latency routing of events across 
threads.

the current manifestation of this system handles plugins 
operating on the same sets of data and events in six major 
threads, or in fact any combination of these in one plugin:

periodic:
* audio (pcm interrupt)
* low-latency, high-frequency time (rtc interrupt, midi out)
* high-latency, low-frequency time (sequencer prequeuing)

on-demand:
* midi in (in fact anything that's pollable)
* script plugins (i use python which is not rt-capable)
* disk access.

the design was chosen because i deem it to impose the
least limitations on the who and how of plugins and their
connections, and so far it hasn't failed to live up to 
this promise. 

currently it comprises midi in and -out, jack and alsa 
(duplex), event sequencing, scheduled audio playback and 
recording, ladspa units (with event-based parameter i/o), 
tempo maps (rt modifiable), a few native filters and 
oscillators, and the ability to code event-based plugins 
in python (there's even the possibility of processing audio 
with python, but it does introduce a good deal of latency).

i consider myself far from being a coding wizard. this
enumeration serves the purpose of proving that the design
i've chosen, which uses 'musical time' stamps throughout,
can in fact support a great variety of functionality, and
that this universality is a worthy goal. 

i'd also like you to understand this post as describing
the workings of my ideal candidate for a generic plugin
API, or parts thereof.

code is coming soon to a http server near you, when time
permits.

>I'd hate to lose good feedback because you got tired of it..

thanks. :)

tim




Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 23.56, David Gerard Matthews wrote:
[...]
> >The need for 1.0/note or similar arises when you want to work with
> >something like 12t without deciding on the exact tuning, and also
> >when you want to write "simple" event processor plugins that think
> > in terms of notes rather than actual pitch.
>
> Not to sound rude or anything, but I've been following this thread
> and still
> have yet to be convinced of the necessity for an internal concept of
> "note".

That's not rude - I don't think anyone is *totally* sure about this...

Though, you might want to note (pun not intended) that I'm really 
talking about "continous pitch" - not note numbers, as in "integer, 
MIDI style". You could think of the relation as

linear_pitch = f(note_pitch)

where f() is a function of your choice. You could write it as

pitch = f(pitch)

but how long would it take before the first user wonders why you get 
1 tone/octave if you connect a sequencer directly to a synth? :-)


> Disclaimers: 1) Although schooled intensively in classical
> music theory (I have even taught it at the university level), I
> consider the whole concept of "notes" a little outdated; and (more
> importantly)

Although I (still) effectively use 12tET most of the time, I agree. 
Harmonies and melodies are just about *frequencies*, and notes, 
scales etc. are just handy abstractions, built around one single 
scale that happens to be really rather popular.

That said, continuous note_pitch is no more bound to notes than 
linear_pitch is to octaves. Both are *continuous*, and 1.0/note, 
1.0/octave or whatever are little more than units.


> 2) my coding skills are still pretty rudimentary.

...but if your math and music theory are strong, you can work with 
notes (rounding...), continuous pitch or directly with linear pitch. 
If you're lazy, notes are easy, and work with traditional theory.

If you want to do the Right Thing (IMHO), you could consider coding 
uniersal harmonizers and other event and/or audio processors that 
think entirely in linear pitch. :-) (This is what the VST guys told 
me would be very, very hard, next to impossible, no one would ever 
want to do that, etc etc. Ok, ok! Have notes, then...)


> I can see the need for conversion from, say midi note numbers,

Yes, but MIDI is just a wire protocol that some "driver plugin" 
would deal with. The actual MIDI note numbers should never get into 
events at any point.


> but
> I have to admit that I still don't really see the need for an API
> to know about "notes".

The need is there, because the API is supposed to support sequencers 
and event processors that are based on traditional (or other note 
oriented) music theory. Preferably, this should be possible without 
confusing users too much, which is why a separation of pitch "before 
scale converter" and "after scale conerter" is needed.

You *could* do only 1.0/octave - but how logical is that when you 
have a scale converter ahead of you? Should that scale converter 
assume you're feeding it 12tET ((1/12)/octave), 1.0/note, or what? 
The whole point with note_pitch is to answer that question once and 
for all: "It's 1.0/note before, and 1.0/octave after, period."


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



[linux-audio-dev] XAP and these timestamps...

2002-12-11 Thread David Olofson

I (still) don't think musical time belongs in timestamps of your 
average event in XAP. Those events are meant to act as an alternative 
to audio rate controls or blockless processing. The host gives you a 
time frame to work with (expressed as a number of audio frames), and 
that's the time frame you're meant to work within. This applies to audio 
as well as events.

If an analog synth does not need timestamps at all on CV changes, why 
should our plugins?


A sequencer must obviously think in terms of transport or musical 
time, or something else that is not free running. A sequencer will 
also most probably need some form of editor to be useful. This may be 
a replica of a diode programming matrix, a piano roll, various 
parametric selection and event processing tools, or all of those. 
Thinking about this, one realizes that these tools must probably have 
more or less random access to the sequencer's database to do anything 
useful.

One might argue that the event processing tools should be the same 
thing as the real time event processor plugins. However, for that to 
be of much use, plugins would have to be able to see events from 
virtually the whole timeline, or you wouldn't be able to do 
*anything* you couldn't just as well do in real time, with audio time 
timestamps.


So, imagine a simple, function call based Sequencer DataBase API. 
Sequencer plugins could implement that, and then editors could use it 
to manipulate the events in the sequencer in any way they like. The 
API could be designed so that sequencers may make the calls RT safe, 
so the API could be used from within the RT thread of a running host. 
(Most users of audio/MIDI sequencers will definitely expect editing 
during playback to work properly!)
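
Purely as a strawman (none of these names exist anywhere; it's just 
to make the idea concrete), the core of such an API might be a struct 
of callbacks like:

typedef struct XAP_seqdb XAP_seqdb;  /* opaque; owned by the sequencer */
typedef double XAP_mtime;            /* musical time, unit TBD */

typedef struct XAP_seqdb_api
{
	/* Iterate events on a track within [start, end). */
	void *(*first)(XAP_seqdb *db, int track,
			XAP_mtime start, XAP_mtime end);
	void *(*next)(XAP_seqdb *db, void *iterator);

	/* Editing; the sequencer may queue these internally, so they
	 * can be made RT safe even during playback. */
	int (*insert)(XAP_seqdb *db, int track, XAP_mtime when,
			const XAP_event *e);
	int (*remove)(XAP_seqdb *db, int track, void *iterator);
} XAP_seqdb_api;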

It would probably be possible to implement the Sequencer DataBase API 
as a protocol over the XAP event system, but I don't think it makes 
much sense to take it any further than that - if even that far.


Now, are there *still* things you can't do with this, and if so, 
what? I'd like a list of operations and effects, or something...


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Gerard Matthews
David Olofson wrote:


On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:


Steve Harris wrote:


On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:


I can't really say I can think of a better way though. 
Personally I'd leave scales out of the API and let the host deal
with it, sticking to 1.0/octave throughout, but I can see the
advantages of this as well.

We could put it to a vote ;)

- Steve


I vote 1.0/octave.



So do I, definitely.

There has never been an argument about /octave, and there 
no longer is an argument about 1.0/octave.

The "argument" is about whether or not we should have a scale related 
pitch control type *as well*. It's really more of a hint than an 
actual data type, as you could just assume "1tET" and use both as 
1.0/octave.

The need for 1.0/note or similar arises when you want to work with 
something like 12t without deciding on the exact tuning, and also 
when you want to write "simple" event processor plugins that think in 
terms of notes rather than actual pitch.

Not to sound rude or anything, but I've been following this thread and 
still
have yet to be convinced of the necessity for an internal concept of "note".
Disclaimers: 1) Although schooled intensively in classical music theory (I
have even taught it at the university level), I consider the whole concept
of "notes" a little outdated; and (more importantly) 2) my coding skills
are still pretty rudimentary.
I can see the need for conversion from, say midi note numbers, but I
have to admit that I still don't really see the need for an API to know
about "notes".
-dgm



//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
  --- http://olofson.net --- http://www.reologica.se ---








Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 23.12, Tim Hockin wrote:
> > i'm becoming tired of discussing this matter. fine by me if
> > you can live with a plugin system that goes only half the way
> > towards usable event handling.
>
> I haven't been following this issue too closely, rather waiting for
> some decision.  I have been busy incorporating other ideas.  What
> do you suggest as an alternative to an unsigned 32 bit
> sample-counter?

It would have to be something that doesn't wrap, I think, or the only 
significant advantage I can see (being tied to the timeline instead 
of "free running time") is lost.


> I'd hate to lose good feedback because you got tired of it..

Ditto. I'm (actually) trying to figure out what I missed, so I'm 
definitely interested in finding out. (If I don't know why I'm 
implementing a feature, how the h*ll am I going to get it right...?)

As far as I can tell, you can always ask the host to convert 
timestamps between any formats you like. If you absolutely cannot 
accept plugins implementing queueing of events after the end of the 
buffer time frame, the host could provide "long time queueing" - with 
whatever timestamp format you like. (All we need is room for some 64 
bits for timestamps - and events have to be 32 bytes anyway.)

But, when is musical time in ordinary events *required*?


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Hockin
> i'm becoming tired of discussing this matter. fine by me if 
> you can live with a plugin system that goes only half the way 
> towards usable event handling. 

I haven't been following this issue too closely, rather waiting for some
decision.  I have been busy incorporating other ideas.  What do you suggest
as an alternative to an unsigned 32 bit sample-counter?

I'd hate to lose good feedback because you got tired of it..



Re: [linux-audio-dev] Re: [vst-plugins] Plugin server

2002-12-11 Thread Fernando Pablo Lopez-Lezcano
On Sun, 2002-12-08 at 16:14, Kai Vehmanen wrote:
> On Sun, 8 Dec 2002, Paul Davis wrote:
> 
> > you also haven't addressed kernel scheduling issues; the context
> > switch doesn't happen till the kernel has decided what task is going
> > to run next. if it picks the wrong one, for whatever reason, then
> > you/we lose. right now, it picks the wrong one too often, even with
> > SCHED_FIFO+mlockall.
> 
> Btw; have you tried the O(1) scheduler? It has a number of interesting
> characteristics from an audio app developer's pov [1]:

I tried it (using the Con Kolivas patches on top of 2.4.20). I still get
xruns in jackd, although that particular patchset (plus the drm lowlat
patches) seems to keep them to more reasonable values at least in a
short test I just did. I managed to also hang the machine, after running
jackd + qjackconnect + freqtweak + ams for a while I got greedy and
started ardour - created a new session and when I went to connect things
in qjackconnect the machine stopped... it still responds to alt-sysrq-b.
BTW, with latest versions of alsa the hanging problems seems to happen
less frequently so it may be some interaction between alsa and the
kernel (hangs do not seem to happen in 2.4.19).

Besides that, it looks like the O(1) scheduler is less responsive to
user interaction in certain cases. While doing an alsa driver compile
the mouse was freezing for fractions of a second at a time (this was
during the depend phase, actually compiling the modules did not have
that effect). 

[does anyone out there have a patch to schedutils to make them work
with the O(1) scheduler? mine just reports all tasks as Other]

-- Fernando





Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 22.26, Tim Goetze wrote:
> David Olofson wrote:
> >> so eventually, you'll need a different event system for
> >> plugins that care about musical time.
> >
> >No. You'll need a different event system for plugins that want to
> >look at future events.
>
> which is an added level of complexity, barring a lot of ways
> to head for plugins.

I don't even think the kind of plugins that would need musical 
timestamps to work at all, would fit very well into an API that's 
designed for block based processing. I'm concerned that merging two 
entirely different ways of thinking about audio and events into one 
will indeed be *more* complex than having two different APIs.

Some people want to keep LADSPA while adding support for XAP. Now, 
are we about to make XAP so complex that we'll need a *third* API, 
just because most synth programmers think XAP is too complex and/or 
expensive?

(Meanwhile, the Bay/Channel/Port thing is considered a big, complex 
mess... *heh*)


> >> i'm convinced it's better to design one system that works
> >> for event-only as well as audio-only plugins and allows for
> >> the mixed case, too. everything else is an arbitrary
> >> limitation of the system's capabilities.
> >
> >So, you want our real time synth + effect API to also be a
> > full-blown off-line music editing plugin API? Do you realize the
> > complexity consequences of such a design choice?
>
> a plugin that is audio only does not need to care, it simply
> asks the host for time conversion when needed. complexity is
> a non-issue here.

But it's going to be at least one host call for every event... Just 
so a few event processors *might* avoid a few similar calls?


> and talking about complexity: two discrete
> systems surely are more complex to implement than one alone.

Yes - but you ignore that just supporting musical time in timestamps 
does not solve the real problems. In fact, some problems even become 
more complicated. (See other post, on transport movements, looping, 
musical time delays etc.)


> i'm becoming tired of discussing this matter. fine by me if
> you can live with a plugin system that goes only half the way
> towards usable event handling.

This is indeed a tiresome thread...

However, I have yet to see *one* valid example of when musical time 
timestamps would help enough to justify making all other plugins call 
the host for every event. (I *did*, however, explain a 
situation where it makes things *worse*.) I have not even seen a 
*hint* towards something that would be *impossible* to do with audio 
time timestamps + host->get_musical_time() or similar.

To me, it still looks like musical time timestamps are just a 
shortcut to make a few plugins slightly easier to code - *not* an 
essential feature.

Prove me wrong, and I'll think of a solution instead of arguing 
against the feature.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
David Olofson wrote:

>> so eventually, you'll need a different event system for
>> plugins that care about musical time.
>
>No. You'll need a different event system for plugins that want to 
>look at future events.

which is an added level of complexity, barring a lot of ways 
to head for plugins.

>> i'm convinced it's better to design one system that works
>> for event-only as well as audio-only plugins and allows for
>> the mixed case, too. everything else is an arbitrary
>> limitation of the system's capabilities.
>
>So, you want our real time synth + effect API to also be a full-blown 
>off-line music editing plugin API? Do you realize the complexity 
>consequences of such a design choice?

a plugin that is audio only does not need to care, it simply
asks the host for time conversion when needed. complexity is
a non-issue here. and talking about complexity: two discrete 
systems surely are more complex to implement than one alone.

i'm becoming tired of discussing this matter. fine by me if 
you can live with a plugin system that goes only half the way 
towards usable event handling. 

tim




Re: Fwd: Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 21.50, Steve Harris wrote:
> On Wed, Dec 11, 2002 at 06:49:17 +, Nathaniel Virgo wrote:
> > Sorry, I just tend to hit "reply to all" because some lists seem
> > to be set up so that "reply" doesn't go to the list.
>
> See if your mail client has a reply-to-list feature, mutt does
> (+L).

Only found "Post to mailing list"... :-/


> > > I like the idea of enforced "explicit casting", but I think
> > > it's rather restrictive not to allow synths to take note_pitch.
> > > That would make it impossible to have synths with integrated
> > > event processors (including scale converters; although *that*
> > > might actually be a good idea)
> >
> > That would be bad.  If a synth takes note_pitch it's bound to
> > interpret it as 12tET, which would be annoying to someone trying
> > to use a different scale.  A synth could still have a built in
> > event processor, but it should only process linear_pitch events. 
> > Scale converters should definitely not be built into
>
> Agreed.

Well, I suggested scale converters in the first place, so I shouldn't 
be complaining. ;-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 21.00, Tim Hockin wrote:
> > > i'm convinced it's better to design one system that works
> > > for event-only as well as audio-only plugins and allows for
> > > the mixed case, too. everything else is an arbitrary
> > > limitation of the system's capabilities.
> >
> > So, you want our real time synth + effect API to also be a
> > full-blown off-line music editing plugin API? Do you realize the
> > complexity consequences of such a design choice?
>
> Umm, I want that.

Well, so do I, actually - but the thing has to be designed, and it 
should preferably take less than a few years to fully understand the 
API. ;-)


> I have little need for the RT features, myself. 
> I want to use this API in a FruityLoops like host, where the user
> is not bothered with making wiring decisions or RT/non-RT behavior.
>  I want to use it to develop tracks in the studio.  So far, I don't
> see anything preventing that. My host, as it evolves in my mind,
> will allow things that you won't.  You can load a new instrument at
> run time.  It might glitch.  So what.  It will certainly be usable
> live, but that is not the primary goal.

I always jam and record "live" data from MIDI or other stuff, so I 
definitely need plugins in a net to run perfectly with very low 
latency - with sequencer control, "live" control, or both.

As to loading instruments at run time, making connections and all 
that, it's not absolutely required for me, but I'd really rather be 
*able* to implement a host that can do it, should I feel like it. I 
don't think this will matter much to the design of the API. The 
details I can think of are required to support SMP systems as well, 
so it isn't even RT-only stuff.


> As for time vs. time debates, my original idea was that each block
> was based on musical time (1/100th of a quarter note or something).

That would imply a rather low resolution on the tempo control, I 
think...


>  I've been convinced that sample-accurate events are good.  That
> doesn't mean I need to change the tick-size, I think.

Of course not - but you *can* if you like. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Synth APIs, MONKEY

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 20.25, Sami P Perttu wrote:
[...]
> > That sounds a lot like a specialized event system, actually. You
> > have structured data - and that is essentially what events are
> > about.
>
> Hmm, that's one way of looking at it. I had thought of the subblock
> aspect as something that is "peeled away" to get at the continuous
> signal underneath.

A sort of combined "rendering language" and compressed format.

An event system with "set" and "ramp" events can do the same thing - 
although it does get pretty inefficient when you want to transfer 
actual audio rate data! ;-)
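
For reference, the subblock idea as I understand it would map to 
something like this - my guess at a layout, not MONKEY's actual code:

typedef enum { SB_CONSTANT, SB_LINEAR, SB_DATA } subblock_type;

typedef struct subblock
{
	subblock_type type;
	unsigned      length;    /* samples covered by this subblock */
	union
	{
		float value;                       /* SB_CONSTANT */
		struct { float start, delta; } l;  /* SB_LINEAR */
		const float *data;                 /* SB_DATA: audio rate */
	} u;
} subblock;

...which is pretty much a "set"/"ramp"/"raw data" event stream, with 
the timestamps implied by the subblock lengths.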


> > > About the cost: an expression for pitch would be evaluated,
> > > say, 100 times a second, and values in between would be
> > > linearly interpolated, so that overhead is negligible.
> >
> > I see. This is what I intend to do in Audiality later on,
> > although it will be more event centered and not "just"
> > expressions. As an alternative to the current mono, poly and
> > sequencer "patch plugins", there will be one that lets you code
> > patch plugins in a byte compiled scripting language. Timing is
> > sample accurate, but since we're dealing with "structured
> > control", there's no need to evaluate once per sample, or even
> > once per buffer. You just do what you want when you want.
>
> Sounds cool. So these would be scripts that read and write
> events..?

Yes. The same language (although interpreted) is already used for 
rendering waveforms off-line. (Optimized for quality and flexibility, 
rather than speed.) It will eventually be able to construct (or 
rather, describe) simple networks that the real time part of the 
scripts can control. Currently, the real time synth is little more 
than a sample player with an envelope generator, so there isn't much 
use for "net definition code" yet. :-)


> I also have something similar in mind but writing the
> compiler is an effort in itself.

No kidding...!


> Especially because it has to be as
> fast as possible: in MONKEY real-time control is applied by
> redefining functions. So when you turn a knob an arbitrary number
> of expressions may have to be re-evaluated or even reparsed.

I prefer to think in source->compiler->code terms, to avoid getting 
into these kinds of situations. (I guess the 8 and 16 bit ages still 
have some effect on me. ;-)


> The
> benefit is that since basically all values are given as
> expressions, the system is very flexible.

Yeah, that's a great idea. I'm not quite sure I see how that can 
result in expressions being reparsed, though. When does this happen? 

I would have thought you could just compile all expressions used in 
your net, and then plug in the compiled "code". You can create, load 
or modify a net, and then you compile and run.


> > Yes, but there is a problem with fixed control rate, even if you
> > can pick one for each expression: If you set it low, you can't
> > handle fast transients (percussion attacks and the like), and if
> > you set it high, you get constantly high CPU utilization.
> >
> > That's one of the main reasons why I prefer timestamped events:
> > One less decision to make. You always have sample accurate
> > timing when you need it, but no cost when you don't.
>
> Isn't that one more decision to make? :) What do you do in between
> events? Do you have a set of prescribed envelope shapes that you
> can choose from, or something else?

This is a good point. So far, only "set" and "linear ramp" have been 
discussed, really, and that's what some of the proprietary plugin 
APIs use. It seems to work well enough for most things, and in the 
cases where linear is insufficient for quality reasons, plugins are 
*much* better off with linear ramp input than just points with no 
implied relation to the actual signal.

Why? Well, if you consider what a plugin would have to do to 
interpolate at all, it becomes obvious that it either needs two 
points, or a time constant. Both result in a delay - and a delay that 
is not known to the host or other plugins, at that! Nor can it be 
specified in the API in any useful way (some algos are more sensitive 
than others), and agreeing on a way for plugins to tell the host 
about their control input latency may not be easy either.

Linear ramping doesn't eliminate this problem entirely, but at least, 
it lets you tell the plugin *explicitly* what kind of steepness you 
have in mind.

A ramp event effectively spans the whole time of the change, whereas 
"set" events always arrive exactly when you want the target value 
reached - and that's rather late if you want to avoid clicks! :-)
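
In code, the receiving side could be as simple as this (a sketch; the 
action codes and the per-control struct are made up, but value, value2 
and count are the fields from the event struct I posted):

typedef struct
{
	float value;  /* current control value */
	float delta;  /* per-frame increment */
} control;

static void control_event(control *c, const XAP_event *e)
{
	switch(e->action)
	{
	  case ACTION_SET:   /* hypothetical action code */
		/* Target arrives when it should already be reached;
		 * any smoothing the plugin adds is hidden latency. */
		c->value = e->value;
		c->delta = 0.0f;
		break;
	  case ACTION_RAMP:  /* hypothetical action code */
		/* The sender states the slope: go from value to
		 * value2 over 'count' frames. */
		c->value = e->value;
		c->delta = (e->value2 - e->value) / (float)e->count;
		break;
	}
}

/* ...and per frame: c->value += c->delta; */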


> > However, even relatively simple FIR filters and the like may have
> > rather expensive initialization that you cannot do much about,
> > without instantiating "something" resident when you load the
> > plugin.
>
> True; I don't have that problem yet because I only have a class
> interface, and classes can have static data.

I see. Then you actually *have* a form of load time initialization...

Re: Fwd: Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 06:49:17 +, Nathaniel Virgo wrote:
> Sorry, I just tend to hit "reply to all" because some lists seem to be set up 
> so that "reply" doesn't go to the list.

See if your mail client has a reply-to-list feature, mutt does
(+L).
 
> > I like the idea of enforced "explicit casting", but I think it's
> > rather restrictive not to allow synths to take note_pitch. That would
> > make it impossible to have synths with integrated event processors
> > (including scale converters; although *that* might actually be a good
> > idea)
> 
> That would be bad.  If a synth takes note_pitch it's bound to interpret it as 
> 12tET, which would be annoying to someone trying to use a different scale.  A 
> synth could still have a built in event processor, but it should only process 
> linear_pitch events.  Scale converters should definitely not be built into 
> synths.

Agreed.

- Steve 



Re: Fwd: Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 19.49, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 5:19 pm, David Olofson wrote:
> > (Oops. Replied to the direct reply, rather than via the list.
> > Please, don't CC me - I'm on the list! :-)
>
> Sorry, I just tend to hit "reply to all" because some lists seem to
> be set up so that "reply" doesn't go to the list.
>
> > I like the idea of enforced "explicit casting", but I think it's
> > rather restrictive not to allow synths to take note_pitch. That
> > would make it impossible to have synths with integrated event
> > processors (including scale converters; although *that* might
> > actually be a good idea)
>
> That would be bad.  If a synth takes note_pitch it's bound to
> interpret it as 12tET, which would be annoying to someone trying to
> use a different scale.

That would be a sloppy implementation, which pretends to understand 
scales, but is hardcoded to 12tET.

Either way, point taken; it's probably a good idea to at least 
*strongly* discourage people from ever touching note_pitch in synths 
and the like.


> A synth could still have a built in event
> processor, but it should only process linear_pitch events.

Yes - but you could not implement a useful arpeggiator that way, for 
example. It would do the wrong thing as soon as you're not using 
12tET anymore - and *now*, users wouldn't have a clue as to why this 
happens, because the synth *lies* and says that it cares only about 
linear_pitch...


>  Scale
> converters should definitely not be built into synths.

I think I agree, but I bet *someone* will eventually figure out a 
valid reason to do it... ;-)


> > Either way, there will *not* be a distinction between synths and
> > other plugins in the API. Steinberg did that mistake, and has
> > been forced to correct it. Let's not repeat it.
>
> I wasn't thinking so much of an API distinction as a very
> well-documented convention.  Also I was thinking more of the
> distinction being between scale-related event processors and
> everything else, rather than synths and everything else which I
> agree would be bad.

Ok, then I agree.


> You could enforce it with rules like "if it's got a note_pitch
> input port it's not allowed to have any other kind of port, except
> in the case of a plugin with one note_pitch input and one
> linear_pitch output, which is a scale converter" - but there might
> be the odd case where these rules don't make sense.

Yeah. We could "strongly suggest" things, but officially saying "you 
cannot do this" about things that are physically possible is 
dangerous. Host coders might actually take your word for it!

And then everyone goes "Hmm... That's actually useful, after all." 
Bang! VST host incompatibilities reinvented... *heh*


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 19.42, Tim Hockin wrote:
> > delays based on musical time do, whatever you like to call
> > it.
>
> I always assumed that tempo-delays and things would just ask the
> host for the musical time at the start of each buffer.

That's a hack that works ok in most cases, but it's not the Right 
Thing(TM), if you're picky.


> With
> sample-accurate events, the host can change tempo even within a
> buffer.

Yes. And it can also slide the tempo smoothly by changing it once per 
sample. To be entirely safe, you must take that into account.


>  If a plugin is concerned with muscial time, perhaps it
> should ask for the musical time at the start and end of the buffer.
>  If the musical time stamp the plugin wants is within the buffer,
> it can then find it and act.

Yes, that could work...


> This breaks down, though when the host can do a transport
> mid-buffer, and sample-accuracy permits that.

Yes.


> Perhaps plugins that
> care about musical time should receive events on their 'tempo'
> control.  Tempo changes then become easy.

Great idea!

For MAIA, I once had the idea of sending musical time events - but 
that would have been rather useless, as the host/timeline 
plugin/whatever couldn't sensibly send more than one event per buffer 
or something, or the system would be completely flooded.

However, tempo changes would only occur once in a while in the vast 
majority of songs, and even if the host limits the number of tempo 
change events to one every N samples, plugins can still work with 
musical time with very high accuracy.
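
Something like this, basically (a sketch with made-up names; the tempo 
arrives as an ordinary control event):

typedef struct
{
	double tempo;  /* beats per second, say */
	double beat;   /* accumulated musical time */
	double srate;  /* audio sample rate */
} tempo_tracker;

/* Advance musical time over a fragment of audio between events. */
static void tt_advance(tempo_tracker *t, unsigned frames)
{
	t->beat += t->tempo * (double)frames / t->srate;
}

/* Handle a tempo change event. */
static void tt_tempo_event(tempo_tracker *t, float beats_per_second)
{
	t->tempo = beats_per_second;
}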

And there's another major advantage with tempo: looping unavoidably 
means a "skip" in musical time, but it does *not* have to mean one in 
tempo. If your whole song is at 120 BPM, you'll probably want 
arpeggiators and stuff to work with that even if you loop.

This is not the whole answer, though. As an example, you'll probably 
want an arpeggiator to be able to lock to musical *time*; not just 
tempo. That is, tempo is not enough for all plugins. Some will also 
have to stay in sync with the timeline - and this should preferably 
work even if you loop at weird points, or just slap the transport 
around a bit.


> Transports are still
> smarmy.

They always are. To make things simple, we can just say that plugins 
are not really expected to deal with time running backwards, jumping 
at "infinite" speed and that kind of stuff.

However, it would indeed be nice if plugins (the ones that care about 
musical time, that is) could handle looping properly.


> Is it sane to say 'don't do a transport mid-buffer' to the
> host developers?

I don't think that helps. Properly implemented plugins will work (or 
be confused) no matter when you do the transport operation.

Don't think too much in terms of buffers in relation to timestamps. 
It only inspires to incorrect implementations. :-)


So, what's the Right Thing(TM)?

Well, if you have an event and want it in musical time, ask the host 
to translate it.

If you want the audio time for a certain point on the musical 
timeline, same thing; ask the host. In this case, it might be 
interesting to note that the host may not at all be able to give you 
a reliable answer, if you ask about the future! How could it, when 
the user can change the tempo or mess with the transport at any time?


Now, if you want to delay an event with an *exact* amount, expressed 
as musical time, translate the event's timestamp into musical time, 
add the delay value, and then ask the host about the resulting audio 
time. If it's within the current buffer; fine - send it. If it's not, 
you'll have to put it on hold and check later.

There are at least two issues with doing it this way, though:

* You *will* have to check the order of events on your
  outputs, since musical time is not guaranteed to be
  monotonic.

* You'll have to decide what to do when you generate an
  event that ends up beyond the end of a loop in musical
  time. Since you cannot really know this, the "correct"
  way would be to just accept that the event will never
  be sent.
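
A sketch of that "exact" approach (the conversion calls and structs 
are placeholders for whatever the host will actually provide):

/* Delay one incoming event by 'delay' beats of musical time. */
static void delay_event(plugin *p, const XAP_event *in, double delay)
{
	double beat = audio_to_musical(p->host, in->when) + delay;
	double when = musical_to_audio(p->host, beat);

	if(when < p->block_end)
		send_event(p->out, in, (XAP_timestamp)when);
	else
		hold_event(p, in, beat);  /* re-check in later buffers;
		                             the target audio time may
		                             move if the tempo changes
		                             or the transport jumps */
}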

So, if you only want an exact delay, you're probably *much* better off 
just keeping track of the tempo. It's so much easier, and 
automatically results in behavior that makes sense to the vast 
majority of users.


It's more complicated with a timeline synchronized arpeggiator, which 
*has* to keep track of the timeline, and not just the tempo. Sticking 
with the tempo idea and adding a PLL that locks the phase of the 
internal metronome to the timeline would probably be a better idea.

And no, timestamps in musical time would not help, because they don't 
automatically make anyone understand which events belong together. 
Even if they would, the *sender* of the events would normally know 
best what is sensible to do in these "timeline skip" situations. You 
would not be able to avoid hanging notes after looping and that kind 
of thing.

Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Hockin
> > i'm convinced it's better to design one system that works
> > for event-only as well as audio-only plugins and allows for
> > the mixed case, too. everything else is an arbitrary
> > limitation of the system's capabilities.
> 
> So, you want our real time synth + effect API to also be a full-blown 
> off-line music editing plugin API? Do you realize the complexity 
> consequences of such a design choice?

Umm, I want that.  I have little need for the RT features, myself.  I want
to use this API in a FruityLoops like host, where the user is not bothered
with making wiring decisions or RT/non-RT behavior.  I want to use it to
develop tracks in the studio.  So far, I don't see anything preventing that.
My host, as it evolves in my mind, will allow things that you won't.  You
can load a new instrument at run time.  It might glitch.  So what.  It will
certainly be usable live, but that is not the primary goal.

As for time vs. time debates, my original idea was that each block was based
on musical time (1/100th of a quarter note or something).  I've been
convinced that sample-accurate events are good.  That doesn't mean I need to
change the tick-size, I think.

Tim



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 18.54, Tim Goetze wrote:
> David Olofson wrote:
> >On Wednesday 11 December 2002 15.25, Tim Goetze wrote:
> >> David Olofson wrote:
> >> >So, sort them and keep track of where you are. You'll have to
> >> > sort the events anyway, or the event system will break down
> >> > when you send events out-of-order. The latter is what the
> >> > event processing loop of every plugin will do, BTW - pretty
> >> > trivial stuff.
> >>
> >> what you describe here has a name: it's called queuing.
> >
> >Of course. But it doesn't belong in the event system, except
> > possibly as a host or SDK service that some plugins *may* use if
> > they like. Most plugins will never need this, so I think it's a
> > bad idea to force that overhead into the basic event system.
>
> above, you claim that you need queuing in the event system,
> and that it is 'pretty trivial stuff', in 'every plugin'.
> now you say you don't want to 'force that overhead'.

I did not say that; read again. I was referring to "the latter" - 
that is "keep track of where you are".

That is, look at the timestamp of the next event, to see whether or 
not you should handle the event *now*, or do some audio processing 
first. The second case implies that you may hit the frame count of 
the current buffer before it's time to execute that next event.


Either way, this is not the issue. Allowing plugins to send events 
that are meant to be processed in future buffers is, and this is 
because it requires that you timestamp with musical time in order to 
handle tempo changes correctly. *That* is what I want to avoid.


> >> >Do event processors posses time travelling capabilites?
> >>
> >> delays based on musical time do, whatever you like to call
> >> it.
> >
> >Then they cannot work within the real time net. They have to be an
> >integral part of the sequencer, or act as special plugins for the
> >sequencer and/or the editor.
>
> so eventually, you'll need a different event system for
> plugins that care about musical time.

No. You'll need a different event system for plugins that want to 
look at future events.


> and what if you come
> to the point where you want an audio plugin that needs to
> handle musical time, or prequeued events? you'll drown in
> 'special case' handling code.

Can you give me an example? I think I'm totally missing the point.


> i'm convinced it's better to design one system that works
> for event-only as well as audio-only plugins and allows for
> the mixed case, too. everything else is an arbitrary
> limitation of the system's capabilities.

So, you want our real time synth + effect API to also be a full-blown 
off-line music editing plugin API? Do you realize the complexity 
consequences of such a design choice?


> using audio frames as the basic unit of time in a system
> producing music is like using specific device coordinates
> for printing. they used to do it in the dark ages, but
> eventually everybody agreed to go independent of device
> limitations.

Expressing coordinates in a document is trivial compared to the 
interaction between plugins in a network. Printing protocols are 
rather similar to document formats, and not very similar at all to 
something that would be used for real time interaction between units 
in a net. But that's besides the point, really...

To make my point clear:

We might alternatively do away with the event system altogether, and 
switch to blockless processing. Then it becomes obvious that musical 
time, as a way of saying when something is supposed to happen, makes 
sense only inside the sequencer. Synths and effects would not see any 
timestamps *at all*, so there could be no argument about the format 
of timestamps in the plugin API.

As to plugins being *aware* of musical time, that's a different 
matter entirely.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 19.32, Tim Hockin wrote:
> > This is *exactly* why I'm proposing the use of a structured text
> > file that matches the structure of the plugin's "exported" names.
> > The *structure* is what you go by; not the actual words. A host
> > would not even have to ask the plugin for the english names, but
> > just look up the corresponding position in the XML and get the
> > name from there instead.
>
> You've basically described gettext(), which happens to be the
> standard way of doing localization in most code.

Yes, then there is no problem, is there?

[...]
> I had thought about making recommendations about this, but decided
> there was no need.  Plugins don't display anything themselves
> (modulo UIs which we haven't even breached, and let's not, yet). 
> Leave localization to the host and the translators.

Exactly.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Synth APIs, MONKEY

2002-12-11 Thread Sami P Perttu
On Wed, 11 Dec 2002, David Olofson wrote:

> > Well, in MONKEY I have done away with separate audio and control
> > signals - there is only one type of signal. However, each block of
> > a signal may consist of an arbitrary number of consecutive
> > subblocks. There are three types of subblocks: constant, linear and
> > data. A (say) LADSPA control signal block is equivalent to a MONKEY
> > signal block that has one subblock which is constant and covers the
> > whole block. Then there's the linear subblock type, which specifies
> > a value at the beginning and a per-sample delta value. The data
> > subblock type is just audio rate data.
>
> That sounds a lot like a specialized event system, actually. You have
> structured data - and that is essentially what events are about.

Hmm, that's one way of looking at it. I had thought of the subblock aspect
as something that is "peeled away" to get at the continuous signal
underneath.

> > About the cost: an expression for pitch would be evaluated, say,
> > 100 times a second, and values in between would be linearly
> > interpolated, so that overhead is negligible.
>
> I see. This is what I intend to do in Audiality later on, although it
> will be more event centered and not "just" expressions. As an
> alternative to the current mono, poly and sequencer "patch plugins",
> there will be one that lets you code patch plugins in a byte compiled
> scripting language. Timing is sample accurate, but since we're
> dealing with "structured control", there's no need to evaluate once
> per sample, or even once per buffer. You just do what you want when
> you want.

Sounds cool. So these would be scripts that read and write events..? I
also have something similar in mind but writing the compiler is an effort
in itself. Especially because it has to be as fast as possible: in MONKEY
real-time control is applied by redefining functions. So when you turn a
knob an arbitrary number of expressions may have to be re-evaluated or
even reparsed. The benefit is that since basically all values are given as
expressions, the system is very flexible.

> Yes, but there is a problem with fixed control rate, even if you can
> pick one for each expression: If you set it low, you can't handle
> fast transients (percussion attacks and the like), and if you set it
> high, you get constantly high CPU utilization.
>
> That's one of the main reasons why I prefer timestamped events: One 
> less decision to make. You always have sample accurate timing when 
> you need it, but no cost when you don't.

Isn't that one more decision to make? :) What do you do in between events?
Do you have a set of prescribed envelope shapes that you can choose from,
or something else?

> However, even relatively simple FIR filters and the like may have
> rather expensive initialization that you cannot do much about,
> without instantiating "something" resident when you load the plugin.

True; I don't have that problem yet because I only have a class interface,
and classes can have static data.

> > standard block-based processing, though. Yes, sample accurate
> > timing is implemented: when a plugin is run it is given start and
> > end sample offsets.
>
> As in "start processing HERE in your first buffer", and similarly for
> the last buffer? Couldn't that be handled by the host, through "buffer
> splitting", to avoid explicitly supporting that in every plugin?

No, as in "process this block from offset x to offset y". The complexity
is hidden inside an iterator - plugins can mostly ignore it. The clever
plugin writer can also parameterize her processing for different subblock
types via C++ templates, etc.

> It's probably time to start working on a prototype, as a sanity check
> of the design. Some things are hard to see until you actually try to
> implement something.

Especially when it comes to the user interface. Ever since I started to
design the GUI I have found myself evaluating features based more on their
value to the user and less on their technical merits.

--
Sami Perttu   "Flower chase the sunshine"
[EMAIL PROTECTED]   http://www.cs.helsinki.fi/u/perttu




Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 18.26, Steve Harris wrote:
> On Wed, Dec 11, 2002 at 04:35:16 +0100, David Olofson wrote:
> > > Maybe. My objection to converters is more that they imply two
> > > parallel representations of frequency (in the broad sense of
> > > the word), which seems like a mistake.
> >
> > They are not parallel. One actually *is* frequency, while the
> > other expresses pitch in relation to a scale.
> >
> > It's like comparing inline code with calls through function
> > pointers, basically.
>
> I don't see how, it's more like having a string and int
> representation of the same thing.

No, not unless the string representation is supposed to translate in 
a number of different ways, depending on what character table you use.

In code, for linear_pitch:

actual_pitch = linear_pitch;


whereas for note_pitch (simplified; no interpolation):

actual_pitch = scale[note_pitch];


There is only one valid relation between actual pitch and linear 
pitch, while the relation between actual pitch and *note* pitch is 
user defined.
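
A minimal C sketch of that lookup, with the interpolation omitted 
above handled linearly (the function and argument names are invented 
for illustration, not taken from any proposed header):

/* Hypothetical scale converter core: map scale-relative note_pitch
 * onto linear_pitch (1.0/octave) using a user-defined scale table,
 * interpolating between steps for fractional note values.
 * Assumes the table has at least two entries. */
static float note_to_linear(const float *scale, int steps,
                            float note_pitch)
{
	int i = (int)note_pitch;
	float frac = note_pitch - (float)i;

	if (i < 0) {
		i = 0;
		frac = 0.0f;
	} else if (i >= steps - 1) {
		i = steps - 2;
		frac = 1.0f;
	}
	return scale[i] + frac * (scale[i + 1] - scale[i]);
}

The scale[] table is user data; linear pitch needs no table at all.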


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Plugin APIs (again)

2002-12-11 Thread Pascal Haakmat
11/12/02 18:41, Kjetil S. Matheussen wrote:

> > This is VERY important in my worldview.  Assuming the work being done on a
> > VST hack on wine, a VST wrapper plugin or a LADSPA wrapper plugin makes all
> > those bits of code available.
> 
> The vst ladspa plugin just makes vst plugins appear as ladspa plugins, so
> that's not a problem.

I'm sorry if this is old news but I'm new to this list... Is there in
fact such a beast as a LADSPA -> VST plugin (however experimental)?

Pascal.



Re: [linux-audio-dev] The beginnings of an ladcca manual

2002-12-11 Thread Paul Winkler
On Wed, Dec 11, 2002 at 04:31:24PM +, Bob Ham wrote:
> of ladcca will have libladcca under the LGPL license.  I do still not want
> cubase if it's under proprietary license, and I do still very much fear a
> linux-audio-dev world dominated by proprietary licenses, but libladcca under
> the GPL will probably make things worse for free audio software, I see this
> now.  Oh the woes of a proprietary software world :)

Ogg is an interesting example of these issues. The Ogg team were
worried that GPL would hurt their ability to get Ogg widely adopted,
which was clearly a priority given the icky license issues around
MP3.  They ended up with:

- the Ogg Vorbis spec is in the public domain.
- the "reference" encoder and decoder software are under the GPL.
- the libraries and SDKs developed by the Ogg team are under
  a BSDish license.
 
I think they did a pretty good job of weighing the issues
and coming up with something that optimally promotes open
standards and free software.

-- 

Paul Winkler
http://www.slinkp.com
"Welcome to Muppet Labs, where the future is made - today!"



Re: Fwd: Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Nathaniel Virgo
On Wednesday 11 December 2002 5:19 pm, David Olofson wrote:
> (Oops. Replied to the direct reply, rather than via the list. Please, 
> don't CC me - I'm on the list! :-)

Sorry, I just tend to hit "reply to all" because some lists seem to be set up 
so that "reply" doesn't go to the list.

> I like the idea of enforced "explicit casting", but I think it's
> rather restrictive not to allow synths to take note_pitch. That would
> make it impossible to have synths with integrated event processors
> (including scale converters; although *that* might actually be a good
> idea)

That would be bad.  If a synth takes note_pitch it's bound to interpret it as 
12tET, which would be annoying to someone trying to use a different scale.  A 
synth could still have a built in event processor, but it should only process 
linear_pitch events.  Scale converters should definitely not be built into 
synths.

> Either way, there will *not* be a distinction between synths and
> other plugins in the API. Steinberg made that mistake, and has been 
> forced to correct it. Let's not repeat it.

I wasn't thinking so much of an API distinction as a very well-documented 
convention.  Also I was thinking more of the distinction being between 
scale-related event processors and everything else, rather than synths and 
everything else which I agree would be bad.

You could enforce it with rules like "if it's got a note_pitch input port 
it's not allowed to have any other kind of port, except in the case of a 
plugin with one note_pitch input and one linear_pitch output, which is a 
scale converter" - but there might be the odd case where these rules don't 
make sense.  

> > If you have an algorithm that needs
> > to know something about the actual pitch rather than position on a
> > scale then it should operate on linear_pitch instead.
>
> Yes indeed - that's what note_pitch vs linear_pitch is all about.
>
> > I think that
> > in this scheme note_pitch and linear_pitch are two completely
> > different things and shouldn't be interchangeable.
>
> You're right. Allowing implicit casting in the 1tET case is a pure
> performance hack.
>
> > That way you
> > can enforce the correct order of operations:
> >
> > Sequencer
> >
> > | note_pitch signal
> >
> > V
> > scaled pitch bend (eg +/- 2 tones) /
> > arpeggiator / shift along scale /
> > other scale-related effects
> >
> > | note_pitch signal
> >
> > V
> > scale converter (could be trivial)
> >
> > | linear_pitch signal
> >
> > V
> > portamento / vibrato /
> > relative-pitch arpeggiator /
> > interval-preserving transpose /
> > other frequency-related effects
> >
> > | linear_pitch signal
> >
> > V
> >   synth
> >
> > That way anyone who doesn't want to worry about notes and scales
> > can just always work in linear_pitch and know they'll never see
> > anything else.
>
> Yes. But anyone who doesn't truly understand all this should not go
> into the advanced options menu and check the "Allow implicit casting
> of note_pitch into linear_pitch" box.
>
> So, I basically agree with you. I was only suggesting a host side
> performance hack for 1.0/octave diehards. It has nothing to do with
> the API.
>
>
> //David Olofson - Programmer, Composer, Open Source Advocate
>
> .- The Return of Audiality! .
>
> | Free/Open Source Audio Engine for use in Games or Studio. |
> | RT and off-line synth. Scripting. Sample accurate timing. |
>
> `---> http://olofson.net/audiality -'
> .- M A I A -.
>
> |The Multimedia Application Integration Architecture|
>
> `> http://www.linuxdj.com/maia -'
>--- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Hockin
> delays based on musical time do, whatever you like to call
> it.

I always assumed that tempo-delays and things would just ask the host for
the musical time at the start of each buffer. With sample-accurate events,
the host can change tempo even within a buffer.  If a plugin is concerned
with musical time, perhaps it should ask for the musical time at the start
and end of the buffer.  If the musical time stamp the plugin wants is within
the buffer, it can then find it and act.
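
A rough sketch of that computation, assuming the host can report the
musical time at the first frame and at the end of the block (all names
here are invented for illustration):

/* Map a target musical time (in beats) to a frame offset within the
 * current block, assuming tempo is effectively constant across it.
 * Returns -1 if the target does not fall inside this block. */
static int frame_for_beat(double beat_start, double beat_end,
                          unsigned frames, double target)
{
	double span = beat_end - beat_start;

	if (span <= 0.0 || target < beat_start || target >= beat_end)
		return -1;
	return (int)(((target - beat_start) / span) * (double)frames);
}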

This breaks down, though, when the host can do a transport mid-buffer, and
sample-accuracy permits that.  Perhaps plugins that care about musical time
should receive events on their 'tempo' control.  Tempo changes then become
easy.  Transports are still smarmy.  Is it sane to say 'don't do a transport
mid-buffer' to the host developers?

Tim



Re: [linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread Tim Hockin
> This is *exactly* why I'm proposing the use of a structured text file 
> that matches the structure of the plugin's "exported" names. The 
> *structure* is what you go by; not the actual words. A host would not 
> even have to ask the plugin for the english names, but just look up 
> the corresponding position in the XML and get the name from there 
> instead.

You've basically described gettext(), which happens to be the standard way
of doing localization in most code.

The plugin can be aware of it, if they want, or not.  Let's assume not, and
they just get translated.

Plugin has English strings.
Host reads English strings.
Host examines some environment variables set by user to ID their locale.
Host looks up English string in the $locale specific hash.
English works.
Other Latin languages work.
Other character set languages work, if no one tries to index into strings.
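
A host-side sketch of what that flow boils down to with GNU gettext
(the "xaphost" domain and the control label are made up for the example):

#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(s) gettext(s)

int main(void)
{
	setlocale(LC_ALL, "");		/* pick up the user's locale */
	bindtextdomain("xaphost", "/usr/share/locale");
	textdomain("xaphost");

	/* The plugin's English string is the lookup key; if there is
	 * no translation, gettext() just returns the key unchanged. */
	printf("%s\n", _("Cutoff Frequency"));
	return 0;
}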

I had thought about making recommendations about this, but decided there was
no need.  Plugins don't display anything themselves (modulo UIs which we
haven't even broached, and let's not, yet).  Leave localization to the host
and the translators.

Tim



Re: [linux-audio-dev] Plugin APIs (again)

2002-12-11 Thread Tim Hockin
> > This is VERY important in my worldview.  Assuming the work being done on a
> > VST hack on wine, a VST wrapper plugin or a LADSPA wrapper plugin makes all
> > those bits of code available.
> 
> The vst ladspa plugin just makes vst plugins appear as ladspa plugins, so
> that's not a problem.

Seeing the VST hack made me smile ear to ear.  All I want is to be able to
make my music in Linux.  I have some VST instruments and effects that I
LOVE.  It was the next major hurdle for me - to get Wine to run VSTs.  And
you beat me to it.

:)

Tim



Re: [linux-audio-dev] The beginnings of an ladcca manual

2002-12-11 Thread Bob Ham
On Tue, Dec 10, 2002 at 09:25:15PM +, Nathaniel Virgo wrote:

> programs.  This means that if a commercial program comes along it won't be 
> able to use the library, and anyone who wanted to use that program would have 
> to manually keep loads of files in sync like we do now.  

Like I said before, I'm very well aware of what it means for the library to
be under the GPL.  I'm aware that proprietary (who says that free software
isn't "commercial"?  Have a look at Red Hat's stock quotes) programs are not
allowed to link to libladcca.  Like I said, this is *why* I chose to release
under the GPL.  The issue is: what is best for free audio software (on
linux or any other system.)  Will it aid free audio software by having
proprietary applications like, eg cubase or cakewalk or reason, ported
to a free operating system like gnu/linux, and remain proprietary?  I think
not.  If cubase was ported to gnu/linux, its users would still be told
"you cannot change this program."  What good will this do free audio
software?  It may bring more users in to using other free software such as
jack, portaudio, ardour, sweep, alsa, etc, but if such a thing ever happened,
I very much fear that it would make the linux-audio-dev world a place where
proprietary licenses were dominant.  Do I want cubase to be ported to gnu/linux?
No, I do not.  Not if it's under a proprietary license.  Of course, I
realise that I am (unfortunately for software freedom) in a minority here. 
I also realise that the license that libladcca is released under will have
little or no effect on the issue, but I'm not quite prepared to put a stamp
of approval on proprietary audio software, either, and that's what I would
feel like I was doing if I released ladcca under the LGPL.  Regardless, this
is a non-issue for the moment, as, thankfully, freely licensed audio software
abounds atm. If/when the situation changes, it may be prudent to readdress the
issue, but for the moment, I think the GPL will stay.

Having just reread the above, and thinking back to the reasons why glibc was
released under the LGPL, I have, in fact, concluded that I am wrong.  I'll
leave it up there anyway as it's an interesting argument.  The next release
of ladcca will have libladcca under the LGPL license.  I do still not want
cubase if it's under proprietary license, and I do still very much fear a
linux-audio-dev world dominated by proprietary licenses, but libladcca under
the GPL will probably make things worse for free audio software, I see this
now.  Oh the woes of a proprietary software world :)

Anyway, back to hacking :)

Bob



Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Frank Barknecht
Antti Boman schrieb:
> Antti Boman wrote:
> > Frank Barknecht wrote:
> > 
> >> To wet your appetite: I really should finish my PD quicktoot, which
> >> even in its current unfinished form is longer than three standard
> >> quicktoots :(
> > 
> > You wet my appetite so that I have to ask if there's a version online 
> > for a quick look beforehand. A question mark.
> 
> Uh, funny, doubled the mistake of wetting and not whetting.
Ooops, my fault. I didn't know that those were different "w[h]et"s... 

Regarding the quicktoot: It explains how my angriff-drummachine for Pd
works and was built. You could try to find out yourself at my site,
http://footils.org until I really do finish the Toot.

ciao
-- 
Frank Barknecht 



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
David Olofson wrote:

>On Wednesday 11 December 2002 15.25, Tim Goetze wrote:
>> David Olofson wrote:
>> >So, sort them and keep track of where you are. You'll have to sort
>> >the events anyway, or the event system will break down when you
>> > send events out-of-order. The latter is what the event processing
>> > loop of every plugin will do, BTW - pretty trivial stuff.
>>
>> what you describe here has a name: it's called queuing.
>
>Of course. But it doesn't belong in the event system, except possibly 
>as a host or SDK service that some plugins *may* use if they like. 
>Most plugins will never need this, so I think it's a bad idea to 
>force that overhead into the basic event system.

above, you claim that you need queuing in the event system,
and that it is 'pretty trivial stuff', in 'every plugin'. 
now you say you don't want to 'force that overhead'. 

>> >Do event processors possess time travelling capabilities?
>>
>> delays based on musical time do, whatever you like to call
>> it.
>
>Then they cannot work within the real time net. They have to be an 
>integral part of the sequencer, or act as special plugins for the 
>sequencer and/or the editor.

so eventually, you'll need a different event system for 
plugins that care about musical time. and what if you come 
to the point where you want an audio plugin that needs to 
handle musical time, or prequeued events? you'll drown in
'special case' handling code.

i'm convinced it's better to design one system that works
for event-only as well as audio-only plugins and allows for
the mixed case, too. everything else is an arbitrary 
limitation of the system's capabilities.

using audio frames as the basic unit of time in a system
producing music is like using specific device coordinates 
for printing. they used to do it in the dark ages, but 
eventually everybody agreed to go independent of device 
limitations.

tim




Re: [linux-audio-dev] Plugin APIs (again)

2002-12-11 Thread Kjetil S. Matheussen


On Mon, 9 Dec 2002, Tim Hockin wrote:

> > Well, why would you ever want to *change* the number of Bays of a
> > plugin? Well, consider a plugin that wraps other plugins... If
>
> This is VERY important in my worldview.  Assuming the work being done on a
> VST hack on wine, a VST wrapper plugin or a LADSPA wrapper plugin makes all
> those bits of code available.

The vst ladspa plugin just makes vst plugins appear as ladspa plugins, so
that's not a problem.


> The VST stuff may be useless in RT,

No, it's definitely not useless in RT. Works very well. :)


-- 




Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 04:35:16 +0100, David Olofson wrote:
> > Maybe. My objection to converters is more that they imply two
> > parallel representations of frequency (in the broad sense of the
> > word), which seems like a mistake.
> 
> They are not parallel. One actually *is* frequency, while the other 
> expresses pitch in relation to a scale.
> 
> It's like comparing inline code with calls through function pointers, 
> basically.

I don't see how, it's more like having a string and int representation of
the same thing.

- Steve 



Fwd: Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
(Oops. Replied to the direct reply, rather than via the list. Please, 
don't CC me - I'm on the list! :-)

--  Forwarded Message  --

Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:05:57 +0100
From: David Olofson <[EMAIL PROTECTED]>
To: Nathaniel Virgo <[EMAIL PROTECTED]>

On Wednesday 11 December 2002 17.50, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 3:41 pm, David Olofson wrote:
> > On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> > > I can't really say I can think of a better way though.
> > > Personally I'd leave scales out of the API and let the host
> > > deal with it, sticking to 1.0/octave throughout, but I can see
> > > the advantages of this as well.
> >
> > Problem with letting the host worry about it is that the host
> > would normally not understand anything of this whatsoever, since
> > the normal case would be that a sequencer *plugin* controls the
> > synths. It would be a hack.
>
> Oh.  Well, when I said host I meant sequencer.

I see. Well, either way, I still prefer thinking of scale converters
as something I may just plug in, rather than waiting for my favourite
sequencer to support the kind of scales I want. One multichannel
event processor plugin more or less in the net won't be a disaster -
and again, you *can* use 1.0/octave in the sequencer as well as an
alternative.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Fwd: Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
(Same thing again...)

--  Forwarded Message  --

Subject: Re: [linux-audio-dev] XAP: Pitch control
Date: Wed, 11 Dec 2002 18:15:59 +0100
From: David Olofson <[EMAIL PROTECTED]>
To: Nathaniel Virgo <[EMAIL PROTECTED]>

On Wednesday 11 December 2002 18.09, Nathaniel Virgo wrote:
> On Wednesday 11 December 2002 4:29 pm, David Olofson wrote:
> > On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> > > Steve Harris wrote:
> > > >On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> > > >>I can't really say I can think of a better way though.
> > > >> Personally I'd leave scales out of the API and let the host
> > > >> deal with it, sticking to 1.0/octave throughout, but I can
> > > >> see the advantages of this as well.
> > > >
> > > >We could put it to a vote ;)
> > > >
> > > >- Steve
> > >
> > > I vote 1.0/octave.
> >
> > So do I, definitely.
> >
> > There has never been an argument about /octave, and
> > there no longer is an argument about 1.0/octave.
> >
> > The "argument" is about whether or not we should have a scale
> > related pitch control type *as well*. It's really more of a hint
> > than an actual data type, as you could just assume "1tET" and use
> > both as 1.0/octave.
>
> I don't think that should be permitted.  I think that this case
> should be handled by a trivial scale converter that does nothing.
> No synth should be allowed to take a note_pitch input, and nothing
> except a scale converter should be allowed to assume any particular
> meaning for a note_pitch input.

I like the idea of enforced "explicit casting", but I think it's
rather restrictive not to allow synths to take note_pitch. That would
make it impossible to have synths with integrated event processors
(including scale converters; although *that* might actually be a good
idea)

Either way, there will *not* be a distinction between synths and
other plugins in the API. Steinberg made that mistake, and has been
forced to correct it. Let's not repeat it.

> If you have an algorithm that needs
> to know something about the actual pitch rather than position on a
> scale then it should operate on linear_pitch instead.

Yes indeed - that's what note_pitch vs linear_pitch is all about.

> I think that
> in this scheme note_pitch and linear_pitch are two completely
> different things and shouldn't be interchangeable.

You're right. Allowing implicit casting in the 1tET case is a pure
performance hack.

> That way you
> can enforce the correct order of operations:
>
>   Sequencer
>
>   | note_pitch signal
>
>   V
>   scaled pitch bend (eg +/- 2 tones) /
>   arpeggiator / shift along scale /
>   other scale-related effects
>
>   | note_pitch signal
>
>   V
>   scale converter (could be trivial)
>
>   | linear_pitch signal
>
>   V
>   portamento / vibrato /
>   relative-pitch arpeggiator /
>   interval-preserving transpose /
>   other frequency-related effects
>
>   | linear_pitch signal
>
>   V
> synth
>
> That way anyone who doesn't want to worry about notes and scales
> can just always work in linear_pitch and know they'll never see
> anything else.

Yes. But anyone who doesn't truly understand all this should not go
into the advanced options menu and check the "Allow implicit casting
of note_pitch into linear_pitch" box.

So, I basically agree with you. I was only suggesting a host side
performance hack for 1.0/octave diehards. It has nothing to do with
the API.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Nathaniel Virgo
On Wednesday 11 December 2002 4:29 pm, David Olofson wrote:
> On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> > Steve Harris wrote:
> > >On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> > >>I can't really say I can think of a better way though.
> > >> Personally I'd leave scales out of the API and let the host deal
> > >> with it, sticking to 1.0/octave throughout, but I can see the
> > >> advantages of this as well.
> > >
> > >We could put it to a vote ;)
> > >
> > >- Steve
> >
> > I vote 1.0/octave.
>
> So do I, definitely.
>
> There has never been an argument about /octave, and there
> no longer is an argument about 1.0/octave.
>
> The "argument" is about whether or not we should have a scale related
> pitch control type *as well*. It's really more of a hint than an
> actual data type, as you could just assume "1tET" and use both as
> 1.0/octave.

I don't think that should be permitted.  I think that this case should be 
handled by a trivial scale converter that does nothing.  No synth should be 
allowed to take a note_pitch input, and nothing except a scale converter 
should be allowed to assume any particular meaning for a note_pitch input.  
If you have an algorithm that needs to know something about the actual pitch 
rather than position on a scale then it should operate on linear_pitch 
instead.  I think that in this scheme note_pitch and linear_pitch are two 
completely different things and shouldn't be interchangeable.  That way you 
can enforce the correct order of operations:

Sequencer
|
| note_pitch signal
|
V
scaled pitch bend (eg +/- 2 tones) / 
arpeggiator / shift along scale /
other scale-related effects
|
| note_pitch signal
| 
V
scale converter (could be trivial)
| 
| linear_pitch signal
|
V
portamento / vibrato / 
relative-pitch arpeggiator / 
interval-preserving transpose /
other frequency-related effects
|
| linear_pitch signal
|
V
  synth

That way anyone who doesn't want to worry about notes and scales can just 
always work in linear_pitch and know they'll never see anything else.

> The need for 1.0/note or similar arises when you want to work with
> something like 12t without deciding on the exact tuning, and also
> when you want to write "simple" event processor plugins that think in
> terms of notes rather than actual pitch.
>
>
> //David Olofson - Programmer, Composer, Open Source Advocate
>
> .- The Return of Audiality! .
>
> | Free/Open Source Audio Engine for use in Games or Studio. |
> | RT and off-line synth. Scripting. Sample accurate timing. |
>
> `---> http://olofson.net/audiality -'
> .- M A I A -.
>
> |The Multimedia Application Integration Architecture|
>
> `> http://www.linuxdj.com/maia -'
>--- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 17.17, Steve Harris wrote:
> On Wed, Dec 11, 2002 at 04:25:56 +0100, David Olofson wrote:
> > (1/12)/note makes more sense because there's /is/ something very
> > 12ey about 12tET notes (the clue's in the name ;), whereas there
> > > is nothing 12ey about octaves. At all.
> >
> > There is nothing 12ey *at all* about notes if you're into 16t...
> >
> > So, 1.0/note makes sense, (1/12)/note does *not*. :-)
>
> Well I was only talking about 12tET, if you're working in 16tET then
> it's 1/16. If you're working in a non-ET scale then it's non-trivial,
> but we know that.

It's always 1.0/note, and it's always trivial. The non-trivial 
stuff goes on in the scale converter plugin.

MIDI pitch is always MIDI pitch, which is 1/note. This is the same 
thing, only you can say

note pitch 0.5

instead of

pitch bend range +/-2; pitch bend 2048; note pitch 60

and do that independently for each note without going Universal SysEx 
or one note/channel.
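
As plain arithmetic, the MIDI form collapses to the same fractional
note value; a tiny illustrative helper (not from any proposed header):

/* Fold a MIDI note and pitch bend into one continuous note pitch
 * value. With bend_range = 2 and bend = 2048 (out of +/-8192), the
 * bend contributes 2 * 2048/8192 = 0.5 notes - the fractional offset
 * the "note pitch 0.5" form above expresses directly, per note,
 * without SysEx or one-note-per-channel tricks. */
static double midi_to_note_pitch(int note, int bend, double bend_range)
{
	return (double)note + bend_range * ((double)bend / 8192.0);
}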

You don't change the value of "1" in the MIDI protocol when using a 
MIDI scale converter. You don't change the value of 1.0 when using a 
scale converter plugin.


> Your piano argument is not really a problem as it's the piano
> mechanism that generates the off-notes, that would be done at the
> midi->pitch stage, surely?

That *is* the scale converter, if you're dealing with a "primitive" 
sampler that cannot implement this properly. But indeed, you may do 
it directly in the sampler as well. (And IMHO, that's the way you 
*should* do it, since, as I suggested, the need for this tuning is 
directly related to the sound of the instrument.)


> By the time it reaches the oscillators
> it's already been shifted.
>
> Maybe I'm thinking at a different scope to you, but I view things
> like big complex sequencers as working outside this API, for one
> thing it will have the same GUI issues as LADSPA.

A sequencer is just something that records and plays back events. A 
kind of event processor. You may integrate it with your host, or you 
may use some sequencer plugin that comes with the SDK, or whatever. 
The API shouldn't rule any of this out, or it will rule out a number 
of interesting plugins, such as phrase sequencers and virtual analog 
sequencers.


The GUI is another issue - which indeed, is something we have to 
consider for all plugins, if we're going anywhere with this API. The 
way I see it, the Control interface should be a sufficient plugin/GUI 
interface, so that all we need is a standard way of connecting 
controls to the external, non-RT applications that GUIs will normally 
be. Whatever comes in should go through the host's "preset database", 
so that the host knows the current value of every control at all 
times.

I'm thinking in terms of GUI == non-RT plugin, or rather client to 
the event system. That is, inside a XAP host, you would see GUI as an 
ordinary plugin in the net. Although it's not really physically in 
there; what you see is a gateway plugin that interfaces with the 
non-RT GUI plugin outside.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Nathaniel Virgo
On Wednesday 11 December 2002 3:41 pm, David Olofson wrote:
> On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> > I can't really say I can think of a better way though.
> > Personally I'd leave scales out of the API and let the host deal
> > with it, sticking to 1.0/octave throughout, but I can see the
> > advantages of this as well.
>
> Problem with letting the host worry about it is that the host would
> normally not understand anything of this whatsoever, since the normal
> case would be that a sequencer *plugin* controls the synths. It would
> be a hack.

Oh.  Well, when I said host I meant sequencer.



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 15.25, Tim Goetze wrote:
> David Olofson wrote:
> >So, sort them and keep track of where you are. You'll have to sort
> >the events anyway, or the event system will break down when you
> > send events out-of-order. The latter is what the event processing
> > loop of every plugin will do, BTW - pretty trivial stuff.
>
> what you describe here has a name: it's called queuing.

Of course. But it doesn't belong in the event system, except possibly 
as a host or SDK service that some plugins *may* use if they like. 
Most plugins will never need this, so I think it's a bad idea to 
force that overhead into the basic event system.

It's sort of like saying that every audio stream should have built-in 
EQ, delay and phase inversion, just because mixer plugins will need 
to implement that.


> >Do event processors possess time travelling capabilities?
>
> delays based on musical time do, whatever you like to call
> it.

Then they cannot work within the real time net. They have to be an 
integral part of the sequencer, or act as special plugins for the 
sequencer and/or the editor.


> >It sounds like you're talking about "music edit operation plugins"
> >rather than real time plugins.
>
> you want to support 'instruments', don't you? 'instruments'
> are used to produce 'music' (usually), and 'music' has a
> well-defined concept of 'time'.

Yes - and if we want to deal with *real* time, we have to accept that 
we cannot know about the future.

One may argue that you *do* know about the future when playing 
something from a sequencer, but I strongly believe that is way beyond 
the scope of an instrument API primarily meant for real time work.


> >If you just *use* a system, you won't have a clue what kind of
> >timestamps it uses.
>
> yeah, like for driving a car you don't need to know how
> gas and brakes work.

Well, you don't need to know how they *work* - only what they *do*.


> >Do you know how VST timestamps events?
>
> nope, i don't touch proprietary music software.

I see.

Either way, it's using sample frame counts.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 17.02, Sebastien Metrot wrote:
> This doesn't work most of the time because many names can have
> multiple meanings and vice versa.

This is *exactly* why I'm proposing the use of a structured text file 
that matches the structure of the plugin's "exported" names. The 
*structure* is what you go by; not the actual words. A host would not 
even have to ask the plugin for the english names, but just look up 
the corresponding position in the XML and get the name from there 
instead.


> Also you'll have to manage
> encodings correctly, and most developers are just not aware of
> what an encoding really is.

Then don't try to write a host that supports non-english languages. 
Just use what the plugins give you (english) and be happy.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 13.59, David Gerard Matthews wrote:
> Steve Harris wrote:
> >On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> >>I can't really say I can think of a better way though. 
> >> Personally I'd leave scales out of the API and let the host deal
> >> with it, sticking to 1.0/octave throughout, but I can see the
> >> advantages of this as well.
> >
> >We could put it to a vote ;)
> >
> >- Steve
>
> I vote 1.0/octave.

So do I, definitely.

There has never been an argument about /octave, and there 
no longer is an argument about 1.0/octave.

The "argument" is about whether or not we should have a scale related 
pitch control type *as well*. It's really more of a hint than an 
actual data type, as you could just assume "1tET" and use both as 
1.0/octave.

The need for 1.0/note or similar arises when you want to work with 
something like 12t without deciding on the exact tuning, and also 
when you want to write "simple" event processor plugins that think in 
terms of notes rather than actual pitch.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Synth APIs, MONKEY

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 13.14, Sami P Perttu wrote:
[...]
> > This sounds interesting and very flexible - but what's the cost?
> > How many voices of "real" sounds can you play at once on your
> > average PC? (Say, a 2 GHz P4 or someting.) Is it possible to
> > start a sound with sample accurate timing? How many voices would
> > this average PC cope with starting at the exact same time?
>
> Well, in MONKEY I have done away with separate audio and control
> signals - there is only one type of signal. However, each block of
> a signal may consist of an arbitrary number of consecutive
> subblocks. There are three types of subblocks: constant, linear and
> data. A (say) LADSPA control signal block is equivalent to a MONKEY
> signal block that has one subblock which is constant and covers the
> whole block. Then there's the linear subblock type, which specifies
> a value at the beginning and a per-sample delta value. The data
> subblock type is just audio rate data.

That sounds a lot like a specialized event system, actually. You have 
structured data - and that is essentially what events are about.


> The native API then provides for conversion between different types
> of blocks for units that want, say, flat audio data. This is
> actually less expensive and complex than it sounds.

Well, it doesn't sound tremendously expensive to me - and the point 
is that you can still accept the structured data if you can do a 
better job with that.
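
For illustration, a rough C sketch of the subblock idea as described
above (the types and names here are invented, not MONKEY's actual
interface):

enum subblock_type { SB_CONSTANT, SB_LINEAR, SB_DATA };

struct subblock {
	enum subblock_type type;
	unsigned length;	/* frames covered by this subblock    */
	float value;		/* SB_CONSTANT/SB_LINEAR: start value */
	float delta;		/* SB_LINEAR: per-sample increment    */
	const float *data;	/* SB_DATA: audio rate samples        */
};

/* Expand one subblock into flat audio rate data, for units that
 * cannot (or do not want to) handle the structured form. */
static void subblock_flatten(const struct subblock *sb, float *out)
{
	unsigned i;

	switch (sb->type) {
	case SB_CONSTANT:
		for (i = 0; i < sb->length; i++)
			out[i] = sb->value;
		break;
	case SB_LINEAR:
		for (i = 0; i < sb->length; i++)
			out[i] = sb->value + sb->delta * (float)i;
		break;
	case SB_DATA:
		for (i = 0; i < sb->length; i++)
			out[i] = sb->data[i];
		break;
	}
}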


> About the cost: an expression for pitch would be evaluated, say,
> 100 times a second, and values in between would be linearly
> interpolated, so that overhead is negligible.

I see. This is what I intend to do in Audiality later on, although it 
will be more event centered and not "just" expressions. As an 
alternative to the current mono, poly and sequencer "patch plugins", 
there will be one that lets you code patch plugins in a byte compiled 
scripting language. Timing is sample accurate, but since we're 
dealing with "structured control", there's no need to evaluate once 
per sample, or even once per buffer. You just do what you want when 
you want.


> It probably does not
> matter that e.g. pitch glides are not exactly logarithmic, a
> piece-wise approximation should suffice in most cases.

Yes, but there is a problem with fixed control rate, even if you can 
pick one for each expression: If you set it low, you can't handle 
fast transients (percussion attacks and the like), and if you set it 
high, you get constantly high CPU utilization.

That's one of the main reasons why I prefer timestamped events: One 
less decision to make. You always have sample accurate timing when 
you need it, but no cost when you don't.
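
A sketch of what that looks like in a plugin's process loop - plain
DSP between timestamps, control work only at the frames where events
actually fall (the struct and names are stand-ins, not a proposed
XAP type, and events are assumed sorted by timestamp):

struct ctl_event {
	struct ctl_event *next;
	unsigned when;		/* frame offset within this block */
	float value;
};

static void process_block(float *buf, unsigned frames,
                          const struct ctl_event *ev, float *gain)
{
	unsigned pos = 0;

	while (pos < frames) {
		unsigned end = frames;

		if (ev && ev->when < end)
			end = ev->when;

		/* plain DSP from 'pos' up to the next event (or block end) */
		for (; pos < end; pos++)
			buf[pos] *= *gain;

		/* apply all events that fall exactly at this frame */
		while (ev && pos < frames && ev->when == pos) {
			*gain = ev->value;
			ev = ev->next;
		}
	}
}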


> I'm not sure about the overhead of the whole system but I believe
> the instantiation overhead to be small, even if you play 100 notes
> a second.

Yes, the "note frequency" shouldn't be a major issue in itself; no 
need to go to extremes optimizing the handling of those events. 

However, even relatively simple FIR filters and the like may have 
rather expensive initialization that you cannot do much about, 
without instantiating "something" resident when you load the plugin.


> However, I haven't measured instantiation times, and
> there certainly is some overhead. We are still talking about
> standard block-based processing, though. Yes, sample accurate
> timing is implemented: when a plugin is run it is given start and
> end sample offsets.

As in "start processing HERE in your first buffer", and similarly for 
the last buffer? Couldn't that be handled by the host, through "buffer 
splitting", to avoid explicitly supporting that in every plugin?


> Hmm, that might have sounded confusing, but I intend to write a
> full account of MONKEY's architecture in the near future.

Ok, that sounds like an interesting read. :-)


> > You could think of our API as...
>
> It seems to be a solid design so far. I will definitely comment on
> it when you have a first draft for a proposal.

Well, the naming scheme isn't that solid... ;-)

But I think we have solved most of the hard technical problems. 
(Event routing, timestamp wrapping, addressing of synth voices, pitch 
control vs scales,...)

It's probably time to start working on a prototype, as a sanity check 
of the design. Some things are hard to see until you actually try to 
implement something.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 04:53:40 +0100, David Olofson wrote:
> That's something we might want to consider. Indeed, building names 
> into binaries means we'll actually need one binary for each language 
> (uurgh! reminds me of how Windoze "handles" languages...), but I'm 
> not sure external files are worth the extra complexity...
> 
> How about just having english in the source, and then add XML 
> translations later on, if desired? (Support is optional to host, and 
> plugins won't know about it.)

Of course RDF can handle all this cleanly and extensibly.

- Steve



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 04:25:56 +0100, David Olofson wrote:
> > (1/12)/note makes more sense because there's /is/ something very 12ey
> > about 12tET notes (the clue's in the name ;), whereas there is
> > nothing 12ey about octaves. At all.
> 
> There is nothing 12ey *at all* about notes if you're into 16t...
> 
> So, 1.0/note makes sense, (1/12)/note does *not*. :-)

Well I was only talking about 12tET, if you're working in 16tET then it's
1/16. If you're working in a non-ET scale then it's non-trivial, but we know
that.

Your piano argument is not really a problem as it's the piano mechanism that
generates the off-notes, that would be done at the midi->pitch stage,
surely? By the time it reaches the oscillators it's already been shifted.

Maybe I'm thinking at a different scope to you, but I view things like big
complex sequencers as working outside this API, for one thing it will have
the same GUI issues as LADSPA.

- Steve 



Re: [linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread Sebastien Metrot
This doesn't work most of the time because many names can have multiple
meanings and vice versa. Also you'll have to manage encodings correctly, and
most developers are just not aware of what an encoding really is.

Sebastien

- Original Message -
From: "David Olofson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, 11 December, 2002 16:53
Subject: Re: [linux-audio-dev] Re: Synth APIs, pitch control


> On Wednesday 11 December 2002 12.38, Sami P Perttu wrote:
> [...]
> > I shall have to add something like this to MONKEY. Right now it
> > supports LADSPA via a wrapper - the native API is pretty complex -
> > although creating a nice GUI based on just information in a LADSPA
> > .so is not possible, mainly due to lack of titles for enums.
>
> That's something we might want to consider. Indeed, building names
> into binaries means we'll actually need one binary for each language
> (uurgh! reminds me of how Windoze "handles" languages...), but I'm
> not sure external files are worth the extra complexity...
>
> How about just having english in the source, and then add XML
> translations later on, if desired? (Support is optional to host, and
> plugins won't know about it.)
>
> You could just hack an "application" that loads and queries plugins
> and outputs XML containing all names found in a structure
> corresponding to that of the plugin. Then you copy and translate that
> for each new language. If plugins are named .so, name these
> _.xml or something.
>




Re: [linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 12.38, Sami P Perttu wrote:
[...]
> I shall have to add something like this to MONKEY. Right now it
> supports LADSPA via a wrapper - the native API is pretty complex -
> although creating a nice GUI based on just information in a LADSPA
> .so is not possible, mainly due to lack of titles for enums.

That's something we might want to consider. Indeed, building names 
into binaries means we'll actually need one binary for each language 
(uurgh! reminds me of how Windoze "handles" languages...), but I'm 
not sure external files are worth the extra complexity...

How about just having english in the source, and then add XML 
translations later on, if desired? (Support is optional to host, and 
plugins won't know about it.)

You could just hack an "application" that loads and queries plugins 
and outputs XML containing all names found in a structure 
corresponding to that of the plugin. Then you copy and translate that 
for each new language. If plugins are named .so, name these 
_.xml or something.
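
In the same spirit, a toy sketch of what such a dump tool's output
stage could look like (plugin and control names are made up here; a
real tool would query the loaded plugin rather than a static table):

#include <stdio.h>

/* Stand-in data; a real dump tool would walk the plugin's controls. */
static const char *control_names[] = { "Cutoff Frequency", "Resonance" };

int main(void)
{
	unsigned i;

	printf("<plugin name=\"%s\">\n", "Example Filter");
	for (i = 0; i < sizeof(control_names) / sizeof(control_names[0]); i++)
		printf("  <control index=\"%u\" name=\"%s\"/>\n",
		       i, control_names[i]);
	printf("</plugin>\n");
	return 0;
}

Translators would then copy the generated file and replace the name
attributes, leaving the structure untouched.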


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> I can't really say I can think of a better way though. 
> Personally I'd leave scales out of the API and let the host deal
> with it, sticking to 1.0/octave throughout, but I can see the
> advantages of this as well.

Problem with letting the host worry about it is that the host would 
normally not understand anything of this whatsoever, since the normal 
case would be that a sequencer *plugin* controls the synths. It would 
be a hack.

As to API, all there is to it is the existence of a note_pitch hint 
for pitch controls, that suggests that the value *could* be 
interpreted as something other than 1.0/octave. If you want 
1.0/octave throughout, just ignore it and think of any pitch control 
you see as 1.0/octave.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 12.10, Steve Harris wrote:
> On Wed, Dec 11, 2002 at 01:39:12 +0100, David Olofson wrote:
> > Anyway, given that a converter plugin instance can only ever be
> > called once per buffer, and could potentially handle multiple
> > channels, I'm sure it will be quite a bit faster than host
> > callbacks when it actually matters: when you're flooded with
> > events.
>
> Maybe. My objection to converters is more that they imply two
> parallel representations of frequency (in the broad sense of the
> word), which seems like a mistake.

They are not parallel. One actually *is* frequency, while the other 
expresses pitch in relation to a scale.

It's like comparing inline code with calls through function pointers, 
basically.


> I still maintain that your average DSP programmer is capable of
> multiplying an octave number by 12 to get a 12tET MIDI note number.
> Even a VST programmer ;)

Yes, but that's not the point. And 12tET is essentially irrelevant to 
this discussion (now, even *I* say that! ;-) - it just happens to be 
what most people use most of the time. (Or is it? I'm thinking that 
there may be some rather significant cultures where this is not 
true... Unless they're all brainwashed by our 12tET music, we may 
eventually be outnumbered!)


> I said I was going to stop arguing this point! Oh well.

*hehe*


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 12.06, Steve Harris wrote:
> On Wed, Dec 11, 2002 at 01:26:01 +0100, David Olofson wrote:
> > You're missing that I'm not talking about 1.0/octave, linear
> > pitch, but 1.0/note, *note* pitch. That means
> > 1.0/note should *always* apply, and that 1.0
> > should be constant. Changing it is totally pointless, since you'd
> > still have note pitch.
> >
> > Changing the "size" of one note is about as silly as changing the
> > "size" of one octave; that's my whole point. (1/12)/note for note
> > pitch *is* just as silly as 12.0/octave for linear pitch.
>
> Well, only if you regard a note as a first class object (I don't).

I do, definitely. It's the *only* logical reference to anything in a 
scale, since you may not even have octaves. (BTW, this applies to the 
tuning of most grands. Lower octaves are tweaked downwards, while 
higher octaves are tweaked upwards. Not 100% sure why, but I suspect 
it has to do with the overtone spectra for low notes. At least, that 
seems to be why some phatt synth/techno bass sounds don't sound right 
if you play them at the "correct" pitch.)


> (1/12)/note makes more sense because there's /is/ something very 12ey
> about 12tET notes (the clue's in the name ;), whereas there is
> nothing 12ey about octaves. At all.

There is nothing 12ey *at all* about notes if you're into 16t...

So, 1.0/note makes sense, (1/12)/note does *not*. :-)


> > Some plugins think in 1.0/note, and others in 1.0/octave. If you
> > want to connect them, you'll need "something" that expresses
> > 1.0/note as 1.0/octave according to your scale of choice. Just as
> > if you were going to connect a MIDI controller to a CV synth.
>
> Just for the record I do think that having a note representation in
> the API is wrong, but I'm letting it slide. I guess I'll never write
> any code to support it anyway.

I would have agreed with you a while ago, but I think the VST guys 
have a point. Why would you *force* harmonizers, autocomp "machines" 
and the like to think in terms of linear pitch?

I personally think in semitones rather than musical scales when I 
compose and arrange (0-4-7, 0-3-7 etc; tracker arpeggio remember? 
:-), but I would think that classical tone/scale based theory is 
pretty deeply rooted in most musicians/coders. So, I'm afraid the 
general reaction to an API that doesn't understand that concept would 
be something like this:

"Huh? Not aware of *notes*!? Useless for music!"


Either way, I *do* see advantages in being able to say whether you're 
only interested in actual pitch, or "virtual tones in a scale of the 
user's choice." For example, that avoids having to re-record or edit 
everything just because you decide to change from 12tET to some other 
12t tuning.

And finally, you *can* tell the host that you want 1 note/octave, and 
use 1.0/octave throughout. No special support needed for that. (Well, 
except that hosts that like to nag about control hint 
incompatibilities would have to be told that 1 note/octave and 
1.0/octave are "compatible enough" for implicit casting.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
David Olofson wrote:

>So, sort them and keep track of where you are. You'll have to sort 
>the events anyway, or the event system will break down when you send 
>events out-of-order. The latter is what the event processing loop of 
>every plugin will do, BTW - pretty trivial stuff.

what you describe here has a name: it's called queuing.

>Do event processors possess time travelling capabilities?

delays based on musical time do, whatever you like to call
it.

>It sounds like you're talking about "music edit operation plugins" 
>rather than real time plugins.

you want to support 'instruments', don't you? 'instruments'
are used to produce 'music' (usually), and 'music' has a
well-defined concept of 'time'.

>If you just *use* a system, you won't have a clue what kind of 
>timestamps it uses.

yeah, like for driving a car you don't need to know how
gas and brakes work.

>Do you know how VST timestamps events?

nope, i don't touch proprietary music software.

tim




Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Dave Griffiths
On Wed, 11 Dec 2002 12:09:58 +, Steve Harris wrote
> On Wed, Dec 11, 2002 at 12:47:50 +0100, Dave Griffiths wrote:
> > It also means getting midi signal routing working, as currently ssm has no
> > polyphonic means of note signalling, but it's fairly trivial. The only thing
> > is that it will break the everything plugs into anything rule :(
> 
> It shouldn't have to. There are plenty of polyphonic modular
> implementations that still just have audio and control data.

I'd like to avoid having a global polyphony setting for ssm, really. It's a
bit more flexible if you can set different levels of polyphony for different
subpatches, and it's more CPU friendly too.

If anyone is interested BTW, there is a new version of ssm available at
sourceforge at the moment, which, along with a lot of other new stuff, has a
LADSPA GUI generator (thanks to Mike Rawes):
http://sourceforge.net/projects/spiralmodular

dave



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread David Gerard Matthews
Steve Harris wrote:

> On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> > I can't really say I can think of a better way though.  Personally I'd leave 
> > scales out of the API and let the host deal with it, sticking to 1.0/octave 
> > throughout, but I can see the advantages of this as well.
>
> We could put it to a vote ;)
>
> - Steve

I vote 1.0/octave.
-dgm






Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Antti Boman
Antti Boman wrote:

> Frank Barknecht wrote:
>
> > To wet your appetite: I really should finish my PD quicktoot, which
> > even in its current unfinished form is longer than three standard
> > quicktoots :(
>
> You wet my appetite so that I have to ask if there's a version online 
> for a quick look beforehand. A question mark.

Uh, funny, doubled the mistake of wetting and not whetting.

Sorry about this spam.

-a




[linux-audio-dev] Synth APIs, MONKEY

2002-12-11 Thread Sami P Perttu
> > First, I don't understand why you want to design a "synth API". If
> > you want to play a note, why not instantiate a DSP network that
> > does the job, connect it to the main network (where system audio
> > outs reside), run it for a while and then destroy it? That is what
> > events are in my system - timed modifications to the DSP network.
>
> 99% of the synths people use these days are hardcoded, highly
> optimized monoliths that are easy to use and relatively easy to host.
> We'd like to support that kind of stuff on Linux as well, preferably
> with an API that works equally well for effects, mixers and even
> basic modular synthesis.
>
> Besides, real time instantiation is something that most of us want to
> avoid at nearly any cost. It is a *very* complex thing to get right
> (ie RT safe) in any but the simplest designs.

Okay, I realize that now; maybe your approach is better. RT and really
good latency were not, and are not, the first priority in MONKEY - it's
intended more for composition, so I can afford to instantiate units
dynamically. But it's good that someone is concerned about RT.

> > However, if you want, you can define functions like C x =
> > exp((x - 9/12) * log(2)) * middleA, where middleA is another
> > function that takes no parameters. Then you can give pitch as "C 4"
> > (i.e. C in octave 4), for instance. The expression is evaluated and
> > when the event (= modification to DSP network) is instantiated it
> > becomes an input to it, constant if it is constant, linearly
> > interpolated at a specified rate otherwise. I should explain more
> > about MONKEY for this to make much sense but maybe later.
>
> This sounds interesting and very flexible - but what's the cost? How
> many voices of "real" sounds can you play at once on your average PC?
> (Say, a 2 GHz P4 or someting.) Is it possible to start a sound with
> sample accurate timing? How many voices would this average PC cope
> with starting at the exact same time?

Well, in MONKEY I have done away with separate audio and control signals -
there is only one type of signal. However, each block of a signal may
consist of an arbitrary number of consecutive subblocks. There are three
types of subblocks: constant, linear and data. A (say) LADSPA control
signal block is equivalent to a MONKEY signal block that has one subblock
which is constant and covers the whole block. Then there's the linear
subblock type, which specifies a value at the beginning and a per-sample
delta value. The data subblock type is just audio rate data.

The native API then provides for conversion between different types of
blocks for units that want, say, flat audio data. This is actually less
expensive and complex than it sounds.
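
To make that concrete, here is a rough sketch of how such a block 
could be represented. This is purely illustrative - plain C rather 
than MONKEY's actual C++, and all names are invented:

/* One signal block = a chain of subblocks, each covering a
   consecutive range of frames. Illustrative only. */
typedef enum
{
	SUBBLOCK_CONSTANT,	/* one value for the whole range */
	SUBBLOCK_LINEAR,	/* start value + per-sample delta */
	SUBBLOCK_DATA		/* plain audio rate samples */
} subblock_type;

typedef struct subblock
{
	subblock_type	type;
	unsigned	length;		/* frames covered */
	float		value;		/* CONSTANT: value; LINEAR: start */
	float		delta;		/* LINEAR: per-sample increment */
	const float	*data;		/* DATA: 'length' samples */
	struct subblock	*next;
} subblock;

/* Flattening one subblock into audio rate data - roughly what the
   conversion API would do for a unit that only wants flat audio. */
static void subblock_render(const subblock *sb, float *out)
{
	unsigned i;
	switch (sb->type)
	{
	  case SUBBLOCK_CONSTANT:
		for (i = 0; i < sb->length; i++)
			out[i] = sb->value;
		break;
	  case SUBBLOCK_LINEAR:
		for (i = 0; i < sb->length; i++)
			out[i] = sb->value + sb->delta * i;
		break;
	  case SUBBLOCK_DATA:
		for (i = 0; i < sb->length; i++)
			out[i] = sb->data[i];
		break;
	}
}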

About the cost: an expression for pitch would be evaluated, say, 100 times
a second, and values in between would be linearly interpolated, so that
overhead is negligible. It probably does not matter that e.g. pitch glides
are not exactly logarithmic; a piecewise linear approximation should suffice
in most cases.

I'm not sure about the overhead of the whole system but I believe the
instantiation overhead to be small, even if you play 100 notes a second.
However, I haven't measured instantiation times, and there certainly is
some overhead. We are still talking about standard block-based processing,
though. Yes, sample accurate timing is implemented: when a plugin is run
it is given start and end sample offsets.

Hmm, that might have sounded confusing, but I intend to write a full
account of MONKEY's architecture in the near future.
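
Just to illustrate the sample-accurate part, the run entry point 
presumably has roughly this shape (the real MONKEY signature is 
certainly different - this is only a guess at the general idea):

/* Hypothetical run callback: 'start' and 'end' are frame offsets into
   the current block, so an event in the middle of a block is handled
   by splitting the block at the event:

	run(u, 0, 37, ins, outs);	    up to the event at frame 37
	apply_event(u, &ev);		    modify the DSP network
	run(u, 37, block_size, ins, outs);  the rest of the block
*/
typedef void (*monkey_run_fn)(void *unit, unsigned start, unsigned end,
			      float *const *ins, float *const *outs);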

> You could think of our API as...

It seems to be a solid design so far. I will definitely comment on it when
you have a first draft for a proposal.

--
Sami Perttu   "Flower chase the sunshine"
[EMAIL PROTECTED]   http://www.cs.helsinki.fi/u/perttu




Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 12:47:50 +0100, Dave Griffiths wrote:
> It also means getting midi signal routing working, as currently ssm has no
> polyphonic means of note signalling, but it's fairly trivial. The only thing
> is that it will break the everything plugs into anything rule :(

It shouldn't have to. There are plenty of polyphonic modular
implementations that still just have audio and control data.

- Steve



Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Dave Griffiths
On Tue, 10 Dec 2002 15:58:32 -0800, Paul Winkler wrote
> On Tue, Dec 10, 2002 at 11:18:53PM +, Steve Harris wrote:
> > I'm not quite sure how either of them handle that newfangled poly-phoney
> > that seems so popular these days ;)
> 
> AFAICT, they both punt and do everything monophonic.

There are plans to make ssm polyphonic. It entails implementing subpatches, so
you can group patches together into one module (like pd), which is then
instanced internally per voice. A subpatch will have two types of input,
global and per-voice: per-voice inputs are distributed to the individual
instances, while global inputs are sent to all of them at once.
The output will be mixed down to a normal monophonic output, so you can go on
to process it with effects etc.
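
If it helps to picture it, the run loop for such a subpatch could look 
roughly like this. The names are invented (this is not actual ssm 
code), and it assumes one global input, one per-voice input and 
frames <= 256:

/* One instanced subpatch per voice; defined elsewhere. */
void run_voice(void *voice, const float *global_in,
	       const float *voice_in, float *out, unsigned frames);

void run_subpatch(void **voices, int n_voices,
		  const float *global_in,	/* shared by every voice */
		  const float **per_voice_in,	/* one buffer per voice */
		  float *mix_out, unsigned frames)
{
	float tmp[256];
	unsigned i;
	int v;

	for (i = 0; i < frames; i++)
		mix_out[i] = 0.0f;
	for (v = 0; v < n_voices; v++)
	{
		/* Same global input for everyone, but each instance gets
		   its own per-voice input (gate/pitch/velocity etc). */
		run_voice(voices[v], global_in, per_voice_in[v],
			  tmp, frames);
		for (i = 0; i < frames; i++)
			mix_out[i] += tmp[i];	/* mix down to mono */
	}
}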

It also means getting midi signal routing working, as currently ssm has no
polyphonic means of note signalling, but it's fairly trivial. The only thing
is that it will break the everything plugs into anything rule :(

dave



[linux-audio-dev] Re: Synth APIs, pitch control

2002-12-11 Thread Sami P Perttu
> > a softstudio; it's pretty far already and
> > the first public release is scheduled Q1/2003.
>
> for Linux, obviously? ;-)

Yes. Linux, GPL. MONKEY is about 30,000 lines of C++ at the moment. I
still have to make a final architecture revision based on some issues
that reading this list has raised, and prepare the whole thing for release.

> > First, I don't understand why you want to design a "synth API". If you
> > want to play a note, why not instantiate a DSP network that does the job,
> > connect it to the main network (where system audio outs reside), run it
> > for a while and then destroy it? That is what events are in my system -
> > timed modifications to the DSP network.
>
> because a standard API is needed for dynamically loaded plugins!
> LADSPA doesn't really cater for event-driven processes (synths)

Yes, I understand it now. In principle, audio and control ports could
almost suffice, but sample-accurate events sent to plugins are more
efficient, and allow one to pass around structured data.

I shall have to add something like this to MONKEY. Right now it supports
LADSPA via a wrapper - the native API is pretty complex - although
creating a nice GUI based on just the information in a LADSPA .so is not
possible, mainly due to the lack of titles for enums.

> For a complete contrast, please look over
> http://amsynthe.sourceforge.net/amp_plugin.h which i am still toying
> with as a(nother) plugin api suitable for synths. I was hoping to wait

I like this better than the more complex proposal being worked on, except
that I don't much care for MIDI myself. But I also realize the need for
the event/channel/bay/voice monster because it is more efficient and
potentially doesn't require plugins to be instantiated while a song is
playing. I don't think one API can fit all sizes.

--
Sami Perttu   "Flower chase the sunshine"
[EMAIL PROTECTED]   http://www.cs.helsinki.fi/u/perttu





Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 12:40:18 +, Nathaniel Virgo wrote:
> I can't really say I can think of a better way though.  Personally I'd leave 
> scales out of the API and let the host deal with it, sticking to 1.0/octave 
> throughout, but I can see the advantages of this as well.

We could put it to a vote ;)

- Steve



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 01:39:12 +0100, David Olofson wrote:
> Anyway, given that a converter plugin instance can only ever be 
> called once per buffer, and could potentially handle multiple 
> channels, I'm sure it will be quite a bit faster than host callbacks 
> when it actually matters: when you're flooded with events.

Maybe. My objection to converters is more that they imply two parallel
representations of frequency (in the broad sense of the word), which seems
like a mistake.

I still maintain that your average DSP programmer is capable of multiplying
an octave number by 12 to get a 12tET MIDI note number. Even a VST
programmer ;)
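
For the record, under the (entirely hypothetical) convention that 
linear pitch 0.0 corresponds to MIDI note 0, that conversion really is 
a one-liner; the only open question is where the reference point goes:

	/* 1.0/octave linear pitch -> nearest 12tET MIDI note; assumes
	   pitch 0.0 == MIDI note 0, which is not a settled convention. */
	int midi_note = (int)(linear_pitch * 12.0f + 0.5f);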

I said I was going to stop arguing this point! Oh well.

- Steve



Re: [linux-audio-dev] XAP: Pitch control

2002-12-11 Thread Steve Harris
On Wed, Dec 11, 2002 at 01:26:01 +0100, David Olofson wrote:
> You're missing that I'm not talking about 1.0/octave, linear pitch, 
> but 1.0/note, *note* pitch. That means 1.0/note 
> should *always* apply, and that the "size" of a note should be constant. 
> Changing it is totally pointless, since you'd still have note pitch.
> 
> Changing the "size" of one note is about as silly as changing the 
> "size" of one octave; that's my whole point. (1/12)/note for note 
> pitch *is* just as silly as 12.0/octave for linear pitch.

Well, only if you regard a note as a first class object (I don't).

(1/12)/note makes more sense because there /is/ something very 12ey about
12tET notes (the clue's in the name ;), whereas there is nothing 12ey about
octaves. At all.

> Some plugins think in 1.0/note, and others in 1.0/octave. If you want 
> to connect them, you'll need "something" that expresses 1.0/note as 
> 1.0/octave according to your scale of choice. Just as if you were 
> going to connect a MIDI controller to a CV synth.

Just for the record, I do think that having a note representation in the
API is wrong, but I'm letting it slide. I guess I'll never write any code
to support it anyway.

- Steve



Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Antti Boman
Frank Barknecht wrote:

> To wet your appetite: I really should finish my PD quicktoot, which
> even in its current unfinished form is longer than three standard
> quicktoots :(

You wet my appetite so that I have to ask if there's a version online 
for a quick look beforehand. A question mark.

-a



Re: [linux-audio-dev] LADSPA and Softsynths

2002-12-11 Thread Frank Barknecht
Hi,
Paul Winkler hat gesagt: // Paul Winkler wrote:

> PD can handle polyphony, and is about as modular as they come;
> but I don't really understand PD yet. :)

To wet your appetite: I really should finish my PD quicktoot, which
even in its current unfinished form is longer than three standard
quicktoots :(

ciao
-- 
 Frank Barknecht   _ __footils.org__



Re: [linux-audio-dev] Plugin APIs (again)

2002-12-11 Thread Paul Winkler
On Wed, Dec 11, 2002 at 12:20:29AM +, Steve Harris wrote:
> On Tue, Dec 10, 2002 at 03:49:14PM -0800, Paul Winkler wrote:
> > Then JACK came along, and I decided to drop that idea and pursue
> > getting sfront to compile JACK clients. It works, mostly...
> > and one day I'll clean it up enough to submit to John L. to
> > distribute with sfront... really, I will... honest...
> 
> Yeah, please do, that would be damn useful. For rapid prototyping if
> nothing else

OK, I've finally released it ...
http://www.slinkp.com/linux/code

It patches cleanly against current sfront (0.85 I think).
Works with jack CVS of Dec. 5th.

Input isn't working currently, but output seems fine.
Input *was* working but I did a big refactoring and
must have broken something.


-- 

Paul Winkler
http://www.slinkp.com
"Welcome to Muppet Labs, where the future is made - today!"



[linux-audio-dev] band limited interpolation

2002-12-11 Thread Henry Gomersall
I'm trying to oversample to smooth the display on my software audio
scope using band-limited (sinc) interpolation. I have a quick question
to ask. Which of the following implementations is liable to take more
processing time:
1) Padding my data, and then convolving the original (non-zero) samples
with a sinc function (from a LUT). I think it's only feasible to have a
LUT for a sinc of unity amplitude.
or
2) Doing an FFT on my original sample set. Padding, and multiplying by a
rectangular window, then doing an iFFT to return to the time domain. The
FFT library I would use would be FFTW.

Cheers

Henry
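
For what it's worth, option 1 boils down to a small polyphase FIR. A 
minimal sketch, assuming an integer oversampling factor (the factor, 
tap count and window below are arbitrary picks, not recommendations):

#include <math.h>

#define OS	4	/* oversampling factor */
#define TAPS	16	/* sinc taps per output sample */

static float sinc_lut[OS][TAPS];

static void build_sinc_lut(void)
{
	int p, k;
	for (p = 0; p < OS; p++)
		for (k = 0; k < TAPS; k++)
		{
			/* distance (in input samples) from the output
			   point to input tap k */
			double x = (k - TAPS / 2) + (double)p / OS;
			double s = (x == 0.0) ?
					1.0 : sin(M_PI * x) / (M_PI * x);
			/* Hann window to tame truncation ripple */
			double w = 0.5 - 0.5 *
					cos(2.0 * M_PI * (k + 0.5) / TAPS);
			sinc_lut[p][k] = (float)(s * w);
		}
}

/* in: n original samples; out: n * OS interpolated samples */
static void oversample(const float *in, int n, float *out)
{
	int i, p, k;
	for (i = 0; i < n; i++)
		for (p = 0; p < OS; p++)
		{
			float acc = 0.0f;
			for (k = 0; k < TAPS; k++)
			{
				int idx = i + TAPS / 2 - k;
				if (idx >= 0 && idx < n)
					acc += in[idx] * sinc_lut[p][k];
			}
			out[i * OS + p] = acc;
		}
}

That is roughly n * OS * TAPS multiply-adds per block; whether the 
FFT/pad/iFFT route with FFTW ends up cheaper depends mostly on the 
block length and on how sharp the interpolation filter needs to be.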