Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
Tim Hockin wrote:

>> i'm becoming tired of discussing this matter. fine by me if 
>> you can live with a plugin system that goes only half the way 
>> towards usable event handling. 
>
>I haven't been following this issue too closely, rather waiting for some
>decision.  I have been busy incorporating other ideas.  What do you suggest
>as an alternative to an unsigned 32 bit sample-counter?

i'm using event structures with a timestamp measured in
'ticks' for all plugins. the 'tick rate' is defined for 
any point in time through a tempo map in my implementation. 
the 'tick' type is floating-point.

yes, all plugins need to issue 'host' calls if they want
to map 'tick' to 'time' or 'frame' or the reverse. however,
the overhead is negligible in terms of performance.
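
for illustration, the two host calls might look like this for a
constant-tempo segment of the map (all names here are invented for
the sketch, they're not from my actual code):

	typedef struct
	{
		double origin_frame;	/* audio frame where the segment starts */
		double origin_tick;	/* tick value at 'origin_frame' */
		double ticks_per_frame;	/* tempo of this segment */
	} tempo_segment;

	/* map an audio frame to a floating-point tick. */
	double host_frame_to_tick(const tempo_segment *s, double frame)
	{
		return s->origin_tick
			+ (frame - s->origin_frame) * s->ticks_per_frame;
	}

	/* the reverse mapping. */
	double host_tick_to_frame(const tempo_segment *s, double tick)
	{
		return s->origin_frame
			+ (tick - s->origin_tick) / s->ticks_per_frame;
	}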

allow me to digress somewhat beyond the scope of the question:

event outputs are implemented as lock-free fifos. 1-n outputs 
can connect to one input. because events remain in the 
outbound fifos until fetched, sorting is simple as long as 
individual fifos are filled in correct order -- which hasn't 
proved problematic yet.
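
to illustrate the input side: with each inbound fifo internally
sorted, the reader only ever has to compare head events. a rough
sketch (all types and names invented for illustration):

	typedef struct event
	{
		double tick;		/* timestamp in musical ticks */
		struct event *next;
	} event;

	typedef struct
	{
		event *head;	/* single-reader end of a lock-free fifo */
	} fifo;

	/* return the inbound fifo whose head event is earliest, or
	 * 0 if all connected fifos are empty. the caller pops from
	 * the winner, then asks again. */
	fifo *earliest_input(fifo **inputs, int n)
	{
		fifo *best = 0;
		int i;
		for(i = 0; i < n; i++)
			if(inputs[i]->head && (!best ||
			   inputs[i]->head->tick < best->head->tick))
				best = inputs[i];
		return best;
	}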

two strategies for block-based processors are possible (the
first is sketched after the list):
 
* fixed blocks -- calculate 'tick' at the end of the 
  block and process all events from all inbound fifos
  that are stamped <= 'tick'.

note that in this case, only one 'tick mapping' is needed,
the rest is simply comparison. of course dividing the cycle
into subcycles for better time resolution is possible too.

* sample-accurate -- determine the next event from all
  inbound connections, map this tick to audio frames,
  process until this frame, process the event(s) found, 
  repeat until the block is complete.
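
a sketch of the fixed-block strategy, reusing the 'event' type and
host_frame_to_tick() from the sketches above (the plugin struct and
the fifo_peek/fifo_pop/handle_event/render_audio helpers are again
invented):

	struct plugin
	{
		tempo_segment *map;
		double frame;		/* running audio frame count */
		fifo **inputs;
		int n_inputs;
	};

	/* invented helpers, declarations only: */
	event *fifo_peek(fifo *f);
	event *fifo_pop(fifo *f);
	void handle_event(struct plugin *p, event *e);
	void render_audio(struct plugin *p, unsigned nframes);

	void process_block(struct plugin *p, unsigned nframes)
	{
		/* one 'tick mapping' per block: the tick at block end */
		double end_tick = host_frame_to_tick(p->map,
						     p->frame + nframes);
		int i;

		for(i = 0; i < p->n_inputs; i++)
		{
			event *e;
			/* consume everything stamped <= end_tick */
			while((e = fifo_peek(p->inputs[i]))
			      && e->tick <= end_tick)
				handle_event(p, fifo_pop(p->inputs[i]));
		}
		render_audio(p, nframes);
		p->frame += nframes;
	}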

yes, this introduces some overhead when lots of events are
hurled at a plugin implementing sample accuracy. i think
this is the lesser problem, though, having come to believe
that good interpolation methods should be preferred over
massive event usage.

please let me go into yet more depth:

another, quite substantial, benefit of the design is that 
the fifos can be filled in one thread (midi-in for
example) and fetched from in another (audio for example).
it also allows for least-latency routing of events across 
threads.

the current manifestation of this system handles plugins 
operating on the same sets of data and events in six major 
threads, or in fact any combination of these in one plugin:

periodic:
* audio (pcm interrupt)
* low-latency, high-frequency time (rtc interrupt, midi out)
* high-latency, low-frequency time (sequencer prequeuing)

on-demand:
* midi in (in fact anything that's pollable)
* script plugins (i use python which is not rt-capable)
* disk access.

the design was chosen because i deem it to impose the
least limitations on the who and how of plugins and their
connections, and so far it hasn't failed to live up to 
this promise. 

currently it comprises midi in and -out, jack and alsa 
(duplex), event sequencing, scheduled audio playback and 
recording, ladspa units (with event-based parameter i/o), 
tempo maps (rt modifiable), a few native filters and 
oscillators, and the ability to code event-based plugins 
in python (there's even the possibility of processing audio 
with python, but it does introduce a good deal of latency).

i consider myself far from being a coding wizard. this
enumeration serves to prove that the design i've chosen,
which uses 'musical time' stamps throughout, can in fact
support a great variety of functionality, and that this
universality is a worthy goal.

i'd also like you to understand this post as describing
the workings of my ideal candidate for a generic plugin
API, or parts thereof.

code is coming soon to an http server near you, when time
permits.

>I'd hate to lose good feedback because you got tired of it..

thanks. :)

tim




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 23.12, Tim Hockin wrote:
> > i'm becoming tired of discussing this matter. fine by me if
> > you can live with a plugin system that goes only half the way
> > towards usable event handling.
>
> I haven't been following this issue too closely, rather waiting for
> some decision.  I have been busy incorporating other ideas.  What
> do you suggest as an alternative to an unsigned 32 bit
> sample-counter?

It would have to be something that doesn't wrap, I think, or the only 
significant advantage I can see (being tied to the timeline instead 
of "free running time") is lost.


> I'd hate to lose good feedback because you got tired of it..

Ditto. I'm (actually) trying to figure out what I missed, so I'm 
definitely interested in finding out. (If I don't know why I'm 
implementing a feature, how the h*ll am I going to get it right...?)

As far as I can tell, you can always ask the host to convert 
timestamps between any formats you like. If you absolutely cannot 
accept plugins implementing queueing of events after the end of the 
buffer time frame, the host could provide "long time queueing" - with 
whatever timestamp format you like. (All we need is room for some 64 
bits for timestamps - and events have to be 32 bytes anyway.)

But, when is musical time in ordinary events *required*?


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Hockin
> i'm becoming tired of discussing this matter. fine by me if 
> you can live with a plugin system that goes only half the way 
> towards usable event handling. 

I haven't been following this issue too closely, rather waiting for some
decision.  I have been busy incorporating other ideas.  What do you suggest
as an alternative to an unsigned 32 bit sample-counter?

I'd hate to lose good feedback because you got tired of it..



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 22.26, Tim Goetze wrote:
> David Olofson wrote:
> >> so eventually, you'll need a different event system for
> >> plugins that care about musical time.
> >
> >No. You'll need a different event system for plugins that want to
> >look at future events.
>
> which is an added level of complexity, and bars a lot of
> avenues for plugins.

I don't even think the kind of plugins that would need musical 
timestamps to work at all, would fit very well into an API that's 
designed for block based processing. I'm concerned that merging two 
entirely different ways of thinking about audio and events into one 
will indeed be *more* complex than having two different APIs.

Some people want to keep LADSPA while adding support for XAP. Now, 
are we about to make XAP so complex that we'll need a *third* API, 
just because most synth programmers think XAP is too complex and/or 
expensive?

(Meanwhile, the Bay/Channel/Port thing is considered a big, complex 
mess... *heh*)


> >> i'm convinced it's better to design one system that works
> >> for event-only as well as audio-only plugins and allows for
> >> the mixed case, too. everything else is an arbitrary
> >> limitation of the system's capabilities.
> >
> >So, you want our real time synth + effect API to also be a
> > full-blown off-line music editing plugin API? Do you realize the
> > complexity consequences of such a design choice?
>
> a plugin that is audio only does not need to care, it simply
> asks the host for time conversion when needed. complexity is
> a non-issue here.

But it's going to be at least one host call for every event... Just 
so a few event processors *might* avoid a few similar calls?


> and talking about complexity: two discrete
> systems surely are more complex to implement than one alone.

Yes - but you ignore that just supporting musical time in timestamps 
does not solve the real problems. In fact, some problems even become 
more complicated. (See other post, on transport movements, looping, 
musical time delays etc.)


> i'm becoming tired of discussing this matter. fine by me if
> you can live with a plugin system that goes only half the way
> towards usable event handling.

This is indeed a tiresome thread...

However, I have yet to see *one* valid example of when musical time 
timestamps would help enough to motivate that all other plugins have 
to call the host for every event. (I *did*, however, explain a 
situation where it makes things *worse*.) I have not even seen a 
*hint* towards something that would be *impossible* to do with audio 
time timestamps + host->get_musical_time() or similar.

To me, it still looks like musical time timestamps are just a 
shortcut to make a few plugins slightly easier to code - *not* an 
essential feature.

Prove me wrong, and I'll think of a solution instead of arguing 
against the feature.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
David Olofson wrote:

>> so eventually, you'll need a different event system for
>> plugins that care about musical time.
>
>No. You'll need a different event system for plugins that want to 
>look at future events.

which is an added level of complexity, and bars a lot of 
avenues for plugins.

>> i'm convinced it's better to design one system that works
>> for event-only as well as audio-only plugins and allows for
>> the mixed case, too. everything else is an arbitrary
>> limitation of the system's capabilities.
>
>So, you want our real time synth + effect API to also be a full-blown 
>off-line music editing plugin API? Do you realize the complexity 
>consequences of such a design choice?

a plugin that is audio only does not need to care, it simply
asks the host for time conversion when needed. complexity is
a non-issue here. and talking about complexity: two discrete 
systems surely are more complex to implement than one alone.

i'm becoming tired of discussing this matter. fine by me if 
you can live with a plugin system that goes only half the way 
towards usable event handling. 

tim




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 21.00, Tim Hockin wrote:
> > > i'm convinced it's better to design one system that works
> > > for event-only as well as audio-only plugins and allows for
> > > the mixed case, too. everything else is an arbitrary
> > > limitation of the system's capabilities.
> >
> > So, you want our real time synth + effect API to also be a
> > full-blown off-line music editing plugin API? Do you realize the
> > complexity consequences of such a design choice?
>
> Umm, I want that.

Well, so do I, actually - but the thing has to be designed, and it 
should preferably take less than a few years to fully understand the 
API. ;-)


> I have little need for the RT features, myself. 
> I want to use this API in a FruityLoops like host, where the user
> is not bothered with making wiring decisions or RT/non-RT behavior.
>  I want to use it to develop tracks in the studio.  So far, I don't
> see anything preventing that. My host, as it evolves in my mind,
> will allow things that you won't.  You can load a new instrument at
> run time.  It might glitch.  So what.  It will certainly be usable
> live, but that is not the primary goal.

I always jam and record "live" data from MIDI or other stuff, so I 
definitely need plugins in a net to run perfectly with very low 
latency - with sequencer control, "live" control, or both.

As to loading instruments at run time, making connections and all 
that, it's not absolutely required for me, but I'd really rather be 
*able* to implement a host that can do it, should I feel like it. I 
don't think this will matter much to the design of the API. The 
details I can think of are required to support SMP systems as well, 
so it isn't even RT-only stuff.


> As for time vs. time debates, my original idea was that each block
> was based on musical time (1/100th of a quarter note or something).

That would imply a rather low resolution on the tempo control, I 
think...


>  I've been convinced that sample-accurate events are good.  That
> doesn't mean I need to change the tick-size, I think.

Of course not - but you *can* if you like. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 19.42, Tim Hockin wrote:
> > delays based on musical time do, whatever you like to call
> > it.
>
> I always assumed that tempo-delays and things would just ask the
> host for the musical time at the start of each buffer.

That's a hack that works ok in most cases, but it's not the Right 
Thing(TM), if you're picky.


> With
> sample-accurate events, the host can change tempo even within a
> buffer.

Yes. And it can also slide the tempo smoothly by changing it once per 
sample. To be entirely safe, you must take that into account.


>  If a plugin is concerned with musical time, perhaps it
> should ask for the musical time at the start and end of the buffer.
>  If the musical time stamp the plugin wants is within the buffer,
> it can then find it and act.

Yes, that could work...


> This breaks down, though, when the host can do a transport
> mid-buffer, and sample-accuracy permits that.

Yes.


> Perhaps plugins that
> care about musical time should receive events on their 'tempo'
> control.  Tempo changes then become easy.

Great idea!

For MAIA, I once had the idea of sending musical time events - but 
that would have been rather useless, as the host/timeline 
plugin/whatever couldn't sensibly send more than one event per buffer 
or something, or the system would be completely flooded.

However, tempo changes would only occur once in a while in the vast 
majority of songs, and even if the host limits the number of tempo 
change events to one every N samples, plugins can still work with 
musical time with very high accuracy.

And, there's another major advantage with tempo: looping 
unavoidably means a "skip" in musical time, but it does *not* have 
to mean one in tempo. If your whole song is at 120 BPM, you'll 
probably want arpeggiators and stuff to work with that even if you 
loop.

This is not the whole answer, though. As an example, you'll probably 
want an arpeggiator to be able to lock to musical *time*; not just 
tempo. That is, tempo is not enough for all plugins. Some will also 
have to stay in sync with the timeline - and this should preferably 
work even if you loop at weird points, or just slap the transport 
around a bit.


> Transports are still
> smarmy.

They always are. To make things simple, we can just say that plugins 
are not really expected to deal with time running backwards, jumping 
at "infinite" speed and that kind of stuff.

However, it would indeed be nice if plugins (the ones that care about 
musical time, that is) could handle looping properly.


> Is it sane to say 'don't do a transport mid-buffer' to the
> host developers?

I don't think that helps. Properly implemented plugins will work (or 
be confused) no matter when you do the transport operation.

Don't think too much in terms of buffers in relation to timestamps. 
It only invites incorrect implementations. :-)


So, what's the Right Thing(TM)?

Well, if you have an event and want it in musical time, ask the host 
to translate it.

If you want the audio time for a certain point on the musical 
timeline, same thing; ask the host. In this case, it might be 
interesting to note that the host may not at all be able to give you 
a reliable answer, if you ask about the future! How could it, when 
the user can change the tempo or mess with the transport at any time?


Now, if you want to delay an event with an *exact* amount, expressed 
as musical time, translate the event's timestamp into musical time, 
add the delay value, and then ask the host about the resulting audio 
time. If it's within the current buffer; fine - send it. If it's not, 
you'll have to put it on hold and check later.

There are at least two issues with doing it this way, though (a 
sketch of the mechanism follows the list):

* You *will* have to check the order of events on your
  outputs, since musical time is not guaranteed to be
  monotonic.

* You'll have to decide what to do when you generate an
  event that ends up beyond the end of a loop in musical
  time. Since you cannot really know this, the "correct"
  way would be to just accept that the event will never
  be sent.
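
A minimal sketch of that logic, assuming host calls along the lines 
of get_musical_time()/get_audio_time() (every name here is invented; 
this is not actual API):

	typedef struct host_t host_t;
	struct host_t
	{
		double (*get_musical_time)(host_t *h, unsigned timestamp);
		double (*get_audio_time)(host_t *h, double mtime);
	};
	typedef struct { unsigned timestamp; int type; } event_t;

	void send_event(event_t *ev);			/* invented */
	void hold_event(event_t *ev, double mtime);	/* invented */

	/* Delay 'ev' by 'delay_beats' of musical time. 'buffer_end'
	 * is the first audio frame after the current buffer. */
	void musical_delay(host_t *host, event_t *ev,
	                   double delay_beats, unsigned buffer_end)
	{
		double mtime = host->get_musical_time(host, ev->timestamp);
		double frame = host->get_audio_time(host,
		                                    mtime + delay_beats);
		if(frame < (double)buffer_end)
		{
			ev->timestamp = (unsigned)frame;
			send_event(ev);	/* output may need re-sorting! */
		}
		else	/* on hold; check again next buffer */
			hold_event(ev, mtime + delay_beats);
	}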

So, if you only want an exact delay, you're probably *much* better off 
just keeping track of the tempo. It's so much easier, and it 
automatically results in behavior that makes sense to the vast 
majority of users.


It's more complicated with a timeline synchronized arpeggiator, which 
*has* to keep track of the timeline, and not just the tempo. Sticking 
with the tempo idea and adding a PLL that locks the phase of the 
internal metronome to the timeline would probably be a better idea.
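
A crude sketch of one such PLL update, run once per buffer ('gain' 
trades lock speed against jitter rejection; all names invented):

	#include <math.h>

	/* Nudge the internal metronome phase (in beats) towards the
	 * phase derived from the host timeline. */
	double pll_update(double internal_phase, double timeline_phase,
	                  double gain)
	{
		double err = timeline_phase - internal_phase;
		err -= floor(err + 0.5);	/* wrap into [-0.5, 0.5) */
		return internal_phase + gain * err;
	}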

And no, timestamps in musical time would not help, because they don't 
automatically make anyone understand which events belong together. 
Even if they would, the *sender* of the events would normally know 
best what is sensible to do in these "timeline skip" situations. You 
would not be able to avoid hanging notes after looping and that kind 
of thing anyway.

Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Hockin
> > i'm convinced it's better to design one system that works
> > for event-only as well as audio-only plugins and allows for
> > the mixed case, too. everything else is an arbitrary
> > limitation of the system's capabilities.
> 
> So, you want our real time synth + effect API to also be a full-blown 
> off-line music editing plugin API? Do you realize the complexity 
> consequences of such a design choice?

Umm, I want that.  I have little need for the RT features, myself.  I want
to use this API in a FruityLoops like host, where the user is not bothered
with making wiring decisions or RT/non-RT behavior.  I want to use it to
develop tracks in the studio.  So far, I don't see anything preventing that.
My host, as it evolves in my mind, will allow things that you won't.  You
can load a new instrument at run time.  It might glitch.  So what.  It will
certainly be usable live, but that is not the primary goal.

As for time vs. time debates, my original idea was that each block was based
on musical time (1/100th of a quarter note or something).  I've been
convinced that sample-accurate events are good.  That doesn't mean I need to
change the tick-size, I think.

Tim



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 18.54, Tim Goetze wrote:
> David Olofson wrote:
> >On Wednesday 11 December 2002 15.25, Tim Goetze wrote:
> >> David Olofson wrote:
> >> >So, sort them and keep track of where you are. You'll have to
> >> > sort the events anyway, or the event system will break down
> >> > when you send events out-of-order. The latter is what the
> >> > event processing loop of every plugin will do, BTW - pretty
> >> > trivial stuff.
> >>
> >> what you describe here has a name: it's called queuing.
> >
> >Of course. But it doesn't belong in the event system, except
> > possibly as a host or SDK service that some plugins *may* use if
> > they like. Most plugins will never need this, so I think it's a
> > bad idea to force that overhead into the basic event system.
>
> above, you claim that you need queuing in the event system,
> and that it is 'pretty trivial stuff', in 'every plugin'.
> now you say you don't want to 'force that overhead'.

I did not say that; read again. I was referring to "the latter" - 
that is "keep track of where you are".

That is, look at the timestamp of the next event, to see whether or 
not you should handle the event *now*, or do some audio processing 
first. The second case implies that you may hit the frame count of 
the current buffer before it's time to execute that next event.


Either way, this is not the issue. Allowing plugins to send events 
that are meant to be processed in future buffers is, and this is 
because it requires that you timestamp with musical time in order to 
handle tempo changes correctly. *That* is what I want to avoid.


> >> >Do event processors possess time-travelling capabilities?
> >>
> >> delays based on musical time do, whatever you like to call
> >> it.
> >
> >Then they cannot work within the real time net. They have to be an
> >integral part of the sequencer, or act as special plugins for the
> >sequencer and/or the editor.
>
> so eventually, you'll need a different event system for
> plugins that care about musical time.

No. You'll need a different event system for plugins that want to 
look at future events.


> and what if you come
> to the point where you want an audio plugin that needs to
> handle musical time, or prequeued events? you'll drown in
> 'special case' handling code.

Can you give me an example? I think I'm totally missing the point.


> i'm convinced it's better to design one system that works
> for event-only as well as audio-only plugins and allows for
> the mixed case, too. everything else is an arbitrary
> limitation of the system's capabilities.

So, you want our real time synth + effect API to also be a full-blown 
off-line music editing plugin API? Do you realize the complexity 
consequences of such a design choice?


> using audio frames as the basic unit of time in a system
> producing music is like using specific device coordinates
> for printing. they used to do it in the dark ages, but
> eventually everybody agreed to go independent of device
> limitations.

Expressing coordinates in a document is trivial in comparison to the 
interaction between plugins in a network. Printing protocols are 
rather similar to document formats, and not very similar at all to 
something that would be used for real time interaction between units 
in a net. But that's beside the point, really...

To make my point clear:

We might alternatively do away with the event system altogether, and 
switch to blockless processing. Then it becomes obvious that musical 
time, as a way of saying when something is supposed to happen, makes 
sense only inside the sequencer. Synths and effects would not see any 
timestamps *at all*, so there could be no argument about the format 
of timestamps in the plugin API.

As to plugins being *aware* of musical time, that's a different 
matter entirely.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Hockin
> delays based on musical time do, whatever you like to call
> it.

I always assumed that tempo-delays and things would just ask the host for
the musical time at the start of each buffer. With sample-accurate events,
the host can change tempo even within a buffer.  If a plugin is concerned
with musical time, perhaps it should ask for the musical time at the start
and end of the buffer.  If the musical time stamp the plugin wants is within
the buffer, it can then find it and act.

This breaks down, though, when the host can do a transport mid-buffer, and
sample-accuracy permits that.  Perhaps plugins that care about musical time
should receive events on their 'tempo' control.  Tempo changes then become
easy.  Transports are still smarmy.  Is it sane to say 'don't do a transport
mid-buffer' to the host developers?

Tim



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
David Olofson wrote:

>On Wednesday 11 December 2002 15.25, Tim Goetze wrote:
>> David Olofson wrote:
>> >So, sort them and keep track of where you are. You'll have to sort
>> >the events anyway, or the event system will break down when you
>> > send events out-of-order. The latter is what the event processing
>> > loop of every plugin will do, BTW - pretty trivial stuff.
>>
>> what you describe here has a name: it's called queuing.
>
>Of course. But it doesn't belong in the event system, except possibly 
>as a host or SDK service that some plugins *may* use if they like. 
>Most plugins will never need this, so I think it's a bad idea to 
>force that overhead into the basic event system.

above, you claim that you need queuing in the event system,
and that it is 'pretty trivial stuff', in 'every plugin'. 
now you say you don't want to 'force that overhead'. 

>> >Do event processors possess time-travelling capabilities?
>>
>> delays based on musical time do, whatever you like to call
>> it.
>
>Then they cannot work within the real time net. They have to be an 
>integral part of the sequencer, or act as special plugins for the 
>sequencer and/or the editor.

so eventually, you'll need a different event system for 
plugins that care about musical time. and what if you come 
to the point where you want an audio plugin that needs to 
handle musical time, or prequeued events? you'll drown in
'special case' handling code.

i'm convinced it's better to design one system that works
for event-only as well as audio-only plugins and allows for
the mixed case, too. everything else is an arbitrary 
limitation of the system's capabilities.

using audio frames as the basic unit of time in a system
producing music is like using specific device coordinates 
for printing. they used to do it in the dark ages, but 
eventually everybody agreed to go independent of device 
limitations.

tim




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread David Olofson
On Wednesday 11 December 2002 15.25, Tim Goetze wrote:
> David Olofson wrote:
> >So, sort them and keep track of where you are. You'll have to sort
> >the events anyway, or the event system will break down when you
> > send events out-of-order. The latter is what the event processing
> > loop of every plugin will do, BTW - pretty trivial stuff.
>
> what you describe here has a name: it's called queuing.

Of course. But it doesn't belong in the event system, except possibly 
as a host or SDK service that some plugins *may* use if they like. 
Most plugins will never need this, so I think it's a bad idea to 
force that overhead into the basic event system.

It's sort of like saying that every audio stream should have built-in 
EQ, delay and phase inversion, just because mixer plugins will need 
to implement that.


> >Do event processors possess time-travelling capabilities?
>
> delays based on musical time do, whatever you like to call
> it.

Then they cannot work within the real time net. They have to be an 
integral part of the sequencer, or act as special plugins for the 
sequencer and/or the editor.


> >It sounds like you're talking about "music edit operation plugins"
> >rather than real time plugins.
>
> you want to support 'instruments', don't you? 'instruments'
> are used to produce 'music' (usually), and 'music' has a
> well-defined concept of 'time'.

Yes - and if we want to deal with *real* time, we have to accept that 
we cannot know about the future.

One may argue that you *do* know about the future when playing 
something from a sequencer, but I strongly believe that is way beyond 
the scope of an instrument API primarily meant for real time work.


> >If you just *use* a system, you won't have a clue what kind of
> >timestamps it uses.
>
> yeah, like for driving a car you don't need to know how
> gas and brakes work.

Well, you don't need to know how they *work* - only what they *do*.


> >Do you know how VST timestamps events?
>
> nope, i don't touch proprietary music software.

I see.

Either way, it's using sample frame counts.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-11 Thread Tim Goetze
David Olofson wrote:

>So, sort them and keep track of where you are. You'll have to sort 
>the events anyway, or the event system will break down when you send 
>events out-of-order. The latter is what the event processing loop of 
>every plugin will do, BTW - pretty trivial stuff.

what you describe here has a name: it's called queuing.

>Do event processors possess time-travelling capabilities?

delays based on musical time do, whatever you like to call
it.

>It sounds like you're talking about "music edit operation plugins" 
>rather than real time plugins.

you want to support 'instruments', don't you? 'instruments'
are used to produce 'music' (usually), and 'music' has a
well-defined concept of 'time'.

>If you just *use* a system, you won't have a clue what kind of 
>timestamps it uses.

yeah, like for driving a car you don't need to know how
gas and brakes work.

>Do you know how VST timestamps events?

nope, i don't touch proprietary music software.

tim




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Wednesday 11 December 2002 02.43, Paul Davis wrote:
> >> you are discussing an API that is intended to support
> >> *instruments*.
> >
> >And very few instruments understand musical time, and practically
> >none *should* think in terms of notes.
>
> i didn't say anything about notes (which is why i deliberately used
> a non-MIDI number to stand for a pitch code of some kind). see
> below about musical time.

Well, it was the integer number that set off the alarm. ;-)


> >Just use time (seconds, audio sample frames,...) and pitch (linear
> >pitch, Hz,...), and you'll eliminate the need for instuments to
> >understand musical time and scales, without imposing any
> > restrictions whatsoever upon them.
>
> some people don't seem to agree with you about using frequency.

Nor do I! ;-) (It wasn't my suggestion.)

I only included Hz here for completeness, to suggest that anything 
continuous will do, whereas integer note numbers will not.


> >> any such API needs to be able to handle the
> >> following kind of request:
> >>
> >> at bar 13, beat 3, start playing a sound corresponding to
> >> note 134, and enter a release phase at bar 14, beat 2.
> >
> >This kind of information is relevant only in sequencers, and a few
> >special types of plugins. I don't see why the whole API should be
> >made significantly more complex and a lot slower, just to make
> > life slightly easier for the few that would ever consider writing
> > a plugin that cares about musical time.
>
> i'm sorry, you're simply wrong here. tim's original proposal was
> for an API centered around the needs of "instruments", not DSP
> units. go take a look at the current set of VSTi's and you'll find
> lots of them make some use of the concept of musical time,
> particularly tempo.

Yes, I'm perfectly aware of this.

Yet, most of the *events* sent to these plugins do not have anything 
to do with musical timing; the synth core just needs to know when to 
perform certain control changes - and on that level, you care only 
about audio time anyway.


> you want the LFO to be tempo-synced?

Ask the host about the musical time for every N samples and sync your 
LFO to that information.
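
For example (a sketch; 'beats_elapsed' would come from such a host 
call, and all names are invented):

	#include <math.h>

	/* Advance a tempo-synced LFO across one fragment of audio. */
	double lfo_advance(double phase, double beats_elapsed,
	                   double cycles_per_beat)
	{
		phase += beats_elapsed * cycles_per_beat;
		return phase - floor(phase); /* keep phase in [0, 1) */
	}

The oscillator output is then just a function of 'phase'.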

All events will still have to be in, or be converted to, audio time 
before they can be processed.


> you want the
> delay in the modulation section to follow the tempo?

I'm not sure what you mean here, but it sounds like you again need to 
ask the host for the musical time at suitable intervals, and 
calculate a delay time from that.

All events will still have to be in, or be converted to, audio time 
before they can be processed.


> there are lots
> of small, but musically rather important (and certainly pleasant)
> capabilities that rely on musical time.

Yes indeed - but very few of them benefit from events being delivered 
with timestamps in musical time format.


> >> now, if you don't handle this by prequeing events, then that
> >> simply means that something else has to queue the events and
> >> deliver them at the right time.
> >
> >That is the job of a sequencer. The job of the event system is to
> >transmit "messages" between ports with sample accurate timing.
>
> well, yes, the sequencer can do it, if indeed there *is* a
> sequencer.

If there is not, you have only time. There is no musical time until 
you throw a "timeline object" into the graph - and that may or may 
not be part of the sequencer. (I would rather have it as a separate 
plugin, which everyone - including the sequencer - gets audio and 
transport time from.)


> but this is an API we're talking about, and every single
> host that decides to use an API like this will end up needing to
> prequeue events this way. i consider that wasteful.

I consider it utterly wasteful to force all plugins to convert back 
and forth between timestamp formats, considering that the majority of 
them will do fine with audio time in timestamps.


> i agree with you that adding multiple timebases to an event's
> timestamp field has a nasty edge of complexity to it. but i also
> think you will find that the existing proprietary software world is
> just beginning to understand the power of providing "virtual
> instruments" with access to a rich and wonderful temporal
> environment. i am concerned that you are losing sight of the
> possibilities in favor of simplicity, and that it might turn out
> that allowing events to be timestamped with musical time allows for
> more flexibility.

Well, I'm interested in finding out what that flexibility might be.
 
I frankly can see only one real advantage of musical time in event 
timestamps: subsample-accurate timing.

Considering that people have trouble even accepting sample accurate 
timing as a required feature, *subsample* accurate would appear to be 
of virtually no interest to anyone at all.


//David Olofson - Programmer, Composer, Open Source Advocate


Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Wednesday 11 December 2002 02.06, Tim Goetze wrote:
> David Olofson wrote:
> >And normal plugins don't generate and "output" audio or control
> > data an arbitrary number of buffers ahead. Why should they do
> > that with events?
>
> you may have an algorithm written in a scripting (non-rt
> capable) language to generate events for example.

That's a pretty special case, I'd say. (Still, I do have a scripting 
language in Audiality... :-)


> or you
> don't want to iterate a lot of stored events at every
> sample to find out which to process, and still offer sample-
> accurate timing.

So, sort them and keep track of where you are. You'll have to sort 
the events anyway, or the event system will break down when you send 
events out-of-order. The latter is what the event processing loop of 
every plugin will do, BTW - pretty trivial stuff.


> >Think about an event processor, and it becomes really rather
> > obvious that you *cannot* produce output beyond the end of the
> > "buffer time frame" you're supposed to work with. You don't have
> > the *input* yet.
>
> i don't see how this touches the workings of an event
> processor, rt or not.

Do event processors possess time-travelling capabilities?

Otherwise, I don't see how they possibly could even think about what 
happens beyond the end of the current buffer. How would you deal with 
input from real time controllers, such as a MIDI keyboard?


> and a 'musical' event processor
> is more likely to be rooted in musical time than in
> audio time.

It sounds like you're talking about "music edit operation plugins" 
rather than real time plugins.


> >> in general, it makes
> >> all timing calculations (quantization, arpeggiators etc)
> >> one level easier, and they do tend to get hairy quickly
> >> enough.
> >
> >And it's better to have an event system that needs host calls to
> > even *look* at an event?
>
> host calls only to convert the timestamp on the event, i
> understand.

Yeah. And that's what you do for every event before even considering 
to process it - which means you'll have to check the event twice 
after each "run" of audio processing (if any).


> you need the reverse if your events are all
> audio-timestamped instead.

When and where? When would your average synth want to know about 
musical time, for example?


> if you keep a table or other cache mapping audio frame to
> musical time for the current block of audio, you're just
> fine.

No, not if you're processing or generating events beyond the end of 
the current block.


> >I believe controlling synths with timestamped events can be hairy
> >enough without having to check the type of every timestamp as
> > well.
>
> i think it's sane to keep timestamps within one domain.

Agreed.


> >That's it! Why do you want to force complexity that belongs in the
> >sequencer upon every damn plugin in the system, as well as the
> > host?
>
> on average, this is not complex if done right i think.

No, but why do it *at all* in the average case, just to make the 
special case a bit easier?

I think one or two host calls for every event processed is pretty 
expensive, especially considering that my current implementation does 
only this:

In the API headers:

	/* Timestamp of 'frame', relative to the running frame
	 * counter 'aev_timer' plus 'offset', wrapped to the
	 * timestamp range. */
	#define AEV_TIME(frame, offset) \
		((unsigned)((frame) - aev_timer - (offset)) & \
				AEV_TIMESTAMP_MASK)

	/* Frames from 'offset' until the first event in 'evp' is
	 * due; AEV_TIMESTAMP_MASK if the port is empty. */
	static inline unsigned aev_next(AEV_port *evp, unsigned offset)
	{
		AEV_event *ev = evp->first;
		if(ev)
			return AEV_TIME(ev->frame, offset);
		else
			return AEV_TIMESTAMP_MASK;
	}

	/* Detach and return the first event, or NULL if there is none. */
	static inline AEV_event *aev_read(AEV_port *evp)
	{
		AEV_event *ev = evp->first;
		if(!ev)
			return NULL;
		evp->first = ev->next;
		return ev;
	}

	/* Return an event to the global event pool. */
	static inline void aev_free(AEV_event *ev)
	{
		ev->next = aev_event_pool;
		aev_event_pool = ev;
	}

In the plugin:

	while(frames)
	{
		unsigned frag_frames;

		/* Handle all events that are due at offset 's'. */
		while( !(frag_frames = aev_next(&v->port, s)) )
		{
			AEV_event *ev = aev_read(&v->port);
			switch(ev->type)
			{
			  case SOME_EVENT:
				...do something...
				break;
			  case SOME_OTHER_EVENT:
				...do something else...
				break;
			}
			aev_free(ev);
		}

		/* Then render up to the next event, or end of buffer. */
		if(frag_frames > frames)
			frag_frames = frames;

		...process frag_frames of audio...

		s += frag_frames;	/* Start offset in buffers */
		frames -= frag_frames;
	}


Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Paul Davis
>> you are discussing an API that is intended to support
>> *instruments*.
>
>And very few instruments understand musical time, and practically 
>none *should* think in terms of notes.

i didn't say anything about notes (which is why i deliberately used a
non-MIDI number to stand for a pitch code of some kind). see below
about musical time.

>Just use time (seconds, audio sample frames,...) and pitch (linear 
>pitch, Hz,...), and you'll eliminate the need for instruments to 
>understand musical time and scales, without imposing any restrictions 
>whatsoever upon them.

some people don't seem to agree with you about using frequency.

>
>> any such API needs to be able to handle the
>> following kind of request:
>>
>> at bar 13, beat 3, start playing a sound corresponding to note
>>134, and enter a release phase at bar 14, beat 2.
>
>This kind of information is relevant only in sequencers, and a few 
>special types of plugins. I don't see why the whole API should be 
>made significantly more complex and a lot slower, just to make life 
>slightly easier for the few that would ever consider writing a plugin 
>that cares about musical time.

i'm sorry, you're simply wrong here. tim's original proposal was for
an API centered around the needs of "instruments", not DSP units. go
take a look at the current set of VSTi's and you'll find lots of them
make some use of the concept of musical time, particularly tempo. you
want the LFO to be tempo-synced? you want the delay in the modulation
section to follow the tempo? there are lots of small, but musically
rather important (and certainly pleasant) capabilities that rely on
musical time.

>> now, if you don't handle this by prequeing events, then that simply
>> means that something else has to queue the events and deliver them
>> at the right time.
>
>That is the job of a sequencer. The job of the event system is to 
>transmit "messages" between ports with sample accurate timing.

well, yes, the sequencer can do it, if indeed there *is* a
sequencer. but this is an API we're talking about, and every single
host that decides to use an API like this will end up needing to
prequeue events this way. i consider that wasteful.

i agree with you that adding multiple timebases to an event's timestamp
field has a nasty edge of complexity to it. but i also think you will
find that the existing proprietary software world is just beginning to
understand the power of providing "virtual instruments" with access to
a rich and wonderful temporal environment. i am concerned that you
are losing sight of the possibilities in favor of simplicity, and that
it might turn out that allowing events to be timestamped with musical
time allows for more flexibility.

--p




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Tim Goetze
David Olofson wrote:

>And normal plugins don't generate and "output" audio or control data 
>an arbitrary number of buffers ahead. Why should they do that with 
>events?

you may have an algorithm written in a scripting (non-rt
capable) language to generate events for example. or you
don't want to iterate a lot of stored events at every
sample to find out which to process, and still offer sample-
accurate timing. 

>Think about an event processor, and it becomes really rather obvious 
>that you *cannot* produce output beyond the end of the "buffer time 
>frame" you're supposed to work with. You don't have the *input* yet.

i don't see how this touches the workings of an event
processor, rt or not. and a 'musical' event processor
is more likely to be rooted in musical time than in 
audio time.

>> in general, it makes
>> all timing calculations (quantization, arpeggiators etc)
>> one level easier, and they do tend to get hairy quickly
>> enough.
>
>And it's better to have an event system that needs host calls to even 
>*look* at an event?

host calls only to convert the timestamp on the event, i 
understand. you need the reverse if your events are all
audio-timestamped instead.

if you keep a table or other cache mapping audio frame to 
musical time for the current block of audio, you're just 
fine.

>I believe controlling synths with timestamped events can be hairy 
>enough without having to check the type of every timestamp as well.

i think it's sane to keep timestamps within one domain.

>That's it! Why do you want to force complexity that belongs in the 
>sequencer upon every damn plugin in the system, as well as the host? 

on average, this is not complex if done right i think. and
if i use a system to produce music, to me it seems natural 
for the system to understand the concept of musical time.

tim




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Wednesday 11 December 2002 01.44, Paul Davis wrote:
> >See how I handle this in Audiality. Originally, I thought it would
> > be a nice idea to be able to queue events ahead of the current
> > buffer, but it turned out to be a very bad idea for various
> > reasons.
> >
> >And normal plugins don't generate and "output" audio or control
> > data an arbitrary number of buffers ahead. Why should they do
> > that with events?
>
> you are discussing an API that is intended to support
> *instruments*.

And very few instruments understand musical time, and practically 
none *should* think in terms of notes.

Just use time (seconds, audio sample frames,...) and pitch (linear 
pitch, Hz,...), and you'll eliminate the need for instruments to 
understand musical time and scales, without imposing any restrictions 
whatsoever upon them.
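
For instance, one possible linear pitch convention - purely an 
illustration, not something settled in this thread:

	#include <math.h>

	/* 1.0 per octave, 0.0 = A at 440 Hz. Any continuous mapping
	 * like this will do; integer note numbers will not. */
	double pitch_to_hz(double linear_pitch)
	{
		return 440.0 * pow(2.0, linear_pitch);
	}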


> any such API needs to be able to handle the
> following kind of request:
>
> at bar 13, beat 3, start playing a sound corresponding to note
>134, and enter a release phase at bar 14, beat 2.

This kind of information is relevant only in sequencers, and a few 
special types of plugins. I don't see why the whole API should be 
made significantly more complex and a lot slower, just to make life 
slightly easier for the few that would ever consider writing a plugin 
that cares about musical time.


> now, if you don't handle this by prequeing events, then that simply
> means that something else has to queue the events and deliver them
> at the right time.

That is the job of a sequencer. The job of the event system is to 
transmit "messages" between ports with sample accurate timing.


> so this devolves into that old question: do you implement
> prequeueing once and make it available to all clients of the API,
> or do you require each one to do it over?

If you don't want sequencers and the few other plugins ("MIDI 
echo"...?) that actually need prequeueing to implement it, throw a 
basic solution for that into the plugin SDK, or perhaps even put a 
"prequeue sequencer" in the host.

This does *not* belong in the actual event system, IMNSHO.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Paul Davis
>See how I handle this in Audiality. Originally, I thought it would be 
>a nice idea to be able to queue events ahead of the current buffer, 
>but it turned out to be a very bad idea for various reasons.
>
>And normal plugins don't generate and "output" audio or control data 
>an arbitrary number of buffers ahead. Why should they do that with 
>events?

you are discussing an API that is intended to support *instruments*. 
any such API needs to be able to handle the following kind of request:

at bar 13, beat 3, start playing a sound corresponding to note
   134, and enter a release phase at bar 14, beat 2.

now, if you don't handle this by prequeing events, then that simply
means that something else has to queue the events and deliver them at
the right time.

so this devolves into that old question: do you implement prequeueing
once and make it available to all clients of the API, or do you
require each one to do it over?

--p



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Wednesday 11 December 2002 00.08, Tim Goetze wrote:
> David Olofson wrote:
> >> thats a mistake, i think. there are some definite benefits to
> >> being able to define events' time in musical time as well.
> >
> >Like what? Since we're talking about sample accurate timing, isn't
> >asking the host about the musical time for an event timestamp
> >sufficient for when you want that information?
>
> like tempo changes without recalculating all later event
> times --

Of course - I'm perfectly aware of all the issues with tempo changes, 
looping, "seeking" and all that.

However, in a properly designed event system, there *are* no later 
event times to recalculate, since not even the events for the next 
buffer *exist* yet.

See how I handle this in Audiality. Originally, I thought it would be 
a nice idea to be able to queue events ahead of the current buffer, 
but it turned out to be a very bad idea for various reasons.

And normal plugins don't generate and "output" audio or control data 
an arbitrary number of buffers ahead. Why should they do that with 
events?

IMNSHO, the simple answer is "They should not!"


> this also allows prequeuing without emptying and
> reloading the queues on tempo change.

Why would you prequeue, and *what* would you prequeue?

Think about an event processor, and it becomes really rather obvious 
that you *cannot* produce output beyond the end of the "buffer time 
frame" you're supposed to work with. You don't have the *input* yet.


> in general, it makes
> all timing calculations (quantization, arpeggiators etc)
> one level easier, and they do tend to get hairy quickly
> enough.

And it's better to have an event system that needs host calls to even 
*look* at an event?

I believe controlling synths with timestamped events can be hairy 
enough without having to check the type of every timestamp as well.


> >Note that I'm talking about a low level communication protocol for
> >use in situations where you would otherwise use LADSPA style
> > control ports, or audio rate control streams. These are *not*
> > events as stored inside a sequencer.
>
> but you'll probably end up wanting to use a sequencer to
> store and/or (re)generate them, based on musical time.

Yes - but what's the problem?

When you get an event, ask the host what the music time is for the 
timestamp of that event, and store that. (Or transport time, or 
whatever you like best.)

When you want to send events for one buffer, you just ask for the 
music time for the first sample of the buffer, and for the sample 
after the end of the buffer (first of next). Then you find all events 
in that range, convert them from your "database" format into actual 
events (which includes converting the timestamps to "event time"), 
and send them.
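
A rough sketch of that send loop (the host calls, the event side and 
the 'seq_first_at()' store lookup are all invented for illustration):

	typedef struct stored_event
	{
		double mtime;		/* musical time; "database" key */
		int type;
		struct stored_event *next;
	} stored_event;

	double get_musical_time(unsigned frame);	/* invented */
	double get_audio_time(double mtime);		/* invented */
	stored_event *seq_first_at(double mtime);	/* first >= mtime */
	void send_converted(int type, unsigned timestamp);

	void send_buffer_events(unsigned start, unsigned frames)
	{
		double t0 = get_musical_time(start);
		double t1 = get_musical_time(start + frames);
		stored_event *se;

		/* t0 <= mtime < t1 ==> the event is due in this buffer */
		for(se = seq_first_at(t0); se && se->mtime < t1;
		    se = se->next)
			send_converted(se->type,
			               (unsigned)get_audio_time(se->mtime));
	}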

That's it! Why do you want to force complexity that belongs in the 
sequencer upon every damn plugin in the system, as well as the host? 

(And people are complaining about multiple data types... *heh*)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Tim Goetze
David Olofson wrote:

>> thats a mistake, i think. there are some definite benefits to being
>> able to define events' time in musical time as well.
>
>Like what? Since we're talking about sample accurate timing, isn't 
>asking the host about the musical time for an event timestamp 
>sufficient for when you want that information?

like tempo changes without recalculating all later event
times -- this also allows prequeuing without emptying and
reloading the queues on tempo change. in general, it makes 
all timing calculations (quantization, arpeggiators etc) 
one level easier, and they do tend to get hairy quickly
enough.

>Note that I'm talking about a low level communication protocol for 
>use in situations where you would otherwise use LADSPA style control 
>ports, or audio rate control streams. These are *not* events as 
>stored inside a sequencer.

but you'll probably end up wanting to use a sequencer to 
store and/or (re)generate them, based on musical time.

tim




Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Tuesday 10 December 2002 23.02, Paul Davis wrote:
> >Yes. Event/audio time is one thing, and musical time is something
> >completely different, although related.
>
> you've just defined "event time" to be the same as "audio time".
> that's a mistake, i think. there are some definite benefits to being
> able to define events' time in musical time as well.

Like what? Since we're talking about sample accurate timing, isn't 
asking the host about the musical time for an event timestamp 
sufficient for when you want that information?

Note that I'm talking about a low level communication protocol for 
use in situations where you would otherwise use LADSPA style control 
ports, or audio rate control streams. These are *not* events as 
stored inside a sequencer.


> >Musical time can be a bit hairy to calculate, so I don't think
> > it's a good idea to do it all the time, and pass it to all
> > plugins.
>
> VST does this the right way. or rather, existing steinberg hosts
> do. they don't compute the information till a plugin asks for it.
> then they cache it for that "block", handing the same data to any
> other plugin.

Yes, that's exactly what I have in mind. I can't think of any other 
sensible way of doing it.
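
Something like this, perhaps (names invented; compute_time_info() is 
the hairy part, and it runs at most once per block):

	typedef struct
	{
		double musical_time;	/* at the first frame of the block */
		double tempo;		/* beats per minute */
		int transport_running;
	} time_info;

	void compute_time_info(time_info *ti, unsigned block);

	static time_info cached;
	static unsigned cached_block = ~0u;

	/* Every plugin asking during the same block gets the same data. */
	const time_info *host_get_time_info(unsigned block)
	{
		if(block != cached_block)
		{
			compute_time_info(&cached, block);
			cached_block = block;
		}
		return &cached;
	}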


> >What you want is a callback that gives you the musical time
> >corresponding to a timestamp, and probably a few other variants as
> >well.
>
> this isn't adequate. the VST time info struct contains pretty much
> no dead wood at all, IMHO.

Well, I didn't mean "musical time" as in "musical time and nothing 
else". *heh*

The use of a callback, as opposed to passing something to plugins 
whether they want it or not (which means you would have to calculate 
the musical time, transport time, SMPTE time and whatnot for every 
single sample), was the whole point.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Paul Davis
>Yes. Event/audio time is one thing, and musical time is something 
>completely different, although related.

you've just defined "event time" to be the same as "audio time". that's
a mistake, i think. there are some definite benefits to being able to
define events' time in musical time as well.

>Musical time can be a bit hairy to calculate, so I don't think it's a 
>good idea to do it all the time, and pass it to all plugins. 

VST does this the right way. or rather, existing steinberg hosts
do. they don't compute the information till a plugin asks for it. then
they cache it for that "block", handing the same data to any other plugin.

>What you want is a callback that gives you the musical time 
>corresponding to a timestamp, and probably a few other variants as 
>well. 

this isn't adequate. the VST time info struct contains pretty much no
dead wood at all, IMHO.

--p



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Tuesday 10 December 2002 20.31, Paul Davis wrote:
> >> I assume that if the host loops, or the user jumps back in
> >> song-position, time does not jump with it, it just keeps on
> >> ticking?
> >
> >Yes. You can't rewind *time*, can you? ;-)
> >
> >Seriously though, the reason to do it this way is that timestamp
> > time is directly related to "audio time" (ie sample count) - and
> > in a real time system, it just keeps running all the time. I
> > don't see a reason to stop time, kill the audio, stop all plugins
> > etc etc, just because you stop the *sequencer*!
>
> no, but there are a *lot* of things that plugins, particularly
> instrument plugins, might want to do that are based on musical time
> or even just transport time.

Yes - I'm not suggesting that there should not be transport and 
musical time; just that stopping musical time does not imply stopping 
audio/event time.


> the free-running sample count time is
> irrelevant for many things. when you ask a plugin to start the
> release phase of a note-off at a certain time, it's often not based
> on the sample count but on musical time.

That's a sequencer implementation issue. Event timestamps are always 
based on audio time. (Or ticks, if you don't have audio. Doesn't 
matter, as long as all plugins use the same unit.)


> if you stretch the tempo
> while the note sounds, it should still start the release phase when
> it reaches the correct musical time, not some arbitrary sample
> count.

Of course. But since *everything* - events as well as audio - is 
processed one buffer at a time, this automatically Just Works(TM). 
You're not allowed to queue ahead, so as long as you can get the 
musical time for any sample frame within the current buffer, there's 
no problem.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Tuesday 10 December 2002 15.54, Steve Harris wrote:
> On Tue, Dec 10, 2002 at 09:14:36 -0500, Paul Davis wrote:
> > >So time starts at some point decided by the host.  Does the host
> > > pass the current timestamp to process(), so plugins know what
> > > time it is?  I assume that if the host loops, or the user jumps
> > > back in song-position, time does not jump with it, it just
> > > keeps on ticking?
> > >
> > >I guess my only question is how do plugins know what time it is
> > > now?
> >
> > in VST and JACK, it's done with a function call that retrieves a
> > struct containing current time info, including both transport,
> > sample and musical positions, amongst other things.
>
> We want this as well (to allow MTC and MIDI Clock sync if nothing
> else), but the timestamp stuff needs to be monotonic, sample-synced,
> and I guess it makes sense to pass it in with process().

Yes. Event/audio time is one thing, and musical time is something 
completely different, although related.

Musical time can be a bit hairy to calculate, so I don't think it's a 
good idea to do it all the time, and pass it to all plugins. That 
could be acceptable if you did it by publishing a struct in the host 
struct - but who is interested in the musical time of the first 
sample frame in the buffer? There's nothing special about that sample 
frame.

What you want is a callback that gives you the musical time 
corresponding to a timestamp, and probably a few other variants as 
well. (Timestamp is only good for "short range", since the host 
cannot know how many wraps back or ahead you mean, and so has to 
assume that you want the one in [now-2G, now+2G].)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Tim Hockin
> i will be talking more about this issue at the LAD meeting in
> karlsruhe (plug, plug :)

which is impossible for Californians to attend on a budget :(



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Paul Davis
>> I assume that if the host loops, or the user jumps back in
>> song-position, time does not jump with it, it just keeps on
>> ticking?
>
>Yes. You can't rewind *time*, can you? ;-)
>
>Seriously though, the reason to do it this way is that timestamp time 
>is directly related to "audio time" (ie sample count) - and in a real 
>time system, it just keeps running all the time. I don't see a reason 
>to stop time, kill the audio, stop all plugins etc etc, just because 
>you stop the *sequencer*!

no, but there are a *lot* of things that plugins, particularly
instrument plugins, might want to do that are based on musical time or
even just transport time. the free-running sample count time is
irrelevant for many things. when you ask a plugin to start the release
phase of a note-off at a certain time, it's often not based on the
sample count but on musical time. if you stretch the tempo while the
note sounds, it should still start the release phase when it reaches
the correct musical time, not some arbitrary sample count.

i will be talking more about this issue at the LAD meeting in
karlsruhe (plug, plug :)

--p



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread David Olofson
On Tuesday 10 December 2002 08.56, Tim Hockin wrote:
[...timestamps...]
> > Wrapping is not a problem, so why avoid it? :-)
>
> So time starts at some point decided by the host.  Does the host
> pass the current timestamp to process(), so plugins know what time
> it is?

In Audiality, there is a host variable that holds the "current event 
time". That is in fact the count of the first sample in the buffer to 
process.


> I assume that if the host loops, or the user jumps back in
> song-position, time does not jump with it, it just keeps on
> ticking?

Yes. You can't rewind *time*, can you? ;-)

Seriously though, the reason to do it this way is that timestamp time 
is directly related to "audio time" (ie sample count) - and in a real 
time system, it just keeps running all the time. I don't see a reason 
to stop time, kill the audio, stop all plugins etc etc, just because 
you stop the *sequencer*!


> I guess my only question is how do plugins know what time it is
> now?

Check that host variable. (No call needed, since in a host with 
multiple threads and/or multiple sample rates, you'd have one host 
struct for each "context" anyway.)
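
A sketch of what that might look like (invented names, not actual 
Audiality code); there is one such struct per "context", so no 
locking is implied:

#include <stdint.h>

typedef struct
{
    uint32_t event_time;   /* timestamp of first frame in buffer */
    uint32_t sample_rate;
    /* ...per thread / per sample rate "context"... */
} XAP_context;

/* Host side, once per cycle: run the plugins, then advance time.
 * The counter wraps naturally at 2^32. */
void host_advance(XAP_context *ctx, uint32_t frames)
{
    ctx->event_time += frames;
}

/* Plugin side: stamp an event n frames into the current buffer. */
uint32_t stamp_in_buffer(const XAP_context *ctx, uint32_t n)
{
    return ctx->event_time + n;
}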


> > Seriously, though, 32 bit is probably sensible, since you'd
> > really rather not end up in a situation where you have to
> > consider timestamp wrap intervals when you decide what buffer
> > size to use. (Use larger buffers than 32768 frames in Audiality,
> > and you're in trouble.)
>
> Anything smaller than 32 bits doesn't save you any cycles, saves a
> WHOPPING 2 bytes of memory, and causes potential alignment issues to
> nix your 2 byte savings.  32 bit is an obvious answer, I think.

Yes. Here are some more reasons:

* The event struct won't fit in 16 bytes anyway.

* We might want to support "extended events" within
  the system, without screwing old hosts or adding
  a parallel event system. So, 32 byte events are nice.

* There is no point in making the event struct smaller
  than one cache line. In fact, that could even cause
  severe performance issues on SMP systems. (Cache line
  ping-pong.)
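
For example, a 32 byte layout along those lines might look like this 
(purely illustrative; not an actual proposal):

#include <stdint.h>

typedef struct XAP_event XAP_event;

struct XAP_event
{
    uint32_t   timestamp;  /* audio frame time; wraps at 2^32 */
    uint16_t   action;     /* what kind of event this is */
    uint16_t   index;      /* target control, voice, ... */
    XAP_event *next;       /* intrusive queue link */
    uint32_t   arg[4];     /* payload; room for "extended events" */
};

/* With 4 byte pointers this is 28 bytes, allocated on 32 byte
 * boundaries so no two events share a cache line. (32 byte lines
 * were typical then; most current CPUs use 64, so the numbers
 * would need revisiting.) */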


> > worry about wrapping *at all* - which is not true. Use 32 bit
> > timestamps internally in a sequencer, and you'll get a bug report
> > from the first person who happens to get more than 2 or 4 Gframes
> > between two events in the database.
>
> So start the timer at 0x and force anyone testing to deal
> with a wrap early on.

*hehehe* I like that! >:-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Steve Harris
On Tue, Dec 10, 2002 at 09:14:36 -0500, Paul Davis wrote:
> >So time starts at some point decided by the host.  Does the host pass the
> >current timestamp to process(), so plugins know what time it is?  I assume
> >that if the host loops, or the user jumps back in song-position, time does
> >not jump with it, it just keeps on ticking?
> >
> >I guess my only question is how do plugins know what time it is now?  
> 
> in VST and JACK, it's done with a function call that retrieves a struct
> containing current time info, including both transport, sample and musical
> positions, amongst other things.

We want this as well (to allow MTC and MIDI Clock sync if nothing
else), but the timestamp stuff needs to be monotonic, sample-synced, and
I guess it makes sense to pass it in with process().

- Steve



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Paul Davis
>So time starts at some point decided by the host.  Does the host pass the
>current timestamp to process(), so plugins know what time it is?  I assume
>that if the host loops, or the user jumps back in song-position, time does
>not jump with it, it just keeps on ticking?
>
>I guess my only question is how do plugins know what time it is now?  

in VST and JACK, it's done with a function call that retrieves a struct
containing current time info, including both transport, sample and musical
positions, amongst other things.

--p



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-10 Thread Tim Hockin
> > > The way VST does it however, that wouldn't be needed, since
> > > timestamps are related to buffers. 0 == start of this buffer.
> > > Might look nice to plugins, but I foresee minor nightmares in
> > > multithreaded hosts, hosts that want to split buffers, hosts that
> > > support different buffer sizes in parts of the net, hosts that
> > > support multiple sample rates in the system, communication over
> > > wire,... (Yet another reason why I think the VST event system is
> > > a pretty bad design.)
> >
> > Hmm.. I can see why this is tempting, it avoids the wrapping
> > problem, among other things. Are you sure it's not better that way?
> 
> Wrapping is not a problem, so why avoid it? :-)

So time starts at some point decided by the host.  Does the host pass the
current timestamp to process(), so plugins know what time it is?  I assume
that if the host loops, or the user jumps back in song-position, time does
not jump with it, it just keeps on ticking?

I guess my only question is how do plugins know what time it is now?  

> Seriously, though, 32 bit is probably sensible, since you'd really 
> rather not end up in a situation where you have to consider timestamp 
> wrap intervals when you decide what buffer size to use. (Use larger 
> buffers than 32768 frames in Audiality, and you're in trouble.)

Anything smaller than 32 bits doesn't save you any cycles, saves a WHOPPING
2 bytes of memory, and causes potential alignment issues to nix your 2 byte
savings.  32 bit is an obvious answer, I think.

> worry about wrapping *at all* - which is not true. Use 32 bit 
> timestamps internally in a sequencer, and you'll get a bug report 
> from the first person who happens to get more than 2 or 4 Gframes 
> between two events in the database.

So start the timer at 0x and force anyone testing to deal with a
wrap early on.
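
The start value got truncated above, but the trick itself is easy to 
sketch; the constant below is only an illustrative guess:

#include <stdint.h>

/* Any value close to the 32 bit wrap will do. */
#define XAP_DEBUG_START_TIME 0xffff0000

void host_init_time(uint32_t *event_time, int debug_wrap)
{
    /* At 48 kHz, 0x10000 frames is under 1.4 s, so every test
     * run crosses a timestamp wrap almost immediately. */
    *event_time = debug_wrap ? XAP_DEBUG_START_TIME : 0;
}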



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread David Olofson
On Monday 09 December 2002 23.32, Steve Harris wrote:
> On Mon, Dec 09, 2002 at 10:09:18 +0100, David Olofson wrote:
> > That explicit delay element I'm talking about would probably be
> > an internal host object that's aware of the buffer size in the
> > loop it sits in, and subtracts the corresponding latency from the
> > delay parameter, so you get *exactly* the delay requested.
>
> That sounds a little too hosty for my tastes. I'll just believe you
> :)

Well, you could make it an ordinary plugin if you like - but it would 
still have to be aware of two things that only the host would worry 
about normally: buffer size, and the fact that there is a feedback 
loop at all.

Now, if they're "ordinary" plugins, what happens if the user chains 
two of them in the feedback loop? Which instance gets to compensate 
for the buffer latency, and how would the other instance know about 
it?


If you're willing to implement a host that supports feedback loops at 
all (with "small buffer partitions" and stuff), it might not matter 
much if you throw in some delay code as well. :-)


> > > My experience is that this isn't necessary. Generally nothing
> > > really surprising happens in feedback systems, unless the
> > > blocksize is very large.
> >
> > Well, most VST and DX hosts don't allow feedback loops at all,
> > AFAIK... I wouldn't think it's a major loss, unless you're doing
> > things that you *should* be doing on a modular synth.
>
> A modular synth is just the logical extreme of any synth system.

That was the way I was originally thinking about all this... No 
strict limits; let's see how far into the domains of "specialized" 
designs we can push it.


> Not being able to handle feedback is a serious failing; it rules
> out so many effects.

Then let's hack a host that supports feedback! (When we have a plugin 
API for it, that is... :-)

I don't see a good reason why there should be a conflict between 
*any* block/callback based plugin API and "well behaved" feedback 
loops. All you need is latency info (which is needed anyway) and some 
hacks in the host.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread Steve Harris
On Mon, Dec 09, 2002 at 10:09:18 +0100, David Olofson wrote:
> That explicit delay element I'm talking about would probably be an 
> internal host object that's aware of the buffer size in the loop it 
> sits in, and subtracts the corresponding latency from the delay 
> parameter, so you get *exactly* the delay requested.

That sounds a little too hosty for my tastes. I'll just believe you :)
 
> > My experience is that this isn't necessary. Generally nothing really
> > surprising happens in feedback systems, unless the blocksize is very
> > large.
> 
> Well, most VST and DX hosts don't allow feedback loops at all, 
> AFAIK... I wouldn't think it's a major loss, unless you're doing 
> things that you *should* be doing on a modular synth.

A modular synth is just the logical extreme of any synth system. Not being
able to handle feedback is a serious failing; it rules out so many effects.

- Steve 



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread David Olofson
On Monday 09 December 2002 18.59, Steve Harris wrote:
> On Mon, Dec 09, 2002 at 05:47:47PM +0100, David Olofson wrote:
> > * Hosts may choose whatever buffer size they want.
> >
> > * You may be running off-line, in which case you
> >   could potentially run with "huge" buffers.
> >
> > It still doesn't matter? I do believe consistency matters in
> > serious audio applications, so latency has to be *defined* - even
> > if not incredibly low.
>
> Sure, that's why I'm confused by your suggestion of changing the
> buffer size inside loops.

Well, unless you're already using a buffer size that's small enough, 
you'll have to switch to smaller buffers for the plugins involved in 
the feedback loop.

That explicit delay element I'm talking about would probably be an 
internal host object that's aware of the buffer size in the loop it 
sits in, and subtracts the corresponding latency from the delay 
parameter, so you get *exactly* the delay requested.
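
In code the compensation is nearly trivial; a sketch with invented 
names:

#include <assert.h>
#include <stdint.h>

typedef struct
{
    uint32_t block;      /* buffer size used inside this loop */
    uint32_t effective;  /* frames the delay line actually adds */
} loop_delay;

/* The feedback connection itself is one block late, so the delay
 * element adds only the remainder; the total comes out *exactly*
 * as requested. (The host must first shrink the loop's buffers if
 * the request is shorter than one block.) */
void loop_delay_set(loop_delay *d, uint32_t requested, uint32_t block)
{
    assert(requested >= block);
    d->block     = block;
    d->effective = requested - block;
}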


> > Well, one would assume that the host forbids feedback loops
> > without delay elements, so at least, the user cannot do this
> > without being aware of what's going on, to some extent. If the
> > buffer size goes below some sensible number, the host could warn
> > the user about potential CPU load increase. ("Read docs for more
> > info".)
>
> My experience is that this isn't necessary. Generally nothing really
> surprising happens in feedback systems, unless the blocksize is very
> large.

Well, most VST and DX hosts don't allow feedback loops at all, 
AFAIK... I wouldn't think it's a major loss, unless you're doing 
things that you *should* be doing on a modular synth.

But then again, why prevent XAP from reaching a bit into that field? 
In the case of feedback loop handling, it's actually entirely a host 
side issue, so there's nothing preventing some hosts from doing what 
I describe.

Indeed, you could most probably do that with VST plugins as well. 
It'll most probably be a bit more hairy and less efficient than with 
XAP, though!


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread Steve Harris
On Mon, Dec 09, 2002 at 05:47:47PM +0100, David Olofson wrote:
>   * Hosts may choose whatever buffer size they want.
> 
>   * You may be running off-line, in which case you
> could potentially run with "huge" buffers.
> 
> It still doesn't matter? I do believe consistency matters in serious 
> audio applications, so latency has to be *defined* - even if not 
> incredibly low.

Sure, that's why I'm confused by your suggestion of changing the buffer
size inside loops.
 
> Well, one would assume that the host forbids feedback loops without 
> delay elements, so at least, the user cannot do this without being 
> aware of what's going on, to some extent. If the buffer size goes 
> below some sensible number, the host could warn the user about 
> potential CPU load increase. ("Read docs for more info".)

My experience is that this isn't necessary. Generally nothing really
surprising happens in feedback systems, unless the blocksize is very large.

- Steve



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread David Olofson
On Monday 09 December 2002 17.30, Steve Harris wrote:
> On Mon, Dec 09, 2002 at 04:52:48PM +0100, David Olofson wrote:
> > > > That's the feedback loop problem. As long as the host runs
> > > > plugins in the correct order, you'll never see this unless
> > > > you *actually* have loops in your network.
> > >
> > > Like in your example ;)
> >
> > Hmm... Which one? (What did I mix up *this* time? ;-)
>
> Your example had (unless I misunderstood) an output from an
> instrument fed back into itself.

Well, I can't remember intentionally making such an example, so it's 
probably a mistake. :-)


[...]
> > OTOH, is it really *useful* to support larger buffers than 32768
> > frames...?
>
> Probably not, but I don't think it's useful to express timestamps in
> 16-bit chunks. You either throw away another 16 bits or your data
> would lose 32-bit alignment.

Well, unless there is a use for another 16 bit field, and you're 
right on the limit of the current power-of-2 event size.


> > And, having 32 bits might fool developers of sequencers and other
> > "long time frame aware" devices into believing that you don't
> > have to worry about wrapping *at all* - which is not true. Use 32
> > bit
>
> Yes, this has happened to me in an oscillator implementation, which
> is why I'm concerned :)

*hehe*


> > > Ouch. Changing the buffer size sounds messy and inefficient.
> >
> > (Still, VST hosts do it all the time, since automation "events"
> > are still passed through function calls... *heh*)
> >
> > Anyway, even with a timestamped event system, this is the *only*
> > way you can handle feedback loops.
>
> Why? You can accept that there will be some extra latency in one
> section of the loop and it doesn't usually matter.

Well, let's consider that:

* Hosts may choose whatever buffer size they want.

* You may be running off-line, in which case you
  could potentially run with "huge" buffers.

It still doesn't matter? I do believe consistency matters in serious 
audio applications, so latency has to be *defined* - even if not 
incredibly low.


> Granted it's not
> ideal, but blockless processing /really/ isn't practical on CPUs
> yet, cf. the OT discussion.

You won't need blockless. Just enforce that an explicit event delay 
plugin is used on any feedback loop connection, and have the user 
decide. You'll never need smaller buffers than are sufficient for 
properly handling the shortest latency in a local "loop" - and that's 
*only* for the plugins that are actually in the loop.


> > And what's so messy about it? Less frames/cycle means you can use
> > the standard buffers, and timestamps not being buffer relative
> > means you don't have to do anything extra at all with events.
> > Just loop over
>
> OK, not messy, just surprising, as in "all I did was place a plugin
> in here and suddenly the CPU cost went up three times, your plugin
> is broken"

Well, one would assume that the host forbids feedback loops without 
delay elements, so at least, the user cannot do this without being 
aware of what's going on, to some extent. If the buffer size goes 
below some sensible number, the host could warn the user about 
potential CPU load increase. ("Read docs for more info".)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread Steve Harris
On Mon, Dec 09, 2002 at 04:52:48PM +0100, David Olofson wrote:
> > > That's the feedback loop problem. As long as the host runs
> > > plugins in the correct order, you'll never see this unless you
> > > *actually* have loops in your network.
> >
> > Like in your example ;)
> 
> Hmm... Which one? (What did I mix up *this* time? ;-)

Your example had (unless I misunderstood) an output from an instrument
fed back into itself.
 
> > Hmm.. I can see why this is tempting, it avoids the wrapping
> > problem, among other things. Are you sure it's not better that way?
> 
> Wrapping is not a problem, so why avoid it? :-)

Well, it is, but...
 
> Easy. :-) It's wrapped in API macros, so you won't have to see any of 
> that (rather trivial) integer arithmetic, unless you're doing weird 
> stuff with events.

OK, that would remove my worries.
 
> OTOH, is it really *useful* to support larger buffers than 32768 
> frames...?

Probably not, but I don't think it's useful to express timestamps in 16-bit
chunks. You either throw away another 16 bits or your data would lose
32-bit alignment.
 
> And, having 32 bits might fool developers of sequencers and other 
> "long time frame aware" devices into believing that you don't have to 
> worry about wrapping *at all* - which is not true. Use 32 bit 

Yes, this has happened to me in an oscillator implementation, which is why
I'm concerned :)

> > Ouch. Changing the buffer size sounds messy and inefficient.
> 
> (Still, VST hosts do it all the time, since automation "events" are 
> still passed through function calls... *heh*)
> 
> Anyway, even with a timestamped event system, this is the *only* way 
> you can handle feedback loops.

Why? You can accept that there will be some extra latency in one section
of the loop and it doesn't usually matter. Granted it's not ideal, but
blockless processing /really/ isn't practical on CPUs yet, cf. the OT
discussion.
 
> And what's so messy about it? Less frames/cycle means you can use the 
> standard buffers, and timestamps not being buffer relative means you 
> don't have to do anything extra at all with events. Just loop over 

OK, not messy, just surprising, as in "all I did was place a plugin in
here and suddenly the CPU cost went up three times, your plugin is
broken"

- Steve



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread David Olofson
On Monday 09 December 2002 16.06, Steve Harris wrote:
> On Mon, Dec 09, 2002 at 03:00:50PM +0100, David Olofson wrote:
> > That's the feedback loop problem. As long as the host runs
> > plugins in the correct order, you'll never see this unless you
> > *actually* have loops in your network.
>
> Like in your example ;)

Hmm... Which one? (What did I mix up *this* time? ;-)


> > The way VST does it however, that wouldn't be needed, since
> > timestamps are related to buffers. 0 == start of this buffer.
> > Might look nice to plugins, but I foresee minor nightmares in
> > multithreaded hosts, hosts that want to split buffers, hosts that
> > support different buffer sizes in parts of the net, hosts that
> > support multiple sample rates in the system, communication over
> > wire,... (Yet another reason why I think the VST event system is
> > a pretty bad design.)
>
> Hmm.. I can see why this is tempting, it avoids the wrapping
> problem, among other things. Are you sure its not better that way?

Wrapping is not a problem, so why avoid it? :-)

Seriously, in Audiality, timestamps are only 16 bit, and thus wrap 
pretty frequently. The *only* problem with that is when you want to 
queue events a long time ahead. But, since plugins are only 
supposed to mess with the time frame of the current buffer, that's 
not an issue either.

Even if the wrap happens in the middle of the buffer, you're fine, as 
long as the *delta* between the event before and after the wrap is 
smaller than half the total range of the timestamp data type.


> Speaking of which, what is the conventional wisdom on timestamp
> sizes? 32 sounds dangerous (all the plugins have to work correctly
> across the boundary, hard).

Easy. :-) It's wrapped in API macros, so you won't have to see any of 
that (rather trivial) integer arithmetic, unless you're doing weird 
stuff with events.
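
Something like this sketch of the idea (not the actual Audiality 
macros):

#include <stdint.h>

/* Compare running timestamps through a signed difference; correct
 * across wraps as long as the two stamps are less than half the
 * type's range apart. */
#define XAP_BEFORE(a, b) ((int32_t)((uint32_t)(a) - (uint32_t)(b)) < 0)
#define XAP_DELTA(a, b)  ((int32_t)((uint32_t)(a) - (uint32_t)(b)))

/* Example: 0xfffffff0 is "before" 0x00000010, across the wrap,
 * since the signed delta is -32. The 16 bit version is the same
 * idea with uint16_t/int16_t. */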


> and 64 sounds a bit wasteful, though
> it's 4M years at 192k, so you don't have to worry about wrapping ;)

16 works for me! ;-)

Seriously, though, 32 bit is probably sensible, since you'd really 
rather not end up in a situation where you have to consider timestamp 
wrap intervals when you decide what buffer size to use. (Use larger 
buffers than 32768 frames in Audiality, and you're in trouble.)

OTOH, is it really *useful* to support larger buffers than 32768 
frames...?

And, having 32 bits might fool developers of sequencers and other 
"long time frame aware" devices into believing that you don't have to 
worry about wrapping *at all* - which is not true. Use 32 bit 
timestamps internally in a sequencer, and you'll get a bug report 
from the first person who happens to get more than 2 or 4 Gframes 
between two events in the database.

2 Gframes is not *that* long... Especially not if you have a 
sequencer constantly running in the background to catch anything you 
ever play on your master keyboard. (Something I'll *definitely* 
implement!)


> > BTW, feedback loops would be the major reason why a host would
> > want to run parts of the net with smaller buffers. See why I
> > discarded the idea of buffer related timestamps? :-)
>
> Ouch. Changing the buffer size sounds messy and inefficient.

(Still, VST hosts do it all the time, since automation "events" are 
still passed through function calls... *heh*)

Anyway, even with a timestamped event system, this is the *only* way 
you can handle feedback loops.

And what's so messy about it? Less frames/cycle means you can use the 
standard buffers, and timestamps not being buffer relative means you 
don't have to do anything extra at all with events. Just loop over 
the sub-net with the "offending" plugins until you have a full 
buffer, and then go on as usual. It doesn't even screw up event 
ordering, so you don't need extra shadow + sort/merge or anything.
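
A sketch of that host side loop (invented names):

#include <stdint.h>

typedef struct plugin plugin;

struct plugin
{
    /* 'start' is a running timestamp, not buffer relative */
    void (*process)(plugin *p, uint32_t start, uint32_t frames);
};

/* Run the plugins caught in a feedback loop in small sub-blocks
 * until a full host buffer is done. Timestamps just keep running,
 * so the events need no rewriting and stay in order. */
void run_subnet(plugin **loop, int n,
                uint32_t start, uint32_t frames, uint32_t sub)
{
    uint32_t done = 0;
    while(done < frames)
    {
        uint32_t chunk = frames - done;
        int i;
        if(chunk > sub)
            chunk = sub;
        for(i = 0; i < n; i++)  /* fixed, correct order */
            loop[i]->process(loop[i], start + done, chunk);
        done += chunk;
    }
}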


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread Steve Harris
On Mon, Dec 09, 2002 at 03:00:50PM +0100, David Olofson wrote:
> That's the feedback loop problem. As long as the host runs plugins in 
> the correct order, you'll never see this unless you *actually* have 
> loops in your network.

Like in your example ;)

> The way VST does it however, that wouldn't be needed, since 
> timestamps are related to buffers. 0 == start of this buffer. Might 
> look nice to plugins, but I foresee minor nightmares in multithreaded 
> hosts, hosts that want to split buffers, hosts that support different 
> buffer sizes in parts of the net, hosts that support multiple sample 
> rates in the system, communication over wire,... (Yet another reason 
> why I think the VST event system is a pretty bad design.)

Hmm.. I can see why this is tempting, it avoids the wrapping problem,
among other things. Are you sure it's not better that way?

Speaking of which, what is the conventional wisdom on timestamp sizes? 32
sounds dangerous (all the plugins have to work correctly across the
boundary, hard), and 64 sounds a bit wasteful, though it's 4M years at
192k, so you don't have to worry about wrapping ;)
 
> BTW, feedback loops would be the major reason why a host would want 
> to run parts of the net with smaller buffers. See why I discarded the 
> idea of buffer related timestamps? :-)

Ouch. Changing the buffer size sounds messy and inefficient.

- Steve



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread David Olofson
On Monday 09 December 2002 14.49, Steve Harris wrote:
> On Mon, Dec 09, 2002 at 11:39:38AM +0100, David Olofson wrote:
> > In theory, the problem is very easy to solve: Have the host throw
> > in "shadow event ports", and then have it sort/merge the queues
> > from those into a single, ordered queue that is passed to the
> > actual target port.
>
> I don't think this totally solves the problem.
>
> There is also the latency problem: if the instrument generates
> output events with the same timestamp as some input event
> (reasonable), then it won't receive those same events until its next
> processing block; what does it do then? They are all arriving
> "late".

That's the feedback loop problem. As long as the host runs plugins in 
the correct order, you'll never see this unless you *actually* have 
loops in your network.


> Should the host add latency to the events (by adding one block's
> worth to the event time)?

In an actual loop, yes, it would have to do that - at least the way 
timestamps work in Audiality. (Running time, wrapping, not related to 
buffer boundaries.)

The way VST does it however, that wouldn't be needed, since 
timestamps are related to buffers. 0 == start of this buffer. Might 
look nice to plugins, but I foresee minor nightmares in multithreaded 
hosts, hosts that want to split buffers, hosts that support different 
buffer sizes in parts of the net, hosts that support multiple sample 
rates in the system, communication over wire,... (Yet another reason 
why I think the VST event system is a pretty bad design.)


> Of course this is only a problem when you have graphs with
> feedback, otherwise there is a linear execution order that ensures
> this kind of problem won't happen.

Exactly.

BTW, feedback loops would be the major reason why a host would want 
to run parts of the net with smaller buffers. See why I discarded the 
idea of buffer related timestamps? :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread Steve Harris
On Mon, Dec 09, 2002 at 11:39:38AM +0100, David Olofson wrote:
> In theory, the problem is very easy to solve: Have the host throw in 
> "shadow event ports", and then have it sort/merge the queues from 
> those into a single, ordered queue that is passed to the actual 
> target port.
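
(For concreteness, the sort/merge being quoted could be as simple as 
this sketch - invented names, host side only:)

#include <stdint.h>
#include <stddef.h>

typedef struct event event;

struct event
{
    uint32_t timestamp;
    event   *next;  /* each shadow port queue is already ordered */
};

/* Repeatedly pick the earliest head among the shadow ports and
 * append it to the single queue the real target port sees. Uses
 * the wrap-safe comparison discussed elsewhere in this thread. */
event *merge_shadow_ports(event **heads, int n)
{
    event *out = NULL, **tail = &out;
    for(;;)
    {
        int i, best = -1;
        for(i = 0; i < n; i++)
            if(heads[i] && (best < 0 ||
                    (int32_t)(heads[i]->timestamp -
                              heads[best]->timestamp) < 0))
                best = i;
        if(best < 0)
            break;
        *tail = heads[best];
        tail = &(*tail)->next;
        heads[best] = heads[best]->next;
    }
    *tail = NULL;
    return out;
}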

I don't think this totally solves the problem.

There is also the latency problem: if the instrument generates output
events with the same timestamp as some input event (reasonable), then it
won't receive those same events until its next processing block; what does
it do then? They are all arriving "late".

Should the host add latency to the events (by adding one block's worth to
the event time)?

Of course this is only a problem when you have graphs with feedback,
otherwise there is a linear execution order that ensures this kind of
problem won't happen.

- Steve 



Re: [linux-audio-dev] XAP and Event Outputs

2002-12-09 Thread David Olofson
On Monday 09 December 2002 11.39, David Olofson wrote:
[...]
> struct XAP_cnx_descriptor
> {
>   XAP_plugin  *plugin;
>   int bay;
>   int channel;
>   int output;
> };

Doh! s/output/slot/ or index or something, since this can describe 
inputs as well...
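
So, corrected (same sketch, field renamed):

struct XAP_cnx_descriptor
{
    XAP_plugin  *plugin;
    int bay;
    int channel;
    int slot;   /* describes inputs as well as outputs */
};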


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
|    The Multimedia Application Integration Architecture    |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---