Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Tim Hockin
> We've been talking about 'TEMPO' and 'TRANSPORT' and 'TICKS' and 'METER'
> controls, which (honestly) kind of turns my stomach.  This is not what
> controls are meant to be doing.  The answer strikes me in shadowy details:
> 
> Each host struct has a timeline member.  Plugins register with the host for
> notification of certain things:
> host->register_time_event(plugin, event, function);
> events:
>   TIME_TICKS  // call me on every tick edge
>   TIME_TRANSPORT  // call me when a transport happens
>   TIME_METER  // call me when the meter changes
>   
> What about multiple timelines, you ask?  Use different host structs.  Or
> something.  If we standardize a timeline interface, we don't have to
> overload the control-event mechanism (which forces hosts to understand the
> hints or the plugin won't work AT ALL).

Replying to myself with two other ideas:

1) the host->tick_me(plugin, 100, cookie)  // call me back in 100 ticks
- This could be a simple host-based time-management API
- This depends on a 1-1 map between host and timeline, which I think is ok

2) rather than having per-channel TEMPO etc controls, have a small set of
host-global (timeline global, if you prefer to say) event queues.  If a
channel is concerned with TEMPO, it will have already gotten the host's
TEMPO queue.  It can then check it.  This saves delivering the same event
struct to potentially many plugins, and is still sample accurate.
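
For illustration only, here's a rough C sketch of what that host-side time
management could look like - both the tick_me() callback and a shared,
host-global tempo queue.  None of these names or signatures are part of any
real API; they're just made up to show the shape of the idea.

/* Hypothetical host-side time management (all names invented). */
typedef struct XAP_plugin XAP_plugin;

typedef struct XAP_event {
    unsigned timestamp;             /* sample frame within the block */
    float value;                    /* e.g. new tempo in BPM */
    struct XAP_event *next;
} XAP_event;

typedef struct XAP_host {
    /* idea 1: "call me back in 'ticks' ticks" */
    void (*tick_me)(struct XAP_host *h, XAP_plugin *p,
                    unsigned ticks, void *cookie);

    /* idea 2: one shared, host-global queue per timeline control */
    const XAP_event *tempo_queue;   /* tempo events for the current block */
} XAP_host;

/* A plugin that cares about tempo walks the shared queue in its
 * process() call; the host only has to build the queue once. */
static void check_tempo(const XAP_host *h)
{
    const XAP_event *e;
    for (e = h->tempo_queue; e; e = e->next) {
        /* react to the tempo change at frame e->timestamp here */
    }
}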

Again, just blathering.  Trying to find something elegant..

Tim



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Tim Hockin
> No matter how you turn this stuff about, some things get a bit hairy. 
> The most important thing to keep in mind though, is that some designs 
> make some things virtually *impossible*.

I think this is the important point - whether the simple timestamp is
sample-time or musical time, SOME plugins need to convert.  Now the question
is - which plugin classes require which, and which is the majority.  Or
perhaps, if it is lightweight enough, we SHOULD pass both sample-time and
tick-time for events?
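
As a rough illustration of how cheap that could be, an event struct carrying
both clocks only needs one extra field.  The layout and names below are
invented for the example, not a proposal:

/* Sketch: one event, stamped in both audio frames and musical ticks. */
typedef struct {
    unsigned sample_time;   /* audio frames, relative to block start */
    unsigned tick_time;     /* musical ticks on the host timeline */
    int type;               /* what kind of event this is */
    float value;
} dual_time_event;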

> I disagree. It's also a technical decision. Many synths and effects 
> will sync with the tempo, and/or lock to the timeline. If you can 
> have only one timeline, you'll have trouble controlling these plugins 
> properly, since they treat the timeline pretty much like a "rhythm" 
> that's hardcoded into the timeline.

I don't see what the trouble is...

> Well, if you have two tempo maps, how would you apply the "meter 
> map"? I guess the meter map would just be a shared object, and that 
> meter changes are related to musical time of the respective map, but 
> there *are* (at least theoretically) other ways.

> > being politically quite incorrect, i am happy supporting only
> > one tempo and one time at the same point. imagine how
> > complicated things get when you answer 'yes' two times above,
> > and add to this that i can describe the music i want to make
> > without (even standard polyrhythmic patterns because they
> > usually meet periodically).
> 
> It doesn't seem too complicated if you think of it as separate 
> sequencers, each with a timeline of its own... They're just sending 
> events to various units anyway, so what's the difference if they send 
> events describing different tempo maps as well?

We've been talking about 'TEMPO' and 'TRANSPORT' and 'TICKS' and 'METER'
controls, which (honestly) kind of turns my stomach.  This is not what
controls are meant to be doing.  The answer strikes me in shadowy details:

Each host struct has a timeline member.  Plugins register with the host for
notification of certain things:
host->register_time_event(plugin, event, function);
events:
  TIME_TICKS  // call me on every tick edge
  TIME_TRANSPORT  // call me when a transport happens
  TIME_METER  // call me when the meter changes
  
What about multiple timelines, you ask?  Use different host structs.  Or
something.  If we standardize a timeline interface, we don't have to
overload the control-event mechanism (which forces hosts to understand the
hints or the plugin won't work AT ALL).
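
Purely as a sketch, the registration idea could look something like the
following in C.  The type names, the callback signature and the init hook are
all invented here; the point is only that the host, not the control system,
owns the timeline.

typedef enum {
    TIME_TICKS,      /* call me on every tick edge */
    TIME_TRANSPORT,  /* call me when a transport change happens */
    TIME_METER       /* call me when the meter changes */
} time_event_t;

typedef struct plugin plugin_t;
typedef void (*time_callback_t)(plugin_t *p, time_event_t ev, void *data);

typedef struct host {
    int (*register_time_event)(struct host *h, plugin_t *p,
                               time_event_t ev, time_callback_t fn);
    /* ...audio, control and event interfaces... */
} host_t;

/* hypothetical plugin init: ask to be told about meter changes */
static void on_meter(plugin_t *p, time_event_t ev, void *data)
{
    /* adjust internal pattern length, LFO grid, etc. */
}

static int my_init(host_t *host, plugin_t *self)
{
    return host->register_time_event(host, self, TIME_METER, on_meter);
}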

Tim



Re: [linux-audio-dev] VST 2.0 observations

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 04.57, Tim Hockin wrote:
> > Conclusion:
> > a. The information is there, but you have to *pull*
> >it from the host. Doesn't seem like you're ever
> >notified about tempo changes or transport events.
> >
> > b. Since you're asking the host, and there are no
> >other arguments, there is no way that one plugin
> >can keep track of more than one timeline. It
> >seems that it is assumed that there is only one
> >timeline in a net.
>
> So all the mechanisms we're discussing are more flexible? 

Seems like it...


> Obviously we don't have EVERYTHING hammered out yet..

Well, I think it's time someone looked into the implementation issues 
of timeline calculations. How much does it *really* cost? How about 
SMPTE and other fields?

Also note that VST didn't have events at all from the start. That 
*might* have affected their design decisions.


[...]
> > 4. There is a feature that allows plugins to tell the host
> >which "category" they would fit in. (There is some
>
> Ick - external.

Agreed.

However, should we have categorization as part of the host SDK (along 
with preset management etc), or just let applications deal with it?

I vote for having it in the SDK... (Or you won't find your plugins 
unless you arrange them all once for each application. *heh*)


> > 9. There is a bypass feature, so that hosts can have
> >plugins implement sensible bypass for mono -> surround
> >and other non-obvious in/out relations. (As if in/out
> >relations ever were to be assumed "obvious"!)
>
> not clear what you mean..

Actually, this is not very clear at all. What does "bypass" really 
mean?

Well, VST was originally a very "primitive" API for FX only, and the 
plugins were used for inserts. No nets or anything; just basic chains.

So, one would assume that "bypass" is as simple as copying the inputs 
to the outputs. Pretty much like ripping the plugin out, although the 
host is able to have the same effect on the processing, while still 
leaving the plugin in the "net". The advantage would be that the 
plugin could still run VUs and stuff.

These days, the most important advantage is probably that you can use 
"bypass" to have plugins with non-obvious in->out relations do the 
"same thing" logically. As in a mono->5.1 plugin piping the mono 
input to the center output or something, when "bypass" is activated.

No idea what synths and other "non-FX" plugins would do... (If they 
support it at all. They probably don't in general.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 04.33, Tim Hockin wrote:
> > Well, yes, if you have more than the overview. I'm interested in
> > time info structs, events and that sort of stuff.
> >
> > (I don't have a Mac to run that SDK self extracting binary...
> > *heh*)
>
> I had a mac-geek friend open it for me:
>
> http://www.hockin.org/~thockin/CoreAudio/

Ah, excellent! :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 03.53, Tim Goetze wrote:
> David Olofson wrote:
> >Well, considering that we seem to have virtually *no* input from
> >people with solid experience with software sequencers or
> > traditional music theory based processing, I suggest we either
> > decide to build a prototype based on what we *know*, or put XAP on
> > hold until we manage to get input from people with real
> > experience in more fields.
>
> it's (mostly) all there for you to read.

Well, yes - but is there any sequencer out there that deals with the 
basic timeline + sequencer stuff?


> there's a pure sequencer
> engine called 'tse3' out there that is somebody's third go at
> the sequencer alone. no matter what its merits are, reading
> its source is bound to give you an idea how things come together
> in a sequencer (hosted at sf.net iirc).

Sounds interesting. Will check.


> and there's always muse
> which, iirc, also comprises audio.

Just started browsing MusE source, actually. Discovered that the 
synth API indeed uses *audio* timestamps (as expected), so I'll have 
to look at the sequencer core for whatever we might be missing here.


> > * Is an explicitly scale related pitch control type needed?
>
> 1. / octave is the politically correct value i guess.
> 12. / octave is what i am happy with.
>
> since transformation between the two is a simple multiplication,
> i don't care much which gets voted.

That's not really what it's about. IMHO, we should have 1/octave for 
basically "everything", but also something that's officially hinted 
as "virtual pitch", which is related to an unspecified scale, rater 
than directly to pitch.

Example:
1. You have integer MIDI notes in.

2. You have a scale converter. The scale is not 12tET,
   but a "pure" 12t scale, that results in better
   sounding intervals in a certain key.

3. Output is linear pitch (1.0/octave).


In this example, the scale converter would take what I call note 
pitch, and generate linear pitch. Note pitch - whether it's expressed 
as integer notes + pitch bend, or as continuous pitch - is virtual; it 
is not what you want to use to control your synth. Linear pitch is 
the *actual* pitch that will drive the pitch inputs on synths.

So far, there is no big deal what you call the two; you could just 
say that you have 12tET before the converter, and 12t pure 
temperament after it.
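
To make step 2 concrete, here is a minimal sketch of such a scale converter,
assuming note pitch is 12.0/octave (one unit per scale step) and linear pitch
is 1.0/octave. The cent offsets for the "pure" scale are purely illustrative;
any 12-tone temperament table would do.

#include <math.h>

/* deviation from 12tET per scale degree, in cents (example values only) */
static const double pure_cents[12] = {
     0.0, -29.3,   3.9,  15.6, -13.7,  -2.0,
   -31.3,   2.0,  13.7, -15.6,  17.6, -11.7
};

/* in:  note pitch, 12.0/octave, 0.0 = reference note
 * out: linear pitch, 1.0/octave, same reference       */
double scale_convert(double note_pitch)
{
    int n = (int)floor(note_pitch + 0.5);     /* nearest scale step */
    double bend = note_pitch - n;             /* continuous part (pitch bend) */
    int degree = ((n % 12) + 12) % 12;
    return n / 12.0                           /* 12tET position */
         + pure_cents[degree] / 1200.0        /* per-degree correction */
         + bend / 12.0;                       /* bend passes through */
}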


Now, if you wanted to insert an effect that looks at the input and 
generates a suitable chord, where would you put it, and how would you 
implement it?

Hint:   There are two answers; one relatively trivial, and one that's
both complicated and requires that two plugins are completely
aware of the details of the pure 12t scale.


If this does not demonstrate why I think NOTEPITCH is useful, I 
frankly have no idea how to explain it, short of implementing both 
alternatives in code.


> > * Is there a good reason to make event system timestamps
> >   relate to musical time rather than audio time?
>
> yes. musical time is, literally, the way a musician perceives
> time. he will say something like "move the snare to the sixteenth
> before beat three there" but not "move it to sample 3440004."

Of course - but I don't see how that relates to audio timestamps. 
Musical time is something that is locked to the sequencer's timeline, 
whereas audio time is simply a running sample count.

Whenever the sequencer is running (and actually when it's stopped as 
well!), there is a well defined relation between the two. If there 
weren't, you would not be able to control a softsynth from the 
sequencer with better than totally random latency!

Within the context of a plugin's process()/run() call, the sequencer 
will already have defined the musical/audio relation very strictly, 
so it doesn't matter which one you get - you can always translate. 
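
(To make that concrete: a minimal sketch of block-local conversion, assuming
the host hands the plugin the tempo and musical position valid at the first
frame of the block. Struct and field names are invented for the example.)

typedef struct {
    double ticks_at_start;   /* musical position at frame 0 of the block */
    double ticks_per_sec;    /* tempo, constant over this block */
    double sample_rate;      /* audio frames per second */
} block_time;

/* musical time -> frame offset within the block */
static double ticks_to_frame(const block_time *t, double ticks)
{
    return (ticks - t->ticks_at_start) * t->sample_rate / t->ticks_per_sec;
}

/* frame offset within the block -> musical time */
static double frame_to_ticks(const block_time *t, double frame)
{
    return t->ticks_at_start + frame * t->ticks_per_sec / t->sample_rate;
}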

You can *always* translate? Well, not quite! Read on.


> the system should do its best to make things transparent to the
> musician who uses (and programs) it; that is why i am convinced
> the native time unit should relate to musical time.

OTOH, musical time is really rather opaque to DSP programmers, and 
either way, has to be translated into audio time, sooner or later, 
one way or another. I find it illogical to use a foreign unit in a 
place where everything else is bound to "real" time and samples.

And that's not all there is to it. Surprise:

Musical time *stops* when you stop the sequencer, which means that 
plugins can no longer exchange timestamped events! You may send and 
receive events all you like, but they will all have the same 
timestamp, and since time is not moving, you're not *allowed* to 
handle the events you receive. (If you did, sample accurate timing 
would be out the window.)

So, for example, you can't change controls on your mixer, unless you 
have the sequencer running. How logical is that in a virtual studio?  
How logical is it not to be able to pla

Re: [linux-audio-dev] VST 2.0 observations

2002-12-14 Thread Tim Hockin
> Conclusion:
>   a. The information is there, but you have to *pull*
>  it from the host. Doesn't seem like you're ever
>  notified about tempo changes or transport events.
> 
>   b. Since you're asking the host, and there are no
>  other arguments, there is no way that one plugin
>  can keep track of more than one timeline. It
>  seems that it is assumed that there is only one
>  timeline in a net.

So all the mechanisms we're discussing are more flexible?  Obviously we
don't have EVERYTHING hammered out yet..

> 3. There are calls that allow plugins to get the audio input
>and output latency.
>
> Conclusion:
>   c. I'm assuming that this is mostly useful for
>  VU-meters and other stuff that needs to be
>  delayed appropriately for correct display.
>  Obviously not an issue when the audio latency
>  is significantly shorter than the duration of
>  one video frame on the monitor! ;-) Seriously
>  though, this is needed for "high latency"
>  applications to display meters and stuff
>  correctly. They're not very helpful if they're
>  half a second early!

yeah, this is now on the TODO list

> 4. There is a feature that allows plugins to tell the host
>which "category" they would fit in. (There is some

Ick - external.

> 9. There is a bypass feature, so that hosts can have
>plugins implement sensible bypass for mono -> surround
>and other non-obvious in/out relations. (As if in/out
>relations ever were to be assumed "obvious"!)

not clear what you mean..




Re: [linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread Tim Hockin
> Well, yes, if you have more than the overview. I'm interested in time 
> info structs, events and that sort of stuff.
> 
> (I don't have a Mac to run that SDK self extracting binary... *heh*)

I had a mac-geek friend open it for me:

http://www.hockin.org/~thockin/CoreAudio/



Re: [linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread David Gerard Matthews
David Olofson wrote:


On Sunday 15 December 2002 03.44, Tim Hockin wrote:


Aaargh! Can't seem to find anything more interesting than a PDF
with a very basic overview... Is there a freely available SDK
anywhere?

Would just like to say that I find some parts of that PDF a bit
scary... We're *not* talking about a lean and mean low overhead
API here, that's for sure!


I have the docs, you still need them?



Well, yes, if you have more than the overview. I'm interested in time 
info structs, events and that sort of stuff.

(I don't have a Mac to run that SDK self extracting binary... *heh*)

Ran into the same problem myself.  (Actually, that's not entirely true: 
I do have an ancient
PowerMac 6116 that hasn't been booted in well over a year, but it wasn't 
really worth it just
to read the file)  Isn't there a free utility out there somewhere 
which can open .sit and .hqx
files?  Google didn't turn up anything for me...
-dgm


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
  --- http://olofson.net --- http://www.reologica.se ---








Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Tim Goetze
David Olofson wrote:

>Well, considering that we seem to have virtually *no* input from 
>people with solid experience with software sequencers or traditional 
>music theory based processing, I suggest we either decide to build a 
>prototype based on what we *know*, or put XAP on hold until we manage 
>to get input from people with real experience in more fields.

it's (mostly) all there for you to read. there's a pure sequencer 
engine called 'tse3' out there that is somebody's third go at
the sequencer alone. no matter what its merits are, reading
its source is bound to give you an idea how things come together
in a sequencer (hosted at sf.net iirc). and there's always muse
which, iirc, also comprises audio.

>   * Is an explicitly scale related pitch control type needed?

1. / octave is the politically correct value i guess. 
12. / octave is what i am happy with.

since transformation between the two is a simple multiplication,
i don't care much which gets voted.

>   * Is there a good reason to make event system timestamps
> relate to musical time rather than audio time?

yes. musical time is, literally, the way a musician perceives 
time. he will say something like "move the snare to the sixteenth
before beat three there" but not "move it to sample 3440004."

the system should do its best to make things transparent to the
musician who uses (and programs) it; that is why i am convinced
the native time unit should relate to musical time.

i do not think it should be explicitly 'bar.beat.tick', but 
total ticks that get translated when needed. this judgement is 
based on intuition rather than fact i fear. for one thing, it 
makes all arithmetic and comparisons on timestamps a good deal 
less cpu-bound. it is simpler to describe. however, in many 
algorithms it makes the % operator necessary instead of a 
direct comparison. otoh, the % operator can be used effectively
to cover multi-bar patterns, which is where the bbt scheme 
becomes less handy.
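
a small sketch of the two representations, with a fixed meter and tick
resolution picked only for illustration (real code would read these from
the meter map):

typedef struct { unsigned bar, beat, tick; } bbt_time;

enum { TICKS_PER_BEAT = 96, BEATS_PER_BAR = 4 };    /* example meter only */

/* total ticks -> bar.beat.tick (valid only while the meter is constant) */
static bbt_time ticks_to_bbt(unsigned total)
{
    bbt_time t;
    t.tick = total % TICKS_PER_BEAT;
    t.beat = (total / TICKS_PER_BEAT) % BEATS_PER_BAR;
    t.bar  = total / (TICKS_PER_BEAT * BEATS_PER_BAR);
    return t;
}

/* position inside a 2-bar pattern: the '%' trick mentioned above */
static unsigned pattern_pos(unsigned total)
{
    return total % (2 * TICKS_PER_BEAT * BEATS_PER_BAR);
}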

>   * Should plugins be able to ask the sequencer about *any*
> event, for the full length of the timeline?

you're perfectly right in saying that all events destined for 
consumption during one cycle must be present when the plugin
starts the cycle. i do not think it is sane to go beyond this
timespan here.

however time conversion functions must exist that give 
valid results for points past and future with respect to 
the current transport time in order to correctly schedule 
future events.

>   * Is there a need for supporting multiple timelines?

this is a political decision, and it's actually a decision you
have to make twice: one -- multiple tempi at the same point, 
and two -- multiple ways to count beats (7/8 time vs 3/4 time 
vs 4/4 time etc) in concurrence. 

being politically quite incorrect, i am happy supporting only 
one tempo and one time at the same point. imagine how 
complicated things get when you answer 'yes' two times above,
and add to this that i can describe the music i want to make
without (even standard polyrhythmic patterns because they 
usually meet periodically). 

multiple tempi are really uncommon, and tend to irritate 
listeners easily.

>   * Is it at all possible, or reasonable, to support
> sequencers, audio editors and real time synths with
> one, single plugin API?

the sequencer definitely needs a different kind of connection
to the host. in fact it should be assumed it is part of, or
simply is, the host i think. 

for simple hosts, default time conversion facilities are really
simple to implement: one tempo and one time at transport time
zero does it. conversion between linear and musical time is then
a simple multiplication.

audio editors, i don't know. if you call it 'offline processing' 
instead i ask where's the basic difference to realtime.

real time synths -- wait, that's the point, isn't it? ;)

tim




Re: [linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 03.44, Tim Hockin wrote:
> > Aaargh! Can't seem to find anything more interesting than a PDF
> > with a very basic overview... Is there a freely available SDK
> > anywhere?
> >
> > Would just like to say that I find some parts of that PDF a bit
> > scary... We're *not* talking about a lean and mean low overhead
> > API here, that's for sure!
>
> I have the docs, you still need them?

Well, yes, if you have more than the overview. I'm interested in time 
info structs, events and that sort of stuff.

(I don't have a Mac to run that SDK self extracting binary... *heh*)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 03.25, David Gerard Matthews wrote:
[...]
> >I would like to clarify something here: musical time as used in an
> >API is not by definition "periodic". (That's just an
> > interpretation of it, normally based on the time signature.)
> >
> >The APIs that don't use audio frames for timestamps and song
> > position generally use "ticks". 1.0 per quarter note is a fine
> > unit, for example, if we (like practically everyone else) decide
> > to use the quarter note as the base. It's rather handy, because a
> > quarter note is always a quarter note, but a bar might be
> > anything...
>
> Well, not necessarily!  A quarter note is not always the unit of a
> single beat.

Well, I never said it was the unit of a *beat*. ;-)


>  In traditional
> music theory, 6/8 meter is frequently thought of in 2  - the
> dotted-quarter note gets the beat.

Well, yes - I've actually used that sometimes.


> Half-note based meters (2/2, 3/2) are also very common, and things
> like 5/16 are not
> uncommon in modern music.  That said, I think what you mean by
> quarter-note is
> what I would call "counting unit" or "durational unit" or something
> like that.

Yes. And either way, even if you never *use* a quarter note, or the 
corresponding distance, in your music, quarters, dotted quarters, 
16ths or whatever still have a fixed relation. Just tell the plugin 
to do 16ths, and it'll get the idea.


> >  while you can just modulate some audio with
> >(sin(musical_time * M_PI * 2.0) + 1.0) or whatever. ;-)
>
> Nice.  Sounds like fun.

Or maybe not...

out = in * (1.0 - fmod(musical_time * 4.0, 1.0));

would be more fun, although you'd need some filtering on the 
modulator to kill that awful clicking. Or just use it to index an 
interpolated "envelope waveform" of some 16 points or so. (*Now* it 
starts to get interesting. :-)
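
A sketch of that "envelope waveform" idea, just to show the shape of it: a
16-point table indexed by musical time (in beats) with linear interpolation.
The table contents are arbitrary, and negative time isn't handled.

#include <math.h>

#define ENV_POINTS 16

/* one modulator cycle per beat; the values are arbitrary */
static const float env[ENV_POINTS] = {
    1.0f, 0.8f, 0.6f, 0.9f, 0.4f, 0.7f, 0.3f, 0.5f,
    0.2f, 0.6f, 0.1f, 0.4f, 0.8f, 0.3f, 0.9f, 0.0f
};

static float env_mod(double musical_time)      /* musical_time in beats */
{
    double pos = fmod(musical_time, 1.0) * ENV_POINTS;
    int i = (int)pos;
    double frac = pos - i;
    return (float)(env[i] + (env[(i + 1) % ENV_POINTS] - env[i]) * frac);
}

/* usage: out[n] = in[n] * env_mod(musical_time_at(n)); */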


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread Tim Hockin
> Aaargh! Can't seem to find anything more interesting than a PDF with 
> a very basic overview... Is there a freely available SDK anywhere?
> 
> Would just like to say that I find some parts of that PDF a bit 
> scary... We're *not* talking about a lean and mean low overhead API 
> here, that's for sure!

I have the docs, you still need them?



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Gerard Matthews
David Olofson wrote:


On Sunday 15 December 2002 00.32, Paul Davis wrote:


	* Is there a good reason to make event system timestamps
	  relate to musical time rather than audio time?


Again, I would rather let the timestamps deal with audio time. 
Hosts which work in bars/beats/frames
should be capable of doing the necessary conversion.  Remember,
there

you're ignoring *plugins* that want to work with B|b|t durations.
the canonical examples that i've mentioned several times are
tempo-synced LFO's and delays. these can either be plugins on their
own or more likely for XAP, integrated into a modulation section of
an "instrument".



Exactly.

Unless I'm missing something, all you need for this is a way for 
plugins to keep track of the tempo (sync) and musical time (lock). 
The TEMPO + POSITION_* event set I proposed should be sufficient for 
sample accurate lock to musical time, even across tempo changes 
and transport skips.

Ok, fair enough.  Yes, the approach you outline above seems reasonable.


are plenty of parameters which might need some time indications
but which are completely unrelated to notions of tempo.  I'm
thinking mostly about LFOs and other modulation sources here
(although there might be good grounds for lumping these in with
frequency controlled parameters.)  Just as I would rather


yes, sometimes you might want to control them quite independently
of tempo. but lots of the most interesting instruments in the
software synth world now allow tempo sync as an option, and its
very nice to have it available.



It is not just "very nice"; it's a required feature, IMNSHO.


Right.  

And it should preferably be sample accurate, and capable of tracking 
even the most furious tempo maps. Someone will complain sooner or 
later if they aren't.

Absolutely.


see pitch control make as few assumptions as possible about tuning
and temperament, I would like to see time control make as few
assumptions as possible about tempo and duration.  Sequencers
generally do operate within the shared assumptions of traditional
concepts of periodic rhythm, but in a lot of music (from pure
ambient to many non-Western musics to much avant-garde music)
such notions are irrelevant at best.


true, and i've been known to make this point myself. but the fact
that there is a lot of music in which periodic rhythm is irrelevant
doesn't erase all the music in which it's a central organizing
principle. coming up with an API that doesn't facilitate periodic
(even if highly variable) structure when arranging things in time
makes working with such music more cumbersome than it need be, some
of the time.


Point taken.  

I would like to clarify something here: musical time as used in an 
API is not by definition "periodic". (That's just an interpretation 
of it, normally based on the time signature.)

The APIs that don't use audio frames for timestamps and song position 
generally use "ticks". 1.0 per quarter note is a fine unit, for 
example, if we (like practically everyone else) decide to use the 
quarter note as the base. It's rather handy, because a quarter note 
is always a quarter note, but a bar might be anything...

Well, not necessarily!  A quarter note is not always the unit of a 
single beat.  In traditional
music theory, 6/8 meter is frequently thought of in 2  - the 
dotted-quarter note gets the beat.
Half-note based meters (2/2, 3/2) are also very common, and things like 
5/16 are not
uncommon in modern music.  That said, I think what you mean by 
quarter-note is
what I would call "counting unit" or "durational unit" or something like 
that.

 while you can just modulate some audio with 
(sin(musical_time * M_PI * 2.0) + 1.0) or whatever. ;-)

Nice.  Sounds like fun.




//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
  --- http://olofson.net --- http://www.reologica.se ---








Re: [linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 03.12, David Olofson wrote:
> Aaargh! Can't seem to find anything more interesting than a PDF
> with a very basic overview... Is there a freely available SDK
> anywhere?

Doh! Forget about that. (Somehow, when you Google for stuff on that 
kind of site, you *always* end up in the completely wrong place...)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] I need help! I can't handle the problem about full-duplex (playback & record) programming

2002-12-14 Thread Pascal Haakmat
13/12/02 15:46, leo zhu wrote:

> Hi, guys,
> 
> I am working on a project in which I need to implement
> playback and recording on the same sound card at
> the same time. 
> 
> I open the soundcard with RDWR mode and used 'select'
> to wait for sound data on card and read it. After
> that, I receive audio data from socket and write this
> data into buffer on card. for now. I tested it and i
> found if just one way, ie. just read or write, it
> works fine, the quality of sound is fine. but if two
> way, ie. play&record, the sound is horrible and the
> weird thing is that I always get better sound quality from
> playback than what I got from recording. (I heard the
> audio from two ends). by the way, I used
> SNDCTL_DSP_TRIGGER to synchronise them.
> 
> can this method implement full-duplex functions ? if
> not, what should I do? thanks in advance

Yes, it's possible in principle, and it can work. Although some
cards/drivers do it better than others... On Linux I believe the
easiest way is to open the audio device twice: once for reading and
once for writing. There might be other portability problems with that
method as well. 
I've never gotten SNDCTL_DSP_TRIGGER to work as
expected, but that may just have been impatience.
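
For what it's worth, a minimal sketch of the "open the device twice" approach,
using the standard OSS calls. Error handling is stripped down, the device path
may differ on your system, and whether a card accepts two opens at once is
driver dependent.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* one fd for capture, one for playback, identical parameters on both */
int open_duplex(int *rec_fd, int *play_fd)
{
    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    int fds[2], i;

    fds[0] = open("/dev/dsp", O_RDONLY);
    fds[1] = open("/dev/dsp", O_WRONLY);
    if (fds[0] < 0 || fds[1] < 0)
        return -1;

    for (i = 0; i < 2; i++) {
        ioctl(fds[i], SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fds[i], SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fds[i], SNDCTL_DSP_SPEED, &rate);
    }
    *rec_fd = fds[0];
    *play_fd = fds[1];
    return 0;          /* then select() on both and shuffle data */
}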



[linux-audio-dev] AudioUnits SDK?

2002-12-14 Thread David Olofson

Aaargh! Can't seem to find anything more interesting than a PDF with 
a very basic overview... Is there a freely available SDK anywhere?

Would just like to say that I find some parts of that PDF a bit 
scary... We're *not* talking about a lean and mean low overhead API 
here, that's for sure!


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 00.32, Paul Davis wrote:
> >>* Is there a good reason to make event system timestamps
> >>  relate to musical time rather than audio time?
> >
> >Again, I would rather let the timestamps deal with audio time. 
> > Hosts which work in bars/beats/frames
> >should be capable of doing the necessary conversion.  Remember,
> > there
>
> you're ignoring *plugins* that want to work with B|b|t durations.
> the canonical examples that i've mentioned several times are
> tempo-synced LFO's and delays. these can either be plugins on their
> own or more likely for XAP, integrated into a modulation section of
> an "instrument".

Exactly.

Unless I'm missing something, all you need for this is a way for 
plugins to keep track of the tempo (sync) and musical time (lock). 
The TEMPO + POSITION_* event set I proposed should be sufficient for 
sample accurate lock to musical time, even across tempo changes 
and transport skips.


> >are plenty of parameters which might need some time indications
> > but which are completely unrelated to notions of tempo.  I'm
> > thinking mostly about LFOs and other modulation sources here
> > (although there might be good grounds for lumping these in with
> > frequency controlled parameters.)  Just as I would rather
>
> yes, sometimes you might want to control them quite independently
> of tempo. but lots of the most interesting instruments in the
> software synth world now allow tempo sync as an option, and its
> very nice to have it available.

It is not just "very nice"; it's a required feature, IMNSHO.

And it should preferably be sample accurate, and capable of tracking 
even the most furious tempo maps. Someone will complain sooner or 
later if they aren't.


> >see pitch control make as few assumptions as possible about tuning
> >and temperament, I would like to see time control make as few
> >assumptions as possible about tempo and duration.  Sequencers
> >generally do operate within the shared assumptions of traditional
> >concepts of periodic rhythm, but in a lot of music (from pure
> > ambient to many non-Western musics to much avant-garde music)
> > such notions are irrelevant at best.
>
> true, and i've been known to make this point myself. but the fact
> that there is a lot of music in which periodic rhythm is irrelevant
> doesn't erase all the music in which it's a central organizing
> principle. coming up with an API that doesn't facilitate periodic
> (even if highly variable) structure when arranging things in time
> makes working with such music more cumbersome than it need be, some
> of the time.

I would like to clarify something here: musical time as used in an 
API is not by definition "periodic". (That's just an interpretation 
of it, normally based on the time signature.)

The APIs that don't use audio frames for timestamps and song position 
generally use "ticks". 1.0 per quarter note is a fine unit, for 
example, if we (like practically everyone else) decide to use the 
quarter note as the base. It's rather handy, because a quarter note 
is always a quarter note, but a bar might be anything...

The important point is that musical time represents the movement of a 
*timeline*. Unlike audio time, it is not free running time in any 
way, but a representation of the sequencer's idea of time. That is, 
locking to musical time is essentially the same thing as locking to 
the sequencer. And if you do that, your plugin and the sequencer will 
have a common reference for time, that the sequencer can play around 
with as it wishes, while you can just modulate some audio with 
(sin(musical_time * M_PI * 2.0) + 1.0) or whatever. ;-)
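
As a sketch of what that locking could look like inside a plugin's run()
callback - assuming the host delivers the tempo (ticks per second) and the
musical position at frame 0 of each block, which is roughly what the TEMPO +
POSITION_* proposal amounts to. All names here are invented for the example.

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct {
    double pos_ticks;       /* musical position at frame 0 of this block */
    double ticks_per_sec;   /* current tempo */
    double sample_rate;
} timeline_state;

/* multiply 'io' by a tempo-locked sine, one cycle per tick (quarter note) */
static void run_lfo(timeline_state *tl, float *io, unsigned frames)
{
    double ticks_per_frame = tl->ticks_per_sec / tl->sample_rate;
    unsigned n;
    for (n = 0; n < frames; n++) {
        double musical_time = tl->pos_ticks + n * ticks_per_frame;
        io[n] *= (float)(0.5 * (sin(musical_time * 2.0 * M_PI) + 1.0));
    }
    tl->pos_ticks += frames * ticks_per_frame;   /* carry into next block */
}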


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Steve Harris
On Sun, Dec 15, 2002 at 01:59:24 +0100, David Olofson wrote:
> Speaking of which, does anyone hack LADSPA plugins in C++, or other 
> languages?

The CMT set are all C++. That's the reason I didn't start by contributing to
it.

- Steve



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Steve Harris
On Sat, Dec 14, 2002 at 07:47:55 -0500, David Gerard Matthews wrote:
> I'm pretty sure that Apple adopted it just to be different/moderately 
> incompatible.  They like to trumpet
> OSX's Unix kernel, and the fact that Unix apps port to OSX pretty 
> easily, but the coding style and
> development tools and techniques they encourage seem to be  pretty un-Unix.
> OTOH, I certainly couldn't write Obj C code, but it doesn't seem to be 
> too hard to read if you're familiar
> with C and C++.

It doesn't really require any C++ familiarity; it's more like Smalltalk (a
"real" OO language).

Apple do have a history of Objective C, and they use the GNU compiler IIRC, so
in theory porting OSX apps or writing Linux/OSX apps shouldn't be hard.

- Steve



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Nathaniel Virgo
On Sunday 15 December 2002 12:59 am, David Olofson wrote:
> Speaking of which, does anyone hack LADSPA plugins in C++, or other
> languages?

The cmt set of LADSPA plugins is in C++.



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 01.49, Steve Harris wrote:
> On Sun, Dec 15, 2002 at 01:19:08 +0100, David Olofson wrote:
> > > Yes, that was my conclusion too. It's much cleaner than C++, but
> > > it's pretty slow. I'm quite surprised that Apple went for it for
> > > DSP code.
> >
> > OTOH, have you looked at how the VST host/plugin interface is
> > actually implemented? Pretty "interesting". :-) (And here we
> > worry about function call overhead...)
>
> No, but I've heard that it's not really C++ underneath. I'm always
> worried about looking at things like that in case I ever want to
> implement something similar. I think it's better to know it's an (IPR)
> clean implementation.

Yeah, that's probably a good idea. (Not that we'd be very likely to 
copy that part anyway... :-)


> > Seriously though, I think a plugin API of this kind *needing* C++
> > would suggest that there's something wrong with the design. It
> > shouldn't be that complex.
>
> I agree. Sometimes it's nice to have OO constructs inside plugins
> though, e.g. filters are very clean if implemented with OO.

Well, as long as the compiler generates a clean C interface, any 
language is fine for plugin implementations.

Speaking of which, does anyone hack LADSPA plugins in C++, or other 
languages?


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Steve Harris
On Sun, Dec 15, 2002 at 01:19:08 +0100, David Olofson wrote:
> > Yes, that was my conclusion too. It's much cleaner than C++, but it's
> > pretty slow. I'm quite surprised that Apple went for it for DSP
> > code.
> 
> OTOH, have you looked at how the VST host/plugin interface is 
> actually implemented? Pretty "interesting". :-) (And here we worry 
> about function call overhead...)

No, but I've heard that it's not really C++ underneath. I'm always worried
about looking at things like that in case I ever want to implement
something similar. I think it's better to know it's an (IPR) clean
implementation.

> Seriously though, I think a plugin API of this kind *needing* C++ 
> would suggest that there's something wrong with the design. It 
> shouldn't be that complex.

I agree. Sometimes it's nice to have OO constructs inside plugins though, e.g.
filters are very clean if implemented with OO.

- Steve



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Gerard Matthews
Steve Harris wrote:


On Sun, Dec 15, 2002 at 12:25:46 +0100, David Olofson wrote:


Well, I don't exactly know Objective C, but I've read up on the 
basics, for reasons I can't remember... (Probably to see if it was 
the "C++ done right" I was looking for. In that case, it was not, 
because the constructs are *higher* level; not lower.)


Yes, that was my conclusion too. Its much cleaner than c++, but its pretty
slow. I'm quite supprised that Apple went for it for DSP code.


I'm pretty sure that Apple adopted it just to be different/moderately 
incompatible.  They like to trumpet
OSX's Unix kernel, and the fact that Unix apps port to OSX pretty 
easily, but the coding style and
development tools and techniques they encourage seem to be  pretty un-Unix.
OTOH, I certainly couldn't write Obj C code, but it doesn't seem to be 
too hard to read if you're familiar
with C and C++.
-dgm






Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 01.10, Steve Harris wrote:
> On Sun, Dec 15, 2002 at 12:25:46 +0100, David Olofson wrote:
> > Well, I don't exactly know Objective C, but I've read up on the
> > basics, for reasons I can't remember... (Probably to see if it
> > was the "C++ done right" I was looking for. In that case, it was
> > not, because the constructs are *higher* level; not lower.)
>
> Yes, that was my conclusion too. It's much cleaner than C++, but it's
> pretty slow. I'm quite surprised that Apple went for it for DSP
> code.

OTOH, have you looked at how the VST host/plugin interface is 
actually implemented? Pretty "interesting". :-) (And here we worry 
about function call overhead...)


> There are ways of speeding up the message passing, but they're
> pretty ugly, and they don't solve it 100%. My current preferred
> solution is OO style C, which is pretty clean, but doesn't give you
> inheritance and a few other things, which would be nice.

Well, there's always struct-in-struct + typecasts... ;-)
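
(For the record, that idiom is tiny - single "inheritance" by embedding the
base struct as the first member, so a derived pointer can be cast to the base
type. The filter example below is just an illustration.)

typedef struct filter {
    void (*process)(struct filter *self, float *buf, unsigned frames);
    float coeff;
} filter;

typedef struct lowpass {
    filter base;        /* must be first: (filter *)&lp is valid */
    float state;
} lowpass;

static void lowpass_process(filter *self, float *buf, unsigned frames)
{
    lowpass *lp = (lowpass *)self;          /* the typecast part */
    unsigned i;
    for (i = 0; i < frames; i++) {
        lp->state += self->coeff * (buf[i] - lp->state);
        buf[i] = lp->state;
    }
}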

Seriously though, I think a plugin API of this kind *needing* C++ 
would suggest that there's something wrong with the design. It 
shouldn't be that complex.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Steve Harris
On Sun, Dec 15, 2002 at 12:25:46 +0100, David Olofson wrote:
> Well, I don't exactly know Objective C, but I've read up on the 
> basics, for reasons I can't remember... (Probably to see if it was 
> the "C++ done right" I was looking for. In that case, it was not, 
> because the constructs are *higher* level; not lower.)

Yes, that was my conclusion too. It's much cleaner than C++, but it's pretty
slow. I'm quite surprised that Apple went for it for DSP code.

There are ways of speeding up the message passing, but they're pretty
ugly, and they don't solve it 100%. My current preferred solution is OO style
C, which is pretty clean, but doesn't give you inheritance and a few other
things, which would be nice.

- Steve



Re: [linux-audio-dev] XAP and these timestamps...

2002-12-14 Thread David Olofson
On Sunday 15 December 2002 00.26, Paul Davis wrote:
> >A nice question to ask is 'what is time'. I suppose that there is
> > a direct correlation between time and sample frequency; but what
> > to do with non-constant sample frequency? (This is not a
> > hypothetical situation, since a sampled system which is
> > synchronised to an external source, could be facing variable
> > sample rate - when slaved to a VCR for instance). I believe the
> > answer lies in definition of which is your time master, and use
> > that as such; so in case of the slowed down VCR, the notion of
> > time will only progress slower, without causing any trouble to
> > the system. If no house clock or word clock is available, things
> > might end up hairy...
>
> i believe, though i am not certain, that you are confusing two
> different kinds of synchronization. there is:
>
> * positional synchronization ("where are we?")
> * sample clock synchronization ("how long between samples?")
>
> they are not related to each other in *any* way, AFAIK.

Well, they could be "related", I think... (Read on.)


> synchronizing position with a VCR via SMPTE (for example) has
> nothing to do with sample clock sync. likewise, a word clock
> connection between two digital devices has nothing to do with positional
> synchronization.

Good point. One could say that every sync source generates one of 
these:
* timing data (tempo, sample clock,...)
* positional data (song position, SMPTE,...)

Positional data sort of implies that you can extract timing data as 
well, provided you get a stream of positional data with sufficiently 
accurate timing.


Anyway, in that other post, I think I said there *is* a relation 
between all of these, but I forgot to explain why:

* Audio device syncs to wordclock
* Sequencer uses audio for timing (nominal sample rate assumed)

Note that both are just *sync* - not lock. If you wanted to sync with 
a VRC, you would most probably be using SMPTE instead of wordclock - 
and then, it would make a *lot* more sense to sync + lock the 
sequencer to that, and just let the audio interface do 48 kHz, or 
whatever you like.


Either way, the point I was trying to make still stands; by the time 
the host starts calling all plugins to process one block of data, 
*all* information the plugins will access must be *static* - or you will 
have plugins that run in "parallel" mysteriously fall out of sync.

So, the sequencer (which will obviously be one of the first plugins 
in the graph, if not integrated in the host) will have to figure out 
the SMPTE, audio, MIDI clock or whatever it's syncing or locking to 
for the first sample of the block, construct the timeline info for 
the block, and pass it to the plugins that want it.

At this point, all is decided. Sync and lock have been applied, and 
for all practical matters, you may consider this block the property of 
the audio hardware; you just have to do some audio processing to 
generate the actual data.
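
In other words, something like the following could be computed once, up
front, and handed to every plugin that asks for it (field names invented; the
point is only that nothing in it may change until the block is done):

typedef struct {
    int transport_running;      /* 0 = stopped */
    double song_pos_ticks;      /* musical position at frame 0 */
    double ticks_per_sec;       /* tempo, fixed for this block */
    unsigned frames_in_block;
} block_timeline_info;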


> so, slaving to a positional reference has no effect on sample
> rate.

Well, it *could*, if you assume that one SMPTE frame corresponds to N 
samples... ;-)

Not much point, though - you would only end up modulating the tempo 
of the still free running sequencer, as well as the pitch of any 
audio you generate. :-)


> conversely, slaving to a sample clock source has no effect on
> positional tracking. word clock does make variable sample rate
> possible, and indeed some systems use it to implement sync'ed
> varispeed. but its definitely not a consequence of using a
> positional reference like SMPTE or MTC. when slaving to those
> signals, the sample rate remains constant (all other things
> remaining the same), and all that changes are the notions of "where
> we are?" and "how fast are we moving?" and "what direction are we
> moving in?". the same number of samples are processed every second,
> but what those samples contain will vary with the positional
> reference.

This is a great explanation, IMHO.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



[linux-audio-dev] Meterbridge 0.9.0

2002-12-14 Thread Steve Harris
http://plugin.org.uk/meterbridge/

Changes

* Greatly improved the readability of the VU meter
* Made the VU meter conform to the AES analogue equivalent levels. This
should make it more generally useful without adjustment and if you
have properly calibrated DA converters and analogue equipment then
the levels should agree.
* Made the DPM meter look nicer and easier to read.
* Cured a handful of segfaults (thanks to Melanie and Mark K.).
* Reduced the maximum CPU usage of the UI. It should never have caused RT
problems before, but it could have stolen cycles from other UI
threads that needed them more.
* Cleaned up and optimised the port insertion (input monitoring) code, it's
still hacky but cleaner and more reliable now.
* Added a "set jack name" option, -n.
* Will now make a meter for every non-flag argument, even if there is no port
matching that name, so, eg. you can create an unconnected 4
channel meter with "meterbridge - - - -".
* More reliable cleanup on exit.

Before it goes to 1.0 I'd like some sort of documentation, and maybe to
improve the input port monitoring situation. So don't hold your breath ;)
If anyone wants to write anything for the docs I would be extremely
grateful.

I will look at any tricky things, like meter labelling, and antialiased
needles after 1.0.

- Steve



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread Paul Davis
>>  * Is there a good reason to make event system timestamps
>>relate to musical time rather than audio time?
>>
>Again, I would rather let the timestamps deal with audio time.  Hosts 
>which work in bars/beats/frames
>should be capable of doing the necessary conversion.  Remember, there 

you're ignoring *plugins* that want to work with B|b|t durations. the
canonical examples that i've mentioned several times are tempo-synced
LFO's and delays. these can either be plugins on their own or more
likely for XAP, integrated into a modulation section of an
"instrument".

>are plenty of parameters which might need some time indications but
>which are completely unrelated to notions of tempo.  I'm thinking
>mostly about LFOs and other modulation sources here (although there
>might be good grounds for lumping these in with frequency controlled
>parameters.)  Just as I would rather

yes, sometimes you might want to control them quite independently of
tempo. but lots of the most interesting instruments in the software
synth world now allow tempo sync as an option, and its very nice to
have it available. 

>see pitch control make as few assumptions as possible about tuning
>and temperament, I would like to see time control make as few
>assumptions as possible about tempo and duration.  Sequencers
>generally do operate within the shared assumptions of traditional
>concepts of periodic rhythm, but in a lot of music (from pure ambient
>to many non-Western musics to much avant-garde music) such notions
>are irrelevant at best.

true, and i've been known to make this point myself. but the fact that
there is a lot of music in which periodic rhythm is irrelevant doesn't
erase all the music in which it's a central organizing principle. coming 
up with an API that doesn't facilitate periodic (even if highly
variable) structure when arranging things in time makes working with
such music more cumbersome than it need be, some of the time.

--p



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 23.20, David Gerard Matthews wrote:
[...]
> > * Is an explicitly scale related pitch control type needed?
>
> I would argue that it's not.

Do you have experience with processing events meant for non-ET 
scales? I'm looking for a definitive answer to this question, but it 
seems that this kind of stuff is used so rarely that one might argue 
that 12tET and scaleless are the only two significant alternatives. 

(And in that case, the answer is very simple: Use 1.0/octave, and 
assume 12tET whenever you want to think in terms of notes.)


> > * Is there a good reason to make event system timestamps
> >   relate to musical time rather than audio time?
>
> Again, I would rather let the timestamps deal with audio time. 
> Hosts which work in bars/beats/frames
> should be capable of doing the necessary conversion.  Remember,
> there are plenty of parameters which
> might need some time indications but which are completely unrelated
> to notions of tempo.  I'm thinking
> mostly about LFOs and other modulation sources here (although there
> might be good grounds for lumping
> these in with frequency controlled parameters.)  Just as I would
> rather see pitch control make as few
> assumptions as possible about tuning and temperament, I would like
> to see time control make as few
> assumptions as possible about tempo and duration.

Well, you do have to be able to lock to tempo and/or musical time in 
some cases - but (IMHO) that is an entirely different matter, which 
has little to do with whatever format the event timestamps have.


>  Sequencers
> generally do operate within the
> shared assumptions of traditional concepts of periodic rhythm, but
> in a lot of music (from pure ambient
> to many non-Western musics to much avant-garde music) such notions
> are irrelevant at best.

This confirms my suspicions. (And I even have some experience of just 
being annoyed with those non-optional grids... :-/ In some cases, you 
just want to make the *music* the timeline; not the other way around.)


> > * Should plugins be able to ask the sequencer about *any*
> >   event, for the full length of the timeline?
>
> Not sure that I grok the ramifications of this.

Let's put it like this; two alternatives (or both, maybe):

1. Plugins receive timestamped events, telling them
   what to do during each block. Effectively the same
   thing as audio rate control streams; only structured
   data instead of "raw" samples.

2. Plugins get direct access to the musical events,
   as stored within the sequencer. (These will obviously
   not have audio timestamps!) Now, plugins can just
   look at the part of the timeline corresponding to
   each block, and do whatever they like. Some event
   processors may well play the whole track backwards!
   This solution would also make it possible for
   plugins to *modify* events in the sequencer database,
   which means you can implement practically anything
   as a plugin.
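
To pin down what alternative 1 means in practice, here is a sketch of a
block-local, timestamped event and the typical consumer loop. The layout is
invented for illustration; the point is that it behaves like structured
audio-rate control data.

/* Alternative 1: a stream of events local to the current block. */
typedef struct event {
    unsigned frame;         /* offset within the current block */
    int type;               /* e.g. note on/off, control change */
    int index;              /* which control or voice it targets */
    float value;
    struct event *next;     /* host keeps the queue sorted by 'frame' */
} event;

/* inside process(): handle everything due before 'end_frame' */
static const event *run_until(const event *q, unsigned end_frame)
{
    while (q && q->frame < end_frame) {
        /* apply the event to internal state here */
        q = q->next;
    }
    return q;
}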


Obviously, these are two very different ways of dealing with events. 
I would say they are solutions for completely different problem 
spaces, with only slight overlap. One cannot replace the other; if 
you want to deal with the combined problem space, you need both.

Why can't 2 deal with everything? Well, consider the simple case 
where you want to chain plugins. One plugin reads events from the 
sequencer, and is supposed to make a synth play the events. However, 
now the *synth* will expect to have access to an event database as 
well! So what does that first plugin do...? Plug in as a "wrapper" 
around the whole sequencer database?

Well, I could think of ways to make that work, but none that make any 
sense for a real time oriented API.

Suggestions?

Is there a third kind of "event system" that I've missed?


> > * Is there a need for supporting multiple timelines?
>
> Possibly.  I would say definitely if the audio event timestamps
> relate to musical time.

Well, I see what you mean - but see above; The timestamps are not 
much of an issue, since sync'ed and locked effects would get their 
musical time info by other means.

Though, obviously - if timestamps are in musical time and you can 
have only one timeline, you have a problem... Or not. (My question is 
basically whether you need to be able to sync or lock to multiple 
timelines or not.)


> For example, in a sequencer, it should be possible to have
> different tracks existing
> simultaneously with different tempi. Obviously, if the timestamps
> are derived from
> audio time, then only a single timeline is needed, because you have
> a single time
> source which doesn't care about tempo.

For timestamps, yes - but if you want your plugin to "spontaneously" 
(i.e. without explicit "note" events) sync with the tempo or the 
beat...?


>  This hypothetical sequencer
> would be
> able to convert betwe

Re: [linux-audio-dev] XAP and these timestamps...

2002-12-14 Thread Paul Davis
>A nice question to ask is 'what is time'. I suppose that there is a direct
>correlation between time and sample frequency; but what to do with
>non-constant sample frequency? (This is not a hypothetical situation, since
>a sampled system which is synchronised to an external source, could be
>facing variable sample rate - when slaved to a VCR for instance). I believe
>the answer lies in definition of which is your time master, and use that as
>such; so in case of the slowed down VCR, the notion of time will only
>progress slower, without causing any trouble to the system. If no house
>clock or word clock is available, things might end up hairy...

i believe, though i am not certain, that you are confusing two
different kinds of synchronization. there is:

* positional synchronization ("where are we?")
* sample clock synchronization ("how long between samples?")

they are not related to each other in *any* way, AFAIK. synchronizing
position with a VCR via SMPTE (for example) has nothing to do with
sample clock sync. likewise, a word clock connection between two
digital devices has nothing to positional synchronization.

so, slaving to a positional reference has no effect on sample
rate. conversely, slaving to a sample clock source has no effect on
positional tracking. word clock does make variable sample rate
possible, and indeed some systems use it to implement sync'ed
varispeed. but its definitely not a consequence of using a positional
reference like SMPTE or MTC. when slaving to those signals, the sample
rate remains constant (all other things remaining the same), and all
that changes are the notions of "where we are?" and "how fast are we
moving?" and "what direction are we moving in?". the same number of
samples are processed every second, but what those samples contain
will vary with the positional reference.

a careful read of the last chapter of the protools manual is
recommended (the PDF is available online). it's not superb, but it's a
pretty good introduction to sync issues with DAWs and similar software.

--p



Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Gerard Matthews
David Olofson wrote:


On Saturday 14 December 2002 19.41, Tim Goetze wrote:


this is not meant to intimidate, rather to be a wake-up call.



[...many good points elided...]

Well, considering that we seem to have virtually *no* input from 
people with solid experience with software sequencers or traditional 
music theory based processing, I suggest we either decide to build a 
prototype based on what we *know*, or put XAP on hold until we manage 
to get input from people with real experience in more fields.

We do not seem to have sufficient information to answer the following 
questions:

	* Is an explicitly scale related pitch control type needed?

I would argue that it's not.  


	* Is there a good reason to make event system timestamps
	  relate to musical time rather than audio time?


Again, I would rather let the timestamps deal with audio time.  Hosts 
which work in bars/beats/frames should be capable of doing the 
necessary conversion.  Remember, there are plenty of parameters which 
might need some time indications but which are completely unrelated to 
notions of tempo.  I'm thinking mostly about LFOs and other modulation 
sources here (although there might be good grounds for lumping these 
in with frequency controlled parameters.)  Just as I would rather see 
pitch control make as few assumptions as possible about tuning and 
temperament, I would like to see time control make as few assumptions 
as possible about tempo and duration.  Sequencers generally do operate 
within the shared assumptions of traditional concepts of periodic 
rhythm, but in a lot of music (from pure ambient to many non-Western 
musics to much avant-garde music) such notions are irrelevant at best.


	* Should plugins be able to ask the sequencer about *any*
	  event, for the full length of the timeline?


Not sure that I grok the ramifications of this.



	* Is there a need for supporting multiple timelines?


Possibly.  I would say definitely if the audio event timestamps relate 
to musical time.  For example, in a sequencer, it should be possible 
to have different tracks existing simultaneously with different tempi.  
Obviously, if the timestamps are derived from audio time, then only a 
single timeline is needed, because you have a single time source which 
doesn't care about tempo.  This hypothetical sequencer would be able 
to convert between arbitrary representations of bpm and time 
signature, but the code for doing this would be in the host app, not 
the plugin.  Now, if the plugin timestamps events internally using 
musical time, then multiple timelines are necessary in the above 
scenario.

And the most fundamental, and most important question:

	* Is it at all possible, or reasonable, to support
	  sequencers, audio editors and real time synths with
	  one, single plugin API?


Probably not.  For audio editors, I think JACK is doing a very fine 
job.  In fact, beginning with FreqTweak, there seems to be some 
precedent for using JACK for plugins.  JACK's biggest problem, 
however, is its lack of MIDI support.  Basically, the way I see it, 
XAP would be for plugins and realtime synths hosted on a sequencer or 
DAW app which uses JACK for audio input/output.


If we *had* sufficient information to answer these questions, there 
wouldn't be much of an argument after everyone understood the 
problem. The details would just have been a matter of taste.

Now, we seem to have lots of ideas, but few facts, so there's not 
much point in further discussion. We need to test our ideas in real 
applications, and learn from the experience.

One thing which has crossed my mind: several people have brought up 
VST as a frame of reference, but has anyone looked at AudioUnits?  I 
admit that I haven't either, but the reference code is out there, and 
perhaps it might be a good idea to take a look at it.  (One potential 
problem is that the example code seems to be in Objective C.)
-dgm


I guess I'm hunting for *real* problems. Please post any ideas you 
might have - I'll try to either explain the solution, implement it, 
or admit that a different design is needed.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
  --- http://olofson.net --- http://www.reologica.se ---







Re: [linux-audio-dev] XAP: a polemic

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 19.41, Tim Goetze wrote:
> this is not meant to intimidate, rather to be a wake-up call.

[...many good points elided...]

Well, considering that we seem to have virtually *no* input from 
people with solid experience with software sequencers or traditional 
music theory based processing, I suggest we either decide to build a 
prototype based on what we *know*, or put XAP on hold until we manage 
to get input from people with real experience in more fields.

We do not seem to have sufficient information to answer the following 
questions:

* Is an explicitly scale related pitch control type needed?

* Is there a good reason to make event system timestamps
  relate to musical time rather than audio time?

* Should plugins be able to ask the sequencer about *any*
  event, for the full length of the timeline?

* Is there a need for supporting multiple timelines?


And the most fundamental, and most important question:

* Is it at all possible, or reasonable, to support
  sequencers, audio editors and real time synths with
  one, single plugin API?


If we *had* sufficient information to answer these questions, there 
wouldn't be much of an argument after everyone understood the 
problem. The details would just have been a matter of taste.

Now, we seem to have lots of ideas, but few facts, so there's not 
much point in further discussion. We need to test our ideas in real 
applications, and learn from the experience.


I guess I'm hunting for *real* problems. Please post any ideas you 
might have - I'll try to either explain the solution, implement it, 
or admit that a different design is needed.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Blockless processing

2002-12-14 Thread Fernando Pablo Lopez-Lezcano
> > What does your thingie do that sfront doesn't do?
> > sfront compiles SAOL / SASL text files (describing a 
> > processing & synthesis network) down to C which
> > compiles nicely with GCC.
> 
> SAOL is still block based AFAIK. This allows you to do some really neat
> tricks with feedback, knowing that the latency is only one sample.
> 
> In principle you can run any system with a block size of 1, but the
> performance will really suck. Maybe SAOL would be ok, anyone tried it?
> 
> > the basic idea is not new either... IIRC, Common Music
> > does much the same thing starting from a lisp dialect.
> 
> Yes, but it's lisp :)

The package is Common Lisp Music (CLM), does not use blocks (ie: block
size = 1), and compiles the sample generation loop of instruments into C
from a subset of Common Lisp (instruments are written in Common Lisp).
The other parts of the instrument run in compiled lisp. It is quite
fast. It is originally based on Common Lisp but there are now two other
implementations of the same primitives (unit generators) in both Scheme
and C. The Scheme port runs on guile and is getting quite close to the
Common Lisp / C based CLM in speed (factor of 2 or 3 as I recall). All
written by Bill Schottstaedt.

-- Fernando





Re: [linux-audio-dev] Bristol Synth

2002-12-14 Thread Anthony
* Paul Davis <[EMAIL PROTECTED]> [Dec 14 02 07:25]:
> >Hi, I've been playing a lot with bristol synth and really love it. So
> >much so that I've been trying to 'Jackify' it. Actually, I'm pretty
> >much done, but can't figure out the internal audio format. Its
> >interleaved floats I think, but not normalised to [-1,1]. If any of
> >the developers are here could you help me out? I can hear noise, but I
> >need to tune the maths. TIA.
> 
> all JACK audio data at this time consists of mono 32 bit floating
> point, normalized to [-1,+1].
> 
> --p

No, I'm talking about the other end of things, Bristol synth's
format. I need to convert it to the above. Actually, this isn't the
issue so much, but rather getting JACK to consume the data fast
enough.

--ant



Re: [linux-audio-dev] VST 2.0 observations

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 18.55, Steve Harris wrote:
> On Sat, Dec 14, 2002 at 06:04:55 +0100, David Olofson wrote:
> [plugin categorisation]
>
> > > This is a classic example of where external metadata is
> > > noticeably superior.
> >
> > Yeah. Especially since users may not agree with the authors'
> > choices, or may simply want to have plugins categorized based on
> > different criteria.
>
> Or the author may want to extend the categories.

Yeah... *heh* I see a great deal of that sort of metadata related 
(and other) feature creep in VST. It's becoming a mess, and people 
are whining about it more and more frequently.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and these timestamps...

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 16.20, Frank van de Pol wrote:
> ~
> Good thinking David,
>
> with a small set of events (tempo updates and position updates) you
> can maintain a real-time view of the tempo map for all plugins.
> This is the same concept as used in the ALSA sequencer API.
>
> Since the plugins require prequeueing (because of the processing in
> blocks, and to compensate lag caused by the plugin or
> processing/transmission delays), and this prequeueing includes the
> tempo/position changes you'll get into an interesting situation
> where events scheduled for a certain tick and events scheduled for
> a certain time (=sample) might be influenced by the tempo/position
> change events. In the ALSA sequencer this is solved by allowing
> events to be scheduled at either a specific time or specific tick
> (using two queues).

But you queue ahead, and there are no "slices" or similar, are there? 
Since there are no slices, and thus no unit for a fixed latency, you 
*must* queue ahead - or you might as well use the rawmidi interface 
and SCHED_FIFO.
 
Audio plugins work with one "time slice" at a time. What has happened 
with the timeline during a time slice (one block) is already decided 
before you call the first plugin in the graph. Nothing must be 
allowed to change during the processing of one slice. It is in fact 
too late as soon as the host decides to start processing the block.

Same thing as passing a block to the audio interface - when it's 
away, it's too late to change anything. The latency is something we 
have to deal with, and there is no way whatsoever to get around it. 

(All we can do is reduce it by hacking the kernel - and that seems to 
be pretty damn sufficient! :-)


You *could* have someone else prequeue future events and send them 
back to you at the right time, but then you would need to say:

* When you want the events back
* What the timestamps are related to (real or timeline)

And if the timestamps are related to the timeline:

* What to do in case of a transport stop or skip


Indeed, this is doable by all means, but it's a *sequencer* - not a 
basic timestamped event system. Throw a public domain implementation 
into the plugin SDK, so we don't have to explain over and over how to 
do this correctly.

Yes, it should be in the public domain, so people can rip the code 
and adapt it to their needs. There are many answers to that third 
question...

OTOH, if people want to keep their source closed, you could say they 
*deserve* to reimplement this all over! ;-)


> A nice question to ask is 'what is time'. I suppose that there is a
> direct correlation between time and sample frequency; but what to
> do with non-constant sample frequency? (This is not a hypothetical
> situation, since a sampled system which is synchronised to an
> external source, could be facing variable sample rate - when slaved
> to a VCR for instance).

Exactly. This is what Layla does, for example.

I have thought about whether or not plugins should know the exact 
sample rate, but I'm afraid it's too hard to find out the real truth 
to be worth the effort. You can probably ask Layla what the *current* 
exact sample rate is, but what's the use? Plugins will be at least a
few ms ahead anyway, so there's nothing much you can do about the 
data that is already enqueued for playback.

In short, if there is a sudden change, you're screwed, period.


> I believe the answer lies in definition of
> which is your time master, and use that as such; so in case of the
> slowed down VCR, the notion of time will only progress slower,
> without causing any trouble to the system.

Exactly. I think this is all that should be relevant to any plugins 
that are not also drivers, or otherwise need to directly deal with 
the relation between "engine time" and real world time.


> If no house clock or
> word clock is available, things might end up hairy...

Only if you care about it. For all practical matters, you can assume 
the following:

input_time = (sample_count - input_latency) / sample_rate;
output_time = (sample_count + output_latency) / sample_rate;

If the sample rate changes at "sample_count", well it's already way 
too late to compensate for it, because your changes will appear on 
the output exactly at "output_time".
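
As a quick sanity check of those two formulas, a minimal C sketch (the 
example numbers are invented; this assumes the declared, constant 
sample rate):

/* sketch only - uses the declared, constant sample_rate */
double input_time(long sample_count, long input_latency, double sample_rate)
{
    return (sample_count - input_latency) / sample_rate;
}

double output_time(long sample_count, long output_latency, double sample_rate)
{
    return (sample_count + output_latency) / sample_rate;
}

/* Example: sample_count = 48000, output_latency = 256 at 48 kHz gives
 * output_time = 48256 / 48000 ~= 1.0053 s - anything you compute "now"
 * reaches the output about 5.3 ms into the future. */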

Now, you *could* do something about changes in the input sample rate, 
if you like, but I suspect that there isn't much use for that at all. 
If you want to record audio at the maximum quality, I'm afraid you 
just have to turn that wordclock sync off for a while, and let the 
HDR sync the recorded material during playback. (Or you should check 
what's the bl**dy matter with your flaky wordclock sync source! :-)


Need I point out (again?) that input_latency and output_latency are 
*ONLY* to be used when you deal directly with external things (MIDI 
drivers, audio drivers, meters and stuff in GUIs,...), where you need 
to know the "exact" relation between n

[linux-audio-dev] XAP: a polemic

2002-12-14 Thread Tim Goetze
this is not meant to intimidate, rather to be a wake-up call.

it seems almost unreal (and certainly unprofessional) to me 
that an instrument plugin api is being discussed here by a 
bunch of people who have little to no experience in the field 
of software sequencers. going into implementation details at
the current level of understanding of the problem space is,
excuse me, ridiculous.

after all, what is going to drive your instrument networks?
punched cardboard? certainly not. you'll either use realtime 
input, or a sequencer, or, most wanted, a combination of both.

the closer the integration of the event/plugin system with the 
sequencer, the more uses the api can be put to, with less pain. 

stopping short of the mark where the api becomes useful for 
more applications than basically sample-rate dependent 
event->audio converters is narrow-minded. viewing the 'host' 
as a blackbox supposed to 'do the rest' without caring about 
its internals is blatant ignorance. 

i do think it's reasonable to ignore my personal input since 
i don't offer published code to back up my views, and when i
do, you'll find it centered around my personal musical needs. 

however there are, afaik, people from the rosegarden team on 
this list. it would also be helpful having werner schweer of 
muse fame participate in some way or other. you might also want 
to look at other free/open sequencer engines. for one thing,
you'll find that most, if not all, are tick-based.

vst[i] is a bad candidate i think because few people here will 
have vst host-side coding experience, and the api itself is 
bound to be centered around the particular coding needs of a 
specific company, for a specific application that drags code 
with it that originated in the eighties and never was subject 
to public source-level review.

in short, the more people with hands-on sequencer experience 
participating, the better. none are just too few.

tim

ps: if this post hasn't substantially changed your ways of
perceiving this matter, please don't bother answering.




Re: [linux-audio-dev] VST 2.0 observations

2002-12-14 Thread Steve Harris
On Sat, Dec 14, 2002 at 06:04:55 +0100, David Olofson wrote:
[plugin categorisation]
> > This is a classic example of where external metadata is noticeably
> > superior.
> 
> Yeah. Especially since users may not agree with the authors' choices, 
> or may simply want to have plugins categorized based on different 
> criteria.

Or the author may want to extend the categories.

- Steve



Re: [linux-audio-dev] XAP status : incomplete draft

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 16.13, Steve Harris wrote:
> On Sat, Dec 14, 2002 at 03:35:41 +0100, David Olofson wrote:
> > > Er, well, most people will just let the host do the wiring for
> > > them. So it will all work fine.
> >
> > ...as long as they put the plugins in the right order.
>
> Well you will almost always use an external device -> pitch
> converter, so you can't get it into the wrong order. Anyone who's
> capable of composing in more than one scale at once is also capable
> of placing a few modules.

I don't get it. If you're supposed to place the scale converter 
*first*, then how are you supposed to be able to apply anything like 
traditional music theory, rather than pure, continuous pitch based 
theory? You will have to know the *exact* temperament of the scale 
(to decode the input, and to generate output in the same scale), even 
if you're only worried about notes.


> > Running linear pitch with a scale applied into a plugin that
> > expects /note is not a mismatch? So, how is that
> > plugin going to figure out what pitch in the input corresponds to
> > which note in the scale?
>
> No plugins expect something per note. All of them expect to receive
> pitch, if they are designed for ET they can calculate the note from
> the pitch trivially. You have to tell them what scale you're using
> (or they could be 12tET only).

Why design a plugin for one specific temperament of one scale, when 
you could support a range of similar temperaments for that scale...? 
Why worry about temperament at all, in every plugin of this kind?

This seems to me like a totally backwards way of implementing note 
based theory.


> > > It won't need explaining, it's blatantly obvious, unlike if you
> > > have pitch and note pitch when it's not obvious if they will be
> > > compatible or not (even to the host).
> >
> > I don't see how it's blatantly obvious that things will work if
> > you put the plugins in one order, whereas it will not work at all
> > if you put them in some other order.
>
> The order is irrelevant.

Yes, if every plugin is aware of the exact temperament of the scale.


> Realistically you only need to convert from notes to pitch at the
> input stage; once it's in the system you will be fine just processing
> the pitch data.

Yes, but only if you work only with continuous pitch based theory, or 
ET scales only.


> If you really, really want to convert from one source of note
> numbering to two separate scales you do the equivalent function
> with pitch mappers (we discussed this a few days back, I think you
> agreed that it was easier to do the processing on pitch data,
> rather than skewed scales anyway).

That's not what I'm talking about. I'm talking about doing any 
musically interesting processing at all, while playing the result 
with a non-ET scale. It has nothing to do with whether you're using 
one scale, or multiple scales.


> Any modules that want to do note based processing for ET scales can
> do it just by being told how many notes per octave there are (just
> like with note representations) and note based processing for
> non-ET scales is still hard, but probably not necessary.

Well, that might be true for *very* non-ET scales. However, I suspect that 
*subtly* different temperaments of the 12t scale used in some 
classical music are a lot more interesting to people in general - 
especially those who would ever think about using any event 
processors based on traditional theory.


> I'm not
> aware of any non-ET scale where you could, eg. arpeggiate without
> knowing a lot more than just the number of the note in the scale.

So, if you're playing a keyboard instrument, you have to play 
different notes if you're using a slightly different temperament than 
you would if the instrument was tuned to 12tET?

Well, the scales that were used for keyboard instruments before the 
ET scale became popular (thanks to Bach) aren't very useful if you 
get too far from the intended key signature. Is that reason enough to 
dismiss them as non-existent? Indeed, they're hardly ever used these 
days - but OTOH, nor is Mercator's 53tET scale, which is as close to 
perfect as you get with ET, without using hundreds of tones/octave.

That said, it's easy enough to change the tuning of synths, even 
dynamically, so I'm still not convinced that working with an 
approximate 12t scale, and then converting to the right temperament, 
is useless and irrelevant.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] VST 2.0 observations

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 11.19, Steve Harris wrote:
> On Sat, Dec 14, 2002 at 06:48:53 +0100, David Olofson wrote:
> > 4. There is a feature that allows plugins to tell the host
> >which "category" they would fit in. (There is some
> >enumeration for that somewhere.) Might be rather handy
> >when you have a large number of plugins installed...
>
> This is a classic example of where external metadata is noticeably
> superior.

Yeah. Especially since users may not agree with the authors' choices, 
or may simply want to have plugins categorized based on different 
criteria.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Temporary XAP website

2002-12-14 Thread Alexander Ehlert
On Fri, Dec 13, 2002 at 11:33:38AM +, Steve Harris wrote:
> On Fri, Dec 13, 2002 at 11:32:10 +0100, David Olofson wrote:
> > > Very nice.  Logo #2 (with the red A) gets my vote.
> > 
> > Hmm... Yeah, that's the first version I made. My ex GF suggested 
> > blue, and I thought it might look more "serious" in a way. I think I 
> > prefer the red one too, though...
> 
> Me too. :)

Yeah, me too! The 2nd logo is the best, but the P is a bit too wide :)

Cheers, Alex

-- 



Re: [linux-audio-dev] XAP and these timestamps...

2002-12-14 Thread Frank van de Pol
~
Good thinking David,

with a small set of events (tempo updates and position updates) you can
maintain a real-time view of the tempo map for all plugins. This is the same
concept as used in the ALSA sequencer API.

Since the plugins require prequeueing (because of the processing in blocks,
and to compensate lag caused by the plugin or processing/transmission
delays), and this prequeueing includes the tempo/position changes you'll get
into an interesting situation where events scheduled for a certain tick and
events scheduled for a certain time (=sample) might be influenced by the
tempo/position change events. In the ALSA sequencer this is solved by
allowing events to be scheduled at either a specific time or specific tick
(using two queues).
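
For illustration, a minimal sketch of that dual-queue idea in C (the 
names below are invented; they are not the actual ALSA sequencer 
structures):

/* Each prequeued event is scheduled on exactly one of two queues: the
 * time queue fires at an absolute (sample/second) position, the tick
 * queue fires at a musical position and is therefore affected by
 * later tempo/position change events. */
enum sched_base { SCHED_TIME, SCHED_TICK };

struct queued_event {
    enum sched_base base;
    double when;    /* seconds if SCHED_TIME, ticks if SCHED_TICK */
    /* ... event payload ... */
};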

A nice question to ask is 'what is time'. I suppose that there is a direct
correlation between time and sample frequency; but what to do with
non-constant sample frequency? (This is not a hypothetical situation, since
a sampled system which is synchronised to an external source, could be
facing variable sample rate - when slaved to a VCR for instance). I believe
the answer lies in definition of which is your time master, and use that as
such; so in case of the slowed down VCR, the notion of time will only
progress slower, without causing any trouble to the system. If no house
clock or word clock is availble, things might end up hairy...

If for offline processing the audio piece is rendered (inside the XAP
architecture), this can also be done in faster or slower than real-time
depending on cpu power (I think this is a big bonus).

mapping should be:
sample position (using declared, not effective rate) -> time -> tick

for the mapping from time to tick a simple, real-time view of the
relevant part of the tempo map is used (see the sketch below):
- time of last tempo change + tempo
- time of last position change + tick  (also used for alignment)
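
A minimal sketch of that mapping in C, assuming the anchor values are
refreshed on every tempo or position change (struct and function names
are invented for illustration):

struct tempo_view {
    double tempo;        /* current tempo, in ticks per second        */
    double anchor_time;  /* time of last tempo or position change (s) */
    double anchor_tick;  /* tick position at that moment              */
};

/* sample position -> time, using the declared (not effective) rate */
double sample_to_time(long sample_pos, double declared_rate)
{
    return sample_pos / declared_rate;
}

/* time -> tick, using the real-time view of the tempo map */
double time_to_tick(const struct tempo_view *v, double t)
{
    return v->anchor_tick + (t - v->anchor_time) * v->tempo;
}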

Since for some applications a relative time is most useful, while for
others an absolute position is better, this is also something to look at.
Position changes and events queued at an absolute position are typically
not a good match. If events are received from multiple sources, or are
received out-of-order, they have to be merged; special attention is
required for those cases. Same for applications that want to make changes
to prequeued events (eg. withdraw those).

I get the feeling that XAP consists of a set of APIs, each very
focused and as simple as possible. Depending on the use cases for the
application/plugin, one or more of these XAP APIs can be used.
A quick thought brings a few to mind; further analysis would be
required to complete the picture:

1 a XAP api for transport of the blocks of audio data 
2 a XAP api for transport of the event data
3 a XAP api for handling the time/tempo stuff (layered upon 1 & 2)
4 a XAP api for handling musical events (notes etc.), layered upon (1, 2 & 3)
5 a XAP api for the configuration/settings
6 a XAP api for the topology

etc. etc.

just some thoughts,
Frank.



On Sat, Dec 14, 2002 at 01:06:46AM +0100, David Olofson wrote:
> On Friday 13 December 2002 22.14, Tim Hockin wrote:
> > > >  Plugins can
> > > > look at
> > > > host->ticks_per_beat.
> > >
> > > No, that can change at any time (or many times) in the block.
> >
> > well, the plugin has to know ticks-per-beat and samples-per-tick. 
> > Or rather, samples-per-beat.  If we know samples per beat (tempo)
> > we can do whatever we need, right?
> 
> Yes - as long as the song position doesn't skip, because that won't 
> (*) result in tempo events. Plugins that *lock* (rather than just 
> sync) must also be informed of any discontinuities in the timeline, 
> such as skips or loops.
> 
> (*) You *really* don't want two events with the same timestamp,
> where the first says "tempo=-Inf" and the other says
> "tempo=120 BPM". But that would be the closest to the correct
> way of describing a transport skip you can get. Next is
> "running like hell" for one sample frame, and then reverting
> to the right tempo, but that's a *really* nasty thing to do
> to plugins that are only concerned with tempo...
> 
> 
> > Thinking again: A plugin is really concerned with the past, and how
> > it affects the future, not the future alone.
> 
> That's a good way to explain what prequeueing is really about. :-)
> 
> 
> > plugin: "I received
> > some event that needs further processing in 1.5 beats".  If it
> > knows how many samples per beat, and it receives tempo-change
> > events, what more does it REALLY need?  We can provide ticks as a
> > higher level of granularity, but is it really needed?
> 
> No. I thought some about this earlier, but forgot to write it. This 
> is all you need to maintain a perfect (almost - read on) image of the 
> timeline:
> 
>   Tempo changes
>   Whenever the tempo changes, you get a sample
>   accurate event with the new tempo.
> 
>   Unit: Ticks/sample

Re: [linux-audio-dev] XAP status : incomplete draft

2002-12-14 Thread Steve Harris
On Sat, Dec 14, 2002 at 03:35:41 +0100, David Olofson wrote:
> > Er, well, most people will just let the host do the wiring for
> > them. So it will all work fine.
> 
> ...as long as they put the plugins in the right order.

Well you will almost always use an external device -> pitch converter, so
you can't get it into the wrong order. Anyone who's capable of composing in
more than one scale at once is also capable of placing a few modules.
 
> Running linear pitch with a scale applied into a plugin that expects 
> /note is not a mismatch? So, how is that plugin going to 
> figure out what pitch in the input corresponds to which note in the 
> scale?

No plugins expect something per note. All of them expect to receive pitch;
if they are designed for ET they can calculate the note from the pitch
trivially. You have to tell them what scale you're using (or they could be
12tET only).
 
> > It wont need explaining, its blatatly obvious, unlike if you have
> > pitch and note pitch when its not obvious if they will be
> > compatible or not (even to the host).
> 
> I don't see how it's blatantly obvious that things will work if you 
> put the plugins in one order, whereas it will not work at all if you 
> put them in some other order.

The order is irrelevant.
 
Realistically you only need to convert from notes to pitch at the input
stage; once it's in the system you will be fine just processing the pitch
data.

If you really, really want to convert from one source of note numbering to
two separate scales you do the equivalent function with pitch mappers (we
discussed this a few days back, I think you agreed that it was easier to
do the processing on pitch data, rather than skewed scales anyway).

Any modules that want to do note based processing for ET scales can do it
just by being told how many notes per octave there are (just like with
note representations) and note based processing for non-ET scales is
still hard, but probably not necessary. I'm not aware of any non-ET scale
where you could, eg. arpeggiate without knowing a lot more than just the
number of the note in the scale.

- Steve



Re: [linux-audio-dev] XAP status : incomplete draft

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 11.15, Steve Harris wrote:
> On Sat, Dec 14, 2002 at 02:44:48 +0100, David Olofson wrote:
> > > Right, so don't allow plugins to talk notes... I still don't
> > > think it's necessary, it's just programmer convenience.
> >
> > It's actually more *user* convenience than programmer
> > convenience. Programmers that work with traditional theory will
> > have to work with /note regardless, but users won't be
> > able to tell for sure which plugins expect or generate what,
> > since it just says "PITCH" everywhere. Is that acceptable, or
> > maybe even desirable?
>
> Er, well, most people will just let the host do the wiring for
> them. So it will all work fine.

...as long as they put the plugins in the right order.


> If they do the wiring themselves then they will wire pitch output to
> pitch input and it will all work. There's no possibility of a
> pitch data mismatch, because there's only one format.

Running linear pitch with a scale applied into a plugin that expects 
/note is not a mismatch? So, how is that plugin going to 
figure out what pitch in the input corresponds to which note in the 
scale?


> > Fine, it works for me, but I'm not sure I know how to explain how
> > this works to your average user.
>
> It won't need explaining, it's blatantly obvious, unlike if you have
> pitch and note pitch when it's not obvious if they will be
> compatible or not (even to the host).

I don't see how it's blatantly obvious that things will work if you 
put the plugins in one order, whereas it will not work at all if you 
put them in some other order.


> > > If you dont have it there cant be any compatibility problems.
> >
> > How can you avoid compatibility problems by sending two different
> > kinds of data, while pretending they are the same?
>
> There aren't two kinds of data, there's just pitch.
>
> > Useful when you think only in terms of linear pitch, yes. When
> > you do anything related to traditional music theory, you'll have
> > to guess which note the input pitch is supposed to be, and then
> > you'll have to guess what scale is desired for the output.
>
> This is true of all the systems we've discussed.

No. There is no guessing if you know you have 1.0/note. You can apply 
traditional theory in NtET space, and then translate the result into 
a "tweaked" tempering, without the traditional theory plugin having 
to understand anything about tempering.


> > Nothing more sophisticated than autocomp plugins (rhythm +
> > harmonizer, basically) and other plugins based on traditional
> > music theory. Things that are relatively easy to implement if you
> > can assume that input and output is note based, and disregard
> > tempering of the scale, within reasonable limits. They still work
> > with non-ET scales, because that translation is done elsewhere.
> > (In the synths - but not many of them actually support it,
> > AFAIK... *heh*)
>
> Right, and none of this stuff is any harder if you just support
> pitch. In either case you need to know what scale it's in.

No, you don't - that's the whole point. You only have to know 
*approximately* what scale is used. 1.0/note is just 1.0/note, even 
if the scale converter (that must be placed *after* the note/scale 
based plugins) converts it into some non-ET scale. 

Obviously, you cannot force 12tET based theory to apply to 16t - but 
there is no point in trying to do that anyway. It's a completely 
different theory, so you'll need a different plugin anyway! You 
*can*, however, use any 12t based plugin, with any 12t scale, as long 
as the relative pitch of the notes in the scale doesn't deviate too 
much from what the plugin was designed for.
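
To make that concrete, a minimal sketch of such a scale converter in 
C. A 12 note scale is assumed, the (all-zero) offset table is just a 
placeholder for a real temperament, and continuous pitch bend handling 
is omitted:

/* Input: note pitch at 1.0/note (12 notes/octave assumed here).
 * Output: linear pitch at 1.0/octave, with the temperament applied as
 * per-note offsets (in octaves) from 12tET. An all-zero table gives
 * plain 12tET; note based plugins upstream never see this table. */
static const double temper_offset[12] = {
    0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
};

double scale_convert(double note_pitch)
{
    int note = (int)(note_pitch + (note_pitch >= 0 ? 0.5 : -0.5));
    int degree = ((note % 12) + 12) % 12;  /* degree within the octave */
    return note / 12.0 + temper_offset[degree];
}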


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] XAP and these timestamps...

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 07.05, Tim Hockin wrote:
> > Yes - as long as the song position doesn't skip, because that
> > won't (*) result in tempo events. Plugins that *lock* (rather
> > than just sync) must also be informed of any discontinuities in
> > the timeline, such as skips or loops.
>
> OK, it took me a bit to grok this.  We have four temporal concerns:
>
> 1) plugins that need to do things in step with the tempo
>- they have a TEMPO control

Yes.


> 2) plugins that need to do things on tempo milestones
>- they have a TEMPO and can host callback for music-time

Yes - except that the host can't know which timeline you're asking 
about... (Irrelevant to VST, since you simply cannot have more than 
one timeline, unless you split the net into partitions belonging to 
different timelines. There's no way to have musical time related 
input from two timelines to one plugin.)


> 3) plugins that need to do things at some point in absolute time
>- they have the sample rate, no worries

Right.


> 4) plugins that need to do things at some point in song time
>- they have a TRANSPORT control

Yes.


> > (*) You *really* don't want two events with the same timestamp,
> > where the first says "tempo=-Inf" and the other says
> > "tempo=120 BPM". But that would be the closest to the correct
>
> ick...

Exactly. That's why we use the alternative logic: The current tempo 
is whatever the value of the tempo control is in the timeline at the 
current position. We don't care whether musical time is stopped, 
skipping or whatever; tempo is just a control value.


> > Tempo changes
> > Whenever the tempo changes, you get a sample
> > accurate event with the new tempo.
> >
> > Unit: Ticks/sample
>
> Before I go any further: What's a tick?

Could be any handy unit, as long as it's tied to a timeline, rather 
than free running time. I definitely vote for 1 PPQN (ie one tick per 
beat), which is what VST is using. No need to throw PPQN resolution 
in the mix when we're dealing with floating point anyway!
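
With that convention (ticks == beats, tempo delivered in ticks/sample), 
the earlier "needs further processing in 1.5 beats" example boils down 
to one division - a minimal sketch, assuming the tempo stays constant 
until then (otherwise you just re-arm on each tempo change event):

double beats_to_samples(double beats, double tempo_ticks_per_sample)
{
    return beats / tempo_ticks_per_sample;
}

/* 120 BPM at 44.1 kHz: tempo = 2.0 / 44100 ticks/sample, so
 * beats_to_samples(1.5, 2.0 / 44100) == 33075 samples. */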


> > Meter changes
> > When PPQN (who would change *that* in the middle
>
> Define PPQN in our terms?  Pulses of...

Pulses Per Quarter Note. Let's simply set that to one, and forget 
about it. We don't want plugins to keep track of some arbitrary 
conversion factor, that is in fact entirely irrelevant.


> Then I can digest the rest of this email :)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Temporary XAP website

2002-12-14 Thread David Olofson
On Saturday 14 December 2002 08.18, Joshua Haberman wrote:
> David Olofson <[EMAIL PROTECTED]> wrote:
> > On Thursday 12 December 2002 15.36, Steve Harris wrote:
> > > On Thu, Dec 12, 2002 at 02:17:05PM +0100, David Olofson wrote:
> > > > (And of course, I'll make huge highres versions and icons and
> > > > stuff as well, when we have the final design down. :-)
> > >
> > > And vector (EPS/SVG) ofcourse.
> >
> > Well, the problem is that I'm working in GIMP, which doesn't seem
> > to feel like exporting bezier curves... Am I missing something,
> > or do I need to hack something?
>
> Perhaps you could export in raster format, then use a tool like
> autotrace (http://autotrace.sourceforge.net/) to convert.

Yeah, that's what I had in mind. Thanks for the link!


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
.- M A I A -.
|The Multimedia Application Integration Architecture|
`> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Bristol Synth

2002-12-14 Thread Paul Davis
>Hi, I've been playing a lot with bristol synth and really love it. So
>much so that I've been trying to 'Jackify' it. Actually, I'm pretty
>much done, but can't figure out the internal audio format. Its
>interleaved floats I think, but not normalised to [-1,1]. If any of
>the developers are here could you help me out? I can hear noise, but I
>need to tune the maths. TIA.

all JACK audio data at this time consists of mono 32 bit floating
point, normalized to [-1,+1].

--p



Re: [linux-audio-dev] VST 2.0 observations

2002-12-14 Thread Steve Harris
On Sat, Dec 14, 2002 at 06:48:53 +0100, David Olofson wrote:
> 4. There is a feature that allows plugins to tell the host
>which "category" they would fit in. (There is some
>enumeration for that somewhere.) Might be rather handy
>when you have a large number of plugins installed...

This is a classic example of where external metadata is noticeably
superior.

- Steve



Re: [linux-audio-dev] XAP status : incomplete draft

2002-12-14 Thread Steve Harris
On Sat, Dec 14, 2002 at 02:44:48 +0100, David Olofson wrote:
> > Right, so don't allow plugins to talk notes... I still don't think
> > it's necessary, it's just programmer convenience.
> 
> It's actually more *user* convenience than programmer convenience. 
> Programmers that work with traditional theory will have to work with 
> /note regardless, but users won't be able to tell for sure 
> which plugins expect or generate what, since it just says "PITCH" 
> everywhere. Is that acceptable, or maybe even desirable?

Er, well, most people will just let the host do the wiring for them. So it
will all work fine.

If they do the wiring themselves then they will wire pitch output to pitch
input and it will all work. There's no possibility of a pitch data
mismatch, because there's only one format.
 
> Fine, it works for me, but I'm not sure I know how to explain how 
> this works to your average user.

It won't need explaining, it's blatantly obvious, unlike if you have pitch
and note pitch when it's not obvious if they will be compatible or not
(even to the host).
 
> > If you dont have it there cant be any compatibility problems.
> 
> How can you avoid compatibility problems by sending two different 
> kinds of data, while pretending they are the same?

There aren't two kinds of data, there's just pitch.
 
> Useful when you think only in terms of linear pitch, yes. When you do 
> anything related to traditional music theory, you'll have to guess 
> which note the input pitch is supposed to be, and then you'll have to 
> guess what scale is desired for the output.

This is true of all the systems we've discussed.
 
> Nothing more sophisticated than autocomp plugins (rhythm + 
> harmonizer, basically) and other plugins based on traditional music 
> theory. Things that are relatively easy to implement if you can 
> assume that input and output is note based, and disregard tempering 
> of the scale, within reasonable limits. They still work with non-ET 
> scales, because that translation is done elsewhere. (In the synths - 
> but not many of them actually support it, AFAIK... *heh*)

Right, and none of this stuff is any harder if you just support pitch. In
either case you need to know what scale it's in.

- Steve