RE: [linux-audio-dev] LADSPA v1.1 SDK Provisional Release

2002-07-15 Thread Richard W.E. Furse

Default hints are a bit of a kludge, but the least kludgy of the offerings -
the suggestion was that we'd prefer a compromise like this in 1.1. When I
coded 1.0 I anticipated defaults falling within the remit of GUIs or
'standard patch' mechanisms (prob. XML) - I can revert.

i.e. I'm happy not to release the default stuff and make 1.1 just the
comment about 1.0f=0dB. Up to you all (although I'll have to take the
changes out of the SDK, which will annoy me!).

[BTW concert A is intended with the logical meaning "a typical pitch". Hosts
that implemented this as concert C wouldn't get a nagging email from me,
although I might recommend a shrink.]

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Taybin
> Rutkin
> Sent: 15 July 2002 16:01
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADSPA v1.1 SDK Provisional Release
>
>
> On Sun, 7 Jul 2002, Richard W.E. Furse wrote:
>
> > Please let me know if this looks alright - and if I've done
> anything stupid!
>
> I'm a little confused about the purpose of the LADSPA_IS_HINT_DEFAULT_*
> defines.  Are they necessary?  Are they just useful?  Why concert A?  Why
> not concert C?
>
> Taybin
>




[linux-audio-dev] LADSPA v1.1 SDK Provisional Release

2002-07-07 Thread Richard W.E. Furse

I've put a provisional version of the LADSPA SDK including the LADSPA v1.1
header file at

http://www.ladspa.org/ladspa_sdk_dev.tgz

Please let me know if this looks alright - and if I've done anything stupid!

Once this is sorted out I'll update the CMT library and the website.

--Richard




RE: [OT] RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Richard W.E. Furse

No, but this is the reason why most organ consoles have a mirror allowing
the organist to watch the conductor and why in many cathedrals there is a
separately located set of pipes for use with choral music (generally
controlled by an upper manual, labelled "choir"). As long as the organist
follows the conductor and the pipes are near the choir, there isn't a
problem anywhere in the acoustic. There's more of an issue when there is a
distance between sound sources (e.g. organ and congregation in a cathedral;
however, this musical form doesn't rely on precision).

I went to a recent Bach Choir performance in Westminster Cathedral that was
really very bad in terms of spatial separation - the organ was at one end
and the choir towards the other. We were in the middle and the overall
effect was very out of sync. Sounded like about 0.3s worth to my ear, but
that is a guess from stale memory. The performance was good in other ways.

Ramble ramble ramble...

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Tobias
> Ulbricht
[...]
> getting off-topic now. But have you ever measured the "latency" between
> the organ and the audience in the church singing? It makes me sick of
> church songs but nevertheless *some* people like it that way...
> Well, it obviously depends on the "instrument" what latency is OK.
>
> tobias.
>




RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Richard W.E. Furse

In my experience, audible separation of acoustic events normally happens
around 20ms (ignoring phase effects). Most instruments (including guitar)
are entirely playable with this sort of delay.

The pipe organ example is a good one - there is a huge variety of delay on
pipe organs, probably beyond the half second (I don't have the figures, but
there's often a significant delay between keypress and note as well as the
acoustic delay). I'm fine with small delays, but fast passages become
progressively less comfortable as the delay increases. The same is true for
other instruments such as guitar.

BTW, as I keep moaning, I think network audio is an important "next step" in
LAD development and ideally could be combined with the kind of step required
to get JACK firmly off the ground. I'm still plugging LADMEA ideas
(www.ladspa.org/ladmea/).

--Richard




RE: [linux-audio-dev] LADSPA v1.1 Alternative Proposal

2002-05-29 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Tim Goetze
> Sent: 29 May 2002 22:32
[...]
> >totally agree -- Tim's solution doesn't break binary compatability,
>
> richard pointed out that it'd actually break binary compatibility with
> hosts that call connect_port at every processing cycle.
>
> however, until these hosts are fixed, it only means 'always default'
> rather than 'always segfault' for them so i'm still for it.
[...]

To flesh this out, some problems I can think of with this approach are as
below (mostly following from the first):

1. Writing to inputs is conceptually ugly.
2. To find a default, a GUI has to do the following (sketched in code after
this list): [a] load the library, [b] instantiate the plugin, [c] write a
value to a memory location, [d] call connect_port(), [e] compare old and new
values, [f] if they're the same, repeat [c] to [e] with a different value in
case the written value happened to be the default, [g] destroy the plugin.
This can be simplified by addition of a default flag, but even so it's ugly.
3. A plugin has to be loaded and instantiated to find a default value - it
is no longer possible to deal with the plugin in the abstract using just
descriptor data (which is easily marshalled).
4. The semantics are untidy with audio rate ports and outputs. Output
defaults are admittedly of limited usefulness (but of some - consider a host
that's going to filter the output or flash a light when a toggle changes - a
default is a useful initial value). Audio rate defaults ARE useful - however
at the connect_port() stage the plugin can write only to the first point of
the buffer. Only the host knows how long the buffer is and will have to copy
the value through the buffer so it cannot just connect the port and forget
about it.
5. connect_port() used with read-only shared memory or memory-mapped
soundfiles will segfault (I'm mostly using IEEE float soundfiles at
the moment, so the latter is a real requirement).
6. Hosts can no longer usefully call connect_port() each frame - they will
always lose their intended input data on defaulting ports. These hosts will
always have to copy data into place, in which case it's more efficient just
to call connect_port() once only at initialisation. This is a nuisance for
hosts using event or frame packets rather than fixed data areas (e.g.
Glame?).
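
For illustration, here's roughly what the probing in point 2 looks like in
code. This is the scheme I'm arguing against - names are illustrative and
error handling is omitted:

#include "ladspa.h"

/* Probe a port for its default under the write-to-input scheme. */
LADSPA_Data probeDefault(const LADSPA_Descriptor * psDescriptor,
                         unsigned long lPort,
                         unsigned long lSampleRate) {
  LADSPA_Handle handle
      = psDescriptor->instantiate(psDescriptor, lSampleRate);
  LADSPA_Data fValue = 0;
  psDescriptor->connect_port(handle, lPort, &fValue); /* may overwrite */
  if (fValue == 0) {
    /* Unchanged - but 0 might itself have been the default, so repeat
       with a different value. */
    fValue = 1;
    psDescriptor->connect_port(handle, lPort, &fValue);
    if (fValue == 1)
      fValue = 0; /* still unchanged: assume no default (or it is 1!) */
  }
  psDescriptor->cleanup(handle);
  return fValue;
}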

Creative thinking though...

My preference is the extra 4 bits of hint. I'm happy to get rid of the high
and low options if they're too confusing, or just to leave defaults as the
job of whatever GUI wrapper layer is used - I'd prefer to keep LADSPA simple.

To provide a public answer to objections to provision of a reference
getDefault() implementation, this would just be a short function in the SDK
that host writers could steal if they couldn't be bothered to handle the
different default flag values themselves.

Best wishes,

--Richard




[linux-audio-dev] LADSPA v1.1 Alternative Proposal

2002-05-26 Thread Richard W.E. Furse

After some more worry about the least untidy way to do defaults, I've come
up with the following, based on Paul's/Steve's scheme. I'm edging towards
this as preferable to my previous posting for LADSPA v1.1.

Please let me know how you think this compares with the previous
incarnation.

Some comments:
* I'll include a standard getDefault() function implementation in
  the SDK so host programmers don't have to work through the cases
  (a sketch is below).
* The *_HIGH and *_LOW options are a bit complicated. We could ditch
  them.
* This approach does NOT allow explicit defaults (e.g. 0.707). However,
  the previous approach only managed this by partly mucking up the
  structure that makes LADSPA simple. Still, this default set does cover
  all the defaults I came up with for the CMT library when I went through
  it before, so I'm confident it's a good start point. I've left some
  slack too - there is room for another 6 default rules or values in the
  future.
* This approach handles the LADSPA_HINT_SAMPLE_RATE neatly.
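
As a rough cut of that getDefault(), using the hint values from the diffs
below (the name and signature are mine and may change before the SDK
release):

#include <math.h>
#include "ladspa.h"

/* Writes the port default to *pfDefault and returns 0, or returns 1
   when LADSPA_HINT_DEFAULT_NONE (or an unknown code) is present. */
int getDefault(const LADSPA_PortRangeHint * psHint,
               const unsigned long lSampleRate,
               LADSPA_Data * pfDefault) {

  const LADSPA_PortRangeHintDescriptor iHint = psHint->HintDescriptor;
  /* Apply LADSPA_HINT_SAMPLE_RATE to the bounds as usual. */
  const LADSPA_Data fScale = LADSPA_IS_HINT_SAMPLE_RATE(iHint)
      ? (LADSPA_Data)lSampleRate : 1;
  const LADSPA_Data fLow = psHint->LowerBound * fScale;
  const LADSPA_Data fHigh = psHint->UpperBound * fScale;
  const int bLog = LADSPA_IS_HINT_LOGARITHMIC(iHint);

  switch (iHint & LADSPA_HINT_DEFAULT_MASK) {
  case LADSPA_HINT_DEFAULT_MINIMUM:
    *pfDefault = fLow;
    break;
  case LADSPA_HINT_DEFAULT_LOW:
    *pfDefault = bLog ? exp(log(fLow) * 0.75 + log(fHigh) * 0.25)
                      : fLow * 0.75 + fHigh * 0.25;
    break;
  case LADSPA_HINT_DEFAULT_MIDDLE:
    *pfDefault = bLog ? exp(log(fLow) * 0.5 + log(fHigh) * 0.5)
                      : fLow * 0.5 + fHigh * 0.5;
    break;
  case LADSPA_HINT_DEFAULT_HIGH:
    *pfDefault = bLog ? exp(log(fLow) * 0.25 + log(fHigh) * 0.75)
                      : fLow * 0.25 + fHigh * 0.75;
    break;
  case LADSPA_HINT_DEFAULT_MAXIMUM:
    *pfDefault = fHigh;
    break;
  case LADSPA_HINT_DEFAULT_0:
    *pfDefault = 0;
    break;
  case LADSPA_HINT_DEFAULT_1:
    *pfDefault = 1;
    break;
  case LADSPA_HINT_DEFAULT_100:
    *pfDefault = 100;
    break;
  case LADSPA_HINT_DEFAULT_440:
    *pfDefault = 440;
    break;
  default: /* LADSPA_HINT_DEFAULT_NONE */
    return 1;
  }
  /* Round if LADSPA_HINT_INTEGER is present. */
  if (LADSPA_IS_HINT_INTEGER(iHint))
    *pfDefault = floor(*pfDefault + 0.5);
  return 0;
}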

Here are the new diffs:

3,5c3,5
<    Linux Audio Developer's Simple Plugin API Version 1.1[provisional,
<    LGPL].  Copyright (C) 2000-2002 Richard W.E. Furse, Paul
<    Barton-Davis, Stefan Westerfeld.
---
>    Linux Audio Developer's Simple Plugin API Version 1.0[LGPL].
>    Copyright (C) 2000-2001 Richard W.E. Furse, Paul Barton-Davis,
>    Stefan Westerfeld.
75,78c75
<    value although it may have a preferred range (see hints below).
<
<    For audio it is generally assumed that 1.0f is the `0dB' reference
<    amplitude and is a `normal' signal level. */
---
>    value although it may have a preferred range (see hints below). */
218,219c215
<    conjunction with any other hint except LADSPA_HINT_DEFAULT_0 or
<    LADSPA_HINT_DEFAULT_1. */
---
>    conjunction with any other hint. */
243,305d238
< /* The various LADSPA_HINT_DEFAULT_* hints indicate a `normal'
<    value for the port that is sensible as a default. For instance,
<    this value is suitable for use as an initial value in a user
<    interface or as a value the host might assign to a control port
<    when the user has not provided one. Defaults are encoded using a
<    mask so only one default may be specified for a port. Some of the
<    hints make use of lower and upper bounds, in which case the
<    relevant bound or bounds must be available and
<    LADSPA_HINT_SAMPLE_RATE must be applied as usual. The resulting
<    default must be rounded if LADSPA_HINT_INTEGER is present. Default
<    values were introduced in LADSPA v1.1. */
< #define LADSPA_HINT_DEFAULT_MASK    0x3C0
<
< /* This default hint indicates that no default is provided. */
< #define LADSPA_HINT_DEFAULT_NONE    0x0
<
< /* This default hint indicates that the suggested lower bound for the
<    port should be used. */
< #define LADSPA_HINT_DEFAULT_MINIMUM 0x40
<
< /* This default hint indicates that a low value between the suggested
<    lower and upper bounds should be chosen. For ports with
<    LADSPA_HINT_LOGARITHMIC, this should be exp(log(lower) * 0.75 +
<    log(upper) * 0.25). Otherwise, this should be (lower * 0.75 + upper
<    * 0.25). */
< #define LADSPA_HINT_DEFAULT_LOW     0x80
<
< /* This default hint indicates that a middle value between the
<    suggested lower and upper bounds should be chosen. For ports with
<    LADSPA_HINT_LOGARITHMIC, this should be exp(log(lower) * 0.5 +
<    log(upper) * 0.5). Otherwise, this should be (lower * 0.5 + upper *
<    0.5). */
< #define LADSPA_HINT_DEFAULT_MIDDLE  0xC0
<
< /* This default hint indicates that a high value between the suggested
<    lower and upper bounds should be chosen. For ports with
<    LADSPA_HINT_LOGARITHMIC, this should be exp(log(lower) * 0.25 +
<    log(upper) * 0.75). Otherwise, this should be (lower * 0.25 + upper
<    * 0.75). */
< #define LADSPA_HINT_DEFAULT_HIGH    0x100
<
< /* This default hint indicates that the suggested upper bound for the
<    port should be used. */
< #define LADSPA_HINT_DEFAULT_MAXIMUM 0x140
<
< /* This default hint indicates that the number 0 should be used. Note
<    that this default may be used in conjunction with
<    LADSPA_HINT_TOGGLED. */
< #define LADSPA_HINT_DEFAULT_0       0x200
<
< /* This default hint indicates that the number 1 should be used. Note
<    that this default may be used in conjunction with
<    LADSPA_HINT_TOGGLED. */
< #define LADSPA_HINT_DEFAULT_1       0x240
<
< /* This default hint indicates that the number 100 should be used. */
< #define LADSPA_HINT_DEFAULT_100     0x280
<
< /* This default hint indicates that the Hz frequency of `concert A'
<    should be used. This will be 440 unless the host uses an
<    alternative frequency. */
< #define LADSPA_HINT_DEFAULT_440     0x2C0

[linux-audio-dev] RE: LADSPA v1.1

2002-05-18 Thread Richard W.E. Furse

BTW, I'd be happy to do the default values entirely through the hint
structure if people prefer (i.e. not include *actual* values, just rules
like "use the upper bound", "use the mid" etc.). Paul suggested this a while
back and Steve seems to like it...

Thoughts?

--Richard

> -Original Message-
> From: Richard W.E. Furse [mailto:[EMAIL PROTECTED]]
> Sent: 18 May 2002 20:07
> To: [EMAIL PROTECTED]
> Subject: LADSPA v1.1
>
>
> Well, it had to get to the top of my to-do list eventually. I've
> been trying to sort [1] the LADSPA website [2] LADSPA v1.1.
[...]




[linux-audio-dev] LADSPA v1.1

2002-05-18 Thread Richard W.E. Furse

Well, it had to get to the top of my to-do list eventually. I've been trying
to sort [1] the LADSPA website [2] LADSPA v1.1.

On the first point, I'm missing links to LCP and the XML GUI thing. Also, if
there are any plugins or hosts that should be on the list please shout (I
have added a few that people have told me about but I've not posted the new
index onto the web yet).

On the second point, here's a set of diffs for the new version of ladspa.h.
Do these look alright? I'd have preferred to put the default values in the
hint structure, but that would have changed its size and broken
backwards-compatibility.


3,4c3,4
<    Linux Audio Developer's Simple Plugin API Version 1.0[LGPL].
<    Copyright (C) 2000-2001 Richard W.E. Furse, Paul Barton-Davis,
---
>    Linux Audio Developer's Simple Plugin API Version 1.1[provisional,
>    LGPL]. Copyright (C) 2000-2002 Richard W.E. Furse, Paul Barton-Davis,
75c75,78
<    value although it may have a preferred range (see hints below). */
---
>    value although it may have a preferred range (see hints below).
>
>    For audio it is generally assumed that 1.0f is the `0dB' reference
>    amplitude and is a `normal' signal level. */
215c218
<    conjunction with any other hint. */
---
>    conjunction with any other hint except LADSPA_HINT_HAS_DEFAULT. */
238a242,250
> /* Hint LADSPA_HINT_HAS_DEFAULT indicates that a suggested default
>    value is available for the port. If this is the case then the
>    default value is present in the PortDefaultValues in the main
>    LADSPA_Descriptor structure. Ports with defaults must still be
>    connected by the host in the normal way. Default values are not
>    affected by the LADSPA_HINT_SAMPLE_RATE hint. Default values were
>    introduced in LADSPA v1.1. */
> #define LADSPA_HINT_HAS_DEFAULT   0x40
>
244a257
> #define LADSPA_IS_HINT_HAS_DEFAULT(x)   ((x) & LADSPA_HINT_HAS_DEFAULT)
467a481,487
>
>   /* This optional pointer indicates an array of hint `default'
>      values. This pointer should only be read when required by a
>      LADSPA_HINT_HAS_DEFAULT hint for a port. See the hint for further
>      details. Entries are referenced by port index (from 0 to
>      PortCount-1). Default values were introduced in LADSPA v1.1. */
>   const LADSPA_Data * PortDefaultValues;
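
For completeness, host-side use would be something like the sketch below
(pfControlValues is an illustrative host array with PortCount entries):

/* Copy any suggested defaults into the host's control values before
   connecting ports in the normal way. */
unsigned long lIndex;
for (lIndex = 0; lIndex < psDescriptor->PortCount; lIndex++)
  if (psDescriptor->PortDefaultValues
      && LADSPA_IS_HINT_HAS_DEFAULT
             (psDescriptor->PortRangeHints[lIndex].HintDescriptor)
      && LADSPA_IS_PORT_INPUT(psDescriptor->PortDescriptors[lIndex])
      && LADSPA_IS_PORT_CONTROL(psDescriptor->PortDescriptors[lIndex]))
    pfControlValues[lIndex] = psDescriptor->PortDefaultValues[lIndex];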

--Richard




RE: [linux-audio-dev] A new audio plugin API ?

2002-05-13 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Dr.
> Matthias Nagorni
> Sent: 13 May 2002 10:53
> To: [EMAIL PROTECTED]
> Subject: [linux-audio-dev] A new audio plugin API ?
[...]
> So I would expect that if you can use an audio plugin API to implement
> modules for a modular synthesizer, this API should provide enough
> functionality for (almost) all other audio applications you can think of.
> The invention of voltage controlled modules, made popular by Moog in the
> sixties, has had an enormous impact on the development of electronic
> music. Learning from this will lead us to the most important item for
> LADSPA extension, namely dynamic control ports that accept arrays.
> Once a new plugin API is implemented, it would be possible to
> implement all
> the modules of a modular synthesizer as plugins. This way, softsynths
[...]

Yep, this is important stuff. LADSPA was designed from the ground up for
modular synthesis; however, I had a different approach in mind, part of which I
implemented before starting my (rather demanding) current job. There wasn't
much point continuing as there are such excellent LADSPA-friendly synths out
there already. But just because I find it interesting there are notes
below...

> Now the wishlist:
>
> 1) Default values
On its way for LADSPA 1.1 (along with 0dB=1.0f). Yes Steve, it has been
getting mythical, but I've actually been working through my list of
things-to-do for LADSPA (mostly website related) over the past week so there
is hope ;-)

> 2) LADSPA_IS_SYNTH
> 3) LADSPA_IS_PORT_GATE, LADSPA_IS_PORT_GAIN, LADSPA_IS_PORT_FREQUENCY
Not needed, see below.

> 4) LADSPA_IS_PORT_DYNAMIC
I don't understand this. Are we just saying this is an array? If so, I think
this goes beyond LADSPA's scope.

> 5) LADSPA_HINT_LOW_FREQ, LADSPA_HINT_HIGH_FREQ
I don't quite understand this, but hopefully this could be handled by clear
labelling for the user.

> 6) Optional string array for integer ports (e.g. waveform: sine, saw, ...)
Nice to have (MN has it), but out of scope IMHO.

> 7) Type of integer port int not float
Already present through the hints (well, 24 bits worth after casting).

> 8) Polyphony extension: arrays of type buf[poly][buflen] and control[poly]
> 9) LADSPA_IS_PORT_POLY
Not needed, see below.

> 10) Categories
Nice to have, but we didn't agree a way to categorise, probably out of
scope.

[...]

And now - my view of a simple way to build a LADSPA-only modular synth
(a.k.a. P-Net, pronounced peanut). The components are all pretty trivial,
I think you'd agree, and require no changes to LADSPA. What they DO require
is three plugin IDs reserved for unusual use.

Components:

(1) An XML representation of plugin networks. These networks may represent
synth or processing networks (or "patches").
(2) An XML editor for the networks which can load plugins and represent
them nicely on screen. The XML representation may wish to use some kind of
coordinate system to make this look pretty. Big warnings should be presented
on screen when the user selects a plugin that is not tagged as realtime.
(3) Three special plugins that are provided by the *HOST* rather than being
loaded (a number of each of these may be present in the network). The
plugins are:
(a) [processing networks only] a mono audio input
(b) [processing networks or synth networks] mono audio output,
(c) [synth networks only] a synth control input, with one control input
port and one control (or perhaps audio) output port. The input port selects
which synth control is required (1 for frequency, 2 for gate, 3 for
velocity, 4 for modulation wheel or whatever).
(4) A synth program that accepts two XML plugin networks: one a synth, one
a processing network. The programs listens to a MIDI interface or file and
writes to an audio interface or file (and possibly a MIDI file). Multiple
instances of the synth network are used to play each note during polyphonic
use (the synth may reuse them for sanity) and the mixed results are passed
through the processing network.

As a sideline, the processing network representation one gets is rather
useful - it can also be loaded by a clever host as a plugin in its own right
or used as a self-contained realtime or offline signal processing
configuration.

More complex arrangements are possible (e.g. with multitimbral behaviour,
matrix processing or sampler modules built into the host), but this gives
the basic idea.

... But I think most of this is available already through ecasound or
suchlike.

--Richard




RE: [linux-audio-dev] LADSPA Specs ?

2002-05-13 Thread Richard W.E. Furse



> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Kai
> Vehmanen
> Sent: 12 May 2002 16:21
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADSPA Specs ?
>
>
> On Sat, 11 May 2002, Likai Liu wrote:
>
> > Please consider my proposal for array extension of the LADSPA:
>
> As the one who encouraged to "show the code", I feel obliged to give at
> least some feedback, but I'm afraid it's not very positive this time. The
> proposal itself looks fine, but I'm not sure how much interest this gets
> from developers of current LADSPA hosts.
>
> For instance ecasound's own plugin system (on top of which LADSPA plugins
> are mapped) doesn't support arrays, so it would be a major change to add
> support for this feature. On the other hand ecasound already has a
> mechanisms in place for representing envelopes (sequences of control data
> changes) and large parameter sets (ecasound plugins can change their
> parameter interface dynamically during runtime).
>
> So this is my view on the issue. Let's see what other people think...

I agree - rather than put the envelope in the plugin, I'd prefer the
envelope to be in a separate plugin wired up to a port (control or audio).
This is more flexible if someone wants to use a different control shape from
somewhere else.

Array data is a very useful data type - for wavetables, waveshaping, impulse
responses, samples, curves etc. On a par with string and event datatypes - I
use all of these heavily in MN (my particular flavour of host). However,
these are all nontrivial and handled differently by different programs -
introducing each of these types makes it harder for host writers to satisfy
the API and (at the risk of being a complete bore) I'd prefer to keep LADSPA
simple. ;-)

For a general solution, see the first cut of LADMEA
(www.ladspa.org/ladmea/). There are already small problems with it, but it
does provide an API that does pretty well everything. As a result it's
complicated, over-general and everyone hates it! Maybe I could fix this, but
(1) I'd need to show the code and don't have the time at the moment and (2)
Jack does the simple audio stuff that people really need much more simply
(although I have a feeling there'll be another one when the limits of Jack are
struck...).

--Richard




RE: [linux-audio-dev] LADSPA Specs ?

2002-05-13 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 11 May 2002 22:54
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADSPA Specs ?
>
>
> >you mentioned that you have never seen any plugin that outputs a
> >different number of samples than inputs. I think the time-stretching
> >plugin is a very legitimate example, and I certainly don't understand
> >your rationale that a time-stretching plugin should output the same
> >number of samples under the same frequency sample rate. You see, my dear
> >friend, this wouldn't be time-stretching.
>
> I'm afraid it would. You need to think about this some more. In any
> given unit of time, a certain number of samples are transformed into a
> varying air pressure wave. What matters is the relation between those
> samples and the original source material. The number of them remains
> the same per unit of time whether the signal is altered or not.

There are two ways to interpret timeshift:
(1) As a reinterpretation of the timestamps on the samples (i.e.
resampling), or
(2) As an actual stretching of the audio (e.g. it was supposed to last 1s,
afterwards it lasts 2s).

Neither of these is compatible with LADSPA because:

(1) LADSPA always assumes that samples are spaced according to the sample
rate.
(2) LADSPA can do this when stretching although you'll need linearly
increasing memory to make it work. Compacting is impossible as the plugin
has to look ahead into the input stream (unless you're intending some ugly
buffering). The LADSPA non-causal extensions I suggested a while back get
around this, however they are non-trivial and consensus was not to introduce
them (I agree).

> However, what you may be thinking of is a different (though clearly
> related) problem. Many "time stretching" algorithms (the ones that do
> not operate independently on pitch and duration) cannot be used in
> real-time, because if the speed is supposed to increase, they run out
> of samples to process.

Absolutely.

> This is why LADSPA has flags to indicate that a plugin is not suitable for
> real-time (i.e. streaming) use.
[...]

LADSPA always assumes it is streaming (i.e. logical time of audio = sample
number / sample rate). Running a stretch algorithm as in (2) above would
certainly not qualify as a realtime LADSPA plugin because the definition of
realtime for LADSPA purposes is that the plugin satisfies certain
constraints on how much CPU time it will use when it runs - and the stretch
algorithm above won't manage this (unless you have an AWFUL lot of memory in
your PC!).

Fundamentally, time shifting plugins (stretch, reverse etc) aren't really
compatible with the streaming paradigm (which has a basic assumption of
causality). And they are few enough to be handled separately - I don't
really see them as LADSPA's domain.

--Richard




RE: [linux-audio-dev] LADSPA audio port range?

2002-03-18 Thread Richard W.E. Furse

Yep, definite consensus. I nearly had time to sort this out last weekend,
but not quite...

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Steve
> Harris
> Sent: 18 March 2002 12:32
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADSPA audio port range?
[...]
> I've been banging on about it for ages, and I can't remember if I ever got
> consensus. I use [-1,1] and complain to people whose hosts do other things.
> I have a patched applyplugin that uses [-1,1].
[...]




RE: [linux-audio-dev] surround/n-channel panning

2002-03-12 Thread Richard W.E. Furse

Take another look at the VBAP paper - it explains the panning energy issue
well.

Also, Ambisonics doesn't do Doppler, room models, distance delays and
suchlike on its own (unless you make a real recording or put a virtual
Ambisonic mike into my VSpace virtual acoustic space model). Most of these
auditory cues can be reproduced over VBAP too.

--Richard

[...]
> It isn't so bad if you don't want to take relative phase and doppler into
> account.  It's a moving source, right?  So its apparent frequency changes.
> Panning doesn't begin to model even the simplest moving source,
> and the only
> real way to do that is to try to reconstruct the sound field at
> the listener's
> head (i.e. do ambisonics, sorry :)
[...]




RE: [linux-audio-dev] surround/n-channel panning

2002-03-11 Thread Richard W.E. Furse

I don't understand the pseudocode, but at first glance it looks fairly
wrong.

I'd take a look at VBAP (excellent paper at
http://lib.hut.fi/Diss/2001/isbn9512255324/isbn9512255324.pdf). For a
horizontal rig this boils down to a standard cos/sin pan law across the
nearest speaker pair.
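
For a speaker pair that really is tiny - a sketch, with the position
normalised to [0,1] between the two active speakers:

#include <math.h>

/* Equal-power pan: fPosition runs from 0 (speaker 1) to 1 (speaker 2). */
void pan(const float fPosition, float * pfGain1, float * pfGain2) {
  const float fAngle = fPosition * (float)M_PI * 0.5f;
  *pfGain1 = cosf(fAngle);
  *pfGain2 = sinf(fAngle);
  /* cos^2 + sin^2 = 1, so total power is constant across the pan. */
}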

And watch this space, because I've an interesting project on its way...

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 11 March 2002 19:10
> To: [EMAIL PROTECTED]
> Subject: [linux-audio-dev] surround/n-channel panning
>
>
> ignoring the subtleties of things like ambisonics and filtered
> channels for the time being, am i right in thinking that surround
> panning is just simple math? my mental model is:
>
>   total_distance = 0
>
>   foreach speaker
>   speaker.distance = speaker.compute_distance (pan center);
>   total_distance += speaker.distance
>
>   foreach speaker
>   speaker.pan_gain_coefficient = speaker.distance/total_distance;
>
> i think there is more to it than this. i know that this doesn't work
> for stereo, for example - it doesn't produce an equal power pan.
>
> can someone point me at some good references?




RE: [linux-audio-dev] APIs

2002-02-16 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 14 February 2002 16:52
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] APIs
[...]
> the problem is really in syncing different streaming data rates and
> providing buffer management for different types of data. despite
> richard (f.)'s description of LADMEA, i don't believe that getting
> asynchronous data like MIDI and streaming data like audio and video to
> run properly in sync is easy at all.
[...]

Yep, getting data like this in sync is not easy, however a "transparent" API
like LADMEA hopefully allows the exchange coder to get on with solving those
problems (synchronous or asynchronous, in a Jack style, a MAIA style, a
GLAME style, a gstreamer style, ATM, audio/video or whatever) while allowing
the client to get on with writing his/her applications (FX processors,
sequencers, HDR etc).

> if people have ideas on how to do this, that would be great.
>
> --p

I'm still waiting for real feedback on LADMEA - there've been some valid
comments about who does memory allocation (this is easy to fix) but
otherwise no comment on whether or not the API works that's been based on a
proper reading of the API. I suppose this is my fault for writing a very
general (and relatively complex) API and not having time to write much
richer documentation than the code comments (although there's about twice as
much comment as code). It's not much longer than LADSPA however, and it
shouldn't be hard for programmers of your calibre ;-)

I really should stop plugging this if there's no interest. I don't have time
to flesh it out or write an SDK if no one thinks it's worth it - I have too
many interesting/difficult projects at work, I'm behind with other "hobby"
projects (e.g. LADSPA 1.1) and I've come up with some very exciting new
ideas in the 3D audio world that I'm currently developing. In the meantime,
I'd remind you all it's still out there... http://www.ladspa.org/ladmea/

--Richard




RE: [linux-audio-dev] Reference amplitudes and LADSPA (again)

2002-02-02 Thread Richard W.E. Furse

Perhaps we should include a flag on LADSPA plugins indicating that the
plugin is non-linear and therefore cares about input level, plus a
recommendation that such plugins expect peak amplitude around 1. This gives
the host the option to renormalise its signal to 1 on input and output from
whatever representation it uses and a way to know when this should be done.
I'm inclined to modify applyplugin with this in mind. Folk who've written
linear plugins will not notice the difference; folk who've written nonlinear
ones may be in a bit of trouble. Also, has anyone written plugins that map
the floats directly to shorts before processing? Is there much out there
that couldn't be changed easily?
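
To make the renormalisation idea concrete, the host-side helpers might look
like this (entirely illustrative - no such flag exists in LADSPA 1.0, and
the names are mine):

/* Find the peak magnitude of a buffer. */
float findPeak(const float * pfBuffer, const unsigned long lLength) {
  float fPeak = 0;
  unsigned long lIndex;
  for (lIndex = 0; lIndex < lLength; lIndex++) {
    const float fAbs
        = pfBuffer[lIndex] < 0 ? -pfBuffer[lIndex] : pfBuffer[lIndex];
    if (fAbs > fPeak)
      fPeak = fAbs;
  }
  return fPeak;
}

/* Scale a buffer in place. */
void scaleBuffer(float * pfBuffer, const unsigned long lLength,
                 const float fGain) {
  unsigned long lIndex;
  for (lIndex = 0; lIndex < lLength; lIndex++)
    pfBuffer[lIndex] *= fGain;
}

The host would scaleBuffer(pfBuffer, lLength, 1 / fPeak) before running a
plugin carrying the proposed nonlinear flag, then scale by fPeak afterwards
to restore the original level (guarding against fPeak == 0, of course).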

I don't think this should be much more than a recommendation - some plugins
inevitably mess around with peak levels, and forcing plugins to keep to
these levels (a) breaks relative scaling when processing two channels of
stereo in parallel, (b) is very artificial and (c) is inefficient - a chain
of linear plugins may mess around with signal amplitude quite badly and it
will be much more efficient to renormalise at one point in the chain rather
than at each plugin.

--Richard




RE: [linux-audio-dev] VST link (open?)

2002-01-24 Thread Richard W.E. Furse

> oh that's right - LADMEA was one of those designs in which the entire
> graph can be stalled by a single way-upstream "producer" node being
> delayed. am i remembering this right?
[...]

No. The exchange has all the information it needs about latencies etc to
know whether delays are tolerable or not downstream - and can recover as it
likes.

--Richard




RE: [linux-audio-dev] VST link (open?)

2002-01-23 Thread Richard W.E. Furse

[...]
> >I remind everyone that LADMEA is still out there. I've spent
> almost no time
> >on it since the original posting (and I know there are some areas needing
> >work) given the general lack of interest in anything this
> complex/general.
> >
> > www.ladspa.org/ladmea/
>
> but ladmea, like jack, is designed with a general assumption that
> synchronous execution is possible. the issue that VST Link solves is
[...]

Hmm, not so - LADMEA is designed very much with asynchronous operation in
mind (the clock and latency requirement stuff for instance). It's designed
to handle LAN or even WAN multimedia. Whether or not operation is
synchronous is up to the actual exchange implementation. If JACK was
modified to be a LADMEA exchange this would be synchronous - but this isn't
the only option.

And the nice thing is that (hopefully!) a tolerant client written using JACK
but talking to it through LADMEA would be able to use an asynchronous
exchange without any code change.

Of course, because of lack of interest I never did get around to building a
demo/SDK package. Perhaps that would convince folk, but I've only so much
free time...

--Richard




[linux-audio-dev] Soundcards: USB, Firewire, PCI, PCMCIA

2002-01-20 Thread Richard W.E. Furse

Hmm, I've got my hands somewhat dirty with this, although not from a
particularly Linux-friendly viewpoint. My requirement was/is a four channel
card for use with Ambisonic recording and a laptop (and ideally with my
current desktop too). Stages I've been through:

1.  M-Audio Quattro (USB): Got excited for the first time. Bought one. Didn't
work. Apparently the early ones with "Mac version" on the box really mean
it - it's not just a driver issue. While talking to support folk etc it
emerged that the card is high latency (I don't have numbers), though they
were hoping to fix this to an extent. Further, it turns out that 4x16x44.1
is about the limit that USB can take - you can't do 4x24x44.1 for instance,
simply because USB doesn't have the bandwidth. Also, I couldn't find any
hint of a decent recording level control for use in ASIO mode (bad with only
16bit) although to be fair I never had the thing working properly. Took it
back.

2.  MOTU Firewire thing: Got fed up and eventually foolish enough to go for
this. Did some research and it seemed that although this card is "firewire
compatible", it only really works with Macs. Possibly with some firewire
cards on some PCs/laptops, but not all and getting hold of this information
proved difficult. And the card is *expensive*! Eventually got disheartened
and decided that until this was all cleared up I'd just buy a cheap
four-input card for my desktop to tide me over and forget the laptop. Then I
found...

3.  Echo/Event Layla/Mona: These PCI cards have been around for a while. I've
just bought a Mona. This has four inputs with preamps/phantom power, six
outputs, and a host of digital ins/outs. The Layla is even better (although
I don't think it has the preamps/phantom power). I talk to the Mona through
ASIO although I found an incompatibility/bug in Steinberg's ASIO SDK -
luckily this comes in source form so it's fixable. Comments: the digital
engineering seems very poor - this "plug and play" card had an IRQ conflict
with my existing audio card, an old Gina by... Echo/Event. This resulted in
all sorts of odd Windoze crashes, mostly ending in a bluescreen. It seems
that neither of their cards bother to do the plug and play "negotiate"
thing. Problem was fixed eventually by moving the cards to different slots
on the motherboard (eek). The Mona "monitor" program (with the volume
sliders/monitors) is of a far lower grade than the one for my Gina card and
the graphical volume monitors on the rack and the software do some odd
things when both playing and recording. Apparently this is a temporary
program while they develop a Java version, though the idea of running a JVM
within a single CPU box doing low-latency audio fills me with dread. Luckily
you can switch the monitor program off without breaking anything
(apparently). Having said this, now that I've dealt with these issues the
card is working very nicely! And why did I go for this card in the first
place? Because they've just brought out a PCMCIA card that talks to the
external breakout box (where the A/D/A converters are). I've not seen this
yet and I have an instinctive distrust now for the quality of the control
logic involved, but I'm trusting that this can be made to work!

I'm not aware of any likelihood of *any* of these cards (or my old Gina)
being supported on Linux any time soon, although the technical teams I've
spoken to don't seem unfriendly - it seems more a matter of resource.
Comments from ALSA/OSS folk appreciated...

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Brad
> Bowman
> Sent: 18 January 2002 03:16
> To: linux-audio-dev
> Subject: [linux-audio-dev] USB Soundcards
>
>
> I was planning to get a new sound card to use now
> with my current fragile laptop and future whizz-bang
> desktop.  As they only have USB in common I thought
> that might be best although hints in the earlier
> usb audio thread have worried me.  If I'm paying
> a premium price for poor performance then I might
> just wait until I'm in a position to get a nice
> desktop.
>
> So, in short, what are the issues with USB sound cards
> under Linux?  In particular, does it effect latency and
> realtime reliability?
[...]




RE: [linux-audio-dev] LADSPA 1.1

2002-01-16 Thread Richard W.E. Furse

Okay, I'll try to find time to sort this one out. Do you need it right now,
or can I leave it for a bit? I'd prefer to do it very slightly differently
(I'd prefer not to use so many bits, plus ideally I'd also like to be able to
specify default values).

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Steve
> Harris
> Sent: 16 January 2002 15:27
> To: Linux-audio-dev
> Subject: [linux-audio-dev] LADSPA 1.1
>
>
> Late last year there was some discussion about LADSPA 1.1, the defaults
> issue still needs resolving, so can we agree on it?
>
> Paul's suggested addition to handle defaults looks like:
>
> --
> /* Hint LADSPA_HINT_DEFAULT_* indicates that in the absence of
>other information (such as a preset or user preferences) a port
>should be set to the suggested initial value. Notice that this
>hint is valid only for control ports, and should not be set for
>audio ports (hosts should ignore it if it is).
>
>HINT_DEFAULT_{MIN,MID,MAX} all require that the HintDescriptor
>has HINT_BOUNDED_ABOVE and/or HINT_BOUNDED_BELOW set, as required
>to compute the default value.
>  */
>
> #define LADSPA_HINT_DEFAULT_ZERO 0x40   /* set to 0.0 */
> #define LADSPA_HINT_DEFAULT_ONE  0x80   /* set to 1.0 */
> #define LADSPA_HINT_DEFAULT_MIN 0x100   /* set to min */
> #define LADSPA_HINT_DEFAULT_MID 0x200   /* set to min+(max-min)/2 */
> #define LADSPA_HINT_DEFAULT_MAX 0x400   /* set to max */
>
> #define LADSPA_IS_HINT_DEFAULT_ZERO(x) ((x) & LADSPA_HINT_DEFAULT_ZERO)
> #define LADSPA_IS_HINT_DEFAULT_ONE(x)  ((x) & LADSPA_HINT_DEFAULT_ONE)
> #define LADSPA_IS_HINT_DEFAULT_MIN(x)  ((x) & LADSPA_HINT_DEFAULT_MIN)
> #define LADSPA_IS_HINT_DEFAULT_MID(x)  ((x) & LADSPA_HINT_DEFAULT_MID)
> #define LADSPA_IS_HINT_DEFAULT_MAX(x)  ((x) & LADSPA_HINT_DEFAULT_MAX)
>
> --
>
> I think this is fine. It's backward compatible (1.1 plugins will work fine
> in a 1.0 host and vice versa). It will solve a lot of host problems.
>
> No riders please! Except, maybe... ;)
>
> - Steve




RE: [linux-audio-dev] LADSPA extension proposal (quick action wanted)

2001-12-06 Thread Richard W.E. Furse

Oh dear, here we go again. Richard pops up to be a pain in the neck. The
suggestions:

CONTROL PORTS SHOULD ACCEPT ARRAYS

This is in the same family as a much more common request, "control ports
should accept strings" (or doubles, events, MIDI, WFT etc). This requires a
rather more complex API than LADSPA, as the API has to handle memory
management and more complex port descriptors. That means a lot more work for
the host programmer and is a potential barrier to plugin and host
programmers alike. More in "going beyond LADSPA" below...

GUIS AS PART OF THE PLUGIN

As I've said before, I'd prefer an approach where an appropriate delivery
mechanism can be chosen depending on the toolkit. Flat files may work for
XML, or perhaps calls into the library containing the plugin itself. For
instance, a host written in GTK might want to look for a
"get_LADSPA_gtk_GUI()" in the library containing the plugin. If it finds it,
fine. If not, the host might wish to look for an XML wrapper, Snd spec or
whatever else it supports. I think it's important that it is possible to
separate GUI and plugin (e.g. for remote control of a dedicated FX box).
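
In code the lookup could be as simple as the sketch below (the entry point
name is from the example above; the return type and argument are guesses):

#include <dlfcn.h>

typedef void * (*GetGtkGUIFunction)(unsigned long PluginIndex);

/* Look for a toolkit-specific GUI entry point in the library that
   contains the plugin. */
void * findGtkGUI(void * pvPluginLibrary, unsigned long lPluginIndex) {
  GetGtkGUIFunction fFunction
      = (GetGtkGUIFunction)dlsym(pvPluginLibrary, "get_LADSPA_gtk_GUI");
  if (!fFunction)
    return NULL; /* fall back to an XML wrapper, Snd spec or whatever */
  return fFunction(lPluginIndex);
}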

I think work on a number of GUI standards of this type is useful, as long as
we can try to get a consensus within each toolkit framework. I'll put
archives/links on www.ladspa.org if that's helpful - I think the key to
getting this sort of thing to work will be to make everything visible - I
*still* don't have a copy of the XML GUI spec! Does anyone implement it yet?

CATEGORIES

Again, this is ancillary to the plugin itself although it did nearly end up
in the original spec. Perhaps we should agree on a set of calls that the
library might support, possibly in a number of different flavours depending
on application requirements. I also like Steve's ontology idea.

XML FOR PLUGIN PARAMETER SETS

This is slightly OT but relevant - I think this is a good idea. To do it
"right" it needs to be compatible at least with any spec for networks as any
network standard will have to deal with plugin parameter sets. This will
probably happen anyway - is anyone looking at this? I have a fairly trivial
but powerful project called PNet that I keep meaning to find time for that
would use such files, so I'm very interested in this one (and could produce
the spec if necessary).

DEFAULT PARAMETER VALUES

I regret this didn't go into the original spec - sorry Steve! One for v1.1.
In the meantime, defaults can go into GUI specs (which to my intuition is a
reasonably natural home for them anyway).

PER-PLUGIN VERSIONING

Hmmm, not sure about this. PluginID+Version is just a bigger int, so we can
get away with just PluginIDs in principle. I'm not sure versioning helps
much - it's just two numbers rather than one in an XML network description or
suchlike. There are plenty of IDs out there still if folk need more. I'd
vote for use of a new plugin ID when an interface changes substantially -
otherwise old saved networks will break.

CHANGING CONTROL PARAMETERS DURING PROCESSING

Hmm, this does sound like my style of specification English (and the "[at]"
isn't appropriate IMHO!). I don't think it's ambiguous, although it is
perhaps a little convoluted - the plugin is guaranteed that it can take sin(*ctrl) at
the start of run() and cos(*ctrl) further on and get consistent results.
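
From the plugin side the guarantee looks like this (illustrative instance
structure):

#include <math.h>
#include "ladspa.h"

typedef struct {
  LADSPA_Data * pfControl; /* connected control port */
  LADSPA_Data * pfInput;
  LADSPA_Data * pfOutput;
} Example;

static void runExample(LADSPA_Handle instance, unsigned long lCount) {
  Example * psExample = (Example *)instance;
  /* *pfControl cannot change during this call, so both trig reads see
     the same value. */
  const LADSPA_Data fSin = sinf(*(psExample->pfControl));
  const LADSPA_Data fCos = cosf(*(psExample->pfControl));
  unsigned long lIndex;
  for (lIndex = 0; lIndex < lCount; lIndex++)
    psExample->pfOutput[lIndex]
        = psExample->pfInput[lIndex] * fSin * fCos;
}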

It's important for a number of reasons that the user and plugin know
that the control value will stay fixed for the duration of the run() call.
Some that spring to mind: (A) What happens when LADSPA runs on future
hardware with small word sizes and larger floats? (B) Even if the float
access is atomic, what happens when the host changes two interrelated
control ports separately so inconsistent values are read by the plugin? (C)
What happens if the C/C++ optimiser thinks it's more efficient to read twice
from the control port? And it's pretty easy for the host to deal with the
problem on its side.

GOING BEYOND LADSPA

To quote part of the original posting (see
http://www.ladspa.org/original_api.txt), "I believe this plugin API should
be a subset rather than a superset of the logical functionality of systems
in use at the moment". I still think this is true. The idea was that almost
all plugins would work with almost all hosts with minimal effort. This has
worked - it's easy to add LADSPA support to hosts and the API is flexible
enough for most of the common DSP algorithms out there (and we were still
missing quite a few when I last looked, but then I haven't kept up with
Steve's excellent efforts!). Because of this and because new programmers
will carry on writing new hosts, I think we need to keep LAD*S*PA simple!
That's just the host side of things - IMHO the API is already of borderline
complexity for entry-level DSP plugin programmers.

However, Ardour needs a richer plugin API. MN has one. VST has one.
Quasimodo has one. aRts has one. GLAME has one. PD has one. Csound has one
etc etc. So how should the Ardour team deal with the i

RE: [linux-audio-dev] Audio dependency rendering - an idea

2001-11-19 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul
> Winkler
[...]
> I've heard from at least one guy on the csound list who works heavily
> with make, by breaking the piece into many short sections and many
> sub-orchestras, and using a simple mixer orc/sco to combine everything
> and arrange the sections. This means that large pieces that would take
> all night to render can be worked on in near-realtime.
>
> and of course, for backing up or emailing such a monster, "make clean"
> is really handy!
[...]

Absolutely - it's a great way to work (I've been doing this for years). You
can drop in bits of algorithmic composition, use all sorts of tools, change
an underlying audio sample and have all dependent files rebuild, use
parallel processing on SMP boxes and generally scale the project, run "make
clean" and pack the whole thing down to a nice concise archive. Thoroughly
recommended.

--Richard




RE: [linux-audio-dev] Re: Video JACK?

2001-11-02 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Jelle
> Sent: 30 October 2001 21:52
> To: [EMAIL PROTECTED]
> Subject: [linux-audio-dev] Re: Video JACK?
>
>
> On Tue, Oct 16, 2001 at 10:09:46AM -0400, Paul Davis wrote:
>
> hi there, i said i would get back to this after i messed up my exams and
> that is now :)
>
> > >would you, at all, be interested in extending JACK to mediate video
> > >connections between software (even with variable clock rates)?
> >
> > if you can see some way to do that, then sure. personally, i
> > don't. the whole model upon which JACK is based is a single periodic
> > "tick" source (not necessarily with a regular period, however) driving
[...]
> okey, there is about three distinct signals that you want to schedule
>
> - fixed interval (audio at 44100/blocksize intervals per second)
> - variable interval (video, framerate as high as possible)
> - interrupt driven (incoming midi events, external clock)
>
> I am working on a potential JACK client that incorporates all of
> these signals. All components in the graph have a "process()" function.
>
> My idea was to separate the different clocks and provide my own
> scheduler. The scheduler loads a scheme (more or less) such as:
>
>  1. If there are any interrupts process them
>  2. process the audio at (interval) unless running 1
>  3. process video when audio is done
>
> it uses some kind of relay-system that passes a token around, so that
> only one thread is run at the same time. The audio thread has higher
> priority than the video thread and thus the video thread is stopped when
> the audio thread gets intervalled.
>
> I've implemented something that does 2 and 3 and that works very well.
> Audio is always in time and video goes as fast as possible, i.e. it is
> processed in the CPU's "spare-time". I've extended this to do background
> loading in a fourth thread and that works just fine.
[...]

I'd be interested to know if this fits with the LADMEA approach (see
www.ladspa.org/ladmea).

--Richard




RE: [linux-audio-dev] surround encoding

2001-10-18 Thread Richard W.E. Furse


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Nick
> Bailey
> Sent: 18 October 2001 13:18
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] surround encoding
>
>
> Steve Harris wrote:
>
> >
> > Thanks, I'm very interested in ambisonics, but was put off by
> the price of
> > the equipment, looks like decoding is covered then. Do you know if its
> > posible to build a soundfield type mic using cheap elements, or does it
> > require really good ones?
> >
> > - Steve
>
> I believe so: you need an omni and two figure-of-eights for a minimal
> setup if I remember.  The 8s are placed front-back and left-right in the
> horizontal plane, and coincident with the omni.  Of course, it depends
> what you mean by cheap (most cheap mics are cardioid, aren't they?)

Yep - I believe many of the original recordings were done this way. You need
another vertical figure-of-eight to capture the height information (Z
channel). Re PZM: this will probably allow you to construct a good
figure-of-eight and omni response, but there's nowhere to put the other
mikes. The figure-of-eight Blumlein arrangement mentioned is again probably
adequate for 2D although you'd need a little maths to turn it into B-Format.

The problem with all these approaches is getting the mike capsules
coincident enough to avoid phase problems. The ST250, Mk5 (and the other
one) get around this by use of tetrahedral capsules and a bit of matrix
maths. This also improves the S/N ratio (essentially by averaging) - the
ST250 I own is wonderful.

The best cheap option for construction of soundfields in the studio (rather
than in the field) is probably to record in mono and spatialise into an
Ambisonic soundfield using the encoders such as are included in the CMT set
(or VSpace if you want a proper simulator).
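
For the curious, a first-order encode really is tiny - the standard B-Format
equations, with illustrative names:

#include <math.h>

/* Encode a mono signal at the given azimuth/elevation (radians) into
   W/X/Y/Z. W carries the conventional -3dB (sqrt(0.5)) scaling. */
void encodeBFormat(const float * pfMono, const unsigned long lLength,
                   const float fAzimuth, const float fElevation,
                   float * pfW, float * pfX, float * pfY, float * pfZ) {
  const float fW = 0.707107f;
  const float fX = cosf(fAzimuth) * cosf(fElevation);
  const float fY = sinf(fAzimuth) * cosf(fElevation);
  const float fZ = sinf(fElevation);
  unsigned long lIndex;
  for (lIndex = 0; lIndex < lLength; lIndex++) {
    pfW[lIndex] = pfMono[lIndex] * fW;
    pfX[lIndex] = pfMono[lIndex] * fX;
    pfY[lIndex] = pfMono[lIndex] * fY;
    pfZ[lIndex] = pfMono[lIndex] * fZ;
  }
}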

--Richard




RE: Video JACK?, was Re: [linux-audio-dev] What about OpenML? Some questions

2001-10-17 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Jelle
> Sent: 16 October 2001 12:19
> To: [EMAIL PROTECTED]
[...]
> > >And while I'm at it, can JACK transport other than audio signals, and
> > >at another clock rate (say video)?
> >
> > In theory yes. The current reference implementation isn't complete,
> > and so in practice, no. This will change soon. Note, however, that
> > there is only one reference clock signal in a JACK system. If other
> > clock rates are not integer multiples of the reference clock, then its
> > difficult to make things work correctly. This is almost certainly true
> > for audio/video integration.
>
> to be honest, I'd rather have a mixed callback and streamed system. It's
> likely that video doesn't run at a fixed rate, but rather (eg.) as fast
> as possible. (I'm writing software which integrates audio and video, and
> the video can be switched between fixed_rate and variable_rate.)
>
> Streaming is probably beyond the scope of JACK, but it doesn't seem
> useful to create another system for just transferring variable rate
> video (or whatever).
>
> would you, at all, be interested in extending JACK to mediate video
> connections between software (even with variable clock rates)?
>
> same question for LADSPA (i'm working on a extended "video"-version)

Of course, you could always take a look at alternative models such as LADMEA
(idea-stage prototype at http://www.ladspa.org/ladmea/).

--Richard




RE: [linux-audio-dev] surround encoding

2001-10-17 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Steve
> Harris
> Sent: 16 October 2001 17:33
> To: [EMAIL PROTECTED]
[...]
> > In extreme, people would have to design a new free surround format
> > and offer the decoder to amplifier manufacturers.
>
> What about ambisonics? I think the decoder hardware is expensive but
> aren't there some LADSPA plugins that can encode it? It's technologically
> superior to the 5.1 and 6.1 formats anyway IMHO. Though I asuming that the
> point is to produce something that can be played back on generic domestic
> equipment.
[...]

Now there's a subject and a half. Ambisonics is *different* to 5.1 and has
advantages and disadvantages (mostly advantages IMHO). Much of the power of
Ambisonic encoding is that you can find an optimal decoding strategy for
whatever speakers are available and you don't need a hardware decoder (there
are a number of software ones included in the CMT plugin set).

Anyone wanting to find out about Ambisonics might be interested in taking a
look at http://www.muse.demon.co.uk/3daudio.html.

--Richard




RE: [linux-audio-dev] surround encoding

2001-10-17 Thread Richard W.E. Furse

Don't forget Ambisonics ;-) Thoroughly free to use most of the techniques
there...

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 16 October 2001 04:37
> To: [EMAIL PROTECTED]
> Subject: [linux-audio-dev] surround encoding
>
>
> whats the legal status of surround encoding? are we free to write
> GPL'ed software that encodes 6 streams of audio data into DTS or Dolby
> digital?
>
> --p




RE: [linux-audio-dev] is there a hammerfall dsp driver in the works?

2001-10-15 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 15 October 2001 14:03
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] is there a hammerfall dsp driver in the
> works?
[...]
>
> >version for laptops) driver for linux at this time? If not, what
> are current
> >available solutions for the intel/amd laptops in linux as much as the
> >low-latency multichannel audio hardware is concerned?
>
> There are none. The VX Rocket is the only PCMCIA device that could be
> considered a serious audio interface, and it supports only 2 channels.
>
> --p

I believe there is (was) a multichannel version of this around (may be
output only), however I gather that both cards have rather bad S/N ratios
because the A/D converters have to sit inside the laptop, very close to a
lot of other electronics and probably not too well shielded.

Decent multichannel audio on laptops is turning out to be rather annoyingly
difficult. I've been waiting for ages for the USB-based M-Audio 4in/4out
card. This has shipped for the Mac, but (nearly a year on) they've still not
released it for the PC. Last time I asked they were having trouble with the
Windows drivers. Perhaps the Linux crowd should step in...

About the only other option I'm aware of (and which I'm now seriously
considering) is the MOTU Firewire card. A very serious piece of audio kit -
it's a fairly shallow 1U rack with lots of ins and outs. It's expensive and
quite a lot to carry around - and I have a suspicion that writing a Linux
driver may well be nontrivial.

--Richard




RE: [linux-audio-dev] Lots about latency and disk i/o and JACK...

2001-10-04 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 04 October 2001 02:52
[...]
> a LADSPA control port may have its value
> changed at *any* time. there is no inherent control rate built into or
> implied by the control port. the frequency with which the plugin can
> notice the changes is limited *within the host thread that calls the
> plugin* by the audio block size, but other parts of the plugin (e.g. a
> GUI) can inspect the port at any time.
[...]

This isn't clear - a LADSPA control value can only be changed between calls
to run() on the plugin, so during that call the plugin is guaranteed that
the value will be constant. The term "control rate" becomes meaningful with
LADSPA when run() calls work on a standard block size (kr=sr/ksmps, you know
this stuff) and (implicit) control values are handled at this rate.
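
In host terms (GAIN_PORT, lBlockSize and nextGainValue() are illustrative):

/* The control value may be rewritten only between run() calls. */
LADSPA_Data fGain = 1;
psDescriptor->connect_port(handle, GAIN_PORT, &fGain);
while (bPlaying) {
  psDescriptor->run(handle, lBlockSize); /* fGain fixed for the block */
  fGain = nextGainValue();               /* safe: run() is not active */
}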

Incidentally, an SAOL->LADSPA compiler sounds like an interesting project.
My memory of SAOL minutiae has faded, and I suspect there might be some
issues with I-time processing and the number of operators that might have to
be coded up - but otherwise I think this is probably straightforward.

--Richard




RE: [linux-audio-dev] LADMEA revisited (was: LAAGA and supporting programs)

2001-10-04 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 02 October 2001 23:12
[...]
> >Yep, but SHM isn't the only answer. Sockets provide a particularly useful
> >implementation.
>
> Sockets still force a data copy and they are bandwidth limited as well
> AFAIK.  They also don't offer RT characteristics. Try to write a lot
> of data to a socket, and the kernel will do a bunch of
> non-deterministic memory allocation to accomodate you.

Yep, but sockets allow audio and other data to be streamed between machines,
very useful live in a studio. The point is that different exchanges are
suitable for different contexts. If Alice write a whole load of drivers for
JACK that use SHM to stream float audio, MP3 audio, MIDI, video etc etc
between clients, what happens when Bob want to pipe this data to another PC
in his studio. Does Bob really have to write separate new drivers to stream
float audio and MP3 audio and MIDI and video etc etc all over again but
using sockets this time?

Say this has happened, and Clive has just written a couple of new clients
that make funny noises using a new format (say 10kb DWT packets every 0.1s).
To use SHM, Clive has to delve into Alice's code and write a new driver
using SHM (or persuade Alice to do it - or start again from scratch - and
you know how much programmers like to do that). To use streams, Clive has to
delve into Bob's code (etc).

This seems to me to be a lot of wasted effort. Given a reasonable
description of the data type the exchange can automate this. In a LADMEA
world, there would probably be two exchanges here: a SHM based one and a
socket based one. Adding a new data type merely requires Clive to provide a
description of his data. And Alice and Bob never need to get their heads
around quite what an evil thing the DWT can be...

[...]
> if its process() callback takes too long, the client is removed from
> the graph. the graph will fail to meets its deadline on that
> particular execution cycle, but then it return to normal.

How are deadlines defined? Presumably these rather depend on the entities
downstream. A low-latency audio stream might by definition have a very early
deadline, however MIDI may be a different matter (a millisecond or two here
or there doesn't really matter) - and I seem to remember JACK is intending
to block MIDI which will unnecessarily *induce* latency of anything up to
the block length. But that's mostly a detail of implementation rather than
design philosophy.

> >> 7. Consider a case where none of the clients in a graph require
> >>live operation, e.g. a MIDI player is playing a 10min piece where
[...]
>
> just clock the server from a driver that isn't wired to an audio
> interface, but instead does file i/o. it can call process() as rapidly
> as it can pass of the data to some file i/o method (either write(2) or
> a buffer with a thread on the other side of it).

How does Dave write this? He wants to be able to stream video and MP3. And
what if he wants to be able to run some of his clients on different machines?
And actually he thinks Clive's DWT thing is quite cool and would like to stream
this across his network (but has no idea what a wavelet is).

[...]
>I don't see that
> JACK has any problems handling any of these, other than the ones that
> arise from the interval size potentially being too large for some of
> the data types to be delivered "on time" (e.g. MIDI driven by audio).

This is why understanding bandwidth requirements is important. Modern
techniques can give bandwidth/latency/jitter guarantees, but these are only
useful if one knows one's requirements. Similarly, a live RT system can
refuse a change to its bandwidth requirements, rather than glitching, when
it knows the change isn't possible.

>
> --p

--Richard




RE: [linux-audio-dev] LADMEA revisited (was: LAAGA and supporting programs)

2001-10-02 Thread Richard W.E. Furse

Oops, you're right of course - brain must have shut down.

Hopefully the issues are addressed to an extent in other emails. If all else
fails, the numbered points in the recent big posting are a good starting
point for debate, as is the API (if it makes sense to folk). What I find
interesting is that so far no one has actually objected to the concept-level
stuff in LADMEA except to suggest that it's more than we need.

With apologies for idiocy,

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Karl
> MacMillan
> Sent: 02 October 2001 21:31
> To: [EMAIL PROTECTED]
> Subject: RE: [linux-audio-dev] LADMEA revisited (was: LAAGA and
> supporting programs)
>
>
> On Tue, 2 Oct 2001, Richard W.E. Furse wrote:
>
> > > -Original Message-
> > [...]
> > > It is certainly true that there is only one type of
> "exchange" currently
> > > written for JACK, and that is for low-latency PCM. This does
> not mean that
> > > the API is not suitable for other types of connections,
> though. I think
> > > the beauty of the JACK API is that the clients have so few
> > > responsibilities, which actually makes it easier to provide different
> > > backends. I would certainly be interested in some specific
> cases that you
> > > think cannot be handled by the JACK api - as Paul said, now
> is a good time
> > > to talk about these issues.
> > >
> > > Karl
> > []
> >
> > Hmmm, LADMEA is written to be as short as possible with little/no
> > redundancy, possibly at the slight expense of performance.
> Which features do
> > you think LADMEA has that aren't necessary (other than the
> Codec spec which
> > is a bit of an appendix)?
> >
>
> I don't quite understand what you are asking from the context of my mail.
> I was not addressing LADMEA at all, but rather asking for clarification
> about the complaints that you have about JACK. So it isn't clear to me why
> you are asking what I think is unnecessary in LADMEA.
>
> Karl
>
> > --Richard
> >
>
> -
> Karl W. MacMillan
> Computer Music Department
> Peabody Institute of the Johns Hopkins University
> [EMAIL PROTECTED]
> mambo.peabody.jhu.edu/~karlmac
>




RE: [linux-audio-dev] LADMEA revisited (was: LAAGA and supporting programs)

2001-10-02 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 30 September 2001 19:02
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADMEA revisited (was: LAAGA and
> supporting programs)
>
>
> First of all, I'd like to thank Richard for his insightful and very
> useful email. The LADMEA header file didn't make clear to me what his
> concerns were; this email has.
[...]

Oops, sorry. I'm very tight for time (it's one of my working years). The
header file is intended as a condensed, functional form rather than an
explanation. I am a mathematician after all! I'd rather hoped that folks
that had got used to my style from LADSPA might have the persistence to read
and absorb. But then it is 50% longer I suppose...

> >The essential idea is to extend from approaches like LAAGA/JACK,
> GStreamer
> >or aRts where there is an all-encompassing audio infrastructure (an
> >"exchange") into which various client applications are slotted. In the
> >LADMEA world, "clients" and "exchanges" communicate through the
> lightweight
> >LADMEA interface. Clients can load exchanges, exchanges can load clients,
> >some other program can load both (when in library form).
>
> This seems like a noble goal. It suffers, IMHO, from one major
> flaw. There is a level of "abstraction" and "API independence" that it
> is not worth moving beyond.

Hmmm. This again is my fault - abstraction with such things vanishes in the
implementation. I think folk will find that an example or three will make
things very intuitive very quickly (even if just on a copy/paste level). I
ought to try to find some time to finish the SDK.

> Lets just briefly review the goals here: we want streaming media
> (audio, MIDI, video, whatever) to be able to talk to each other. we
> don't care about the internal processing networks that a program
> uses. we care only that program A can route data to program B. unix
> pipes, the canonical method of doing this, suffer because they don't
> have enough bandwidth and involve needless data copying (with
> resulting cache pollution); shared memory, which solves these
> problems, cannot be used without another mechanism to serve as a
> thread wakeup signal. ergo, we need a new mechanism.

Yep, but SHM isn't the only answer. Sockets provide a particularly useful
implementation.

> Designing a system into which JACK, GStreamer and aRts can all fit
> seems great, but I immediately find myself asking "is it worth it?"
> For example: GStreamer is self-avowedly an internal application design
> structure, not an inter-application design structure, and so it really
> doesn't apply to the kinds of problems that are being addressed
> here.

I'm not sure I agree here - I think the GStreamer team has a better grasp of
the real issues here than most. And by the way, why can't Ardour be turned
into a GStreamer plugin? The GStreamer team would probably appreciate the
changes required to make that happen.

>   aRts and JACK ostensibly exist to solve mostly the same problem
> set. The claim is that aRts does not solve it and cannot solve it
> without a substantial rewrite. Although there is a history of APIs
> written to cover a variety similar-goaled, but differently implemented
> APIs, its not something that has tended to interest me very much. Its
> like the situation with pthreads and the wrappers for it: why bother?
> pthreads is a well designed, typically well implemented API and there
> isn't much reason to use a wrapper for it that i can see. buts that
> just me :)

I don't follow this. Whose claim is this? I'm not sure what the pthread
analogue is here.

> >This would mean that a newly written audio synth written using
> LADMEA could
> >immediately be slotted into LAAGA, GStreamer or aRts (assuming LADMEA
> >support) or use a LADMEA interface to ALSA. Correspondingly, if someone
> >writes a new framework for audio streaming over ATM (perhaps using
> >compression on some channels as bandwidth requires) then this can
> >immediately be used with the client applications such as
> recorders, players,
> >multitracks etc.
>
> The problem with these claims is that they are equally true of JACK
> all by itself. These goals/abilities don't differentiate LADMEA and
> JACK in any way. This is all true of the remaining claims in the
> paragraph.

Umm, sort of. All is fine as long as JACK is only used with PCM audio because
it's unlikely (!?) that we'll be using too many different compression
schemes here.

> >1.   How can a client transmit a data type to another
> (potentially remote)
> >client across a exchange that has never heard of the data type?
> >2.   How can a client know what existing channels it can use?
> >3.   If clients offer variants of data types (e.g. 16bit unsigned
> >little-endian PCM, float l-e PCM, double l-e PCM, MP3), how can
> the exchange
> >persuade the clients to agree? If they cannot, how can the
> exchange go about
> >inserting a codec? Note again that 

RE: [linux-audio-dev] LADMEA revisited (was: LAAGA and supporting programs)

2001-10-02 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 01 October 2001 17:32
> To: Richard Guenther
> Cc: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADMEA revisited (was: LAAGA and
> supporting programs)
[...]
> However, as I pointed out yesterday, and as Richard himself notes,
> there is no particular problem imagining an API with more flexibility
> like LADMEA "containing" "exchanges" like JACK.
[...]

Yep, absolutely. The reason I'm not in a state of panic about the
proliferation of a subset API is that it shouldn't be hard to retrofit a
JACK exchange with LADMEA exchange support in the same way that it isn't
hard to fit LADSPA support to a host. It would be a bit more of a pain to
retrofit clients, but a generic JACK->LADMEA wrapper library probably
shouldn't be too hard to write although there would be a small performance
hit.

I'd probably be arguing more if I had time to build a decent LADMEA SDK now,
but this is unlikely unless I get more bored at work (this frees up
brainspace). ANY movement towards cross-app comms now is good and I don't
want people doing nothing waiting for things to resolve.

Advice to client coders: read the LADMEA API - if you like it, give me some
feedback. If you don't understand it, hassle me. Otherwise, go with JACK: if
this really gives you everything you need it won't be backbreaking to
retrofit later on (when I get really bored ;-).

--Richard




RE: [linux-audio-dev] LAAGA or JACK or...(was: LADMEA revisited...)

2001-10-02 Thread Richard W.E. Furse

> 4) "Think you know pro-audio on Linux? You don't know JACK!" is just
> too good to pass up, even for people like myself for don't watch
> TV in the USA :)

Tempting fate: there's also "Windows has Rewire, Linux has Jack." ;-)

Having said that, LAIC is plain obscure. Can't we just get the damn thing
running and THEN think about renaming it for marketing purposes?

Damn, I meant to stay out of this. :-P

--Richard




RE: [linux-audio-dev] LADMEA revisited (was: LAAGA and supporting programs)

2001-10-02 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Karl
> MacMillan
> Sent: 29 September 2001 23:37
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LADMEA revisited (was: LAAGA and
> supporting programs)
[...]
> It is certainly true that there is only one type of "exchange" currently
> written for JACK, and that is for low-latency PCM. This does not mean that
> the API is not suitable for other types of connections, though. I think
> the beauty of the JACK API is that the clients have so few
> responsibilities, which actually makes it easier to provide different
> backends. I would certainly be interested in some specific cases that you
> think cannot be handled by the JACK api - as Paul said, now is a good time
> to talk about these issues.
>
> Karl
[]

Hmmm, LADMEA is written to be as short as possible with little/no
redundancy, possibly at the slight expense of performance. Which features do
you think LADMEA has that aren't necessary (other than the Codec spec which
is a bit of an appendix)?

--Richard




[linux-audio-dev] LADMEA revisited (was: LAAGA and supporting programs)

2001-09-29 Thread Richard W.E. Furse

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
> Sent: 28 September 2001 13:33
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] LAAGA and supporting programs
>
>
> >When can we expect LAAGA to be stable enough (API-wise) and be supported
> >by linux audio programs? It's the thing I've waited the most in Linux
> >related development, especially for the syncing capabilities it offers.
>
> * OK, can we start calling it JACK now, or has Karl's paper done us in ?
>
> its a good question. i'm sorry that i don't have an answer right
> now. i believe that the API as far as it currently exists will be
> stable. so far, several people on the list have been enthusiastic
> about it; conversely Richard Furse felt that the API wasn't adequate
> and offered his own much more, uhm, complex API. there wasn't much
> comment on that, but what there was didn't seem convinced by the need
> for more complexity. we need to be sure about this, because it would
> be crazy to proceed with the current delightfully straightforward API
> and then find out that Richard was right, and it can't handle other things.

[...]

Hmmm, sorry I've not done a lot more on this - I'm very busy with other
things at the moment.

The original LADMEA prototype remains available at
http://www.ladspa.org/ladmea/ but I've not had time to build an SDK for it.
The API isn't long - I think if a few people got their heads around it there
might be some debate.

The essential idea is to extend from approaches like LAAGA/JACK, GStreamer
or aRts where there is an all-encompassing audio infrastructure (an
"exchange") into which various client applications are slotted. In the
LADMEA world, "clients" and "exchanges" communicate through the lightweight
LADMEA interface. Clients can load exchanges, exchanges can load clients,
some other program can load both (when in library form).

This would mean that a newly written audio synth written using LADMEA could
immediately be slotted into LAAGA, GStreamer or aRts (assuming LADMEA
support) or use a LADMEA interface to ALSA. Correspondingly, if someone
writes a new framework for audio streaming over ATM (perhaps using
compression on some channels as bandwidth requires) then this can
immediately be used with the client applications such as recorders, players,
multitracks etc. Just about anything can be a client and it shouldn't be
hard to retrofit existing software such as Csound to read and write data to
LADMEA exchanges. And the choice of exchange can be tailored to the task:
some might allow crude and fast connections using RAM, some might allow
latent but high bandwidth connections across networks using compression,
some might be happy only with point/point connections, some may be happy
managing complex remote graphs. The way all this is managed is up to the
exchange - which might have a graph-based GUI, command line
connect/disconnect calls, a fixed patch-bay format or whatever.

I do not think that LADMEA is a particularly complex solution and hopefully
this is borne out by a less than cursory investigation. The reason it
*appears* complex is that I've attempted to address a lot of issues in a
fairly confined space. Apart from the more obvious, some of these include:

1.  How can a client transmit a data type to another (potentially remote)
client across a exchange that has never heard of the data type?
2.  How can a client know what existing channels it can use?
3.  If clients offer variants of data types (e.g. 16bit unsigned
little-endian PCM, float l-e PCM, double l-e PCM, MP3), how can the exchange
persuade the clients to agree? If they cannot, how can the exchange go about
inserting a codec? Note again that the exchange may never have heard of the
data types involved.
4.  How does a exchange know how much bandwidth it is likely to need to
communicate data (e.g. over a network)? How latent can data be? How much
jitter may be tolerated on a network? Note again...
5.  How can an exchange know when a client has failed?
6.  If sources go out of sync (e.g. audio streaming over an ISDN link across
the Atlantic drifts in sync relative to local overdubs) then how does the
exchange know this is happening and deal with it? (OK, most exchanges won't,
but they could if they wanted.)
7.  Consider a case where none of the clients in a graph require live
operation, e.g. a MIDI player is playing a 10min piece where the MIDI is
sent to a soft synth which generates PCM audio that is then passed to a
sound file writer. Say this graph can be run live and uses 20% CPU time on a
single CPU system. An intelligent exchange should be able to work out that
the graph instead can be run "offline" in 2mins as there is no requirement
to synthesise in real time. The same thing applies for subgraphs and
exchanges should be allowed access to enough information to cache output
from subgraphs. Again, many exchanges wouldn't wish to support such
facilities -

[linux-audio-dev] XML and LADSPA settings/networks

2001-09-23 Thread Richard W.E. Furse

A quick thought on saving plugin settings in XML:

The structure used for this should be self-contained in the XML sense and so
must contain IDs. This is because the representation needs to be extensible
to support complete plugin networks. I've been meaning to do this but
haven't had time.

Putting IDs in filenames may also be a good idea, but I think this rather
depends on how the participating software wants to work with settings,
rather than being part of any standard.
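
As a sketch of the sort of self-contained structure this implies - the
element and attribute names here are invented for illustration only, not a
proposed standard, and the ID and port names are meant to suggest the SDK's
delay_5s example plugin:

<ladspaSettings>
  <plugin uniqueID="1043" label="delay_5s">
    <port name="Delay (Seconds)" value="1.0"/>
    <port name="Dry/Wet Balance" value="0.5"/>
  </plugin>
</ladspaSettings>

The unique ID is the natural key here as it is stable across hosts and
installations; the label is redundant but handy for human readers.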

--Richard




RE: [linux-audio-dev] LADSPA_HINT_LOGARITHMIC

2001-09-05 Thread Richard W.E. Furse

Assuming just d and v are variable, you'll probably want something like

d(v):
d = d_low + (d_high - d_low) * (log(v) - log(v_low))
          / (log(v_high) - log(v_low));

v(d):
v = v_low * pow(v_high / v_low, (d - d_low) / (d_high - d_low));
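
In C this comes out as something like the following (a sketch assuming
strictly positive value bounds, without which the logarithmic mapping is
undefined):

#include <math.h>

/* Slider position from control value. */
float value_to_display(float v, float v_low, float v_high,
                       float d_low, float d_high)
{
    return d_low + (d_high - d_low) * (log(v) - log(v_low))
                                    / (log(v_high) - log(v_low));
}

/* Control value from slider position - the inverse of the above. */
float display_to_value(float d, float v_low, float v_high,
                       float d_low, float d_high)
{
    return v_low * pow(v_high / v_low, (d - d_low) / (d_high - d_low));
}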

--Richard

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Marcus
> Andersson
[...]
> I would like to know if there is any agreed upon logarithmic function to
> use for LADSPA control ports when the LADSPA_HINT_LOGARITHMIC is set. In
> practice, the function should map display coordinates (the slider) to
> control values.
>
> The best I have come up with myself is this:
>
> Control port value, v
> Control port low limit, v_low
> Control port high limit, v_high
> Display coordinates, d
> Display coordinates low limit, d_low
> Display coordinates high limit, d_high
[...]




RE: [linux-audio-dev] LADMEA Prototype

2001-08-16 Thread Richard W.E. Furse

You will have realtime/clock sync problems for instance when your clock is
on a different box to a real audio input or output. What happens if the
crystal on the audio card isn't in sync with the clock? Or one audio card
with another? This is a common problem.

Assuming this problem is resolved somehow, there's still no guarantee of
relative arrival times across a network (unless you want to use slow
centralised locking techniques, which will fail for latent networks). You're
going to have to implement latency requirements on inputs to judge when a
client has failed/is late. If you want to stream things other than PCM audio
you're going to need to be able to describe these data types and push them
across a network to a potentially different piece/version of
software/hardware. If you want to go *near* ATM you'll have to find some way
to describe your bandwidth requirements, bearing in mind that a well-written
exchange needn't understand the underlying data type. All this is the "new
layer" I was suggesting you'd need - before you know it you'll rewrite
LADMEA :-) Of course, you're right that sending data over a bridge is
non-trivial. This is why LADMEA doesn't have an 'S' in it!

Incidentally, as I think you suspect, it *is* possible to drop or add
samples to deal with sync problems. This is a well-known issue that hardware
digital mixers in the real world have to face. This is partly why they are
expensive - but the technology has been around for quite a while and we
really ought to be able to mirror it in our software-synthesis world. A
mixture of techniques are used, from dropping or inserting samples (bad) to
resynthesis of a block of audio with a length change (good but costly).

[Aside in case I didn't mention it recently: LADMEA allows you to use clocks
to drive software to keep it in sync that way. The clock tick is a data type
like any other (probably with a rather tight latency requirement!).]

As I've said before, the zero-copy issue I'm flexible on, I'd just prefer to
put it off until more fundamental issues are resolved. It isn't conceptually
hard but it is fiddly.

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Steve
Harris
Sent: 16 August 2001 12:37
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype


On Wed, Aug 15, 2001 at 11:14:09PM +0100, Richard W.E. Furse wrote:
> This approach can be mediated by the exchange - a clever one can observe
> that two network machines are drifting out of sync and add or delete
> samples to fix this. As far as I'm aware, LAAGA would need a whole new
> layer to deal with this. I'm attempting both steps in one.

I don't think so. I was actually planning to implement one of these if
LAAGA takes off, I would have made a bridge driver that ran on both
machines and piped data between them.

The bridge would be synchronous (is that the right term, driven by the
arrival of data anyway), so drifting out of sync wouldn't be a possibility.

I'm not convinced that it's possible to add or delete samples to patch up
sync problems without having audible effects or too much overhead.

Obviously sending data through a bridge is not as versatile, but I don't
think its something that should be done lightly anyway. Even over ATM.

> context - LADMEA is intended primarily for inter-app communication, so
> zero-copy isn't nearly such a priority as in LADSPA IMHO.

I suspect that you're underestimating the amount of CPU that will be taken
up with the overhead of running the system, but we will see.

- Steve




RE: [linux-audio-dev] LADMEA Prototype

2001-08-16 Thread Richard W.E. Furse

The problem is isomorphic to disk I/O only in that I/O happens - some
exchange implementations might perform the lazy-writing for you. Whether
this happens or not, I don't see why LADMEA suffers from "related issues to
do with MIDI timing". What are these? LADMEA allows the client to output the
data the moment it has it (however it gets it). What could be better than
that?

If you don't allow asynchronous data you introduce unnecessary latency
averaging half your tick period into your signal chain. Asynchronous data
isn't that unusual once you go beyond PCM.
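
(To put a number on that: with a tick of 2048 frames at 44.1kHz - an
illustrative figure, not one taken from LAAGA - the tick period is about
46ms, so an asynchronous event such as a MIDI message waits around 23ms on
average before it can be passed on.)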

It sounds as if we're using "inactive" in two different senses. When we mean
the client is not switched on, the exchange will know this and either refuse
to run an incomplete graph or feed silence (or noise?) into the relevant
channel. When a client is broken or late, a exchange can know this and fail
or compensate using a variety of techniques. LAAGA's approach is one option.

When receiving data on a number of channels, you don't need a mutex because
of the way the client API is specified (unless I've made a mistake). This is
the exchange's problem. All data is timestamped already so the client has an
explicit of the "logical time" data relates to. Timestamps only need to be
checked when the client is interested in this data in its own right or when
the client has announced that it can tolerate latency on its inputs [Hmmm,
Richard now realises that the default latency requirement structure needs to
include a lower bound as well as an upper bound on latency so the client can
say how early data can be - easy to fix!].

Again I agree with your buffering comments but would prefer not to implement
them at this stage to keep the API simple. The cost of doing without zero-copy
on 16 channels of 44.1kHz float audio is under 3MB/sec, which is significant
but not back-breaking. For current test purposes I think this is tolerable.
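
(Worked through: 16 channels x 44100 samples/sec x 4 bytes per float sample
comes to 2,822,400 bytes/sec, i.e. roughly 2.7MB/sec per copy.)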

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
Sent: 16 August 2001 13:41
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype


>Yep, I understand in LAAGA the client *must* do something when the clock
>ticks - this is fundamentally the way that LAAGA does things. It isn't
>necessarily useful - for instance, a client listening to a MIDI port will
>generate data asynchronously and will have no interest in the tick (its only
>interest may be in timestamping the data). How would LAAGA stream such MIDI
>data?

Every time the client's process() callback is called, the client has a
chance to pass MIDI data to its ports. Nobody would sensibly write a
client that collected MIDI data in the same thread as the one in which
the callback was made - the problem is completely isomorphous to disk
i/o. you gather data in one thread (possibly timestamp it, which LAAGA
allows to occur from other threads), and pass it along when process()
is called.

Now, it just so happens that MIDI is problematic, along with any other
non-continuous protocol, since the tick system would force MIDI output
to occur only at (or around) ticks. However, I don't see a general
solution to this - the streaming model that LADMEA offers suffers from
related issues to do with MIDI timing. You really have to use an
independently scheduled thread that uses a constantly reset timer to
deliver MIDI properly. Integrating such protocols with continuously
streaming ones like audio and video is non-trivial.

>   The tick mechanism more-or-less works for audio (because of its
>naturally fixed block size) but isn't even optimal there (e.g. when audio
>can be calculated ahead-of-time

again, the tick mechanism does not control when audio is generated. it
controls when audio or other data is *passed* to ports. because of the
design space for LAAGA, there has been a focus on real-time clients
that generate audio within their process() callback. but other clients
could easily be generating audio in another thread and get way
ahead. the purpose of the tick mechanism there is to ensure sample
sync between every client, in the sense that the audio delivered to
the audio interface from every client comes from the same "point in time".

>or when block sizes vary or when the
>application is in more than one process/machine). LADMEA doesn't stop you
>using a clock/tick when it's useful - it just doesn't force you to.

to repeat: LAAGA does not require that you generate audio or anything
else on the tick. it merely uses the tick to say "now i'm ready for
your data/i have data for you". you may not have the data ready to go
(a real-time synth client) or you may (a disk file player); LAAGA
doesn't care.

>I don't agree with your fail-over (and "inactive") logic. In your example
>where a sound processor fails, I don't see why LAAGA gives you an advantage.
>If no audio is generated on a channel, there's not a lot that can be done
>about it.

Is this how gear in a studio works? If my Quadraverb isn't "generating

RE: [linux-audio-dev] LADMEA Prototype

2001-08-16 Thread Richard W.E. Furse

The problem is that (as I understand it) you can only push data out at the
tick moments, which introduces unnecessary latency (on average half your
tick period).

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Steve
Harris
Sent: 16 August 2001 12:44
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype


On Wed, Aug 15, 2001 at 11:14:17PM +0100, Richard W.E. Furse wrote:
>Other ("free thinking") algorithms that don't have this
> approach (e.g. something listening to a MIDI port or real-time GUI) are
much
> less happy in the dictatorship.

I'm not sure about the MIDI case, but this is not really true for GUI
driven software. You just run the GUI in a different theread, its no real
hardship.

- Steve




RE: [linux-audio-dev] LADMEA Prototype

2001-08-15 Thread Richard W.E. Furse

I must admit I don't really follow this - what's the point of "extending
LADSPA to do IPC"? LADSPA is intended as a way to abstract out DSP
algorithms - IPC sounds like an odd thing to bring into this. I could be
missing the point however...

The XML categorisation stuff is an interesting idea however.

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Richard
Guenther
Sent: 14 August 2001 16:48
To: Paul Davis
Cc: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype


On Tue, 14 Aug 2001, Paul Davis wrote:

[... dictator vs. community snipped]

Yeah :) Thats it. LADSPA is the dictator way, too - so I think
extending LADSPA to do IPC without interfering with its simple
API is more suitable to our user/developer community than doing
the same thing again. Lets have a GLAME/GStreamer like API for
the real men that want to do powerful nodes.

See my previous mail - Richard.

--
Richard Guenther <[EMAIL PROTECTED]>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/




RE: [linux-audio-dev] LADMEA Prototype

2001-08-15 Thread Richard W.E. Furse

In the streaming model, there's nothing to stop members of the community
being coded so their algorithm is of the form "I'm a brainless idiot. When you
tell me to get busy I'll do my processing and send you the results straight
away." The algorithms that this suits will be happy in either the LAAGA or
LADMEA worlds. Other ("free thinking") algorithms that don't have this
approach (e.g. something listening to a MIDI port or real-time GUI) are much
less happy in the dictatorship.

LADMEA is quite different to GLAME and GStreamer in that it is not an
application, but a short API that allows a client to communicate with such
applications.

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
Sent: 14 August 2001 13:56
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype


[...]

My model features a fascist dictator who screams at every node in
turn: "hey you brainless idiot in box 1! get busy, and tell me when
you're done!" The dictator maintains absolute rigid control over the
entire system. The nodes are droids with (almost) no knowledge of the
larger system in which they are embedded.

The "streaming" model represented by GLAME, GStreamer et al. instead
makes the nodes more like a community of cooperating neighbours. They
each do their own thing within a framework that allows them to
cooperate and communicate with each other. They each understand the
system, help shape it, and contribute to and draw from it.

[...]




RE: [linux-audio-dev] LADMEA Prototype

2001-08-15 Thread Richard W.E. Furse

Right, time to catch up on some mail.

Yep, I understand in LAAGA the client *must* do something when the clock
ticks - this is fundamentally the way that LAAGA does things. It isn't
necessarily useful - for instance, a client listening to a MIDI port will
generate data asynchronously and will have no interest in the tick (its only
interest may be in timestamping the data). How would LAAGA stream such MIDI
data? The tick mechanism more-or-less works for audio (because of its
naturally fixed block size) but isn't even optimal there (e.g. when audio
can be calculated ahead-of-time or when block sizes vary or when the
application is in more than one process/machine). LADMEA doesn't stop you
using a clock/tick when it's useful - it just doesn't force you to.

I don't agree with your fail-over (and "inactive") logic. In your example
where a sound processor fails, I don't see why LAAGA gives you an advantage.
If no audio is generated on a channel, there's not a lot that can be done
about it. I suppose the LAAGA analogy is that the call to a client does not
return fast enough. This will bring LAAGA to a standstill or glitch. In a
LADMEA arrangement, the exchange has enough information (for streamed audio)
to spot the fact that data delivery is late and act intelligently, either by
stopping proceedings or generating a silent audio stream as a surrogate (or
freezing/glitching if the implementation prefers). In your example with the
patchbay, there is nothing a patchbay itself can do if the audio isn't ready
anyway, within a LAAGA or LADMEA framework. Incidentally, when all is
working the push model means that the patchbay can generate its output frame
immediately as soon as (but no sooner than) all its input data has arrived.
Couldn't be any sooner, no context switch required (for the in-process
case).

It is not true that "*every* kind of client has to do the same kind of check
as the hypothetical patchbay does on its ports-at-which-data-arrives". For
instance, a MIDI port listener will never need to (no inputs). A MIDI port
output will never need to (just output anything fed to it). An audio input
or a scripted software synth will never need to (no inputs). An audio output
will never need to (just output anything fed to it). An in/out audio
processor will never need to (just process inside sendTo()) and send back to
the exchange - doesn't need to know if its running realtime or not. Checks
*do* have to be made where there is more than one input required, e.g. in a
patchbay, FX process with real-time controls or combined MIDI/audio
sequencer or where there is some external binding to realtime (e.g. a
realtime GUI component). These checks are hardly onerous however ("if
(audiochanframes > 0 && controlchanframes > 0) {...}") given the flexibility
provided.

This approach can be mediated by the exchange - a clever one can observe
that two network machines are drifting out of sync and add or delete samples
to fix this. As far as I'm aware, LAAGA would need a whole new layer to deal
with this. I'm attempting both steps in one.

The comments about buffer management are fair - I missed this out for
simplicity. I think everything else present is necessary - adding buffer
management isn't, although it could provide a small optimisation in some
contexts. I'd prefer not to put anything in at this stage to keep the weight
of the API down - it's easy enough to add later. I don't actually think this
optimisation (or the run/run_adding kind) is especially significant in this
context - LADMEA is intended primarily for inter-app communication, so
zero-copy isn't nearly such a priority as in LADSPA IMHO.

Thanks for the feedback,

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
Sent: 14 August 2001 00:59
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype


>Very fair point - sync is a difficult problem and I'm only 90% sure it's
>solved here.
>
>I should have mentioned this more in the earlier post - there is nothing to
>stop clock ticks from being delivered by a clock generator client (this can
>be generalised out to include SMPTE and suchlike)

This isn't anything like the kind of clock tick I was describing when
I wrote:

>Summarised, thats because to work in the way expected by users (and
>quite reasonably so IMHO), the "network" needs to be driven by a
>uniform, overarching "clock" that ticks universally for every node. A
>node may choose to do nothing on a tick, but it must be "called" and
>return synchronously at each tick. Any system based on a model like

The kind of click I was referring to is an imperative instruction
delivered in the appropriate order to every node in the graph. The
instruction means "do something now". The node *must* do something
right then, although doing nothing qualifies as doing something. This
has ...almost... nothing to do with sync. Its at a much more basic
level - the basic driving force behind the entire system. Bu

RE: [linux-audio-dev] Broadcasting delays

2001-08-13 Thread Richard W.E. Furse

IMHO the biggest problem with broadcast audio is that there's currently no
generic way to connect a broadcasting framework meaningfully to other Linux
audio applications.

Which is of course a plug for everyone to get their heads around the brand
new LADMEA prototype at http://www.ladspa.org/ladmea/ ;-)

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Alex@gmx
Sent: 13 August 2001 20:27
To: [EMAIL PROTECTED]
Subject: [linux-audio-dev] Broadcasting delays


Hi everybody !

I'm most interested in audio broadcasting but still a newbie in Linux so
there are a lot of questions :
Because I'm a musician with the aim of making music with conferencing tools
I should have a delay of < 50ms (at least first tests in the LAN).

Former tests with WIN and SOLARIS gave me results of about 250 ms. I used
RAT and later on I programmed my own tool with Java JMF which was even
worse.

After some postings in the ALSA mailing list I was recommended to move to this
list.
So can anyone tell me what to do to get a minimum delay? One year ago I
heard of a theoretical value of 25ms using RTP but I never could reach that.

Is this an interesting subject for this list? I hope 

Regards
 Alex





RE: [linux-audio-dev] Re: [ardour-dev] which libc6

2001-08-13 Thread Richard W.E. Furse

Hmmm I wonder if the LADSPA plugins are being unloaded before shutdown?

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Samuel S
Chessman
Sent: 13 August 2001 17:20
To: D. R. Holsbeck
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: [linux-audio-dev] Re: [ardour-dev] which libc6


I used gdb last fall and determined that this was due to the ladspa
plugin library having a problem.  If you don't use plugins this goes away.

It is easy to test, unset LADSPA_PATH and run ardour.

Sam

On Mon, 13 Aug 2001, D. R. Holsbeck wrote:

> so which libc version is best for compiling Ardour against.
> Ive got most things kinda working. But I allways get a
> segfault when exiting. Which seems to be coming from free()
> in libc6. So I was just wonderin which version yall were using?
>
>

--
   Sam Chessman
[EMAIL PROTECTED]
Disruptive technologies always appear to incumbents as toys.





RE: [linux-audio-dev] LADMEA Prototype

2001-08-09 Thread Richard W.E. Furse

Very fair point - sync is a difficult problem and I'm only 90% sure it's
solved here.

I should have mentioned this more in the earlier post - there is nothing to
stop clock ticks from being delivered by a clock generator client (this can
be generalised out to include SMPTE and suchlike) and I've been anticipating
clocks built into the exchanges themselves (and delivered through a channel
like any other data). The fact that the API doesn't specify the mechanism
hopefully(!) doesn't prohibit it - in fact this should allow more
flexibility.

Could you give me some examples where this would be a problem? I've been
dealing with the abstract for the past two days and some examples of the
concrete would be helpful!

BTW, I appreciate that the API is significantly more complex than LADSPA -
it intends to be much more general (and there's no 'S' in the name!). It's
probably in a bit of a counter-intuitive state as I only finished it today
and need to tidy it up. It's also very open in how it should be used, which
probably makes it harder in the abstract but simpler in practice. For a
normal client, the process is:
1. Find or create the channels to work with and request the data types
required.
2. Activate (some negotiation may occur to agree sample formats etc).
3. Receive incoming packets and write outgoing packets.
4. Deactivate when wish to or when requested.
Hopefully this will be a lot more obvious when I've had time to build an
SDK. I'm a bit reluctant to put too much time into this at the moment in
case radical changes are appropriate.

Thanks,

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
Sent: 09 August 2001 23:16
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] LADMEA Prototype

[...]

Summarised, thats because to work in the way expected by users (and
quite reasonably so IMHO), the "network" needs to be driven by a
uniform, overarching "clock" that ticks universally for every node. A
node may choose to do nothing on a tick, but it must be "called" and
return synchronously at each tick. Any system based on a model like
"sendToExchange" and "sendToClient" will, I think, be susceptible to
the same kind of network halting problems that GLAME can encounter,
though within GLAME, this is arguably a feature rather than a problem.

In LADMEA, the problem is slightly the reverse of the one in GLAME,
since LADMEA appears to use a push-to-whoever model, rather
than a pull-to-me model. But the same basic issue still applies. I
think.

Did I misunderstand something?

--p (back on monday, august 13th)






[linux-audio-dev] LADMEA Prototype

2001-08-09 Thread Richard W.E. Furse

Hi folks, as some of you will know I've been looking into a more flexible
way to construct an inter-application data streaming approach, primarily for
audio but with support for general multimedia. At last (apologies for the
delay) I have a prototype API in the flavour of LADSPA. This can be seen at
http://www.ladspa.org/ladmea/. This prototype is somewhat rough and I'm
expecting a lot of modification.

This is a modified repost of a message that the LAD server bounced because
it was too long (thanks to Jörn for the prompt warning) as I included the
API rather than a web reference. The original mail was CC'd to [EMAIL PROTECTED],
[EMAIL PROTECTED], [EMAIL PROTECTED] and
[EMAIL PROTECTED]

In terms of compatibility (and as sources of ideas) I've been looking
particularly at LADSPA, Csound, MN, ATM, GStreamer, OSS, ALSA, LAAGA and
aRts. There is a deliberate separation and abstraction of both the 'client'
and 'exchange' sides of the API ('codecs' are supported too as specialised
clients). It should not be too hard to build exchange links using GStreamer,
LAAGA, ATM and aRts. It should be easy to write new clients or wrap existing
software such as Csound, Ardour, OSS and ALSA. A LADSPA client wrapper
should be trivial. The API (with suitable exchange design) should handle
real time and offline processing. All of this should be possible without
either the client or exchange knowing what they are connected to. Hopefully
it should be possible for the exchange to be in-process, in an external
process or on a remote machine and exchanges do not need to know what data
they are handling (although this can help).

Why should a good program with plugins (e.g. GStreamer) be interested in an
API like this? Hopefully the presence of a (relatively) simple API such as
this will allow two big steps forward: firstly, it should make it possible
for existing external applications such as GStreamer, Csound and Ardour to
work together. Secondly, it allows clients (or codecs) to be written once
and be used in/by many different applications in the same way that LADSPA
plugins are. The target license would allow commercial applications to use
the API too, but for the moment it will start under LGPL.

This API concerns itself with the interface between the exchange and client.
It does not assume a particular way in which the exchange will connect,
publish, merge or share channels. Synchronisation is kept simple and no
support for transport is provided (it is assumed that transport instructions
such as rewind be sent as data like any other). There are extensible calls
to request the streaming characteristics of channels and the latency
requirements of clients and data specifications are provided in an
extensible way that does not require any understanding on the part of the
exchange. Data types and descriptor conventions are given unique identifiers
to keep data types compatible.

I think the only point where the API strays from being strictly generic is
the insistence on the use of a single timestamping scheme (managed by the
exchange) when sending data. I've had a lot of trouble persuading myself
that this is necessary but have come down in support of it. This doesn't
require the client or exchange to use this convention internally, just at
its interfaces, and it is hoped this won't be too hard to do.

On the web at http://www.ladspa.org/ladmea/ I've included the API itself
(ladmea.h) and two files containing the obligatory streaming characteristic
descriptor convention and latency requirement descriptor conventions that
all clients and exchanges should understand.

The name of the API is reasonably flexible - there's been lots of debate
about potential names for this sort of technology for LAAGA (which is
great). At this stage I'll observe that ALBA is already in use and JACK is
susceptible to unfortunate jokes.

I hope this is all of use - I've spent rather more time on it than I'd hoped
and I've cut off development to give me something to release. Which is
probably very broken in some way!

Please let me know what you think - like LADSPA, this API will only be any
use if people are prepared to use it.

--Richard




RE: [linux-audio-dev] Wrappers and LAAGA

2001-07-27 Thread Richard W.E. Furse

A few people have got LADSPA support working on Windows already. After all,
it's all ANSI C except for the dlopen() mechanism which can be replaced
trivially with the DLL mechanism. The suggestion to drop the L has been
made, but I think it should stay.

However, some might see it a bit rude to take their Linux plugins and
compile them for Windows and because of this I've not encouraged spread to
other platforms. What do folk think? I wouldn't mind for my CMT plugins.

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Ellis
Breen
Sent: 27 July 2001 15:40
To: [EMAIL PROTECTED]
Subject: [linux-audio-dev] Wrappers and LAAGA


I was wondering whether it was possible to implement LADSPA on non-Linux
OSes, and if so, whether it has been done? For example, without breaking any
licenses (by clever dynamic linking if necessary) could a DirectX, VST, MAS
or TDM wrapper/adapter be written for it (if any of these provide a superset
of the LADSPA API), thus allowing LADSPA plugins to run just about anywhere?
When/if a wrapper workably implements the proposed LADSPA GUI API could this
provide an incentive for developers to switch to LADSPA as a 'write-once,
run-anywhere' solution, maybe even not requiring recompilation for
non-architectural ports? If so, should such software should be kept under
wraps (boom boom) until then to prevent open-source software escaping into
the Win/Mac world, considering that if it can be done, someone will probably
eventually do it (and that LADSPA plugins being more modular in their nature
will work a lot better without the baggage of a wrapper or adapter, ie, in
Linux hosts designed from the ground up for LADSPA, therefore changing the
slope of the playing field somewhat)? If not LADSPA, how about MAIA, or a
superclass thereof?

[...]




RE: [linux-audio-dev] Laaga multiple sample rates (Re: LAAGA: updates, laaga-0.2.0 tarball available)

2001-07-13 Thread Richard W.E. Furse

Don't follow this - LADSPA's control/audio relationship is a deliberate
generalisation of Csound's - LADSPA plugins should be 100% happy working
with krate/arate inputs.

Maybe I'm missing the point too.

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Jarno
Seppanen
Sent: 12 July 2001 15:08
To: [EMAIL PROTECTED]
Subject: Re: [linux-audio-dev] Laaga multiple sample rates (Re: LAAGA:
updates, laaga-0.2.0 tarball available)

[...]

> >per-port sample rates made me have to implement some (inferior) sort of a
> >downsampler and an upsampler right in the host, which will produce
> >surprising results.
>
> Can you explain this a little more? I don't understand the problem
> (which is not to say I claim there isn't one) ...

OK, your solution is to create a "control" port type and make all its
buffers be of length one, but I would prefer control signals to be just
plain signals just with another sample rate.  The latter solution allows
for connecting and converting different-rate (control) ports together,
using a resampler client.
Think combining Csound, where you can select arate and krate by hand, with
LADSPA, where you can't.

Later,
--
-Jarno




RE: [linux-audio-dev] the alternate API for LAAGA: its problems

2001-07-05 Thread Richard W.E. Furse

A heads-up for you all to let you know that I've just started building an
alternative API to LAAGA (in case you care). As posted before, LAAGA doesn't
seem the right approach to me - it seems to be more of a
framework/application than an API.

I don't really have time to do this, but London's too hot at the moment to
sleep ;-)

BTW, I'm very eager to have a look at GStreamer in more detail because this
is one of the approaches I'm trying to reconcile. Does anyone know where I
can get a look at the source? I've tried downloading a few times from
www.gstreamer.net (sourceforge) without success.

Thanks,

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
Sent: 05 July 2001 06:20
To: [EMAIL PROTECTED]
Subject: [linux-audio-dev] the alternate API for LAAGA: its problems


as alluded to in a previous message, there is a bit of a serious
problem in trying to provide a "legacy application" (i.e. most of
them) with an easy way to use LAAGA without a more significant
restructuring (my motivating example is MusE).

[...]




RE: [linux-audio-dev] silent for a while

2001-06-23 Thread Richard W.E. Furse

Ah, what a beautifully peaceful week ;-)

I had a chance to catch up with the state of LAAGA over the past week or so.
Nice clear bit of code.

However I'm now a bit puzzled about what LAAGA attempts to achieve - as far as I
can see, any LAAGA application has to hand over its control logic to the AES
engine? Possibly I've missed where the `interface point' is in the code, but
this seems somewhat over-prescriptive.

What software would use this API? What changes would have to be made to
Csound to allow a MIDI sequencer to drive it while feeding audio into
Ardour? How would the link be set up? This, to me, is the kind of basic
challenge that an `application glue' framework needs to meet. As we
discussed when we met up, isn't an ALSA-like API better suited to this? How
would aRts do it?

I've had quite a few thoughts about this over the past week and hopefully
will find the time to pull them together into a consistent email over the
next week or two. I do think we're better off abstracting away from
OSS/ALSA/aRts or suchlike but I'm not sure we're doing it the right way.

My instinct is that the existing framework deals with two issues - how to
handle audio/data exchange and how to specify network topology. These seem
to be orthogonal concepts to me and better separated. To deal with the two
parts, I think the audio/data exchange part needs a lot of work, but I'm
reasonably happy with the topology API - it's not dissimilar to the API that
MN presents to client applications (Paul, the MNServer baseclass I think I
showed you briefly) and I know that works :-)

All good progress...

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Paul Davis
Sent: 18 June 2001 21:36
To: [EMAIL PROTECTED]
Subject: [linux-audio-dev] silent for a while


i'm off on a road-trip to illinois for the rest of this week, so don't
expect to hear anymore about my audioengine/laaga prototype till at
least sunday (jun 24th).

i would really appreciate it if kai, steve h., the richards and others
could take a look at the code to get a feel for the design and let me
know what you all think. its definitely not finished, but i'm excited
about the possibilities.

ideas on how to have clients get access to the list of port
connections are also very welcome.

best regards,
--p

ps. anybody else know the track by eberhard weber that gave the
subject line?




[linux-audio-dev] LAAGA: State of the Art?

2001-06-13 Thread Richard W.E. Furse

I'm feeling I ought to find/make time to try to understand where LAAGA is. I
haven't had time to follow the debate so I don't know which ideas are
winning through. Is there a definitive 'state of the art' document around?

Thanks, (and apologies for the laziness...)

--Richard




[linux-audio-dev] lad on the web

2001-06-05 Thread Richard W.E. Furse

BTW, www.ladspa.org is a "commercial-grade" website for which I'm currently
paying (quite a lot of money - it seemed like a good idea when I started).
It may well be possible to host more than one site there. If people would
like to investigate the facilities that Demon offer then please feel free.
Also, if I'm being ripped off [unlikely, they've been mostly good] please
let me know.

I must admit I'm not convinced about PHP-Nuke as a front-end. At first
glance it looks fantastic as a developer's portal but not right for a
musician's. Perhaps we should have two sites - one glossy and black,
advocating LADSPA and Linux as the "Audio Purist's" Nirvana, with big
buttons, a few audio snippets and big pictures of audio applications in use
(preferably animated [;-)]).

And one for developers (mentioning critical bands, Huffman encoding, DFTs
and spherical harmonics on the front page to make sure folk know we
understand some stuff about audio as well as computing stuff like LADSPA and
bus/event APIs). For this we should use anything trendy and useful - here I
would encourage PHP-Nuke if it delivers what it promises (as long as the
name doesn't appear anywhere obvious). I must admit what I've seen so far
hasn't inspired me, but if it's trendy then maybe it'll work...
[Incidentally, I'm not convinced by my own argument - it's hard to be when
things like Imerge have happened already.] On the subject of mailing lists
I'd like to comment that a fraction more noise on the list will cause me to
wander elsewhere - the world probably won't mind, but I might not be the
only one.

Incidentally this leaves aside a small third group, of good musicians who
are smart enough to be interested in what a computer can do for them (or
even someone who would like to experiment with traditional computer music
(or audio processing (or audio coding (or code (or C++ (or designing a
decent system (or being useful [tee hee whatever happened to Common
Music?]))))))). But I think these folk will find (or have found) their way here
anyway.

--Richard

PS I hadn't seen the LAD icon with the four outfacing speakers before. Very
nice.




RE: [linux-audio-dev] developers and development: some thoughts

2001-05-26 Thread Richard W.E. Furse

I'm for anything that splits up the traffic a little - I'd happy if we went
further, perhaps with areas for APIs, Networks, Hardware etc. I find the
traffic on this list too copious to keep up with these days.

BTW, is there a digest available? This would make life easier for me at
least.

--Richard

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Dave
Phillips
Sent: 25 May 2001 15:47
To: LAD Mail
Subject: [linux-audio-dev] developers and development: some thoughts

[...]

  So I'm wondering: Is there any interest in a division into LAD/system
and LAD/apps? The other existing channels (alsa-user mail-list,
linux.dev.sound, etc) are themselves too narrow or too poorly defined to
serve as forums for the discussion of issues such as the following:

[...]