Re: [LAD] [EPAMP] an effect plugin API for media players: anyone interested?

2008-06-03 Thread Fons Adriaensen
On Tue, Jun 03, 2008 at 11:22:15PM +0200, Wolfgang Woehl wrote:

> Paul Davis:
> 
> > but think about what this actually means in practice: it means that
> > your chaining logic is actually responsible for plugin
> > instantiation (and destruction). a given plugin "unit" might have
> > 1, 2 or more actual plugin instances within it. but if plugin
> > instantiation is being done from within the chaining logic, how
> > does it share code that the host might use that overlaps with this
> > in some way?
> 
> My gut reaction is to think that it would be a bad thing if there were 
> overlaps in handling connections; that there should be 1 plumber in 
> the house to handle all the connections a signal route could imply.
> 
> The plumber would know from a chain member's properties whether it 
> needs to bring in jack logic, inter-plugin logic ... The house would 
> only ask for new chain members, not set them up. Refactor?

I had the same 'gut reaction' when reading Paul's post.
But it's by no means a simple thing. Remember that the 
plumber can't do his work in Jack's thread (since he has
to do non RT-safe work), and 'users' (the Jack thread)
will try to use the things being plumbed behind his back
unless they are told not to.

I've had similar problems in some clients that allow the
user to reconfigure all or part of the processing in a more
invasive way than just changing a parameter value. The solution
I adopted amounts to:

1. the main thread receives an event saying that processing
   has to be reconfigured (e.g. a user clicking a button).

2. It changes state and sends a message to the processing
   thread to bypass the processing (producing silence if
   there's no alternative).

3. This message is seen in the next callback. The processing
   code changes state and sends a message to the main thread
   saying it's now safe to modify the processing chain.

4. The main thread receives this message, reconfigures the
   processing code, and when ready sends a message to Jack's
   thread to resume normal processing.

5. This message is seen in the next callback. The processing
   code changes state and sends a message back to the main
   thread.

6. The main thread receives this message and changes state
   to accept new reconfiguration requests. 

This can be simplified, e.g. the messages to the RT thread
can just be flags that are set by the main thread and tested
in the callback, and if the main thread has a periodic wakeup
the same can be done with the replies. And most of this can
be hidden inside the processing 'object' that is shared by
the two contexts. But even then it remains somewhat invasive,
requiring state management at both ends, and it won't be easy
to hide all of it in a simple-to-use 'support library'.
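
As a concrete illustration, a minimal sketch of that flag-based
variant in C, assuming a Jack process callback and a main thread
with a periodic wakeup; output_silence(), run_chain(),
rebuild_chain() and wait_one_period() are placeholders, not
functions from any real support library:

#include <stdatomic.h>
#include <jack/jack.h>

enum { RUNNING, BYPASS_REQ, BYPASSED, RESUME_REQ };

static atomic_int proc_state = RUNNING;

/* Placeholders -- supply your own implementations. */
static void output_silence (jack_nframes_t nframes);
static void run_chain (jack_nframes_t nframes);
static void rebuild_chain (void);
static void wait_one_period (void);

/* Jack thread, runs once per period. */
static int process (jack_nframes_t nframes, void *arg)
{
    (void) arg;
    switch (atomic_load (&proc_state))
    {
    case BYPASS_REQ:                  /* steps 2-3: acknowledge bypass */
        atomic_store (&proc_state, BYPASSED);
        /* fall through */
    case BYPASSED:
        output_silence (nframes);     /* or any other safe bypass */
        break;
    case RESUME_REQ:                  /* steps 4-5: acknowledge resume */
        atomic_store (&proc_state, RUNNING);
        /* fall through */
    default:
        run_chain (nframes);          /* normal processing */
        break;
    }
    return 0;
}

/* Main thread, on a reconfiguration request (steps 1, 4 and 6). */
static void reconfigure (void)
{
    atomic_store (&proc_state, BYPASS_REQ);
    while (atomic_load (&proc_state) != BYPASSED)
        wait_one_period ();           /* the periodic wakeup */
    rebuild_chain ();                 /* the non-RT-safe work */
    atomic_store (&proc_state, RESUME_REQ);
}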

Which in the end means that anyone wanting to do this sort
of thing should really understand what is involved, even if
he/she doesn't have to write all the code to implement it.
Which in turn means that it's probably a futile exercise to
try and make it 'easy' for would-be programmers wanting to
write the ultimate media player without even having a hint
of how complex such a thing can be.

Ciao,

-- 
FA

Laboratorio di Acustica ed Elettroacustica
Parma, Italia

Lascia la spina, cogli la rosa.



Re: [LAD] [EPAMP] an effect plugin API for media players: anyone interested?

2008-06-03 Thread Wolfgang Woehl
Paul Davis:

> sure, thats a good high level description of what ardour is doing.

I thought ardour (ongoing) doesn't allow inserting plugins with 
non-matching port counts?

> but think about what this actually means in practice: it means that
> your chaining logic is actually responsible for plugin
> instantiation (and destruction). a given plugin "unit" might have
> 1, 2 or more actual plugin instances within it. but if plugin
> instantiation is being done from within the chaining logic, how
> does it share code that the host might use that overlaps with this
> in some way?

My gut reaction is to think that it would be a bad thing if there were 
overlaps in handling connections; that there should be 1 plumber in 
the house to handle all the connections a signal route could imply.

The plumber would know from a chain member's properties whether it 
needs to bring in jack logic, inter-plugin logic ... The house would 
only ask for new chain members, not set them up. Refactor?

Wolfgang


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
Replying to myself, lame but necessary.

Here are some concrete proposals to try to solve these issues, if it's
worth it. I'm willing to help as much as I can with coding and such,
but it would be much better if someone else were interested too.

2008/6/2 Stefano D'Angelo <[EMAIL PROTECTED]>:
> Let's stop this flame for a moment and see what LV2 misses in order to
> let me kill EPAMP and live a happier life.
>
> #1. Support for interleaved channels and non-float data
> Input and output data is often found in these formats.

As Steve Harris suggests, let's not touch this but write a bunch of
helper functions to do the dirty work of demuxing and converting.

I have them already
(http://hg.atheme.org/naspro/file/6adbc44c9678/naspro-objects/lib/util.c)
and I think they could be adapted with very little work.

These should reside either in SLV2 or somewhere else (see below).
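
As an example of the kind of helper I mean, a minimal sketch assuming
16-bit signed interleaved input (the code linked above is more general):

#include <stdint.h>

/* Deinterleave 'frames' frames of 16-bit PCM into one float buffer per
   channel, scaled to the -1..+1 range plugins expect. */
static void
s16i_to_float_planar (const int16_t *in, float *const *out,
                      unsigned channels, unsigned frames)
{
    for (unsigned f = 0; f < frames; f++)
        for (unsigned c = 0; c < channels; c++)
            out [c][f] = (float) in [f * channels + c] / 32768.0f;
}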

> #2. Changing sample rate without re-instantiating all effects.
> Gapless playback when changing songs, for example, should be possible
> without performing black magic.

Let's just forget about this now.

> #3. Some serious connection logic thing (all the "equal channels" thing etc.).
> This needs a thousand flame wars and *deep* thinking.

I was wondering if we could write a "simple" "chain streamer" aimed at
media players.

Again, this could reside either in SLV2 or be in a new separate
library using SLV2. The previous demuxing code should be in the same
place as this code.

> #4. Support for time stretching when using non real-time audio sources.

I came to this conclusion about it: if combining pitch shifting and
time stretching gives better results than doing them separately, then
it makes sense to support it at the plugin level; otherwise time
stretching can be done by the host and pitch shifting by the plugin.

Now, I'm looking at phase vocoders on wikipedia
(http://en.wikipedia.org/wiki/Phase_vocoder) and that thing states:

"The time scale of the resynthesis does not have to be the same as the
time scale of the analysis, allowing for high-quality time-scale
modification of the original sound file."

I'm no expert in this stuff; does anyone know if that is true?

In that case support should be added and, IMHO, that could be done
by modifying run() to return the buffer length and giving the host a
hint about maximum buffer sizes.

I think it shouldn't be done inside an extension since it's really
core-level stuff.
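
To make the idea concrete, a purely hypothetical sketch of the shape
such a change could take (this is not the actual LV2 API, and all the
names are made up):

#include <stdint.h>

typedef void *StretchHandle;   /* stands in for an LV2 plugin handle */

/* Worst-case number of output frames for a given input block, so the
   host can size its output buffers once, in advance. */
uint32_t stretch_max_output (StretchHandle h, uint32_t in_frames);

/* Consume in_frames from the input ports; the return value tells the
   host how many frames were actually written to the output ports,
   which may differ when the plugin time-stretches. */
uint32_t stretch_run (StretchHandle h, uint32_t in_frames);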

> #5. Information about the delay time introduced by the algorithm itself,
> to allow syncing with video sources (for example).

LV2 has that already.

> #6. Some way for the host to make sense of the meaning of some
> parameters and channels, to better support global settings and stuff.

Regarding audio ports, there is the port groups extension, and channels
go away since we want to forget about interleaved audio.

Talking about control ports, I think an extension could do the trick.

> #7. Global explicit initialization/finalization functions for more
> exotic platforms (they wouldn't harm, so why not have them).

I am still convinced they can't do any harm, but anyway it's not a
tragedy if you don't want them around.

> #8. Rules to find plugins possibly platform-specific and outside of
> the specification; possibly one compile-time valid path.

There are compile-time valid paths already.

I'm suggesting stating in the core spec: "look at this page for the
rules on how to find plugins", but if you don't want to do that, it's
still no tragedy.

> #9. Maybe more strict requirements on both hosts and plugins
> (especially about thread-safety).
> I see there is some indication in the core spec, but I don't know
> about extensions and/or other possible concurrency issues.

I trust you :-)

> #10. Something (a library possibly) to make use of all these features
> easily from the host author's POV.

Possibly some new additions to SLV2 or an SLV2-based library, as I
said for point 3.

Summing up, I'd say we need to:
- expand SLV2 to do "default chain streaming", channel demuxing and
format conversion for host authors who have no special interest in
this kind of stuff, or put this outside of SLV2 in a new SLV2-based
library;
- if appropriate, add a "number of samples" return value to the
run() callback in the core spec or, in the worst case, put an
alternative run() callback inside an extension;
- write an extension for "control sense" (for the host to know
the meaning of a parameter);
- if the LV2 authors want to do so, add global explicit init/fini
functions and put platform-specific rules for finding plugins outside
of the spec.

I'm available for all of these tasks.

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Paul Davis

On Tue, 2008-06-03 at 18:34 +0100, Steve Harris wrote:
> On 3 Jun 2008, at 12:53, Stefano D'Angelo wrote:

> >>>
> >>> If someone is going to write that helper library (or adjust SLV2 or
> >>> whatever), I guess we should find some reasonable conventions to
> >>> organize and use plugins in a chain-like thing. This is damn hard,  
> >>> as
> >>> Paul Davis outlined already on this mailing list, and I actually  
> >>> don't
> >>> know to which degree it should be done.
> >>
> >> It's not necessary, just intervene after each run() call, it's not
> >> hard and on a modern machine the cost is negligible.
> >
> > Sorry, I'm not understanding here. How would you do it, exactly?
> 
> You don't have to make plugin A directly feed plugin B, you can have  
> the host do some buffer twiddling in between.

this is still pretty hard, steve.




Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Steve Harris
On 3 Jun 2008, at 12:53, Stefano D'Angelo wrote:
> #2. Changing sample rate without re-instantiating all effects.
> Gapless playback when changing songs, for example, should be
> possible
> without performing black magic.

 While I see nothing wrong with supporting that in general, if I was
 writing a music player, I'd use one sample rate/format, do processing
 using it, and convert/decode input streams early in the flow chain.
>>>
>>> Me too actually. I don't know.
>>
>> If you want glitch free playback then you have to stick to one sample
>> rate at the card, in which case you may as well do the conversion
>> before you start feeding plugins.
>
> Right, didn't think about that actually.
>
>> Any plugin that uses filters (i.e. pretty much anything interesting)
>> will have to recalculate its coefficients and throw away buffers if
>> you change the sample rate on it, so you'll be out of luck if you
>> expect this to be smooth.
>
> By "throwing away buffers" you mean past buffers?

I mean the y(-1) etc. buffers that filters use to calculate their
output. Actually it may not be necessary to discard them in some
cases, but you will still get glitches from the coefficient changes.
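
A one-pole lowpass shows the problem in a few lines (a sketch of mine,
not code from any particular plugin):

#include <math.h>

typedef struct { float a, y1; } OnePole;   /* y1 is the y(-1) history */

/* The coefficient depends on the sample rate: change the rate and you
   must recompute 'a', and the stored y1 no longer matches the new
   coefficient, which is where the glitches come from. */
static void onepole_setup (OnePole *f, float cutoff_hz, float rate)
{
    f->a  = 1.0f - expf (-2.0f * (float) M_PI * cutoff_hz / rate);
    f->y1 = 0.0f;                          /* history discarded */
}

static float onepole_run (OnePole *f, float x)
{
    f->y1 += f->a * (x - f->y1);           /* uses the y(-1) state */
    return f->y1;
}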

> #3. Some serious connection logic thing (all the "equal channels"
> thing etc.).
> This needs a thousand flame wars and *deep* thinking.

 No idea what you mean by this.
>>>
>>> If someone is going to write that helper library (or adjust SLV2 or
>>> whatever), I guess we should find some reasonable conventions to
>>> organize and use plugins in a chain-like thing. This is damn hard,  
>>> as
>>> Paul Davis outlined already on this mailing list, and I actually  
>>> don't
>>> know to which degree it should be done.
>>
>> It's not necessary, just intervene after each run() call, it's not
>> hard and on a modern machine the cost is negligible.
>
> Sorry, I'm not understanding here. How would you do it, exactly?

You don't have to make plugin A directly feed plugin B, you can have  
the host do some buffer twiddling in between.

> #4. Support for time stretching when using non real-time audio
> sources.

 Why not? AFAIK this has clear uses in the "professional" audio world
 too.
>>
>> Yeah, but not in "realtime". LV2 could of course support that, with  
>> an
>> extension, but it doesn't seem like the sort of thing that has enough
>> variance that a plugin mechanism is a huge win over using SRC.
>
> Mmm.. not if it is only time-stretching, but if it is time-stretching
> + other stuff (for example pitch shifting) together? Gonna use two
> plugins? I don't know :-\

Well, pitch shifting is fine in plugins.

> #5. Information about the delay time introduced by the algorithm itself,
> to allow syncing with video sources (for example).

 Uhm, don't we have such a thing in LV2 already? If not, I think we
 need it. This should be useful for syncing multiple audio streams too.
 For video sources I'd prefer to have video streams (video port type),
 probably as an event port.
>>
>> In LADSPA there's a "magic" control out port called "_latency" or
>> something; that should apply to LV2 as well, but I'm not sure if the
>> spec says so.
>
> Which spec are you referring to? IIRC the LADSPA spec doesn't state
> such a thing. Some convention maybe?

Yeah, that's what I was implying by "magic"; in LV2 it's an annotation
on ports.

> #6. Some way for the host to make sense of the meaning of some
> parameters and channels, to better support global settings and
> stuff.

 No idea what you mean by this. ATM, I miss instantiation stage
 parameters though.
>>>
>>> Example: some LV2 extension tells the host which parameter is a
>>> "quality vs. speed" parameter in a plugin. The host can then show a
>>> global "quality vs. speed" parameter to the user.
>>>
>>> By "channel sense", I mean the host could know what a channel is  
>>> in a
>>> standardized way (I see you have that already in port groups
>>> extension, it could be generalized to channels rather than ports).
>>
>> What is a channel that is not a port/port group? Ports can be grouped
>> and attributed, e.g. quality vs. speed, or you can just say that by
>> convention QvS ports have some well-known label, in the same way that
>> systemic latency is indicated.
>
> I was referring to one of the interleaved channels in a multi-channel
> stream.
> About labels, could we maybe define a set of known labels? (And isn't
> that already implemented somehow in LV2? - I'm not exactly familiar
> with it, as you may have noticed)

OK, but interleaving is just inconvenient, in many ways.

> #8. Rules to find plugins possibly platform-specific and outside  
> of
> the specification; possibly one compile-time valid path.

 AFAIK, this conflicts with the "LV2 spirit". Why does one need this?
 If the goal is to avoid RDF Turtle, this shouldn't be an issue with a
 proper helper library for hosts. Still, such a feature could be
 implemented in such a helper library.

Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3 Dmitry Baikov <[EMAIL PROTECTED]>:
> On Tue, Jun 3, 2008 at 5:15 PM, Nedko Arnaudov <[EMAIL PROTECTED]> wrote:
>> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>>> #7. Global explicit initialization/finalization functions for more
>>> exotic platforms (they wouldn't harm, so why not have them).
>>
>> You need an abstraction for defining a global constructor/destructor in
>> a shared library. As Larsl already said, you can use some C++ tricks (like
>> the constructor of a global object) for this. In my vision, such a thing is
>> bound to the creation of the shared library file, which is why I mentioned
>> libtool.
>
> From my (big enough) experience, 'automagic' initialization of modules
> (and non-trivial variables), in addition to being not very portable,
> is a REALLY BAD THING.
> And I would suggest using explicit calls wherever possible.

Could you please elaborate on that? What kind of problems can arise?

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Dmitry Baikov
On Tue, Jun 3, 2008 at 5:15 PM, Nedko Arnaudov <[EMAIL PROTECTED]> wrote:
> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>> #7. Global explicit initialization/finalization functions for more
>> exotic platforms (they wouldn't harm, so why not have them).
>
> You need an abstraction for defining a global constructor/destructor in
> a shared library. As Larsl already said, you can use some C++ tricks (like
> the constructor of a global object) for this. In my vision, such a thing is
> bound to the creation of the shared library file, which is why I mentioned
> libtool.

From my (big enough) experience, 'automagic' initialization of modules
(and non-trivial variables), in addition to being not very portable,
is a REALLY BAD THING.
And I would suggest using explicit calls wherever possible.
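
To show the two styles side by side (the GCC/clang constructor
attribute is one common way the 'automagic' variant is done in plain C;
the names here are made up):

/* 'Automagic': runs at dlopen() time; initialization order across
   objects and behaviour across toolchains is the murky part.
   GCC/clang-specific. */
__attribute__ ((constructor))
static void my_lib_auto_init (void)
{
    /* ... */
}

/* Explicit: plain exported functions the host calls at times of its
   own choosing, with a way to report failure. */
int  my_lib_init (void);
void my_lib_fini (void);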


Dmitry.


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Nedko Arnaudov
"Stefano D'Angelo" <[EMAIL PROTECTED]> writes:

> #7. Global explicit initialization/finalization functions for more
> exotic platforms (they wouldn't harm, so why not have them).

 I still don't get what the use case for this is.
>>>
>>> Both on the host side and on the plugin side, no need for #ifdefs to
>>> define initialization/finalization functions and maybe support for
>>> exotic platforms not having them.
>>
>> I don't see what you will do within those global
>> initialization/finalization functions. That thing needs to be something
>> that is not platform-specific.
>
> Well, I for example would use them with NASPRO to fill the plugin with
> all effect descriptors (don't know yet how to do that with RDF/Turtle,
> but I'll find a way).
>
>> This can be made a separate thing that can be
>> reused for other things too. The same way libtool is an abstraction
>> over shared libraries.
>
> ?

You need an abstraction for defining a global constructor/destructor in
a shared library. As Larsl already said, you can use some C++ tricks (like
the constructor of a global object) for this. In my vision, such a thing is
bound to the creation of the shared library file, which is why I mentioned
libtool.

> #8. Rules to find plugins possibly platform-specific and outside of
> the specification; possibly one compile-time valid path.

 AFAIK, this conflicts with the "LV2 spirit". Why does one need this?
 If the goal is to avoid RDF Turtle, this shouldn't be an issue with a
 proper helper library for hosts. Still, such a feature could be
 implemented in such a helper library.
>>>
>>> Nope. I mean there should be platform-specific rules to get the list
>>> of directories containing shared object files and possibly there
>>> should be a fixed path to check on each platform, known at compile
>>> time.
>>
>> The interface to an SLV2(-like) library should definitely allow
>> modification of the directory list.
>
> Which kind of modification?

 * get list of lv2 plugins (extracted from LV2_PATH by slv2)
 * modify that list (add/remove directories)
 * (maybe) get path of directory where plugin resides
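
In rough C terms, a hypothetical shape for that interface (not the
current SLV2 API):

typedef struct _PluginDirs PluginDirs;

PluginDirs *dirs_from_lv2_path (void);                  /* seeded from LV2_PATH */
void        dirs_add    (PluginDirs *d, const char *path);
void        dirs_remove (PluginDirs *d, const char *path);
const char *plugin_bundle_dir (const char *plugin_uri); /* where it resides */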

-- 
Nedko Arnaudov 




Re: [LAD] alsa sequencer and sysex editing

2008-06-03 Thread Paul Davis

On Tue, 2008-06-03 at 14:15 +0200, Tim Goetze wrote:
> [Fons Adriaensen]
> >I don't know the details of GTK, but this is a classic problem
> >with high-level GUI toolsets: they don't allow you to wait
> >for X events (which trigger the loop, behind the scenes) and
> >anything else at the same time in the same thread. They don't
> >even separate _waiting_ for an X event, and _handling_ it into
> >two separate user calls (which would allow the user to add the
> >missing multiple wait functionality). This is one of the
> >reasons why I wrote my own GUI toolkit.
> 
> You should check out g_main_context_set_poll_func() sometime.  I admit 
> it took me longer than it should have to find it myself.  In short, it 
> does what you want.  In a gtk way, sure, but it does it.

and as an example, the Quartz (OS X) backend for GTK uses this because
it has to wait for events on file descriptors and from Cocoa/Quartz
(equivalent to X11).

sadly, it still ends up using one thread to wait on file descriptors and
this thread "injects" a Cocoa-level event into the other "main" event
thread. this is because OS X, like almost all Unix systems, fails to
provide a single system call to "wait till something happens". you
cannot wait for a file descriptor, semaphore, msgqueue, X event and/or
whatever else with a single blocking system call, and IMHO this is still
one of the biggest weaknesses of the POSIX API.





Re: [LAD] alsa sequencer and sysex editing

2008-06-03 Thread Tim Goetze
[Fons Adriaensen]
>I don't know the details of GTK, but this is a classic problem
>with high-level GUI toolsets: they don't allow you to wait
>for X events (which trigger the loop, behind the scenes) and
>anything else at the same time in the same thread. They don't
>even separate _waiting_ for an X event, and _handling_ it into
>two separate user calls (which would allow the user to add the
>missing multiple wait functionality). This is one of the
>reasons why I wrote my own GUI toolkit.

You should check out g_main_context_set_poll_func() sometime.  I admit 
it took me longer than it should have to find it myself.  In short, it 
does what you want.  In a gtk way, sure, but it does it.
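
The shape of it, assuming the default main context (a sketch, not
tested code):

#include <glib.h>

static GPollFunc real_poll;

/* Called by GLib instead of poll(): the same thread that runs the GTK
   main loop can merge its own fds into the set, or do work before and
   after the wait. */
static gint my_poll (GPollFD *fds, guint nfds, gint timeout)
{
    /* ... add private fds / pre-wait work here ... */
    return real_poll (fds, nfds, timeout);
}

static void install_poll_hook (void)
{
    real_poll = g_main_context_get_poll_func (NULL);
    g_main_context_set_poll_func (NULL, my_poll);
}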

Cheers, Tim


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3, Nedko Arnaudov <[EMAIL PROTECTED]>:
> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>
 #3. Some serious connection logic thing (all the "equal channels" thing
 etc.).
 This needs a thousand flame wars and *deep* thinking.
>>>
>>> No idea what you mean by this.
>>
>> If someone is going to write that helper library (or adjust SLV2 or
>> whatever), I guess we should find some reasonable conventions to
>> organize and use plugins in a chain-like thing. This is damn hard, as
>> Paul Davis outlined already on this mailing list, and I actually don't
>> know to which degree it should be done.
>
> Looks like a good candidate for a separate helper library. But as Paul
> said, probably each player will end up with its own helper "library".

I'm waiting for an answer from Steve Harris on this :-)

>> Example: some LV2 extension tells the host which parameter is a
>> "quality vs. speed" parameter in a plugin. The host can then show a
>> global "quality vs. speed" parameter to the user.
>
> In the dynparam extension there are "hints" for this. They could be used
> as generic UI generation hints, as MIDI mapping hints or as a "quality
> vs. speed" hint. I think this could be done for normal LV2 ports too,
> i.e. associating hint URIs with a port.

That could do the trick.

 #7. Global explicit initialization/finalization functions for more
 exotic platforms (they wouldn't harm, so why not have them).
>>>
>>> I still don't get what the use case for this is.
>>
>> Both on the host side and on the plugin side, no need for #ifdefs to
>> define initialization/finalization functions and maybe support for
>> exotic platforms not having them.
>
> I don't see what you will do within those global
> initialization/finalization functions. That thing needs to be something
> that is not platform-specific.

Well, I for example would use them with NASPRO to fill the plugin with
all effect descriptors (don't know yet how to do that with RDF/Turtle,
but I'll find a way).

> This can be made a separate thing that can be
> reused for other things too. The same way libtool is an abstraction
> over shared libraries.

?

 #8. Rules to find plugins possibly platform-specific and outside of
 the specification; possibly one compile-time valid path.
>>>
>>> AFAIK, this conflicts with the "LV2 spirit". Why does one need this?
>>> If the goal is to avoid RDF Turtle, this shouldn't be an issue with a
>>> proper helper library for hosts. Still, such a feature could be
>>> implemented in such a helper library.
>>
>> Nope. I mean there should be platform-specific rules to get the list
>> of directories containing shared object files and possibly there
>> should be a fixed path to check on each platform, known at compile
>> time.
>
> The interface to an SLV2(-like) library should definitely allow
> modification of the directory list.

Which kind of modification?

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3, Steve Harris <[EMAIL PROTECTED]>:
> On 2 Jun 2008, at 19:16, Stefano D'Angelo wrote:
>>
 #1. Support for interleaved channels and non-float data
 Input and output data is often found in these formats.
>>>
>>> A new port type is needed. Keep in mind, though, that plugins using this
>>> port type will probably be limited to music player hosts. Also, if we
>>> extrapolate this idea, we will have mp3 stream ports or things like
>>> that. Think twice about whether it is a good idea.
>>
>> Well, I'd say non-float non-compressed data. I think ALSA's PCM sample
>> formats are more than sufficient. If you're worried about third
>> parties... LV2 is decentralized by design :-\
>
> I think you'll make everyone's life much better if you just provide
> utility functions (e.g. in slv2) to convert interleaved integers and
> whatever to channelled floats. Constantly converting back and forth
> between different plugins with different requirements is lossy (in
> audio quality terms) and difficult to get right. Just do it once.
> There's a reason that LADSPA, LV2, VST etc. do everything in floats.

Maybe you're right.

 #2. Changing sample rate without re-instantiating all effects.
 Gapless playback when changing songs, for example, should be possible
 without performing black magic.
>>>
>>> While I see nothing wrong with supporting that in general, if I was
>>> writing a music player, I'd use one sample rate/format, do processing
>>> using it, and convert/decode input streams early in the flow chain.
>>
>> Me too actually. I don't know.
>
> If you want glitch free playback then you have to stick to one sample
> rate at the card, in which case you may as well do the conversion
> before you start feeding plugins.

Right, didn't think about that actually.

> Any plugin that uses filters (i.e. pretty much anything interesting)
> will have to recalculate its coefficients and throw away buffers if
> you change the sample rate on it, so you'll be out of luck if you
> expect this to be smooth.

By "throwing away buffers" you mean past buffers?

 #3. Some serious connection logic thing (all the "equal channels"
 thing etc.).
 This needs a thousand flame wars and *deep* thinking.
>>>
>>> No idea what you mean by this.
>>
>> If someone is going to write that helper library (or adjust SLV2 or
>> whatever), I guess we should find some reasonable conventions to
>> organize and use plugins in a chain-like thing. This is damn hard, as
>> Paul Davis outlined already on this mailing list, and I actually don't
>> know to which degree it should be done.
>
> It's not necessary, just intervene after each run() call, it's not
> hard and on a modern machine the cost is negligible.

Sorry, I'm not understanding here. How would you do it, exactly?

 #4. Support for time stretching when using non real-time audio
 sources.
>>>
>>> Why not? AFAIK this has clear uses in the "professional" audio world too.
>
> Yeah, but not in "realtime". LV2 could of course support that, with an
> extension, but it doesn't seem like the sort of thing that has enough
> variance that a plugin mechanism is a huge win over using SRC.

Mmm.. not if it is only time-stretching, but if it is time-stretching
+ other stuff (for example pitch shifting) together? Gonna use two
plugins? I don't know :-\

 #5. Information about the delay time introduced by the algorithm itself,
 to allow syncing with video sources (for example).
>>>
>>> Uhm, don't we have such a thing in LV2 already? If not, I think we need
>>> it. This should be useful for syncing multiple audio streams too. For
>>> video sources I'd prefer to have video streams (video port type),
>>> probably as an event port.
>
> In LADSPA there's a "magic" control out port called "_latency" or
> something; that should apply to LV2 as well, but I'm not sure if the
> spec says so.

Which spec are you referring to? IIRC the LADSPA spec doesn't state
such a thing. Some convention maybe?

 #6. Some way for the host to make sense of the meaning of some
 parameters and channels, to better support global settings and
 stuff.
>>>
>>> No idea what you mean by this. ATM, I miss instantiation stage
>>> parameters though.
>>
>> Example: some LV2 extension tells the host which parameter is a
>> "quality vs. speed" parameter in a plugin. The host can then show a
>> global "quality vs. speed" parameter to the user.
>>
>> By "channel sense", I mean the host could know what a channel is in a
>> standardized way (I see you have that already in port groups
>> extension, it could be generalized to channels rather than ports).
>
> What is a channel that is not a port/port group? Ports can be grouped
> and attributed, e.g. quality vs. speed, or you can just say that by
> convention QvS ports have some well-known label, in the same way that
> systemic latency is indicated.

I was referring to one of the interleaved channels in a multi-channel stream.
About labels, could we maybe define a set of known labels? (And isn't
that already implemented somehow in LV2? - I'm not exactly familiar
with it, as you may have noticed)

Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3, Arnold Krille <[EMAIL PROTECTED]>:
> Am Dienstag, 3. Juni 2008 schrieb Stefano D'Angelo:
>> 2008/6/3 Arnold Krille <[EMAIL PROTECTED]>:
>> > Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
>> >> 2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
>> >> > Well, try syncing two devices that don't share a word clock and you
>> >> > will "fix" that problem with real-time time-stretching. So yes, there
>> >> > is a rather practical use (but I actually don't advise syncing two
>> >> > devices without a common clock) for real-time audio stretching (it's
>> >> > also called a dither-buffer, but why use these algorithms when there
>> >> > is rubberband and co?).
>> >> I guess you mean resampling, otherwise I don't think it's physically
>> >> possible to go ahead or behind in time.
>> > What's the difference in this respect? Both change the number of samples,
>> > don't they?
>> The difference is enormous: the host has to know if the plugin does
>> resampling!
>
> Yep, that's why the plugins have to tell the host how many samples they
> create from the number of input samples. (With the default of the same
> number of samples...)

Yes, but the host has to know how much time corresponds to a buffer,
so it must know the input and output sample rates.

> But the host should _never_ force a plugin to do resampling/time-stretching!
> Because it opens a Pandora's box of bad quality!

Of course.

>> >> I'm not interested in resampling plugins, but maybe someone else is?
>> > Not me, but when you start designing a plugin interface with that
>> > attitude, you will lose. You _are_ interested in all possible plugins
>> > because you want your interface to rule the world and be used by all
>> > plugin-devs. (Regardless of whether we are talking EPAMP, LV2, LADSPA,
>> > VST or gstreamer-plugins.)
>> This is not true for every plugin API. By design, some are meant to be
>> universal, others are not. It's a matter of choice IMHO.
>
> Well, your first proposal was a "universal" plugin API!? Being universal is
> one of the things you wanted in the first place. And it's why LV2 supports
> extensions...

Who said that? I said "an effect API *for media players*".

Anyway, it doesn't matter. I'm starting to think that it's better to
adapt LV2 to this task.

Stefano


Re: [LAD] LADSPA latency port, was Re: Let's kill EPAMP???

2008-06-03 Thread Lars Luthman
On Tue, 2008-06-03 at 09:52 +0100, Chris Cannam wrote:
> On 03/06/2008, Steve Harris <[EMAIL PROTECTED]> wrote:
> > In LADSPA there's a "magic" control out port called "_latency" or
> > something; that should apply to LV2 as well, but I'm not sure if the
> > spec says so.
> 
> For the record -- since this is something I've tried to search for in
> the past and have had trouble finding a definitive answer to -- the
> conventional LADSPA port name is apparently "latency", with no
> underscore.

In LV2 it's whatever port has the lv2:reportsLatency property set.


--ll




Re: [LAD] LADSPA latency port, was Re: Let's kill EPAMP???

2008-06-03 Thread Steve Harris

On 3 Jun 2008, at 09:52, Chris Cannam wrote:

> On 03/06/2008, Steve Harris <[EMAIL PROTECTED]> wrote:
>> In LADSPA there's a "magic" control out port called "_latency" or
>> something; that should apply to LV2 as well, but I'm not sure if the
>> spec says so.
>
> For the record -- since this is something I've tried to search for in
> the past and have had trouble finding a definitive answer to -- the
> conventional LADSPA port name is apparently "latency", with no
> underscore.
>
> Some hosts (such as Rosegarden) will accept either "latency" or
> "_latency"; in RG's case that's because I wasn't sure which was
> supposed to be correct when I coded it.  But others (such as Ardour)
> will only accept "latency", as I discovered when I released a plugin
> that used "_latency" and forgot to test it in Ardour first.

Sorry, mea culpa, I should have checked what it was called.

- Steve


[LAD] LADSPA latency port, was Re: Let's kill EPAMP???

2008-06-03 Thread Chris Cannam
On 03/06/2008, Steve Harris <[EMAIL PROTECTED]> wrote:
> In LADSPA there's a "magic" control out port called "_latency" or
> something; that should apply to LV2 as well, but I'm not sure if the
> spec says so.

For the record -- since this is something I've tried to search for in
the past and have had trouble finding a definitive answer to -- the
conventional LADSPA port name is apparently "latency", with no
underscore.

Some hosts (such as Rosegarden) will accept either "latency" or
"_latency"; in RG's case that's because I wasn't sure which was
supposed to be correct when I coded it.  But others (such as Ardour)
will only accept "latency", as I discovered when I released a plugin
that used "_latency" and forgot to test it in Ardour first.


Chris


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Nedko Arnaudov
"Stefano D'Angelo" <[EMAIL PROTECTED]> writes:

>>> #3. Some serious connection logic thing (all the "equal channels" thing 
>>> etc.).
>>> This needs a thousand flame wars and *deep* thinking.
>>
>> No idea what you mean by this.
>
> If someone is going to write that helper library (or adjust SLV2 or
> whatever), I guess we should find some reasonable conventions to
> organize and use plugins in a chain-like thing. This is damn hard, as
> Paul Davis outlined already on this mailing list, and I actually don't
> know to which degree it should be done.

Looks like a good candidate for a separate helper library. But as Paul
said, probably each player will end up with its own helper "library".

> Example: some LV2 extension tells the host which parameter is a
> "quality vs. speed" parameter in a plugin. The host can then show a
> global "quality vs. speed" parameter to the user.

In the dynparam extension there are "hints" for this. They could be used
as generic UI generation hints, as MIDI mapping hints or as a "quality
vs. speed" hint. I think this could be done for normal LV2 ports too,
i.e. associating hint URIs with a port.

>>> #7. Global explicit initialization/finalization functions for more
>>> exotic platforms (they wouldn't harm, so why not have them).
>>
>> I still don't get what the use case for this is.
>
> Both on the host side and on the plugin side, no need for #ifdefs to
> define initialization/finalization functions and maybe support for
> exotic platforms not having them.

I don't see what you will do within those global
initialization/finalization functions. That thing needs to be something
that is not platform-specific. This can be made a separate thing that
can be reused for other things too. The same way libtool is an
abstraction over shared libraries.

>>> #8. Rules to find plugins possibly platform-specific and outside of
>>> the specification; possibly one compile-time valid path.
>>
>> AFAIK, this conflicts with the "LV2 spirit". Why does one need this?
>> If the goal is to avoid RDF Turtle, this shouldn't be an issue with a
>> proper helper library for hosts. Still, such a feature could be
>> implemented in such a helper library.
>
> Nope. I mean there should be platform-specific rules to get the list
> of directories containing shared object files and possibly there
> should be a fixed path to check on each platform, known at compile
> time.

The interface to an SLV2(-like) library should definitely allow
modification of the directory list.

-- 
Nedko Arnaudov 




Re: [LAD] [EPAMP] an effect plugin API for media players: anyone interested?

2008-06-03 Thread Chris Cannam
I don't want to get too much into the details, but I would like to say
that I can at least see what Stefano is getting at here.  This is an
API for applications that don't want to have to think hard about
plugins -- they just take the data from a file, blat it through a
series of almost identical black boxes, and stream it to the audio
driver.

Why not do that with LADSPA (or LV2 or VST or whatever)?  Because
there are too many routing possibilities, too many plugins, and too
many parameters.  It's not easy to anticipate the ideal environment
for any given plugin.  Your application would either be overloaded
with possibilities for the user to resolve, or else you would end up
effectively hardcoding a known set of plugins and parameters.  The
available plugins would likely prove a poor fit for the application,
and the ability to take advantage of new plugins and features would be
reduced.  You would have gone out of your way (in terms of
understanding the design of the API, working out how to make it
conform to your platform requirements, bringing in any necessary
support libraries such as for RDF, etc) to do something that
eventually produced a less effective result than copying and pasting
an existing algorithm.  Most programmers don't take all that much
persuading that it would be "simpler to roll your own".

Also, an API like LADSPA is designed on the basis of trying to make
the plugin author's life as simple as possible at the expense of the
host author.  For better or worse, the field of media players is one
in which hosts are almost as numerous as plugins, and making the host
harder to write is much less likely to be a good strategy.

I think Stefano is running into a classic sort of resistance here
which says that you already can do it a certain way _technically_, and
the rest is just _cultural_, and the cultural stuff doesn't matter.
But it does matter.

So I can see where this is coming from.

The problem of course is when it turns out that your cultural
differences have concrete technical causes -- such as in the
stereo-to-5.1-conversion example.  It may be that bridging some of the
cultural divide through, for example, appropriate metadata for an LV2
plugin reaps more benefit in the end than trying to start again and
approach this as if it were a completely separate problem.  Supporting
RDF metadata for tracks is becoming a more interesting idea for a
media player anyway, so some of the infrastructure can presumably be
shared.


Chris


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Steve Harris
On 2 Jun 2008, at 19:16, Stefano D'Angelo wrote:
>
>>> #1. Support for interleaved channels and non-float data
>>> Input and output data is often found in these formats.
>>
>> A new port type is needed. Keep in mind, though, that plugins using this
>> port type will probably be limited to music player hosts. Also, if we
>> extrapolate this idea, we will have mp3 stream ports or things like
>> that. Think twice about whether it is a good idea.
>
> Well, I'd say non-float non-compressed data. I think ALSA's PCM sample
> formats are more than sufficient. If you're worried about third
> parties... LV2 is decentralized by design :-\

I think you'll make everyone's life much better if you just provide
utility functions (e.g. in slv2) to convert interleaved integers and
whatever to channelled floats. Constantly converting back and forth
between different plugins with different requirements is lossy (in
audio quality terms) and difficult to get right. Just do it once.
There's a reason that LADSPA, LV2, VST etc. do everything in floats.

>>> #2. Changing sample rate without re-instantiating all effects.
>>> Gapless playback when changing songs, for example, should be possible
>>> without performing black magic.
>>
>> While I see nothing wrong with supporting that in general, if I was
>> writing a music player, I'd use one sample rate/format, do processing
>> using it, and convert/decode input streams early in the flow chain.
>
> Me too actually. I don't know.

If you want glitch free playback then you have to stick to one sample  
rate at the card, in which case you may as well do the conversion  
before you start feeding plugins.

Any plugin that uses filters (i.e. pretty much anything interesting)
will have to recalculate its coefficients and throw away buffers if
you change the sample rate on it, so you'll be out of luck if you  
expect this to be smooth.

>>> #3. Some serious connection logic thing (all the "equal channels"  
>>> thing etc.).
>>> This needs a thousand flame wars and *deep* thinking.
>>
>> No idea what you mean by this.
>
> If someone is going to write that helper library (or adjust SLV2 or
> whatever), I guess we should find some reasonable conventions to
> organize and use plugins in a chain-like thing. This is damn hard, as
> Paul Davis outlined already on this mailing list, and I actually don't
> know to which degree it should be done.

It's not necessary, just intervene after each run() call, it's not  
hard and on a modern machine the cost is negligible.

>>> #4. Support for time stretching when using non real-time audio  
>>> sources.
>>
>> Why not? AFAIK this has clear uses in the "professional" audio world too.

Yeah, but not in "realtime". LV2 could of course support that, with an  
extension, but it doesn't seem like the sort of thing that has enough  
variance that a plugin mechanism is a huge win over using SRC.

>>> #5. Information about the delay time introduced by the algorithm itself,
>>> to allow syncing with video sources (for example).
>>
>> Uhm, don't we have such a thing in LV2 already? If not, I think we need
>> it. This should be useful for syncing multiple audio streams too. For
>> video sources I'd prefer to have video streams (video port type),
>> probably as an event port.

In LADSPA there's a "magic" control out port called "_latency" or
something; that should apply to LV2 as well, but I'm not sure if the
spec says so.

>>> #6. Some way for the host to make sense of the meaning of some
>>> parameters and channels, to better support global settings and  
>>> stuff.
>>
>> No idea what you mean by this. ATM, I miss instantiation stage
>> parameters though.
>
> Example: some LV2 extension tells the host which parameter is a
> "quality vs. speed" parameter in a plugin. The host can then show a
> global "quality vs. speed" parameter to the user.
>
> By "channel sense", I mean the host could know what a channel is in a
> standardized way (I see you have that already in port groups
> extension, it could be generalized to channels rather than ports).

What is a channel that is not a port/port group? Ports can be grouped
and attributed, e.g. quality vs. speed, or you can just say that by
convention QvS ports have some well-known label, in the same way that
systemic latency is indicated.

>>> #7. Global explicit initialization/finalization functions for more
>>> exotic platforms (they wouldn't harm, so why not have them).
>>
>> I still don't get what the use case for this is.
>
> Both on the host side and on the plugin side, no need for #ifdefs to
> define initialization/finalization functions and maybe support for
> exotic platforms not having them.

That's just a specification issue, it doesn't require any code. In  
order to use things like the CRT, linkers and loaders invoke global  
constructor attributes and so on, so that's just not an issue.

>>> #8. Rules to find plugins possibly platform-specific and outside of
>>> the specification; possibly one compile-time valid path.

Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Arnold Krille
Am Dienstag, 3. Juni 2008 schrieb Stefano D'Angelo:
> 2008/6/3 Arnold Krille <[EMAIL PROTECTED]>:
> > Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
> >> 2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
> >> > Well, try syncing two devices that don't share a word clock and you
> >> > will "fix" that problem with real-time time-stretching. So yes, there
> >> > is a rather practical use (but I actually don't advise syncing two
> >> > devices without a common clock) for real-time audio stretching (it's
> >> > also called a dither-buffer, but why use these algorithms when there is
> >> > rubberband and co?).
> >> I guess you mean resampling, otherwise I don't think it's physically
> >> possible to go ahead or behind in time.
> > What's the difference in this respect? Both change the number of samples,
> > don't they?
> The difference is enormous: the host has to know if the plugin does
> resampling!

Yep, that's why the plugins have to tell the host how many samples they create
from the number of input samples. (With the default of the same number of
samples...)
But the host should _never_ force a plugin to do resampling/time-stretching!
Because it opens a Pandora's box of bad quality!

> >> I'm not interested in resampling plugins, but maybe someone else is?
> > Not me, but when you start designing a plugin interface with that
> > attitude, you will lose. You _are_ interested in all possible plugins
> > because you want your interface to rule the world and be used by all
> > plugin-devs. (Regardless of whether we are talking EPAMP, LV2, LADSPA,
> > VST or gstreamer-plugins.)
> This is not true for every plugin API. By design, some are meant to be
> universal, others are not. It's a matter of choice IMHO.

Well, your first proposal was a "universal" plugin API!? Being universal is
one of the things you wanted in the first place. And it's why LV2 supports
extensions...

Arnold
-- 
visit http://www.arnoldarts.de/
---
Hi, I am a .signature virus. Please copy me into your ~/.signature and send me 
to all your contacts.
After a month or so log in as root and do a "rm -rf /". Or ask your 
administrator to do so...

