Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Stefano D'Angelo
2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
> Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
>> #4. Support for time stretching when using non real-time audio sources.
>
> Time-stretching is an effect and therefore a plugin! Otherwise you will get
> _very_ bad audio, because every plugin author will implement their own
> time-stretching with widely varying results.
>
> You probably mean that the system should support that the number of
> input samples is different from the number of output samples (per plugin and
> process() run). This requires that the plugins themselves tell the host
> how many samples of output result from how many samples of input.
> Which would actually be a good thing for an API.

Exactly.

> And why is time-stretching limited to non-realtime audio?

It can be suitable for real-time processing, but it's not suitable for
audio loopbacks (real-world input connected to real-world output),
because you can't read the future and you can't have infinite memory.

Stefano
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Arnold Krille
Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
> #4. Support for time stretching when using non real-time audio sources.

Time-stretching is an effect and therefore a plugin! Otherwise you will get
_very_ bad audio, because every plugin author will implement their own
time-stretching with widely varying results.

You probably mean that the system should support that the number of
input samples is different from the number of output samples (per plugin and
process() run). This requires that the plugins themselves tell the host
how many samples of output result from how many samples of input.
Which would actually be a good thing for an API.
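A rough sketch of what such a variable-I/O process() could look like, in C. The types and names below are invented for illustration only; nothing here is actual LV2, LADSPA, or any EPAMP draft:

```c
#include <stddef.h>

/* Hypothetical variable-I/O plugin interface: process() may consume and
 * produce different frame counts, and reports both to the host. */
typedef struct {
    /* Returns frames written to `out`; sets *consumed to frames read
     * from `in`. In the default 1:1 case, produced == consumed. */
    size_t (*process)(void *handle, const float *in, size_t in_frames,
                      float *out, size_t out_max, size_t *consumed);
} variable_io_plugin;

/* Trivial 1:1 example implementation: a pass-through. */
static size_t passthrough(void *handle, const float *in, size_t in_frames,
                          float *out, size_t out_max, size_t *consumed)
{
    (void)handle;
    size_t n = in_frames < out_max ? in_frames : out_max;
    for (size_t i = 0; i < n; i++)
        out[i] = in[i];
    *consumed = n;
    return n;
}
```

A time-stretcher would simply return a different frame count than it consumed, and the host could account for the ratio.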

And why is time-stretching limited to non-realtime audio?

Arnold
-- 
visit http://www.arnoldarts.de/
---
Hi, I am a .signature virus. Please copy me into your ~/.signature and send me 
to all your contacts.
After a month or so log in as root and do a "rm -rf /". Or ask your 
administrator to do so...




Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Nedko Arnaudov
"Stefano D'Angelo" <[EMAIL PROTECTED]> writes:

While I'm still not back to lv2zyn/zynjacku development, I'll share my
thoughts. :)

> Let's stop this flame for a moment and see what LV2 misses in order to
> let me kill EPAMP and live a happier life.

Not sure you will live a happier life :P

> #1. Support for interleaved channels and non-float data
> Input and output data is often found in these formats.

A new port type is needed. Keep in mind, though, that plugins using this
port type will probably be limited to music player hosts. Also, if we
extrapolate this idea, we will have mp3 stream ports or things like
that. Think twice about whether it is a good idea.

> #2. Changing sample rate without re-instantiating all effects.
> Gapless playback when changing songs, for example, should be possible
> without performing black magic.

While I see nothing wrong with supporting that in general, if I were writing
a music player, I'd use one sample rate/format, do processing using it,
and convert/decode input streams early in the flow chain.

> #3. Some serious connection logic thing (all the "equal channels" thing etc.).
> This needs a thousand flame wars and *deep* thinking.

No idea what you mean by this.

> #4. Support for time stretching when using non real-time audio sources.

Why not? AFAIK this has clear uses in the "professional" audio world too.

> #5. Information about the delay time introduced by the algorithm itself
> to do syncing with video sources (for example).

Uhm, don't we have such a thing in LV2 already? If not, I think we need
it. This should be useful for syncing multiple audio streams too. For
video sources I'd prefer to have video streams (a video port type),
probably as an event port.

> #6. Some way for the host to make sense of the meaning of some
> parameters and channels, to better support global settings and stuff.

No idea what you mean by this. ATM, I miss instantiation stage
parameters though.

> #7. Global explicit initialization/finalization functions for more
> exotic platforms (they wouldn't harm, so why not having them).

I still don't get what the use case for this is.

> #8. Rules to find plugins possibly platform-specific and outside of
> the specification; possibly one compile-time valid path.

AFAIK, this conflicts with the "LV2 spirit". Why would one need this? If the goal
is to avoid RDF Turtle, this shouldn't be an issue with a proper helper
library for hosts. Still, such a feature could be implemented in such a
helper library.

> #9. Maybe more strict requirements on both hosts and plugins
> (especially about thread-safety).
> I see there is some indication in the core spec, but I don't know
> about extensions and/or other possible concurrency issues.

If things are not documented clearly enough, I don't see why they
shouldn't be.

> #10. Something (a library possibly) to make use of all these features
> easily from the host author's POV.

I'd choose the path of two host helper libraries: one for music-player-like
apps and one for more music-creation-oriented ones. Not sure whether
SLV2 fits the former case; AFAIK it is only used in the latter.

> Can we start discussing about these issues and see if they are solved
> already/how to implement them/how to make them better?

Sure, but IMHO things get real momentum when someone starts writing
code, not just discussing. I hope this advice will help you in your
journey ;)

-- 
Nedko Arnaudov 




Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Stefano D'Angelo
2008/6/2 Nedko Arnaudov <[EMAIL PROTECTED]>:
> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>
> While I'm still not back to lv2zyn/zynjacku development, I'll share my
> thoughts. :)

Fair enough :-)

>> Let's stop this flame for a moment and see what LV2 misses in order to
>> let me kill EPAMP and live a happier life.
>
> Not sure you will live a happier life :P

The less stuff to maintain, the better ;-)

>> #1. Support for interleaved channels and non-float data
>> Input and output data is often found in these formats.
>
> A new port type is needed. Keep in mind, though, that plugins using this
> port type will probably be limited to music player hosts. Also, if we
> extrapolate this idea, we will have mp3 stream ports or things like
> that. Think twice about whether it is a good idea.

Well, I'd say non-float non-compressed data. I think ALSA's PCM sample
formats are more than sufficient. If you're worried about third
parties... LV2 is decentralized by design :-\

>> #2. Changing sample rate without re-instantiating all effects.
>> Gapless playback when changing songs, for example, should be possible
>> without performing black magic.
>
> While I see nothing wrong with supporting that in general, if I were writing
> a music player, I'd use one sample rate/format, do processing using it,
> and convert/decode input streams early in the flow chain.

Me too actually. I don't know.

>> #3. Some serious connection logic thing (all the "equal channels" thing 
>> etc.).
>> This needs a thousand flame wars and *deep* thinking.
>
> No idea what you mean by this.

If someone is going to write that helper library (or adjust SLV2 or
whatever), I guess we should find some reasonable conventions to
organize and use plugins in a chain-like thing. This is damn hard, as
Paul Davis already outlined on this mailing list, and I actually don't
know to what degree it should be done.

>> #4. Support for time stretching when using non real-time audio sources.
>
> Why not? AFAIK this has clear uses in the "professional" audio world too.
>
>> #5. Information about the delay time introduced by the algorithm itself
>> to do syncing with video sources (for example).
>
> Uhm, don't we have such a thing in LV2 already? If not, I think we need
> it. This should be useful for syncing multiple audio streams too. For
> video sources I'd prefer to have video streams (a video port type),
> probably as an event port.
>
>> #6. Some way for the host to make sense of the meaning of some
>> parameters and channels, to better support global settings and stuff.
>
> No idea what you mean by this. ATM, I miss instantiation stage
> parameters though.

Example: some LV2 extension tells the host which parameter is a
"quality vs. speed" parameter in a plugin. The host can then show a
global "quality vs. speed" parameter to the user.

By "channel sense", I mean the host could know what a channel is in a
standardized way (I see you have that already in port groups
extension, it could be generalized to channels rather than ports).

>> #7. Global explicit initialization/finalization functions for more
>> exotic platforms (they wouldn't harm, so why not having them).
>
> I still don't get what the use case for this is.

Both on the host side and on the plugin side, no need for #ifdefs to
define initialization/finalization functions and maybe support for
exotic platforms not having them.

>> #8. Rules to find plugins possibly platform-specific and outside of
>> the specification; possibly one compile-time valid path.
>
> AFAIK, this conflicts with the "LV2 spirit". Why would one need this? If the goal
> is to avoid RDF Turtle, this shouldn't be an issue with a proper helper
> library for hosts. Still, such a feature could be implemented in such a
> helper library.

Nope. I mean there should be platform-specific rules to get the list
of directories containing shared object files and possibly there
should be a fixed path to check on each platform, known at compile
time.

>> #9. Maybe more strict requirements on both hosts and plugins
>> (especially about thread-safety).
>> I see there is some indication in the core spec, but I don't know
>> about extensions and/or other possible concurrency issues.
>
> If things are not documented clearly enough, I don't see why they
> shouldn't be.
>
>> #10. Something (a library possibly) to make use of all these features
>> easily from the host author's POV.
>
> I'd choose the path of two host helper libraries: one for music-player-like
> apps and one for more music-creation-oriented ones. Not sure whether
> SLV2 fits the former case; AFAIK it is only used in the latter.

I could help, let's just see how this discussion turns out.

>> Can we start discussing about these issues and see if they are solved
>> already/how to implement them/how to make them better?
>
> Sure, but IMHO things get real momentum when someone starts writing
> code, not just discussing. I hope this advice will help you in your
> journey ;)

At least I don't want to waste effort if I start writing

Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Wolfgang Woehl
Arnold Krille:

> And why is time-stretching limited to non-realtime audio?

  Aaannnddd wwwhhhyyy iiisss tttiiimmmeee---ssstttrrrettch 

sorry, time's up.


Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Arnold Krille
Am Montag, 2. Juni 2008 schrieb Wolfgang Woehl:
> Arnold Krille:
> > And why is time-stretching limited to non-realtime audio?
>   Aaannnddd wwwhhhyyy iiisss tttiiimmmeee---ssstttrrrettch 
> sorry, time's up.

Well, try syncing two devices that don't share a word clock and you
will "fix" that problem with real-time time-stretching. So yes, there is a
rather practical use (but I actually advise against syncing two devices
without a common clock) for real-time audio stretching (it's also called a
dither buffer, but why use these algorithms when there is rubberband and co?).

Have fun,

Arnold
-- 
visit http://www.arnoldarts.de/




Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Stefano D'Angelo
2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
> Am Montag, 2. Juni 2008 schrieb Wolfgang Woehl:
>> Arnold Krille:
>> > And why is time-stretching limited to non-realtime audio?
>>   Aaannnddd wwwhhhyyy iiisss tttiiimmmeee---ssstttrrrettch 
>> sorry, time's up.
>
> Well, try syncing two devices that don't share a word clock and you
> will "fix" that problem with real-time time-stretching. So yes, there is a
> rather practical use (but I actually advise against syncing two devices
> without a common clock) for real-time audio stretching (it's also called a
> dither buffer, but why use these algorithms when there is rubberband and co?).

I guess you mean resampling, otherwise I don't think it's physically
possible to go ahead of or behind in time.

I'm not interested in resampling plugins, but maybe someone else is?

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Steve Harris
On 2 Jun 2008, at 19:16, Stefano D'Angelo wrote:
>
>>> #1. Support for interleaved channels and non-float data
>>> Input and output data is often found in these formats.
>>
>> New port type is needed. Keep in mind though, that plugins using this
>> port type will be probably limited to music player hosts. Also if we
>> extrapolate this idea, we will have mp3 stream ports or things like
>> that. Think twice whether it is good idea.
>
> Well, I'd say non-float non-compressed data. I think ALSA's PCM sample
> formats are more than sufficient. If you're worried about third
> parties... LV2 is decentralized by design :-\

I think you'll make everyone's life much better if you just provide
utility functions (e.g. in slv2) to convert interleaved integers and
whatever to channelled floats. Constantly converting back and forth
between different plugins with different requirements is lossy (in
audio quality terms) and difficult to get right. Just do it once.
There's a reason that LADSPA, LV2, VST etc. do everything in floats.
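Such a conversion helper could look like the following C sketch. It is a hypothetical utility, not an actual SLV2 function: it deinterleaves 16-bit PCM into per-channel float buffers once, at the edge of the chain, so every plugin sees the same channelled-float format.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: convert interleaved signed 16-bit PCM into
 * separate per-channel float buffers, scaled to roughly [-1.0, 1.0). */
static void deinterleave_s16(const int16_t *in, float *const *out,
                             size_t channels, size_t frames)
{
    for (size_t f = 0; f < frames; f++)
        for (size_t c = 0; c < channels; c++)
            out[c][f] = (float)in[f * channels + c] / 32768.0f;
}
```

Doing this once per stream avoids the repeated, lossy back-and-forth conversions Steve warns about.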

>>> #2. Changing sample rate without re-instantiating all effects.
>>> Gapless playback when changing songs, for example, should be possible
>>> without performing black magic.
>>
>> While I see nothing wrong with supporting that in general, if I were
>> writing a music player, I'd use one sample rate/format, do processing
>> using it, and convert/decode input streams early in the flow chain.
>
> Me too actually. I don't know.

If you want glitch free playback then you have to stick to one sample  
rate at the card, in which case you may as well do the conversion  
before you start feeding plugins.

Any plugin that uses filters (i.e. pretty much anything interesting)
will have to recalculate its coefficients and throw away buffers if
you change the sample rate on it, so you'll be out of luck if you
expect this to be smooth.

>>> #3. Some serious connection logic thing (all the "equal channels"  
>>> thing etc.).
>>> This needs a thousand flame wars and *deep* thinking.
>>
>> No idea what you mean by this.
>
> If someone is going to write that helper library (or adjust SLV2 or
> whatever), I guess we should find some reasonable conventions to
> organize and use plugins in a chain-like thing. This is damn hard, as
> Paul Davis outlined already on this mailing list, and I actually don't
> know to which degree it should be done.

It's not necessary; just intervene after each run() call. It's not
hard, and on a modern machine the cost is negligible.
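A minimal sketch of that host-side approach, with invented types standing in for the real plugin API (this is illustration, not LV2 code):

```c
#include <stddef.h>

/* The host runs each plugin in turn and is free to adapt buffers in
 * between run() calls, so no connection logic needs to live in the
 * plugin spec. `run_fn` is a stand-in for a plugin's run(). */
typedef void (*run_fn)(float *buf, size_t frames);

static void run_chain(const run_fn *plugins, size_t n_plugins,
                      float *buf, size_t frames)
{
    for (size_t i = 0; i < n_plugins; i++) {
        plugins[i](buf, frames);
        /* ...here the host may remap, duplicate or mix channels
         * before the next plugin runs... */
    }
}

/* Example "plugin": doubles every sample. */
static void gain2(float *buf, size_t frames)
{
    for (size_t i = 0; i < frames; i++)
        buf[i] *= 2.0f;
}
```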

>>> #4. Support for time stretching when using non real-time audio  
>>> sources.
>>
>> Why not? AFAIK this has clear uses in the "professional" audio world too.

Yeah, but not in "realtime". LV2 could of course support that, with an  
extension, but it doesn't seem like the sort of thing that has enough  
variance that a plugin mechanism is a huge win over using SRC.

>>> #5. Information about the delay time introduced by the algorithm itself
>>> to do syncing with video sources (for example).
>>
>> Uhm, don't we have such a thing in LV2 already? If not, I think we need
>> it. This should be useful for syncing multiple audio streams too. For
>> video sources I'd prefer to have video streams (a video port type),
>> probably as an event port.

In LADSPA there's a "magic" control out port called "_latency" or
something; that should apply to LV2 as well, but I'm not sure if the
spec says so.
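A host could use that convention roughly like this. The structures below are mocks invented for illustration (real code would use the LADSPA/LV2 descriptors), and the port label "latency" follows common LADSPA practice:

```c
#include <stddef.h>
#include <string.h>

/* Mock of a plugin's port list; a real host would inspect the
 * LADSPA_Descriptor (or LV2 data) instead. */
typedef struct {
    const char *label;
    int is_control_output;  /* non-zero for control output ports */
} mock_port;

/* Returns the index of the latency-reporting port, or -1 if none:
 * the host scans control outputs for the well-known label. */
static int find_latency_port(const mock_port *ports, size_t n_ports)
{
    for (size_t i = 0; i < n_ports; i++)
        if (ports[i].is_control_output &&
            strcmp(ports[i].label, "latency") == 0)
            return (int)i;
    return -1;
}
```

After each run(), the host would read the delay in frames from that port and shift its timeline accordingly.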

>>> #6. Some way for the host to make sense of the meaning of some
>>> parameters and channels, to better support global settings and  
>>> stuff.
>>
>> No idea what you mean by this. ATM, I miss instantiation stage
>> parameters though.
>
> Example: some LV2 extension tells the host which parameter is a
> "quality vs. speed" parameter in a plugin. The host can then show a
> global "quality vs. speed" parameter to the user.
>
> By "channel sense", I mean the host could know what a channel is in a
> standardized way (I see you have that already in port groups
> extension, it could be generalized to channels rather than ports).

What is a channel that is not a port/port group? Ports can be grouped
and attributed, e.g. quality vs. speed, or you can just say that by
convention QvS ports have some well-known label, in the same way that
systemic latency is indicated.

>>> #7. Global explicit initialization/finalization functions for more
>>> exotic platforms (they wouldn't harm, so why not having them).
>>
>> I still don't get what the use case for this is.
>
> Both on the host side and on the plugin side, no need for #ifdefs to
> define initialization/finalization functions and maybe support for
> exotic platforms not having them.

That's just a specification issue, it doesn't require any code. In  
order to use things like the CRT, linkers and loaders invoke global  
constructor attributes and so on, so that's just not an issue.

>>> #8. Rules to find plugins possibly platform-specific and outside of
>>> the specification; possibly one compile-time valid pa

Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Arnold Krille
Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
> 2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
> > Am Montag, 2. Juni 2008 schrieb Wolfgang Woehl:
> >> Arnold Krille:
> >> > And why is time-stretching limited to non-realtime audio?
> >>
> >>   Aaannnddd wwwhhhyyy iiisss tttiiimmmeee---ssstttrrrettch 
> >> sorry, time's up.
> >
> > Well, try syncing two devices that don't share a word clock and you
> > will "fix" that problem with real-time time-stretching. So yes, there is
> > a rather practical use (but I actually advise against syncing two
> > devices without a common clock) for real-time audio stretching (it's also
> > called a dither buffer, but why use these algorithms when there is
> > rubberband and co?).
> I guess you mean resampling, otherwise I don't think it's physically
> possible to go ahead of or behind in time.

What's the difference in this respect? Both change the number of samples,
don't they?

> I'm not interested in resampling plugins, but maybe someone else is?

Not me, but when you start designing a plugin interface with that attitude,
you will lose. You _are_ interested in all possible plugins, because you want
your interface to rule the world and be used by all plugin devs. (Regardless of
whether we are talking EPAMP, LV2, LADSPA, VST or gstreamer plugins.)

Arnold
-- 
visit http://www.arnoldarts.de/




Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Nedko Arnaudov
"Stefano D'Angelo" <[EMAIL PROTECTED]> writes:

>>> #3. Some serious connection logic thing (all the "equal channels" thing 
>>> etc.).
>>> This needs a thousand flame wars and *deep* thinking.
>>
>> No idea what you mean by this.
>
> If someone is going to write that helper library (or adjust SLV2 or
> whatever), I guess we should find some reasonable conventions to
> organize and use plugins in a chain-like thing. This is damn hard, as
> Paul Davis outlined already on this mailing list, and I actually don't
> know to which degree it should be done.

Looks like a good candidate for a separate helper library. But as Paul said,
probably each player will end up with its own helper "library".

> Example: some LV2 extension tells the host which parameter is a
> "quality vs. speed" parameter in a plugin. The host can then show a
> global "quality vs. speed" parameter to the user.

In the dynparam extension there are "hints" for this. They could be used as
generic UI generation hints, as MIDI mapping hints, or as "quality
vs. speed" hints. I think this could be done for normal LV2 ports too,
i.e. assigning hint URIs to a port.

>>> #7. Global explicit initialization/finalization functions for more
>>> exotic platforms (they wouldn't harm, so why not having them).
>>
>> I still don't get what the use case for this is.
>
> Both on the host side and on the plugin side, no need for #ifdefs to
> define initialization/finalization functions and maybe support for
> exotic platforms not having them.

I don't see what you would do within those global
initialization/finalization functions. That thing needs to be something
not platform-specific. This could be made a separate thing that can be
reused for other purposes too, the same way libtool is an abstraction
over shared libraries.

>>> #8. Rules to find plugins possibly platform-specific and outside of
>>> the specification; possibly one compile-time valid path.
>>
>> AFAIK, this conflicts with the "LV2 spirit". Why would one need this? If the goal
>> is to avoid RDF Turtle, this shouldn't be an issue with a proper helper
>> library for hosts. Still, such a feature could be implemented in such a
>> helper library.
>
> Nope. I mean there should be platform-specific rules to get the list
> of directories containing shared object files and possibly there
> should be a fixed path to check on each platform, known at compile
> time.

The interface to an SLV2(-like) library should definitely allow modification
of the directory list.

-- 
Nedko Arnaudov 




Re: [LAD] Let's kill EPAMP???

2008-06-02 Thread Stefano D'Angelo
2008/6/3 Arnold Krille <[EMAIL PROTECTED]>:
> Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
>> 2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
>> > Am Montag, 2. Juni 2008 schrieb Wolfgang Woehl:
>> >> Arnold Krille:
>> >> > And why is time-stretching limited to non-realtime audio?
>> >>
>> >>   Aaannnddd wwwhhhyyy iiisss tttiiimmmeee---ssstttrrrettch 
>> >> sorry, time's up.
>> >
>> > Well, try syncing two devices that don't share a word clock and you
>> > will "fix" that problem with real-time time-stretching. So yes, there is
>> > a rather practical use (but I actually advise against syncing two
>> > devices without a common clock) for real-time audio stretching (it's also
>> > called a dither buffer, but why use these algorithms when there is
>> > rubberband and co?).
>> I guess you mean resampling, otherwise I don't think it's physically
>> possible to go ahead of or behind in time.
>
> What's the difference in this respect? Both change the number of samples,
> don't they?

The difference is enormous: the host has to know if the plugin does resampling!

>> I'm not interested in resampling plugins, but maybe someone else is?
>
> Not me, but when you start designing a plugin interface with that attitude,
> you will lose. You _are_ interested in all possible plugins, because you want
> your interface to rule the world and be used by all plugin devs. (Regardless of
> whether we are talking EPAMP, LV2, LADSPA, VST or gstreamer plugins.)

This is not true for every plugin API. By design, some are meant to be
universal, others are not. It's a matter of choice IMHO.

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Arnold Krille
Am Dienstag, 3. Juni 2008 schrieb Stefano D'Angelo:
> 2008/6/3 Arnold Krille <[EMAIL PROTECTED]>:
> > Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
> >> 2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
> >> > Well, try syncing two devices that don't share a word clock and you
> >> > will "fix" that problem with real-time time-stretching. So yes, there
> >> > is a rather practical use (but I actually advise against syncing two
> >> > devices without a common clock) for real-time audio stretching (it's
> >> > also called a dither buffer, but why use these algorithms when there is
> >> > rubberband and co?).
> >> I guess you mean resampling, otherwise I don't think it's physically
> >> possible to go ahead of or behind in time.
> > What's the difference in this respect? Both change the number of samples,
> > don't they?
> The difference is enormous: the host has to know if the plugin does
> resampling!

Yep, that's why the plugins have to tell the host how many samples they create
from the number of input samples. (With the default of the same number of
samples...)
But the host should _never_ force a plugin to do resampling/time-stretching!
Because it opens a Pandora's box of bad quality!

> >> I'm not interested in resampling plugins, but maybe someone else is?
> > Not me, but when you start designing a plugin interface with that
> > attitude, you will lose. You _are_ interested in all possible plugins,
> > because you want your interface to rule the world and be used by all
> > plugin devs. (Regardless of whether we are talking EPAMP, LV2, LADSPA,
> > VST or gstreamer plugins.)
> This is not true for every plugin API. By design, some are meant to be
> universal, others are not. It's a matter of choice IMHO.

Well, your first proposal was a "universal" plugin API!? Being universal is
one of the things you wanted in the first place. And it's why LV2 supports
extensions...

Arnold
-- 
visit http://www.arnoldarts.de/




Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3, Arnold Krille <[EMAIL PROTECTED]>:
> Am Dienstag, 3. Juni 2008 schrieb Stefano D'Angelo:
>> 2008/6/3 Arnold Krille <[EMAIL PROTECTED]>:
>> > Am Montag, 2. Juni 2008 schrieb Stefano D'Angelo:
>> >> 2008/6/2 Arnold Krille <[EMAIL PROTECTED]>:
>> >> > Well, try syncing two devices that don't share a word clock and you
>> >> > will "fix" that problem with real-time time-stretching. So yes, there
>> >> > is a rather practical use (but I actually advise against syncing two
>> >> > devices without a common clock) for real-time audio stretching (it's
>> >> > also called a dither buffer, but why use these algorithms when there
>> >> > is rubberband and co?).
>> >> I guess you mean resampling, otherwise I don't think it's physically
>> >> possible to go ahead of or behind in time.
>> > What's the difference in this respect? Both change the number of samples,
>> > don't they?
>> The difference is enormous: the host has to know if the plugin does
>> resampling!
>
> Yep, that's why the plugins have to tell the host how many samples they
> create from the number of input samples. (With the default of the same
> number of samples...)

Yes, but the host has to know how much time corresponds to a buffer,
so it must know the input and output sample rates.
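In numbers: a buffer's duration is frames divided by sample rate, so once a plugin resamples, the host needs both rates to keep its notion of time consistent. This is plain arithmetic, not any particular API:

```c
/* Duration of a buffer in seconds: frames / rate. With resampling,
 * input and output buffers of different frame counts can still cover
 * the same span of time, which the host must account for. */
static double buffer_seconds(unsigned long frames, double sample_rate)
{
    return (double)frames / sample_rate;
}
```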

> But the host should _never_ force a plugin to do resampling/time-stretching!
> Because it opens a Pandora's box of bad quality!

Of course.

>> >> I'm not interested in resampling plugins, but maybe someone else is?
>> > Not me, but when you start designing a plugin interface with that
>> > attitude, you will lose. You _are_ interested in all possible plugins,
>> > because you want your interface to rule the world and be used by all
>> > plugin devs. (Regardless of whether we are talking EPAMP, LV2, LADSPA,
>> > VST or gstreamer plugins.)
>> This is not true for every plugin API. By design, some are meant to be
>> universal, others are not. It's a matter of choice IMHO.
>
> Well, your first proposal was a "universal" plugin API!? Being universal is
> one of the things you wanted in the first place. And it's why LV2 supports
> extensions...

Who said that? I said "an effect API *for media players*".

Anyway, it doesn't matter. I'm starting to think that it's better to
adapt LV2 to this task.

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3, Steve Harris <[EMAIL PROTECTED]>:
> On 2 Jun 2008, at 19:16, Stefano D'Angelo wrote:
>>
>>>> #1. Support for interleaved channels and non-float data
>>>> Input and output data is often found in these formats.
>>>
>>> A new port type is needed. Keep in mind, though, that plugins using this
>>> port type will probably be limited to music player hosts. Also, if we
>>> extrapolate this idea, we will have mp3 stream ports or things like
>>> that. Think twice about whether it is a good idea.
>>
>> Well, I'd say non-float non-compressed data. I think ALSA's PCM sample
>> formats are more than sufficient. If you're worried about third
>> parties... LV2 is decentralized by design :-\
>
> I think you'll make everyone's life much better if you just provide
> utility functions (e.g. in slv2) to convert interleaved integers and
> whatever to channelled floats. Constantly converting back and forth
> between different plugins with different requirements is lossy (in
> audio quality terms) and difficult to get right. Just do it once.
> There's a reason that LADSPA, LV2, VST etc. do everything in floats.

Maybe you're right.

>>>> #2. Changing sample rate without re-instantiating all effects.
>>>> Gapless playback when changing songs, for example, should be possible
>>>> without performing black magic.
>>>
>>> While I see nothing wrong with supporting that in general, if I were
>>> writing a music player, I'd use one sample rate/format, do processing
>>> using it, and convert/decode input streams early in the flow chain.
>>
>> Me too actually. I don't know.
>
> If you want glitch free playback then you have to stick to one sample
> rate at the card, in which case you may as well do the conversion
> before you start feeding plugins.

Right, didn't think about that actually.

> Any plugin that uses filters (i.e. pretty much anything interesting)
> will have to recalculate its coefficients and throw away buffers if
> you change the sample rate on it, so you'll be out of luck if you
> expect this to be smooth.

By "throwing away buffers" you mean past buffers?

>>>> #3. Some serious connection logic thing (all the "equal channels"
>>>> thing etc.).
>>>> This needs a thousand flame wars and *deep* thinking.
>>>
>>> No idea what you mean by this.
>>
>> If someone is going to write that helper library (or adjust SLV2 or
>> whatever), I guess we should find some reasonable conventions to
>> organize and use plugins in a chain-like thing. This is damn hard, as
>> Paul Davis outlined already on this mailing list, and I actually don't
>> know to which degree it should be done.
>
> It's not necessary, just intervene after each run() call, it's not
> hard and on a modern machine the cost is negligible.

Sorry, I'm not following here. How exactly would you do it?

 #4. Support for time stretching when using non real-time audio
 sources.
>>>
>>> Why not? AFAIK this has clear uses in "professional" audio world too.
>
> Yeah, but not in "realtime". LV2 could of course support that, with an
> extension, but it doesn't seem like the sort of thing that has enough
> variance that a plugin mechanism is a huge win over using SRC.

Mmm... not if it is only time-stretching, but what if it is time-stretching
+ other stuff (for example pitch shifting) together? Are you gonna use two
plugins? I don't know :-\

 #5. Informations about delay time introduced by the algorithm itself
 to do syncing with video-sources (for example).
>>>
>>> Uhm, don't we have such a thing in LV2 already? If not, I think we need
>>> it. This should be useful for syncing multiple audio streams too. For
>>> video sources I'd prefer to have video streams (video port type),
>>> probably as event port.
>
> In LADSPA there's a "magic" control out port called "_latency" or
> something, that should apply to LV2 as well, but I'm not sure if the
> spec says so.

Which spec are you referring to? IIRC the LADSPA spec doesn't state
such a thing. Some convention maybe?

 #6. Some way for the host to make sense of the meaning of some
 parameters and channels, to better support global settings and
 stuff.
>>>
>>> No idea what you mean by this. ATM, I miss instantiation stage
>>> parameters though.
>>
>> Example: some LV2 extension tells the host which parameter is a
>> "quality vs. speed" parameter in a plugin. The host can, then, show a
>> global "quality vs. speed" parameter to the user.
>>
>> By "channel sense", I mean the host could know what a channel is in a
>> standardized way (I see you have that already in port groups
>> extension, it could be generalized to channels rather than ports).
>
> What is a channel that is not a port/port group? Ports can be grouped
> and attributed, as eg. quality v's speed, or you can just say that by
> convention QvS ports have some well-known label, in the same way that
> systemic latency is indicated.

I was referring to one of the interleaved channels in a multi-channel stream.
About labels, could we maybe define a set of known labels? (And isn't
that already implemented somehow in LV2? - I'm not exactly familiar
with it, as you may have noticed)
Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3, Nedko Arnaudov <[EMAIL PROTECTED]>:
> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>
 #3. Some serious connection logic thing (all the "equal channels" thing
 etc.).
 This needs a thousand flame wars and *deep* thinking.
>>>
>>> No idea what you mean by this.
>>
>> If someone is going to write that helper library (or adjust SLV2 or
>> whatever), I guess we should find some reasonable conventions to
>> organize and use plugins in a chain-like thing. This is damn hard, as
>> Paul Davis outlined already on this mailing list, and I actually don't
>> know to which degree it should be done.
>
> Looks like a good candidate for a separate helper library. But as Paul said,
> probably each player will end up with its own helper "library".

I'm waiting for an answer from Steve Harris on this :-)

>> Example: some LV2 extension tells the host which parameter is a
>> "quality vs. speed" parameter in a plugin. The host can, then, show a
>> global "quality vs. speed" parameter to the user.
>
> In dynparam extension there are "hints" for this. They could be used as
> generic UI generation hints, as MIDI mapping hints or as "quality
> vs. speed" hint. I think this could be done for normal LV2 ports too,
> i.e. assigning hint URIs with a port.

That could do the trick.

 #7. Global explicit initialization/finalization functions for more
 exotic platforms (they wouldn't harm, so why not having them).
>>>
>>> I still don't get what the use case for this is.
>>
>> Both on the host side and on the plugin side, no need for #ifdefs to
>> define initialization/finalization functions and maybe support for
>> exotic platforms not having them.
>
> I don't see what you would do within those global
> initialization/finalization functions. That thing needs to be something
> not platform specific.

Well, I for example would use them with NASPRO to fill the plugin with
all effect descriptors (I don't know yet how to do that with RDF/Turtle, but
I'll find a way).

> This can be made a separate thing that can be
> reused for other things too. The same way libtool is an abstraction over
> shared libraries.

?

 #8. Rules to find plugins possibly platform-specific and outside of
 the specification; possibly one compile-time valid path.
>>>
>>> AFAIK, this conflicts with the "LV2 spirit". Why does one need this? If
>>> the goal is to avoid RDF Turtle, this shouldn't be an issue with a proper
>>> helper library for hosts. Still, such a feature could be implemented in
>>> such a helper library.
>>
>> Nope. I mean there should be platform-specific rules to get the list
>> of directories containing shared object files and possibly there
>> should be a fixed path to check on each platform, known at compile
>> time.
>
> Interface to an SLV2(-like) library should definitely allow modification
> of directory list.

Which kind of modification?

Stefano
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Nedko Arnaudov
"Stefano D'Angelo" <[EMAIL PROTECTED]> writes:

> #7. Global explicit initialization/finalization functions for more
> exotic platforms (they wouldn't harm, so why not having them).

 I still don't get what the use case for this is.
>>>
>>> Both on the host side and on the plugin side, no need for #ifdefs to
>>> define initialization/finalization functions and maybe support for
>>> exotic platforms not having them.
>>
>> I don't see what you would do within those global
>> initialization/finalization functions. That thing needs to be something
>> not platform specific.
>
> Well, I for example would use them with NASPRO to fill the plugin with
> all effect descriptors (I don't know yet how to do that with RDF/Turtle, but
> I'll find a way).
>
>> This can be made a separate thing that can be
>> reused for other things too. The same way libtool is an abstraction over
>> shared libraries.
>
> ?

You need an abstraction for defining a global constructor/destructor in
a shared library. As Lars already said, you can use some C++ tricks (like
the constructor of a global object) for this. In my vision, such a thing is
bound to the creation of the shared library file; this is why I mentioned
libtool.
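For reference, the compiler-level trick being alluded to can be sketched like this (GCC/Clang-specific; the attribute is a compiler extension, not portable C, which is exactly the portability worry raised elsewhere in this thread):

```c
/* "Automagic" library initialization: this function runs when the
 * shared object (or executable) is loaded, before main(), with no
 * explicit call from the host. The C++ equivalent is the constructor
 * of a global object. A spec-mandated explicit init function would be
 * the portable alternative. */
static int descriptors_ready = 0;

__attribute__((constructor))
static void plugin_lib_init(void)
{
    /* e.g. build the table of effect descriptors here */
    descriptors_ready = 1;
}
```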

> #8. Rules to find plugins possibly platform-specific and outside of
> the specification; possibly one compile-time valid path.

 AFAIK, this conflicts with the "LV2 spirit". Why does one need this? If the
 goal is to avoid RDF Turtle, this shouldn't be an issue with a proper
 helper library for hosts. Still, such a feature could be implemented in
 such a helper library.
>>>
>>> Nope. I mean there should be platform-specific rules to get the list
>>> of directories containing shared object files and possibly there
>>> should be a fixed path to check on each platform, known at compile
>>> time.
>>
>> Interface to an SLV2(-like) library should definitely allow modification
>> of directory list.
>
> Which kind of modification?

 * get list of lv2 plugins (extracted from LV2_PATH by slv2)
 * modify that list (add/remove directories)
 * (maybe) get path of directory where plugin resides

-- 
Nedko Arnaudov 




Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Dmitry Baikov
On Tue, Jun 3, 2008 at 5:15 PM, Nedko Arnaudov <[EMAIL PROTECTED]> wrote:
> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>> #7. Global explicit initialization/finalization functions for more
>> exotic platforms (they wouldn't harm, so why not having them).
>
> You need an abstraction for defining a global constructor/destructor in
> a shared library. As Lars already said, you can use some C++ tricks (like
> the constructor of a global object) for this. In my vision, such a thing is
> bound to the creation of the shared library file; this is why I mentioned
> libtool.

From my (big enough) experience, 'automagic' initialization of modules
(and non-trivial variables), in addition to being not very portable,
is a REALLY BAD THING.
And I would suggest using explicit calls wherever possible.


Dmitry.


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
2008/6/3 Dmitry Baikov <[EMAIL PROTECTED]>:
> On Tue, Jun 3, 2008 at 5:15 PM, Nedko Arnaudov <[EMAIL PROTECTED]> wrote:
>> "Stefano D'Angelo" <[EMAIL PROTECTED]> writes:
>>> #7. Global explicit initialization/finalization functions for more
>>> exotic platforms (they wouldn't harm, so why not having them).
>>
>> You need an abstraction for defining a global constructor/destructor in
>> a shared library. As Lars already said, you can use some C++ tricks (like
>> the constructor of a global object) for this. In my vision, such a thing is
>> bound to the creation of the shared library file; this is why I mentioned
>> libtool.
>
> From my (big enough) experience, 'automagic' initialization of modules
> (and non-trivial variables) in addition to being not very portable
> is a REALLY BAD THING.
> And I would suggest using explicit calls wherever possible.

Could you please elaborate on that? What kind of problems can arise?

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Steve Harris
On 3 Jun 2008, at 12:53, Stefano D'Angelo wrote:
> #2. Changing sample rate without re-instantiating all effects.
> Gapless playback when changing songs, for example, should be
> possible
> without performing black magic.

 While I see nothing wrong with supporting that in general, if I were
 writing a music player, I'd use one sample rate/format, do processing
 using it, and convert/decode input streams early in the flow chain.
>>>
>>> Me too actually. I don't know.
>>
>> If you want glitch free playback then you have to stick to one sample
>> rate at the card, in which case you may as well do the conversion
>> before you start feeding plugins.
>
> Right, didn't think about that actually.
>
>> Any plugin that uses filters (ie. pretty much anything interesting)
>> will have to recalculate its coefficients and throw away buffers if
>> you change the sample rate on it, so you'll be out of luck if you
>> expect this to be smooth.
>
> By "throwing away buffers" you mean past buffers?

I mean the y(-1) etc. buffers that filters use to calculate their  
output. Actually it may not be necessary to discard them in some
cases, but you will still get glitches from the coefficient changes.

> #3. Some serious connection logic thing (all the "equal channels"
> thing etc.).
> This needs a thousand flame wars and *deep* thinking.

 No idea what you mean by this.
>>>
>>> If someone is going to write that helper library (or adjust SLV2 or
>>> whatever), I guess we should find some reasonable conventions to
>>> organize and use plugins in a chain-like thing. This is damn hard,  
>>> as
>>> Paul Davis outlined already on this mailing list, and I actually  
>>> don't
>>> know to which degree it should be done.
>>
>> It's not necessary, just intervene after each run() call, it's not
>> hard and on a modern machine the cost is negligible.
>
> Sorry, I'm not understanding here. How would you do exactly?

You don't have to make plugin A directly feed plugin B, you can have  
the host do some buffer twiddling inbetween.
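One concrete form that host-side "buffer twiddling" between two run() calls can take, assuming a mono-out plugin feeding a stereo-in one (the function and names are made up for illustration):

```c
#include <stddef.h>

/* The host adapts between plugins instead of requiring them to match:
 * plugin A produced one channel, plugin B wants two, so A's output is
 * duplicated into both of B's input buffers before B's run() is called. */
static void adapt_mono_to_stereo(const float *a_out,
                                 float *b_in_l, float *b_in_r,
                                 size_t nframes)
{
    for (size_t i = 0; i < nframes; i++)
        b_in_l[i] = b_in_r[i] = a_out[i];
}
```

A real host would often avoid the copy entirely by connecting both of B's input ports to A's output buffer.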

> #4. Support for time stretching when using non real-time audio
> sources.

 Why not? AFAIK this has clear uses in "professional" audio world  
 too.
>>
>> Yeah, but not in "realtime". LV2 could of course support that, with  
>> an
>> extension, but it doesn't seem like the sort of thing that has enough
>> variance that a plugin mechanism is a huge win over using SRC.
>
> Mmm.. not if it is only time-stretching, but if it is time-stretching
> + other stuff (for example pitch shifting) together? Gonna use two
> plugins? I don't know :-\

Well, pitch shifting is fine in plugins.

> #5. Informations about delay time introduced by the algorithm  
> itself
> to do syncing with video-sources (for example).

 Uhm, dont we have such thing in LV2 already? If not, I think we  
 need
 it. This should be useful for syncing multiple audio streams too.  
 For
 video sources I'd prefer to have video streams (video port type),
 probably as event port.
>>
>> In LADSPA there's a "magic" control out port called "_latency" or
>> something, that should apply to LV2 aswell, but I'm not sure if the
>> spec says so.
>
> Which spec are you referring to? IIRC the LADSPA spec doesn't state
> such a thing. Some convention maybe?

Yeah, that's what I was implying by "magic"; in LV2 it's an annotation on  
ports.
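A sketch of the convention being discussed: a control output port the plugin fills with its delay in samples, which the host reads back after run(). The struct and names here are illustrative only, not LADSPA's or LV2's actual API:

```c
/* Illustrative only: a plugin reporting its algorithmic delay through a
 * control output port, in the spirit of the LADSPA "latency" convention. */
typedef struct {
    float *latency_out;  /* host-connected control output port */
} Plugin;

enum { FIR_TAPS = 65 };

static void plugin_run(Plugin *p, unsigned long nframes)
{
    (void)nframes;
    /* a linear-phase FIR of N taps delays the signal by (N - 1) / 2 samples */
    if (p->latency_out)
        *p->latency_out = (FIR_TAPS - 1) / 2.0f;
    /* ... actual DSP would go here ... */
}
```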

> #6. Some way for the host to make sense of the meaning of some
> parameters and channels, to better support global settings and
> stuff.

 No idea what you mean by this. ATM, I miss instantiation stage
 parameters though.
>>>
>>> Example: some LV2 extension tells the host which parameter is a
>>> "quality vs. speed" parameter in a plugin. The host can, then,  
>>> show a
>>> global "quality vs. speed" parameter to the user.
>>>
>>> By "channel sense", I mean the host could know what a channel is  
>>> in a
>>> standardized way (I see you have that already in port groups
>>> extension, it could be generalized to channels rather than ports).
>>
>> What is a channel that is not a port/port group? Ports can be grouped
>> and attributed, as eg. quality v's speed, or you can just say that by
>> convention QvS ports have some well-known label, in the same way that
>> systemic latency is indicated.
>
> I was referring to one of the interleaved channels in a
> multi-channel stream.
> About labels, could we maybe define a set of known labels? (And isn't
> that already implemented somehow in LV2? - I'm not exactly familiar
> with it, as you may have noticed)

OK, but interleaving is just inconvenient, in many ways.

> #8. Rules to find plugins possibly platform-specific and outside  
> of
> the specification; possibly one compile-time valid path.

 AFAIK, this conflicts with the "LV2 spirit". Why does one need this? If the
 goal is to avoid RDF Turtle, this shouldn't be an issue with a proper
 helper library for hosts.

Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Paul Davis

On Tue, 2008-06-03 at 18:34 +0100, Steve Harris wrote:
> On 3 Jun 2008, at 12:53, Stefano D'Angelo wrote:

> >>>
> >>> If someone is going to write that helper library (or adjust SLV2 or
> >>> whatever), I guess we should find some reasonable conventions to
> >>> organize and use plugins in a chain-like thing. This is damn hard,  
> >>> as
> >>> Paul Davis outlined already on this mailing list, and I actually  
> >>> don't
> >>> know to which degree it should be done.
> >>
> >> It's not necessary, just intervene after each run() call, it's not
> >> hard and on a modern machine the cost is negligible.
> >
> > Sorry, I'm not understanding here. How would you do exactly?
> 
> You don't have to make plugin A directly feed plugin B, you can have  
> the host do some buffer twiddling inbetween.

this is still pretty hard steve.




Re: [LAD] Let's kill EPAMP???

2008-06-03 Thread Stefano D'Angelo
Replying to myself, lame but necessary.

Here are some concrete proposals to try to solve these issues, if it's
worth it. I am willing to help as much as I can with coding and stuff,
but if someone else is interested as well, it'd be way better.

2008/6/2 Stefano D'Angelo <[EMAIL PROTECTED]>:
> Let's stop this flame for a moment and see what LV2 misses in order to
> let me kill EPAMP and live a happier life.
>
> #1. Support for interleaved channels and non-float data
> Input and output data is often found in these formats.

As Steve Harris suggests, let's not touch this but write a bunch of
helper functions to do the dirty work of demuxing and converting.

I have them already
(http://hg.atheme.org/naspro/file/6adbc44c9678/naspro-objects/lib/util.c)
and with very little work could be adapted I think.

These should reside either in SLV2 or somewhere else (see below).

> #2. Changing sample rate without re-instantiating all effects.
> Gapless playback when changing songs, for example, should be possible
> without performing black magic.

Let's just forget about this now.

> #3. Some serious connection logic thing (all the "equal channels" thing etc.).
> This needs a thousand flame wars and *deep* thinking.

I was wondering whether we could write a "simple" "chain streamer" aimed at
media players.

Again, this could reside either in SLV2 or be in a new separate
library using SLV2. The previous demuxing code should be in the same
place as this code.

> #4. Support for time stretching when using non real-time audio sources.

I came to this conclusion about it: if combining pitch shifting and
time stretching gives better results than doing them separately, then
it makes sense to support it at the plugin level; otherwise time
stretching can be done by the host and pitch shifting by the plugin.

Now, I'm looking at phase vocoders on wikipedia
(http://en.wikipedia.org/wiki/Phase_vocoder) and that thing states:

"The time scale of the resynthesis does not have to be the same as the
time scale of the analysis, allowing for high-quality time-scale
modification of the original sound file."

I'm no expert in this stuff; does anyone know if that is true?

In that case support should be added and, IMHO, that could be done by
modifying run() to return the buffer length and giving the host a hint
about maximum buffer sizes.

I think it shouldn't be done inside an extension since it's really
core level stuff.

> #5. Informations about delay time introduced by the algorithm itself
> to do syncing with video-sources (for example).

LV2 has that already.

> #6. Some way for the host to make sense of the meaning of some
> parameters and channels, to better support global settings and stuff.

Regarding audio ports, there is the port groups extension and channels
go away since we want to forget about interleaved audio.

Talking about control ports, I think an extension could do the trick.

> #7. Global explicit initialization/finalization functions for more
> exotic platforms (they wouldn't harm, so why not having them).

I am still convinced they can't do any harm, but anyway it's not a
tragedy if you don't want them around.

> #8. Rules to find plugins possibly platform-specific and outside of
> the specification; possibly one compile-time valid path.

There are compile-time valid paths already.

I'm suggesting to state in the core spec: "look at this page for the
rules on how to find plugins", but if you don't want to, it's still no
tragedy.

> #9. Maybe more strict requirements on both hosts and plugins
> (especially about thread-safety).
> I see there is some indication in the core spec, but I don't know
> about extensions and/or other possible concurrency issues.

I trust you :-)

> #10. Something (a library possibly) to make use all of this features
> easily from the host author's POV.

Possibly some new stuff to SLV2 or an SLV2-based library, as I'm
saying on point 3.

Summing up, I'd say we need to:
- expand SLV2 to do "default chain streaming", channel demuxing and
format conversion for host authors who have no special interest in
this kind of stuff, or do it outside of SLV2 in a new SLV2-based
library;
- if appropriate, add a "number of samples" return value to the
run() callback in the core spec or, in the worst case, put an
alternative run() callback inside an extension;
- write an extension for "control in sense" (for the host to know
what's the meaning of a parameter);
- if LV2 authors want to do so, add global explicit init/fini functions
and put platform-specific rules for finding plugins outside of the
spec.
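The "number of samples" return value proposed above might look something like this; purely hypothetical, not an existing LV2 interface:

```c
/* Hypothetical time-stretch interface: the host asks for a worst-case
 * output size up front so it can preallocate buffers, and each run()
 * reports how many samples were actually produced. */
typedef struct {
    double ratio;  /* output/input length ratio; 2.0 = twice as long */
} Stretcher;

static unsigned long stretch_max_output(const Stretcher *s, unsigned long n_in)
{
    return (unsigned long)((double)n_in * s->ratio) + 1;
}

static unsigned long stretch_run(Stretcher *s, unsigned long n_in)
{
    /* a real implementation would also fill the output buffer */
    return (unsigned long)((double)n_in * s->ratio);
}
```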

I'm available for all of these tasks.

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-04 Thread Steve Harris
On 3 Jun 2008, at 18:54, Paul Davis wrote:

>
> On Tue, 2008-06-03 at 18:34 +0100, Steve Harris wrote:
>> On 3 Jun 2008, at 12:53, Stefano D'Angelo wrote:
>
>
> If someone is going to write that helper library (or adjust SLV2  
> or
> whatever), I guess we should find some reasonable conventions to
> organize and use plugins in a chain-like thing. This is damn hard,
> as
> Paul Davis outlined already on this mailing list, and I actually
> don't
> know to which degree it should be done.

 It's not necessary, just intervene after each run() call, it's not
 hard and on a modern machine the cost is negligible.
>>>
>>> Sorry, I'm not understanding here. How would you do exactly?
>>
>> You don't have to make plugin A directly feed plugin B, you can have
>> the host do some buffer twiddling inbetween.
>
> this is still pretty hard steve.

Well, yes, and no. Actually doing something is easy, doing the Right  
Thing™ is basically impossible. With LV2-style annotations, it might  
be possible to tie up L to L and R to R and so on, but beyond that  
it's essentially guesswork.

However, for a media player the host could just ignore all plugins  
that have anything other than a LR pair of ins and a LR pair of outs,  
or a 5.1 set or whatever.

You don't have to solve all the hard problems to make use of a plugin
format.

- Steve


Re: [LAD] Let's kill EPAMP???

2008-06-04 Thread Steve Harris
On 3 Jun 2008, at 19:39, Stefano D'Angelo wrote:
>> #4. Support for time stretching when using non real-time audio  
>> sources.
>
> I came to this conclusion about it: if combining together pitch
> shifting and time stretching you can get better results than doing
> things separately, it is the case to support it at a plugin level;
> otherwise time stretching can be done by the host and pitch shifting
> by the plugin.
>
> Now, I'm looking at phase vocoders on wikipedia
> (http://en.wikipedia.org/wiki/Phase_vocoder) and that thing states:
>
> "The time scale of the resynthesis does not have to be the same as the
> time scale of the analysis, allowing for high-quality time-scale
> modification of the original sound file."
>
> I'm no expert in this stuff; does anyone know if that is true?

It's true, but it's really, really hard to get right.

You don't have to use timestretching to resample audio for a music  
player; in fact you would avoid that at all costs, you just need to
resample.

> In such case support should be added and, IMHO, that could be done
> modifying the run() to return buffer length and give the host a hint
> on maximum buffer sizes.
>
> I think it shouldn't be done inside an extension since it's really
> core level stuff.

No, I disagree very strongly. Time stretching is one of a very small  
number of applications for the feature, the user is not going to want  
to insert it into a chain of effects, so just use SRC, and feed the  
plugins with the resampled material.

Having plugins being capable of outputting an arbitrary number of  
samples is a really horrible thing to deal with in a realtime  
environment, I'd want to avoid it at all costs.

Everything else you said makes sense.

- Steve


Re: [LAD] Let's kill EPAMP???

2008-06-04 Thread Stefano D'Angelo
2008/6/4 Steve Harris <[EMAIL PROTECTED]>:
> On 3 Jun 2008, at 19:39, Stefano D'Angelo wrote:
>>>
>>> #4. Support for time stretching when using non real-time audio sources.
>>
>> I came to this conclusion about it: if combining together pitch
>> shifting and time stretching you can get better results than doing
>> things separately, it is the case to support it at a plugin level;
>> otherwise time stretching can be done by the host and pitch shifting
>> by the plugin.
>>
>> Now, I'm looking at phase vocoders on wikipedia
>> (http://en.wikipedia.org/wiki/Phase_vocoder) and that thing states:
>>
>> "The time scale of the resynthesis does not have to be the same as the
>> time scale of the analysis, allowing for high-quality time-scale
>> modification of the original sound file."
>>
>> I'm no expert in this stuff; does anyone know if that is true?
>
> It's true, but it's really, really hard to get right.
>
> You don't have to use timestretching to resample audio for a music player,
> in fact you would avoid that at all costs, you just need to resample.

The question basically was about time-stretching combined with pitch
balancing (output to have the same pitch as the input). Resampling is
a related but different thing; it just happens that you can
time-stretch without balancing the pitch by resampling and not
changing the output sample rate settings.

I guess my English is getting worse lately :-)

>> In such case support should be added and, IMHO, that could be done
>> modifying the run() to return buffer length and give the host a hint
>> on maximum buffer sizes.
>>
>> I think it shouldn't be done inside an extension since it's really
>> core level stuff.
>
> No, I disagree very strongly. Time stretching is one of a very small number
> of applications for the feature, the user is not going to want to insert it
> into a chain of effects, so just use SRC, and feed the plugins with the
> resampled material.

It's true that it has probably a very small number of applications,
and I understand it can be implemented as an extension, but I don't
think it's right to just claim that "users don't want to do that" and
forget about it.

For example, I'm not sure whether not having that feature prevents or
makes it hard somehow to do stuff like DJ-style scratching or speed
changing effects with sample-level accuracy.

Is it ok for you if I write an extension for it?

> Having plugins being capable of outputting an arbitrary number of samples is
> a really horrible thing to deal with in a realtime environment, I'd want to
> avoid it at all costs.

Well, if you know the maximum output buffer size somehow, I don't see
any reason to be worried about that. Is there something I should know?

> Everything else you said makes sense.

At least :-)

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-04 Thread Lars Luthman
On Wed, 2008-06-04 at 11:46 +0200, Stefano D'Angelo wrote:
> Is it ok for you if I write an extension for it?

This is a question that should never ever be asked when talking about
LV2. =)


--ll




Re: [LAD] Let's kill EPAMP???

2008-06-04 Thread Stefano D'Angelo
2008/6/4 Lars Luthman <[EMAIL PROTECTED]>:
> On Wed, 2008-06-04 at 11:46 +0200, Stefano D'Angelo wrote:
>> Is it ok for you if I write an extension for it?
>
> This is a question that should never ever be asked when talking about
> LV2. =)

Well, apart from the technical side, I'm also interested in the
"community" side of things.

My project (NASPRO) is not only about making all plugin APIs
interoperable, but also about vision, participation, cooperation and
knowledge sharing ;-) (I like spamming so much :-P)

Stefano


Re: [LAD] Let's kill EPAMP???

2008-06-05 Thread Sampo Savolainen
On Wed, 2008-06-04 at 11:46 +0200, Stefano D'Angelo wrote:

> For example, I'm not sure whether not having that feature prevents or
> makes it hard somehow to do stuff like DJ-style scratching or speed
> changing effects with sample-level accuracy.

To do a scratching effect, you actually need a resampler and then play
the resampled data back at the original samplerate. When scratching,
both the pitch and the speed of the sound goes up and down depending on
the speed of the disc. 

To do that with digital audio you must either change the sample rate of
the data or the interface. Because you can't continuously change the
sample rate of a PC sound interface, you must resample the data.


The pitch and speed matching stuff DJs do is exactly the same. They
can only match one of the two (unless they're really lucky) because both
change with the one parameter (the speed of the disc).

Rubberband ( http://breakfastquay.com/rubberband/ ) contains a plugin
(ladspa) that can change only the pitch of an audio stream.

All of these effects have a strict 1:1 ratio between samples going in
and going out. The only effect which differs in this would be changing
the 'length' of the signal, but that is a different story altogether.
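The point that scratching is variable-speed resampling, with a non-1:1 in/out sample count, can be sketched with a toy linear-interpolation resampler (illustrative; a real DJ tool would use a proper band-limited resampler):

```c
#include <stddef.h>

/* Play `in` back at `speed` times the original rate: speed > 1 is
 * faster and higher-pitched, like spinning the disc faster. Note the
 * in/out sample counts are not 1:1 -- X seconds come out as X/speed. */
static size_t play_at_speed(const float *in, size_t n_in,
                            float *out, size_t max_out, double speed)
{
    size_t n_out = 0;
    double pos = 0.0;  /* fractional read position in the source */
    while (pos + 1.0 < (double)n_in && n_out < max_out) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        /* linear interpolation between neighbouring source samples */
        out[n_out++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        pos += speed;
    }
    return n_out;
}
```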


I hope I got your original question right. If I didn't, sorry for the
noise everyone :)

 Sampo



Re: [LAD] Let's kill EPAMP???

2008-06-05 Thread Sampo Savolainen
On Thu, 2008-06-05 at 22:35 +0300, Sampo Savolainen wrote:
> All of these effects have a strict 1:1 ratio between samples going in
> and going out. The only effect which differs in this would be changing
> the 'length' of the signal, but that is a different story altogether.

Sorry, I'm tired: the resampling does not follow the 1:1 rule at all.
This is because when you mess with how fast your record is playing, you
are in a sense messing with "time" on the disc. 

To put this in another way: you are taking X seconds of sound and
playing it back in Y seconds where X != Y.

This means that you can't do an effect like "real" scratching in
realtime without the plugin asserting some sort of
transport/playhead/speed control on the underlying media player.


.. Now, the rubberband pitch shifter IS 1:1. Unlike all the DJ stuff I
mentioned..

 * note to self: this is why you shouldn't write emails when you're
about to go to bed

 Good night,
  Sampo
