Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Stefano D'Angelo

2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:

Jeff McClintock wrote:
 I actually don't know how many plugins are LTI, but, for example, a
 lot of delays, reverbs, choruses, eq. filters, compressors, modulators
 and sound mixers should be, and that's quite enough after all.

 Yeah, it's a good optimization.  The SynthEdit plugin API supports
 inputs being flagged as 'linear'; if several such plugins are used in
 parallel they are automatically collapsed into a single instance which
 is fed the summed signals of the original plugins.  Plugins are
 collapsed only when their control inputs are the same.

 BEFORE optimization:

 [plugin]--[delay1]--
 [plugin]--[delay2]-/

 AFTER:

 [plugin]---[delay1]---
 [plugin]-/

  e.g. two parallel 100ms delays are combined.  Two different length
 delays aren't.

   This is most useful in synth patches where each voice is an
 identical parallel sub-patch.


 Jeff McClintock

How often are more than one plugin with the same control inputs used in
parallel? I was rather thinking of collapsing (or swapping) plugins in
series. They'd have to be linear and time invariant, of course.
Or maybe plugins could 'know' how to collapse themselves, sort of like
overriding Plugin::operator+(const Plugin), to use a C++ metaphor.


Well, stereo sounds passing through mono plugins is one case.
However, as Jeff describes this optimization, it is applicable when
output signals are summed, and I don't know how often that happens.
Anyway, it is another idea for optimizing processing for linear plugins,
definitely not something to discard.
This makes me think that some common basic pieces like mixers and
delay filters can have special properties which allow even more
aggressive optimization. Maybe it's worth considering how these special
blocks could be developed and used.

Stefano


Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Paul Davis
On Mon, 2007-02-19 at 14:18 +0100, Stefano D'Angelo wrote:

  How often are more than one plugin with the same control inputs used in
  parallel? I was rather thinking of collapsing (or swapping) plugins in
  series. They'd have to be linear and time invariant, of course.
  Or maybe plugins could 'know' how to collapse themselves, sort of like
  overriding Plugin::operator+(const Plugin), to use a C++ metaphor.
 
 Well, stereo sounds passing through mono plugins is one case.

nope. that's not a linear arrangement of the two mono plugins, but a
parallel arrangement. the signal going to each instance of the mono
plugin is different.

 However as Jeff describes this optimization, it is applicable when
 output signals are summed, and I don't know how often it happens.
 Anyway it is another idea to optimize processing for linear plugins,
 definitely not something to discard.
 This makes me think that some common basic pieces like mixers and
 delay filters can have special properties which involve even more
 aggressive optimization. Maybe it's worth considering how these special
 blocks could be developed and used.

you can think all you want. unless there's a plugin-host callback that
allows the plugin to determine its operating environment in huge detail,
this kind of idea is pretty impossible to make use of.

--p




[linux-audio-dev] Hunt for old list archives.

2007-02-19 Thread Marc-Olivier Barre

Hi all,

As discussed previously on this list, we are getting ready for a
migration of the three LA* lists to linuxaudio.org.

Many things are being done to make our lists better. One of them concerns
archives. As you may have noted, LA* archives found on
music.columbia.edu date back to 2002. Ico told me that there used to
be an LA list hosted somewhere else way before that (1998).

Having to do this migration, we will also migrate our archives. If
possible it would be cool to be able to merge also the older archives
in the whole.

The question is: do any of you have an idea where I could find these
archives, if they still exist...

Cheers,
__
Marc-Olivier Barre,
Markinoko.


Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Camilo Polyméris

Stefano D'Angelo wrote:

2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:

Jeff McClintock wrote:
 I actually don't know how many plugins are LTI, but, for example, a
 lot of delays, reverbs, choruses, eq. filters, compressors, 
modulators

 and sound mixers should be, and that's quite enough after all.

 Yeah, It's a good optimization.  The SynthEdit plugin API supports
 inputs being flagged as 'linear', if several such plugins are used in
 parallel they are automatically collapsed into a single instance which
 is fed the summed signals of the original plugins.  Plugins are
 collapsed only when their control inputs are the same.

 BEFORE optimization:

 [plugin]--[delay1]--
 [plugin]--[delay2]-/

 AFTER:

 [plugin]---[delay1]---
 [plugin]-/

  e.g. two parallel 100ms delays are combined.  Two different length
 delays aren't.

   This is most useful in synth patches where each voice is an
 identical parallel sub-patch.


 Jeff McClintock

How often are more than one plugin with the same control inputs used in
parallel? I was rather thinking of collapsing (or swapping) plugins in
series. They'd have to be linear and time invariant, of course.
Or maybe plugins could 'know' how to collapse themselves, sort of like
overriding Plugin::operator+(const Plugin), to use a C++ metaphor.


Well, stereo sounds passing through mono plugins is one case.
However as Jeff describes this optimization, it is applicable when
output signals are summed, and I don't know how often it happens.
Anyway it is another idea to optimize processing for linear plugins,
definitely not something to discard.
This makes me think that some common basic pieces like mixers and
delay filters can have special properties which involve even more
aggressive optimization. Maybe it's worth considering how these special
blocks could be developed and used.

Stefano



Yes, I agree. I think if one comes up with a couple of rules like that, 
it could be possible to design a system which automatically simplifies 
processing networks.

To recap:
* If two parallel filters have equal control inputs and their outputs 
are summed, replace with one filter and feed with summed inputs.
* If two serial filters are LTI, their impulse response can be added to 
one filter.
* If two serial filters are LTI, and their impulse response is unknown, 
they can be swapped.
* Filter classes could know how to merge two instances into one. Those 
instances may even cancel each other out.

* Remove filter chains which have no outputs.
etc... With a little thinking and some formal work, one could come up 
with more ideas like those.
Software like puredata and jMax (which use such common basic pieces in 
many different configurations) could benefit from such a system. I looked 
at their websites, but could not find any references to similar ideas.


Camilo


Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Stefano D'Angelo

2007/2/19, Paul Davis [EMAIL PROTECTED]:

On Mon, 2007-02-19 at 14:18 +0100, Stefano D'Angelo wrote:

  How often are more than one plugin with the same control inputs used in
  parallel? I was rather thinking of collapsing (or swapping) plugins in
  series. They'd have to be linear and time invariant, of course.
  Or maybe plugins could 'know' how to collapse themselves, sort of like
  overriding Plugin::operator+(const Plugin), to use a C++ metaphor.

 Well, stereo sounds passing through mono plugins is one case.



nope. thats not a linear arrangement of the two mono plugins, but a
parallel arrangement. the signal going to each instance of the mono
plugin is different.


I'm obscure even in Italian, I can just imagine how it sounds
in English :-)
I was not talking about that specific thing, I was talking about a
case which could benefit from some kind of parallel processing
merging.


 However as Jeff describes this optimization, it is applicable when
 output signals are summed, and I don't know how often it happens.
 Anyway it is another idea to optimize processing for linear plugins,
 definitely not something to discard.
 This makes me think that some common basic pieces like mixers and
 delay filters can have special properties which involve even more
 aggressive optimization. Maybe it's worth considering how these special
 blocks could be developed and used.

you can think all you want. unless there's a plugin-host callback that
allows the plugin to determine its operating environment in huge detail,
this kind of idea is pretty impossible to make use of.


What?
Once again: misunderstood! These optimizations require that the
wrapper (I should stop calling it this way) knows about the network
of processing objects (read: plugins) and that the latter contain
generic information on their functionality (e.g. STFT for LTI proc.
objects).
Then the wrapper takes care of optimizing the net.

Stefano


Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Stefano D'Angelo

2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:

Stefano D'Angelo wrote:
 2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:
 Jeff McClintock wrote:
  I actually don't know how many plugins are LTI, but, for example, a
  lot of delays, reverbs, choruses, eq. filters, compressors,
 modulators
  and sound mixers should be, and that's quite enough after all.
 
  Yeah, It's a good optimization.  The SynthEdit plugin API supports
  inputs being flagged as 'linear', if several such plugins are used in
  parallel they are automatically collapsed into a single instance which
  is fed the summed signals of the original plugins.  Plugins are
  collapsed only when their control inputs are the same.
 
  BEFORE optimization:
 
  [plugin]--[delay1]--
  [plugin]--[delay2]-/
 
  AFTER:
 
  [plugin]---[delay1]---
  [plugin]-/
 
   e.g. two parallel 100ms delays are combined.  Two different length
  delays aren't.
 
This is most useful in synth patches where each voice is an
  identical parallel sub-patch.
 
 
  Jeff McClintock
 
 How often are more than one plugin with the same control inputs used in
 parallel? I was rather thinking of collapsing (or swapping) plugins in
 series. They'd have to be linear and time invariant, of course.
 Or maybe plugins could 'know' how to collapse themselves, sort of like
 overriding Plugin::operator+(const Plugin), to use a C++ metaphor.

 Well, stereo sounds passing through mono plugins is one case.
 However as Jeff describes this optimization, it is applicable when
 output signals are summed, and I don't know how often it happens.
 Anyway it is another idea to optimize processing for linear plugins,
 definitely not something to discard.
 This makes me think that some common basic pieces like mixers and
 delay filters can have special properties which involve even more
 aggressive optimization. Maybe it's worth considering how these special
 blocks could be developed and used.

 Stefano


Yes, I agree. I think if one comes up with a couple of rules like that,
it could be possible to design a system which automatically simplifies
processing networks.
To recap:
* If two parallel filters have equal control inputs and their outputs
are summed, replace with one filter and feed with summed inputs.


Maybe too specific... maybe plugins with different control inputs
can also be merged; I'll have to look into this.


* If two serial filters are LTI, their impulse response can be added to
one filter.


Added = multiplied :-)


* If two serial filters are LTI, and their impulse response is unknown,
they can be swapped.


Yes, but why?


* Filter classes could know how to merge two instances into one. Those
instances may even cancel each other out.


Yes


* Remove filter chains which have no outputs.


Absolutely not: what about a GUI oscillator?


etc... With a little thinking and some formal work, one could come up
with more ideas like those.


I too think this is an interesting path to follow: NASPRO (the
wrapper) will absolutely go this way, just after wrapping LADSPA,
DSSI, LV2 (without extensions) and similar.
When LV2 extensions are implemented, work on this stuff will begin.


Software like puredata and jMax (which use such common basic pieces in
many different configurations) could benefit from such a system. I looked
at their websites, but could not find any references to similar ideas.


Good, another use for it :-)

Stefano


Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Paul Davis
On Mon, 2007-02-19 at 18:10 +0100, Stefano D'Angelo wrote:

  nope. thats not a linear arrangement of the two mono plugins, but a
  parallel arrangement. the signal going to each instance of the mono
  plugin is different.
 
 I'm obscure even in Italian, I can just imagine how it sounds
 in English :-)
 I was not talking about that specific thing, I was talking about a
 case which could benefit from some kind of parallel processing
 merging.

you don't merge or gain anything with a parallel graph. only serial
ordering is amenable to optimization, and such arrangements are very
rare.

  you can think all you want. unless there's a plugin-host callback that
  allows the plugin to determine its operating environment in huge detail,
  this kind of idea is pretty impossible to make use of.
 
 What?
 Once again: misunderstood! These optimizations require that the
 wrapper (I should stop calling it this way) knows about the network
 of processing objects (read: plugins) and that the latter contain
 generic information on their functionality (e.g. STFT for LTI proc.
 objects).
 Then the wrapper takes care of optimizing the net.

find me a host author who would want to use such a thing... managing
plugins is a central task of a host, and handing that over to some
wrapper that hides information from the host doesn't make the host's
life easier, it makes it more complex. 

--p




Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Camilo Polyméris

Stefano D'Angelo wrote:

2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:

Stefano D'Angelo wrote:
 2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:
 Jeff McClintock wrote:
  I actually don't know how many plugins are LTI, but, for 
example, a

  lot of delays, reverbs, choruses, eq. filters, compressors,
 modulators
  and sound mixers should be, and that's quite enough after all.
 
  Yeah, It's a good optimization.  The SynthEdit plugin API supports
  inputs being flagged as 'linear', if several such plugins are 
used in
  parallel they are automatically collapsed into a single instance 
which

  is fed the summed signals of the original plugins.  Plugins are
  collapsed only when their control inputs are the same.
 
  BEFORE optimization:
 
  [plugin]--[delay1]--
  [plugin]--[delay2]-/
 
  AFTER:
 
  [plugin]---[delay1]---
  [plugin]-/
 
   e.g. two parallel 100ms delays are combined.  Two different length
  delays aren't.
 
This is most useful in synth patches where each voice is an
  identical parallel sub-patch.
 
 
  Jeff McClintock
 
 How often are more than one plugin with the same control inputs 
used in

 parallel? I was rather thinking of collapsing (or swapping) plugins in
 series. They'd have to be linear and time invariant, of course.
 Or maybe plugins could 'know' how to collapse themselves, sort of like
 overriding Plugin::operator+(const Plugin), to use a C++ metaphor.

 Well, stereo sounds passing through mono plugins is one case.
 However as Jeff describes this optimization, it is applicable when
 output signals are summed, and I don't know how often it happens.
 Anyway it is another idea to optimize processing for linear plugins,
 definitely not something to discard.
 This makes me think that some common basic pieces like mixers and
 delay filters can have special properties which involve even more
 aggressive optimization. Maybe it's worth considering how these special
 blocks could be developed and used.

 Stefano


Yes, I agree. I think if one comes up with a couple of rules like that,
it could be possible to design a system which automatically simplifies
processing networks.
To recap:
* If two parallel filters have equal control inputs and their outputs
are summed, replace with one filter and feed with summed inputs.


Maybe too specific... maybe plugins with different control inputs
can also be merged; I'll have to look into this.
I meant Jeff's idea: the simplification of parallel filters. He 
mentioned the SynthEdit API using it.



* If two serial filters are LTI, their impulse response can be added to
one filter.


Added = multiplied :-)

Actually, *



* If two serial filters are LTI, and their impulse response is unknown,
they can be swapped.


Yes, but why?
That, per se, is no optimization, but moving stuff around can help 
make the other rules apply.

Like, if you have:
   eq - LTIfilter - eq -
you can first swap the first two:
   LTIfilter - eq - eq -
and then reduce the second and third:
   LTIfilter - sum_of_eqs -



* Filter classes could know how to merge two instances into one. Those
instances may even cancel each other out.


Yes


* Remove filter chains which have no outputs.


Absolutely not: what about a GUI oscillator?

Ok. Filter chains with neither outputs nor side-effects. (Like optimizing 
away pure functions.)

etc... With a little thinking and some formal work, one could come up
with more ideas like those.


I too think this is an interesting path to follow: NASPRO (the
wrapper) will absolutely go this way, just after wrapping LADSPA,
DSSI, LV2 (without extensions) and similar.
When LV2 extensions are implemented, work on this stuff will 
begin.



What's naspro?

Software like puredata and jMax (which use such common basic pieces in
many different configurations) could benefit from such a system. I looked
at their websites, but could not find any references to similar ideas.


Good, another use for it :-)

Stefano






Re: [linux-audio-dev] Re: processing plugin standard wrapper

2007-02-19 Thread Stefano D'Angelo

2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:

Stefano D'Angelo wrote:
 2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:
 Stefano D'Angelo wrote:
  2007/2/19, Camilo Polyméris [EMAIL PROTECTED]:
  Jeff McClintock wrote:
   I actually don't know how many plugins are LTI, but, for
 example, a
   lot of delays, reverbs, choruses, eq. filters, compressors,
  modulators
   and sound mixers should be, and that's quite enough after all.
  
   Yeah, It's a good optimization.  The SynthEdit plugin API supports
   inputs being flagged as 'linear', if several such plugins are
 used in
   parallel they are automatically collapsed into a single instance
 which
   is fed the summed signals of the original plugins.  Plugins are
   collapsed only when their control inputs are the same.
  
   BEFORE optimization:
  
   [plugin]--[delay1]--
   [plugin]--[delay2]-/
  
   AFTER:
  
   [plugin]---[delay1]---
   [plugin]-/
  
e.g. two parallel 100ms delays are combined.  Two different length
   delays aren't.
  
 This is most useful in synth patches where each voice is an
   identical parallel sub-patch.
  
  
   Jeff McClintock
  
  How often are more than one plugin with the same control inputs
 used in
  parallel? I was rather thinking of collapsing (or swapping) plugins in
  series. They'd have to be linear and time invariant, of course.
  Or maybe plugins could 'know' how to collapse themselves, sort of like
  overriding Plugin::operator+(const Plugin), to use a C++ metaphor.
 
  Well, stereo sounds passing through mono plugins is one case.
  However as Jeff describes this optimization, it is applicable when
  output signals are summed, and I don't know how often it happens.
  Anyway it is another idea to optimize processing for linear plugins,
  definitely not something to discard.
  This makes me think that some common basic pieces like mixers and
  delay filters can have special properties which involve even more
  aggressive optimization. Maybe it's worth considering how these special
  blocks could be developed and used.
 
  Stefano
 

 Yes, I agree. I think if one comes up with a couple of rules like that,
 it could be possible to design a system which automatically simplifies
 processing networks.
 To recap:
 * If two parallel filters have equal control inputs and their outputs
 are summed, replace with one filter and feed with summed inputs.

 Maybe too specific... maybe plugins with different control inputs
 can also be merged; I'll have to look into this.
I meant Jeff's idea: the simplification of parallel filters. He
mentioned the SynthEdit API using it.

 * If two serial filters are LTI, their impulse response can be added to
 one filter.

 Added = multiplied :-)
Actually, *


:-)


 * If two serial filters are LTI, and their impulse response is unknown,
 they can be swapped.

 Yes, but why?
That, per se, is no optimization, but moving stuff around can help
make the other rules apply.
Like, if you have:
eq - LTIfilter - eq -
you can first swap the first two:
LTIfilter - eq - eq -
and then reduce the second and third:
LTIfilter - sum_of_eqs -


But how do you know they are two eqs if you don't know their impulse response?
Maybe you mean that this happens when they are two instances of the same object?


 * Filter classes could know how to merge two instances into one. Those
 instances may even cancel each other out.

 Yes

 * Remove filter chains which have no outputs.

 Absolutely not: what about a GUI oscillator?

Ok. Filter chains with neither outputs nor side-effects. (Like optimizing
away pure functions.)


Ok.


 etc... With a little thinking and some formal work, one could come up
 with more ideas like those.

 I too think this is an interesting path to follow: NASPRO (the
 wrapper) will absolutely go this way, just after wrapping LADSPA,
 DSSI, LV2 (without extensions) and similar.
 When LV2 extensions are implemented, work on this stuff will
 begin.

What's naspro?


It's what I'm working on and what we're talking about!
I called it NASPRO, which is a recursive acronym for NASPRO
Architecture for Sound PRocessing Objects. The real naspro
(pronounced like 'nnashpro') is a typical southern Italian icing used


 Software like puredata and jMax (which use such common basic pieces in
 many different configurations) could benefit from such a system. I looked
 at their websites, but could not find any references to similar ideas.

 Good, another use for it :-)

 Stefano


[linux-audio-dev] What does it mean for jack to be rolling (newby)

2007-02-19 Thread Jonathan Ryshpan
I've been using jack and qjackctl with audacity under Linux (FC6).  It
seems to be working OK, with one approx. 1 msec xrun in a half-hour
recording session, when unfortunately my backup system kicked in. 

However there's something I don't understand about jack, even after
reading a lot of the documentation.  At the bottom of the qjackctl
window is a set of arrows, like on an audio control device.  When I
click on the arrow pointing right, the green "Stopped" in the main
window changes to "Rolling".  What does this mean?  What are the other
arrows for?  No useful info on the web. 

The above recording session was done while jack was Stopped.  Would
jack work better if it were Rolling?  

Please excuse these elementary questions.
 
Thanks - jon



Re: [linux-audio-dev] What does it mean for jack to be rolling (newby)

2007-02-19 Thread vreuzon

Jonathan Ryshpan wrote:

The above recording session was done while jack was Stopped.  Would
 jack work better if it were Rolling?


This play button refers to the jack transport functions:

The JACK Audio Connection Kit provides simple transport interfaces 
for starting, stopping and repositioning a set of clients. This 
document describes the overall design of these interfaces, their 
detailed specifications are in jack/transport.h


(from : 
http://jackit.sourceforge.net/docs/reference/html/transport-design.html

)

v




[linux-audio-dev] [completely OT] c++ and UTF-8 question

2007-02-19 Thread Julien Claassen
Hi!
  I'm sorry to ask that here, but it seems I can't get an answer anywhere else.
  Does libstdc++ support UTF-8 strings? Or is there some simple example 
code snippet somewhere to derive/modify something which would fulfill this 
need?
  Kindest regards and thanks!
   Julien


Music was my first love and it will be my last (John Miles)

 FIND MY WEB-PROJECT AT: 
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
=== AND MY PERSONAL PAGES AT: ===
http://www.juliencoder.de


Re: [linux-audio-dev] What does it mean for jack to be rolling (newby)

2007-02-19 Thread Jonathan Ryshpan
On Mon, 2007-02-19 at 13:18 -0800, vreuzon wrote:
 Jonathan Ryshpan wrote:
  The above recording session was done while jack was Stopped.  Would
  jack work better if it were Rolling?
 
 This play button refers to jack transport functions :
 
  The JACK Audio Connection Kit provides simple transport interfaces 
  for starting, stopping and repositioning a set of clients. This 
  document describes the overall design of these interfaces, their 
  detailed specifications are in jack/transport.h
 
 from : 
 http://jackit.sourceforge.net/docs/reference/html/transport-design.html

Thanks for your quick reply.  However...

I have read this, and also part of the documentation of the transport.h
File Reference to which it refers.  Rolling is not defined anywhere;
it's just used.

If the original design document has been followed, rolling would appear
to mean that the clients are passing control among themselves by means
of calls and callbacks.  However if this is the case, I don't see how
audacity could work as a jack client unless jack is rolling.  But it
does work.  So what does rolling mean?

jon





Re: [linux-audio-dev] What does it mean for jack to be rolling (newby)

2007-02-19 Thread Paul Davis
On Mon, 2007-02-19 at 13:33 -0800, Jonathan Ryshpan wrote:
 On Mon, 2007-02-19 at 13:18 -0800, vreuzon wrote:
  Jonathan Ryshpan wrote:
   The above recording session was done while jack was Stopped.  Would
   jack work better if it were Rolling?
  
  This play button refers to jack transport functions :
  
   The JACK Audio Connection Kit provides simple transport interfaces 
   for starting, stopping and repositioning a set of clients. This 
   document describes the overall design of these interfaces, their 
   detailed specifications are in jack/transport.h
  
  from : 
  http://jackit.sourceforge.net/docs/reference/html/transport-design.html
 
 Thanks for your quick reply.  However...
 
 I have read this, and also part of the documentation of the transport.h
 File Reference to which it refers.  Rolling is not defined anywhere;
 it's just used.
 
 If the original design document has been followed rolling would appear
 to mean that the clients are passing control among themselves by means
 of calls and callbacks.  However if this is the case, I don't see how
 audacity could work as a jack client unless jack is rolling.  But it
 does work.  So what does rolling mean?

most JACK clients pay no attention to JACK transport status. only those
that wish to participate in a fully synchronized start/stop/move-to
system do so, and there are few of them. clients are free to completely
ignore transport status without any side effects.

rolling means that transport-aware clients should think of themselves
as moving along a linear timeline. JACK transport info tells them where
they are.

--p




Re: [linux-audio-dev] What does it mean for jack to be rolling (newby)

2007-02-19 Thread Robin Gareus


Jonathan Ryshpan wrote:
 On Mon, 2007-02-19 at 13:18 -0800, vreuzon wrote:
 Jonathan Ryshpan wrote:
 The above recording session was done while jack was Stopped.  Would
 jack work better if it were Rolling?
 This play button refers to jack transport functions :

 The JACK Audio Connection Kit provides simple transport interfaces 
 for starting, stopping and repositioning a set of clients. This 
 document describes the overall design of these interfaces, their 
 detailed specifications are in jack/transport.h
 from : 
 http://jackit.sourceforge.net/docs/reference/html/transport-design.html
 
 Thanks for your quick reply.  However...
 
 I have read this, and also part of the documentation of the transport.h
 File Reference to which it refers.  Rolling is not defined anywhere;
 it's just used.

Rolling (like Starting and Stopped) is a state of the
jack-transport (SMPTE timecode) mechanism (see the diagram on that page).

This has nothing to do with the JACK audio-process callbacks, which are
always running!  Stopping the jack-transport is just like turning off
the motor on an old tape recorder while the amp (and patchbay) keeps
working.

every JACK application can *optionally* synchronize its play position
to jack-transport! AFAIR audacity does not support this (it has its own
motor).

#robin


Re: [linux-audio-dev] [completely OT] c++ and UTF-8 question

2007-02-19 Thread Camilo Polyméris

Julien Claassen wrote:

Hi!
  I'm sorry to ask that here, but it seems I can't get an answer anywhere else.
  Does libstdc++ support UTF-8 strings? Or is there some simple example 
code snippet somewhere to derive/modify something which would fulfill this 
need?

  Kindest regards and thanks!
   Julien


  

Not really, but it is easy to implement:
   typedef basic_string<wchar_t> string;
For UTF-16 (which is variable width) you'd have to supply your own 
char_traits, or reimplement some string functions.

Camilo



Re: [linux-audio-dev] [completely OT] c++ and UTF-8 question

2007-02-19 Thread Paul Davis
On Mon, 2007-02-19 at 19:49 -0300, Camilo Polyméris wrote:
 Julien Claassen wrote:
  Hi!
  I'm sorry to ask that here, but it seems I can't get an answer anywhere 
  else.
  Does libstdc++ support UTF-8 strings? Or is there some simple example 
  code snippet somewhere to derive/modify something which would fulfill this 
  need?
Kindest regards and thanks!
 Julien
 
  

 Not really, but it is easy to implement:
 typedef basic_string<wchar_t> string;
 For UTF-16 (which is variable width) you'd have to supply your own 
 char_traits, or reimplement some string functions.

Glib::ustring is probably what you want. Glib is not part of any
graphics toolkit - it is a low level portability library providing lots
of cross-platform and utility goodness.

--p





Re: [linux-audio-dev] [completely OT] c++ and UTF-8 question

2007-02-19 Thread Julien Claassen
Thanks both of you!
  I think I'll have a go at the char_traits anyway, for I don't want to be 
dependent on even more external libraries. I know that Glib is not a graphics 
library, but it is still often not installed on console users' systems, at 
least not the development packages. Perhaps I'll steal some code 
from Glib...
  Kindest regards and thanks
Julien


Music was my first love and it will be my last (John Miles)

 FIND MY WEB-PROJECT AT: 
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
=== AND MY PERSONAL PAGES AT: ===
http://www.juliencoder.de