Re: [PD] puredata evolution

2007-06-04 Thread Matteo Sisti Sette
Hi,

PD is often said to be a tool meant just for prototyping.

It certainly had to be so when it was born but hey, nowadays, with the 
available cpu power of average machines... don't you think that it is no 
longer necessarily so?

I'm curious: how many of you really re-code your pd patches into something 
else when you're finished experimenting and you've reached a reasonably 
final version of what you're patching?
I don't.

I put this under the thread of pd evolution because in my opinion, in my 
wishes, in my personal vision of pd's future, one really important step 
would be to accept that it is no longer a tool for prototyping but an 
environment for developing final applications. I think pd-vanilla is the 
data-flow, audio-oriented analogue of an interpreted programming language. 
Ok, an interpreted language can be orders of magnitude slower than a 
compiled one, so, in the case of applications requiring 100% 
state-of-the-art cpu power, it can only be used for prototyping; however, 
for applications that require much less computing power than is available, 
plenty of interpreted languages are used for developing real-life 
applications.


IMHO there are just a couple of crucial things that need to be solved in 
order to make that step in PD.
One is a few bugs that make your life impossible when you're developing a 
somewhat large application (i.e. many abstractions, reused many times, and 
nested, as is needed when designing a complex system either top-down or 
bottom-up or mixed). Every time you hit CTRL+S, you're likely to be obliged 
to close PD and restart it.
And the second is that the GUI is tremendously slow.
I don't mind that it may be somewhat limited if one wants to build arbitrary 
interfaces. I just accept (and have even come to love) pd's gui, with its 
bangs, toggles, sliders, and the way they work; with appropriate programming 
(i.e. patching) they are powerful (you can use them as displays, avoiding 
manual changes, you can obtain multiple controllers of the same value that 
don't lose their coherence, etc etc).
I just take them as they are and as they work...
but the problem is: it's all so slow.
I can't believe it can't be faster and less cpu-expensive.

Reimplementing the current GUI mechanism, without any change in its 
specifications, in a faster way would imho be an enormous benefit to those 
who use PD as a development environment. Because it would encourage rather 
than discourage such practices as nesting GOP-enabled abstractions to 
keep complex interfaces manageable.

I don't know whether some of the aspects I describe are specific to the 
Windows version, as I only use PD under Windows. However, consistently with 
the idea of a reliable development environment, if this environment is 
supposed to be platform-independent, platform-specific issues should be 
solved.

I'd love to hear other opinions about all this.

Bye
m. 

 
 

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-06-03 Thread Mathieu Bouchard

On Thu, 31 May 2007, Charles Henry wrote:


>> If well done, it's also an intermediate step towards automatic threading.
>> It's important to cut hard goals into easier goals, because it reduces
>> required investment and gives quicker returns.
>
> I think that's a very good point.  It could also lead to some new
> insights into the problem as a whole, during testing.


That's an important point of Extreme Programming. Suppose you always work 
on a new project with different goals than all your previous projects. 
Then you don't have the experience necessary to design the program 
because you need to know what happens when implementing it. Therefore you 
design as you need it, you grow a design gradually so that you can use 
the experience that you gain implementing it, to redesign existing parts 
or design further parts.


> Top-down design is usually difficult because of misc problems you will 
> find later on.


Top-down design on its own doesn't work. It needs to be complemented by 
bottom-up design, but a good corporate designer can conceal that fact from 
the Inquisition and even from himself.


(Bottom-up design on its own doesn't work either)

> I am curious... what kind of changes do you think would have to be made 
> to allow this function?


I'm not that deep into it yet, so I cannot talk so much about it. I 
shouldn't be thinking that much about threading before the conference, or 
even this year at all.


 _ _ __ ___ _  _ _ ...
| Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-06-02 Thread Mathieu Bouchard

On Thu, 31 May 2007, Miller Puckette wrote:


> The great majority of DSP objects are side-effect-free and thread-safe.


On average it doesn't matter, because video is the high-bandwidth, 
high-crunching task, while audio is becoming (or already is) 
low-bandwidth, low-crunching, in comparison to the machine's capacity, 
even without using SIMD. There still isn't any pd DSP subsystem that 
carries video (there was one in jMax...).


> what's more, I don't think there's any reliable way to determine that an 
> object is threadsafe.


A reliable way to determine it is to look up a table that lists all the 
object classes that are known to be threadsafe. That's at least as 
reliable as the humans that certify the threadsafeness.
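
As a concrete illustration of such a table, a minimal C++ sketch (the
class names and the helper are invented here, not part of Pd's API):

#include <set>
#include <string>

// Whitelist of object classes certified threadsafe by humans.
static const std::set<std::string> threadsafe_classes = {
    "osc~", "phasor~", "*~", "+~", "sig~", "lop~"
};

// A scheduler would consult this before moving an object off the
// main DSP thread; anything not listed stays serialized.
bool is_threadsafe(const std::string& classname) {
    return threadsafe_classes.count(classname) != 0;
}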


> So a parallelized version of Pd would, in practice, occasionally crash 
> mysteriously.


Only if there are object classes in that list, that shouldn't be there.

> Furthermore, as new DSP objects get written new sources of crashes would 
> appear, leaving us in all likelihood in a situation where no version of 
> Pd ever emerged that was entirely free of thread-related crashes.  Not a 
> real pretty sight.


If there are any crashes that you can't debug, you can still reduce the 
amount of threadliness.


Almost all race-conditions in pd would result in wrong output instead of 
crashes. If you want to address threading issues, consider all 
race-conditions, not just crashes.



> Another possibility would be to make Pd open up several address spaces and
> run portions of the patch in them.  This was how Max/FTS worked on the ISPW.


With or without shared memory?


> Just to offer my two cents...


USA's currency is falling down nowadays.

 _ _ __ ___ _  _ _ ...
| Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-31 Thread Tim Blechmann
hi niklas,

i'm curious about your implementation:
- have you been doing some profiling of the scheduling overhead?
- which latency settings have you been using? it would be great to know
the worst-case response times of the locking synchronization ...

in general, the expressive power of dataflow languages in terms of
parallelism is really amazing, however neither pd nor nova are
general-purpose programming languages, but low-latency soft-realtime
audio programming languages, which makes a usable implementation rather
complex ...

cheers, tim

On Thu, 2007-05-31 at 02:19 +0200, Niklas Klügel wrote:
> Tim Blechmann wrote:
> > On Wed, 2007-05-30 at 12:13 +0200, Niklas Klügel wrote:
> > > > I think it depends on the application for the most part, we can't
> > > > get a generic speedup from using multiple cores (forgive me if wrong)
> > > > that would apply to every single pd program. but some types of
> > > > computations such as large ffts can be performed faster when
> > > > distributed to different cores, in which case, the code for the fft
> > > > has to be parallelized a priori.  Plus, the memory is tricky.  You can
> > > > have a memory access bottleneck, when using a shared memory resource
> > > > between multiple processors.
> > > > It's definitely a problem that is worth solving, but I'm not
> > > > suggesting to do anything about it soon.  It sounds like something
> > > > that would require a complete top-down re-design to be successful.
> > > > yikes
> > > >
> > > > Chuck
> > >
> > > I once wrote such a toolset that automatically scales up
> > > with multiple threads throughout the whole network. it worked
> > > by detecting cycles in the graph and splits of the signals while
> > > segmenting the graph in autonomous sequential parts and essentially
> > > adding some smart and lightweight locks everywhere the signals
> > > split or merged. it even reassigned threads on the lock-level to
> > > balance the workload in the graph and prevent deadlocks.
> > > the code is/was around 2.5k lines of c++ code and a bloody mess :)
> > > so, i don't know much about the internals of pd but it'd probably be
> > > possible.
> >
> > detaching ffts (i.e. canvases with larger blocksizes than 64) should be
> > rather trivial ...
> >
> > distributing a synchronous dsp graph to several threads is not trivial,
> > especially when it comes to a huge number of nodes. for small numbers of
> > nodes the approach of jackdmp, using a dynamic dataflow scheduling, is
> > probably usable, but when it comes to huge dsp graphs, the
> > synchronization overhead is probably too big, so the graph would have to
> > be split into parallel chunks which are then scheduled ...
>
> true, i didn't try big graphs, so i can't really say how it would behave.
> it was more a fun project to see if it was doable. at that time i had
> the impression that the locking and the re-assignment of threads
> was quite efficient and done only on demand, if the graph
> has more sequential parts than the number of created threads;
> i am curious how it can be achieved in a lock-free way.
>
> about the issues of explicitly threading parts of the graph (that came
> up in the discussion later on), i must say i don't get why you would want
> to do it. seeing how the numbers of cores are about to increase, i'd say
> that it is counterproductive in relation to the technological development
> of hardware, with the software running on top of it lagging behind, as
> well as the steady implicit maintenance of the software involved. from my
> point of view a graphical dataflow language has the perfect semantics to
> express the parallelism of a program in an intuitive way. therefore i'd
> say that rather than adding constructs for explicit parallelism to the
> language that is able to express them anyhow, adding constructs for
> explicit serialization of a process makes more sense.
> maybe i'm talking nonsense here, please correct me.
>
> so long...
> Niklas
 
 
 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list
--
[EMAIL PROTECTED]    ICQ: 96771783
http://tim.klingt.org

Every word is like an unnecessary stain on silence and nothingness
  Samuel Beckett


signature.asc
Description: This is a digitally signed message part
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-31 Thread Charles Henry
  I once wrote such a toolset that automatically scales up
  with multiple threads throughout the whole network. it worked
  by detecting cycles in the graph and splits of the signals while
  segmenting the graph in autonomous sequential parts and essentially
  adding some smart and lightweight locks everywhere the signals
  split or merged. it even reassigned threads on the lock-level to
  balance the workload in the graph and prevent deadlocks.
  the code is/was around 2.5k lines of c++ code and a bloody mess :)
  so, i don't know much about the internals of pd but it'd probably be
  possible.

Could I see your code?  I am not so literate with threading or
scheduling, so I would like to see if I can read it and follow along
with you.

 
 
  detaching ffts (i.e. canvases with larger blocksizes than 64) should be
  rather trivial ...
 
  distributing a synchronous dsp graph to several threads is not trivial,
  especially when it comes to a huge number of nodes. for small numbers of
  nodes the approach of jackdmp, using a dynamic dataflow scheduling, is
  probably usable, but when it comes to huge dsp graphs, the
  synchronization overhead is probably too big, so the graph would have to
  be split into parallel chunks which are then scheduled ...

This approach makes a lot of sense.  Many parts of the dsp graph could be
written as parallel subroutines, as described.

 
 true, i didn't try big graphs, so i can't really say how it would behave.
 it was more a fun project to see if it was doable. at that time i had
 the impression that the locking and the re-assignment of threads
 was quite efficient and done only on demand, if the graph
 has more sequential parts than the number of created threads;
 i am curious how it can be achieved in a lock-free way.

Well, some kinds of serial processing could be made parallel.  What
comes to mind is a topic in cognitive psychology.  Early models
assumed that processing was sequential, discrete, and serial.  A
hypothetical model of word recognition might include stages such as
perception, encoding, and identification.  But in fact, the processes
proceed continuously and in parallel, using partial information from
preceding and following stages.
Or another analogy: when playing arpeggios on guitar, you don't have
to put all of your left-hand fingers in place before playing the notes
with the right hand.  You only have to put one finger down at a time,
before playing the corresponding string.

Timing without locks would be very tricky, and would be analogous to
continuous processes.  You could run into problems where not enough
information is present for the next stage to run.  Plus, there are
some types of processing (like fft's) that rely on having the whole
block in order to run.

 about the issues of explicitly threading parts of the graph (that came
 up in the discussion later on), i must say i don't get why you would want
 to do it. seeing how the numbers of cores are about to increase, i'd say
 that it is counterproductive in relation to the technological development
 of hardware, with the software running on top of it lagging behind, as
 well as the steady implicit maintenance of the software involved. from my
 point of view a graphical dataflow language has the perfect semantics to
 express the parallelism of a program in an intuitive way. therefore i'd
 say that rather than adding constructs for explicit parallelism to the
 language that is able to express them anyhow, adding constructs for
 explicit serialization of a process makes more sense.
 maybe i'm talking nonsense here, please correct me.

I thought that pdsend and pdreceive could be used to run pd in a
separate thread (a sub-process) and send data in between.  What
Mathieu suggested is a bit simpler, but is really the same,
functionally.

Later,
Chuck


 so long...
 Niklas


 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-31 Thread Mathieu Bouchard


Niklas Klügel wrote:

about the issues of explicitly threading parts of the graph (that came 
up in the discussion later on), i must say i don't get why you would want 
to do it.


Because it's cheaper to implement.

If well done, it's also an intermediate step towards automatic threading. 
It's important to cut hard goals into easier goals, because it reduces 
required investment and gives quicker returns.


Also, I wouldn't trust automatic threading to make use of the CPUs in the 
best possible way all of the time, *especially* for real-time.


 _ _ __ ___ _  _ _ ...
| Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-31 Thread Charles Henry
 Because it's cheaper to implement.

 If well done, it's also an intermediate step towards automatic threading.
 It's important to cut hard goals into easier goals, because it reduces
 required investment and gives quicker returns.

I think that's a very good point.  It could also lead to some new
insights into the problem as a whole, during testing.  Top-down design
is usually difficult because of misc problems you will find later on.
I am curious... what kind of changes do you think would have to be
made to allow this function?

I can imagine this explicit threading as a new type of sub-patch,
which could be invoked in the same manner as [pd new_subpatch].  You
could let the original process handle all the memory allocation, and
switch on the new thread once its dependencies are satisfied.
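
A minimal sketch of that idea in C++ (all names here are invented, not
how Pd is actually structured): the main process allocates everything,
and the sub-patch's thread blocks until its dependency count hits zero.

#include <condition_variable>
#include <functional>
#include <mutex>

struct ThreadedSubpatch {
    std::function<void()> dsp_perform;  // the sub-patch's DSP routine
    int pending_deps = 0;               // upstream chains not yet finished
    std::mutex m;
    std::condition_variable cv;

    // called by each upstream chain when its output buffer is ready
    void dependency_done() {
        std::lock_guard<std::mutex> lk(m);
        if (--pending_deps == 0) cv.notify_one();
    }

    // body of the sub-patch's worker thread: wait until all inputs
    // are in place, then compute one block
    void run_block() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return pending_deps == 0; });
        dsp_perform();
    }
};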

 Also, I wouldn't trust automatic threading to make use of the CPUs in the
 best possible way all of the time, *especially* for real-time.


I would have to say... there's just no replacement for actually
measuring the performance and making adjustments.  But you'll always
be limited by the rate at which you can make the modifications yourself.
So, some kind of algorithm could be used to optimize performance, say,
genetic-algorithm style, or heuristic search.  You would
create a patch which is intended to be used on a parallel architecture and
then just sit back and let the computer try to optimize it by
actually computing a bunch of cycles and taking measurements.
Given that it's just a far-off idea (to me), it's too soon to really
discuss optimization :) but if the computer were to actually take
measurements and choose the best, I would trust the computer to do it
faster/better than I could.

   _ _ __ ___ _  _ _ ...
 | Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-31 Thread Niklas Klügel
Tim Blechmann wrote:
 hi niklas,

 i'm curious about your implementation:
 - have you been doing some profiling of the scheduling overhead?
 - which latency settings have you been using? it would be great to know
 the worst-case response times of the locking synchronization ...
   
Hey tim & chuck,

as i said i just did it for fun, so i did no profiling after everything 
looked promising enough to become uninteresting again.
i hope to get some time in the next week(s) to write the basic workings down;
the idea behind it isn't really complicated, which might imply several 
inconsistencies.
i don't think sharing the code will do any good since it is a complete 
mess, but i do think that i can write it down in a more formal way. this will
also allow for a better discussion and analysis.

so long...
Niklas

 in general, the expressive power of dataflow languages in terms of
 parallelism is really amazing, however neither pd nor nova are
 general-purpose programming languages, but low-latency soft-realtime
 audio programming languages, which makes a usable implementation rather
 complex ...

 cheers, tim

 On Thu, 2007-05-31 at 02:19 +0200, Niklas Klügel wrote:
 > Tim Blechmann wrote:
 > > On Wed, 2007-05-30 at 12:13 +0200, Niklas Klügel wrote:
 > > > > I think it depends on the application for the most part, we can't
 > > > > get a generic speedup from using multiple cores (forgive me if wrong)
 > > > > that would apply to every single pd program. but some types of
 > > > > computations such as large ffts can be performed faster when
 > > > > distributed to different cores, in which case, the code for the fft
 > > > > has to be parallelized a priori.  Plus, the memory is tricky.  You can
 > > > > have a memory access bottleneck, when using a shared memory resource
 > > > > between multiple processors.
 > > > > It's definitely a problem that is worth solving, but I'm not
 > > > > suggesting to do anything about it soon.  It sounds like something
 > > > > that would require a complete top-down re-design to be successful.
 > > > > yikes
 > > > >
 > > > > Chuck
 > > >
 > > > I once wrote such a toolset that automatically scales up
 > > > with multiple threads throughout the whole network. it worked
 > > > by detecting cycles in the graph and splits of the signals while
 > > > segmenting the graph in autonomous sequential parts and essentially
 > > > adding some smart and lightweight locks everywhere the signals
 > > > split or merged. it even reassigned threads on the lock-level to
 > > > balance the workload in the graph and prevent deadlocks.
 > > > the code is/was around 2.5k lines of c++ code and a bloody mess :)
 > > > so, i don't know much about the internals of pd but it'd probably be
 > > > possible.
 > >
 > > detaching ffts (i.e. canvases with larger blocksizes than 64) should be
 > > rather trivial ...
 > >
 > > distributing a synchronous dsp graph to several threads is not trivial,
 > > especially when it comes to a huge number of nodes. for small numbers of
 > > nodes the approach of jackdmp, using a dynamic dataflow scheduling, is
 > > probably usable, but when it comes to huge dsp graphs, the
 > > synchronization overhead is probably too big, so the graph would have to
 > > be split into parallel chunks which are then scheduled ...
 >
 > true, i didn't try big graphs, so i can't really say how it would behave.
 > it was more a fun project to see if it was doable. at that time i had
 > the impression that the locking and the re-assignment of threads
 > was quite efficient and done only on demand, if the graph
 > has more sequential parts than the number of created threads;
 > i am curious how it can be achieved in a lock-free way.
 >
 > about the issues of explicitly threading parts of the graph (that came
 > up in the discussion later on), i must say i don't get why you would want
 > to do it. seeing how the numbers of cores are about to increase, i'd say
 > that it is counterproductive in relation to the technological development
 > of hardware, with the software running on top of it lagging behind, as
 > well as the steady implicit maintenance of the software involved. from my
 > point of view a graphical dataflow language has the perfect semantics to
 > express the parallelism of a program in an intuitive way. therefore i'd
 > say that rather than adding constructs for explicit parallelism to the
 > language that is able to express them anyhow, adding constructs for
 > explicit serialization of a process makes more sense.
 > maybe i'm talking nonsense here, please correct me.
 >
 > so long...
 > Niklas


 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list
 
 --
  [EMAIL PROTECTED]    ICQ: 96771783
 http://tim.klingt.org

 Every word is like an unnecessary stain on silence and nothingness
   Samuel Beckett
   


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list

Re: [PD] puredata evolution

2007-05-31 Thread Niklas Klügel


Mathieu Bouchard wrote:
> Niklas Klügel wrote:
> > about the issues of explicitly threading parts of the graph (that
> > came up in the discussion later on), i must say i don't get why you
> > would want to do it.
>
> Because it's cheaper to implement.
>
> If well done, it's also an intermediate step towards automatic
> threading. It's important to cut hard goals into easier goals, because
> it reduces required investment and gives quicker returns.

yes, I totally agree, but I was curious about the technical aspects and
not necessarily about the development process, which naturally has to
obey these rules.

> Also, I wouldn't trust automatic threading to make use of the CPUs in
> the best possible way all of the time, *especially* for real-time.

well, afair an algorithm for the optimal solution would be in NP anyway.
if a suboptimal solution is enough, i think you can use it in a realtime
system very well; ableton live for example scales with multiple cores/cpus.

so long...
Niklas


 _ _ __ ___ _  _ _ ...
| Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list
  


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-31 Thread Miller Puckette
Just to offer my two cents...

The great majority of DSP objects are side-effect-free and thread-safe.
In the base Pd distribution, I believe the main ones which are not are 
delread~/write~ (etc), tabread~/write~ (etc), send~/receive~, throw~/catch~, 
expr~, and dac~.  If these objects were avoided (or threadsafe versions 
written), then DSP networks could be parallelized at will.
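
A toy illustration (not Pd source) of why such a pair is unsafe: the
two perform routines below share one buffer, and nothing orders thread
A's writes against thread B's reads.

constexpr int BLOCK = 64;
float shared_bus[BLOCK];  // what a send~/receive~ pair effectively shares

void send_perform(const float* in) {   // runs on thread A
    for (int i = 0; i < BLOCK; ++i) shared_bus[i] = in[i];
}

void receive_perform(float* out) {     // runs on thread B
    for (int i = 0; i < BLOCK; ++i) out[i] = shared_bus[i];
}
// Run concurrently, B may read a half-written block (wrong output),
// and with pointer-sized shared state instead of floats, a crash.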

Unfortunately, I have no idea what other objects there are (in the
many externs and libraries available) that might
be thread-unsafe, and what's more, I don't think there's any reliable way
to determine that an object is threadsafe.  So a parallelized version of
Pd would, in practice, occasionally crash mysteriously.  Furthermore, as
new DSP objects get written new sources of crashes would appear, leaving us
in all likelihood in a situation where no version of Pd ever emerged that was
entirely free of thread-related crashes.  Not a real pretty sight.

Another possibility would be to make Pd open up several address spaces and
run portions of the patch in them.  This was how Max/FTS worked on the ISPW.
It wasn't pleasant to use, though; for instance, a table on one processor
could easily get out of sync with one of the same name on another.

So it's hard to figure out what to do that would really help...

cheers
Miller

On Thu, May 31, 2007 at 06:49:44PM -0500, Charles Henry wrote:
  Because it's cheaper to implement.
 
  If well done, it's also an intermediate step towards automatic threading.
  It's important to cut hard goals into easier goals, because it reduces
  required investment and gives quicker returns.
 
 I think that's a very good point.  It could also lead to some new
 insights into the problem as a whole, during testing.  Top-down design
 is usually difficult because of misc problems you will find later on.
 I am curious... what kind of changes do you think would have to be
 made to allow this function?
 
 I can imagine this explicit threading as a new type of sub-patch,
 which could be invoked in the same manner as [pd new_subpatch].  You
 could let the original process handle all the memory allocation, and
 switch on the new thread once its dependencies are satisfied.
 
  Also, I wouldn't trust automatic threading to make use of the CPUs in the
  best possible way all of the time, *especially* for real-time.
 
 
 I would have to say... there's just no replacement for actually
 measuring the performance and making adjustments.  But you'll always
 be limited by the rate at which you can make the modifications yourself.
 So, some kind of algorithm could be used to optimize performance, say,
 genetic-algorithm style, or heuristic search.  You would
 create a patch which is intended to be used on a parallel architecture and
 then just sit back and let the computer try to optimize it by
 actually computing a bunch of cycles and taking measurements.
 Given that it's just a far-off idea (to me), it's too soon to really
 discuss optimization :) but if the computer were to actually take
 measurements and choose the best, I would trust the computer to do it
 faster/better than I could.
 
    _ _ __ ___ _  _ _ ...
  | Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
  ___
  PD-list@iem.at mailing list
  UNSUBSCRIBE and account-management - 
  http://lists.puredata.info/listinfo/pd-list
 
 
 
 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread shift8
hey chun - all true.  and i'm maybe not the best person to respond to
this one seeing as it's been months since my last dd test build but, now
that i've interjected :)

building pd can run into the same problems i described for building
desiredata because of various distro variances, i would guess (or that's
my memory playing tricks on me.  hey - it happens :)

i think my point is that compiling code from new source bases all share
the same basic issues, and if you want to be able to test out dd (or
self compile vanilla pd for that matter) you need to first figure out
the debugging methods for compiling under linux before bagging on dd. 

there is always the possibility of the latest sources checked out of the
repo having errors accidentally introduced between the time the developer
submits the changes and the time the code is checked out, but these are
usually still things that you can work around if you learn the build
process. 

even though the dd devs are ridiculously ninja-skilled (one look at the
source of desire.c gives a clue here :) it can still happen - just one of
the (albeit unlikely and mostly self-resolving) pitfalls of
team-oriented development.  you can also just wait for a bit and try
again w/ a fresh checkout. 

no offense meant and good luck!
star

On Wed, 2007-05-30 at 04:17 +0200, [*~] wrote:
 hi all:
 
 as far as compiling desiredata goes (on linux), it should require just the 
 same dependencies as building Pd. 
 atm, errors are mostly coming from running it, simply because its still very 
 much of a work in progress. 
 
 shift8 said :
  it works, but you need to be able to recognize what additional
  dependencies are needed for your machine, or code modifications for your
  distro (different versions of gcc have different ideas of what
  constitutes a build error, different versions of linked-in external shared
  libs are a big one too) - generally this is either discovered through
  examining compile-time errors and runtime errors...
  
  it takes some work to get a functional build, but that is the nature of
  dev code, especially dev code from source repositories under active
  development.
  
  the currently implemented features are very compelling if you can get
  past the hurdles of getting a build, and all of the built-in objects are
  functional so you can do some patching with it.
  
 
 yes, once its built, all objects/externals should work, as they are 
 compatible with Pd, except those involving GUI/tk. as far as patching goes, 
 there are still a few main problems that need to be solved, namely GOP and 
 optimized patch loading/updating.  
 
  i'd say give it another try - a good and compelling way to get knowledge
  of gcc, linking, etc. etc. too.
  
  the fine folks on #desiredata are very helpful for people attempting
  builds.
  
 
 the problem with desiredata so far is that both matju and i have been on and 
 off with its development (because of other commitments), so it has been very 
 slow at times. however, we seem to be around these days, so hopefully 
 we will make some good progress on it again soon. 
 
  regards - 
  star
  
  On Tue, 2007-05-29 at 10:35 +0200, Damian Stewart wrote:
   Chris McCormick wrote:
   
Yeah, I agree completely on all counts. Sometimes really great software
comes out of forks. DesireData looks really interesting, and I know that
nova isn't a fork, but it looks interesting too. Can't wait until some
of these cool bits of software reach maturity (same goes for Pd)!
   
   i've never been able to get DesireData to work...
 
 yeah, me too;)
 
 
   
  -- 
  Mechanize something idiosyncratic.
  
  
  
  ___
  PD-list@iem.at mailing list
  UNSUBSCRIBE and account-management - 
  http://lists.puredata.info/listinfo/pd-list
  
 
-- 
Mechanize something idiosyncratic.



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread [*~]
shift8 said :
 hey chun - all true.  and i'm maybe not the best person to respond to
 this one seeing as it's been months since my last dd test build but, not
 that i've interjected :)

sure, its also been months since i worked on dd too:) but i have started 
working on it again these days and have added a few things. one of which is 
adjustable mouse pointer sensitivity, so one can quickly change how far 
away, or how accurately, an outlet/inlet can be hilited. another thing i am 
working on right now is keyboard-controlled patching, so that i don't have to 
reach for the mouse one million times a minute;)

 
 building pd can run into the same problems i described for building
 desiredata because of various distro variances, i would guess (or that's
 my memory playing tricks on me.  hey - it happens :)
 
 i think my point is that compiling code from new source bases all share
 the same basic issues, and if you want to be able to test out dd (or
 self compile vanilla pd for that matter) you need to first figure out
 the debugging methods for compiling under linux before bagging on dd. 
 

yes, another problem is that i might be thinking its easy to compile because it 
works on my laptop, whereas, like you say, every distro is different to some 
degree. so, i guess until more people start to try it out, 
we can't have a more objective view on things. 

 there is always the possibility of the latest sources checked out of the
 repo having errors accidentally introduced between the time the developer
 submits the changes and the time the code is checked out, but these are
 usually still things that you can work around if you learn the build
 process. 
 

yes, i guess once a person starts to follow any kind of experimental 
code/project, keeping up to date would be essential. 

 even though the dd devs are ridiculously ninja-skilled (one look at the
 source of desire.c gives a clue here :) it can still happen - just one of
 the (albeit unlikely and mostly self-resolving) pitfalls of
 team-oriented development.  you can also just wait for a bit and try
 again w/ a fresh checkout. 
 

i don't know much about desire.c myself, my part of dd so far has been on 
desire.tk mostly.

 no offense meant and good luck!

sure, thanks!

chun

 star
 
 On Wed, 2007-05-30 at 04:17 +0200, [*~] wrote:
  hi all:
  
  as far as compiling desiredata goes (on linux), it should require just the 
  same dependencies as building Pd. 
  atm, errors are mostly coming from running it, simply because its still 
  very much of a work in progress. 
  
  shift8 said :
   it works, but you need to be able to recognize what additional
   dependencies are needed for your machine, or code modifications for your
   distro (different versions of gcc have different ideas of what
   constitutes a build error, different versions of linked-in external shared
   libs are a big one too) - generally this is either discovered through
   examining compile-time errors and runtime errors...
   
   it takes some work to get a functional build, but that is the nature of
   dev code, especially dev code from source repositories under active
   development.
   
   the currently implemented features are very compelling if you can get
   past the hurdles of getting a build, and all of the built-in objects are
   functional so you can do some patching with it.
   
  
   yes, once its built, all objects/externals should work, as they are 
   compatible with Pd, except those involving GUI/tk. as far as patching 
   goes, there are still a few main problems that need to be solved, 
   namely GOP and optimized patch loading/updating.  
  
    i'd say give it another try - a good and compelling way to get knowledge
    of gcc, linking, etc. etc. too.
   
   the fine folks on #desiredata are very helpful for people attempting
   builds.
   
  
   the problem with desiredata so far is that both matju and i have been on 
   and off with its development (because of other commitments), so it has been 
   very slow at times. however, we seem to be around these days, so hopefully 
   we will make some good progress on it again soon. 
  
   regards - 
   star
   
   On Tue, 2007-05-29 at 10:35 +0200, Damian Stewart wrote:
Chris McCormick wrote:

 Yeah, I agree completely on all counts. Sometimes really great 
 software
 comes out of forks. DesireData looks really interesting, and I know 
 that
 nova isn't a fork, but it looks interesting too. Can't wait until some
 of these cool bits of software reach maturity (same goes for Pd)!

i've never been able to get DesireData to work...
  
  yeah, me too;)
  
  

   -- 
   Mechanize something idiosyncratic.
   
   
   
   ___
   PD-list@iem.at mailing list
   UNSUBSCRIBE and account-management - 
   http://lists.puredata.info/listinfo/pd-list
   
  
 -- 
 Mechanize something idiosyncratic.
 
 
 
 ___
 

Re: [PD] puredata evolution

2007-05-30 Thread Niklas Klügel
Charles Henry wrote:
 I think it depends on the application for the most part, we can't
 get a generic speedup from using multiple cores (forgive me if wrong)
 that would apply to every single pd program. but some types of
 computations such as large ffts can be performed faster when
 distributed to different cores, in which case, the code for the fft
 has to be parallelized a priori.  Plus, the memory is tricky.  You can
 have a memory access bottleneck, when using a shared memory resource
 between multiple processors.
 It's definitely a problem that is worth solving, but I'm not
 suggesting to do anything about it soon.  It sounds like something
 that would require a complete top-down re-design to be successful.
 yikes

 Chuck

   
I once wrote such a toolset that automatically scales up
with multiple threads throughout the whole network. it worked
by detecting cycles in the graph and splits of the signals while
segmenting the graph in autonomous sequential parts and essentially
adding some smart and lightweight locks everywhere the signals
split or merged. it even reassigned threads on the lock-level to
balance the workload in the graph and prevent deadlocks.
the code is/was around 2.5k lines of c++ code and a bloody mess :)
so, i don't know much about the internals of pd but it'd probably be
possible.
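
(a rough C++ sketch of the merge-point idea, reconstructed from the
description above rather than from the original code; all names are
invented:)

#include <atomic>
#include <functional>

// each point where signals merge gets a lightweight counter; the last
// producer chain to arrive runs the downstream sequential chain on its
// own thread, so work rebalances at the merge points without a
// central scheduler.
struct MergeNode {
    std::atomic<int> remaining;         // producer chains still running
    std::function<void()> downstream;   // sequential chain after the merge

    explicit MergeNode(int producers) : remaining(producers) {}

    void producer_done() {
        if (remaining.fetch_sub(1) == 1) downstream();
    }
};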

so long...
Niklas

 On 5/29/07, Hans-Christoph Steiner [EMAIL PROTECTED] wrote:
   
 That is a tough problem.  On a basic level, Pd is ready right now for
 two processors since it uses two separate processes: pd and pd-gui.
 But the pd process does the heavy lifting, so it's a bit uneven.

 As for taking advantage of multiple cores, that is a lot more
 complicated.  Max/MSP does have some support for threading
 (Overdrive mode), but it seems to me that it is a hack.  It does
 work, but it often leads to programs that are very hard to debug
 since it is difficult to handle the non-determinacy of the situation.

 .hc

 On May 29, 2007, at 1:32 PM, Phil Stone wrote:

 
 This has been a fascinating thread about the direction of PD.

 I've been thinking about parallelism and PD as multi-core processors
 become common.  How hard would it be to make PD able to take advantage
 of parallel architecture?  I'm guessing that it is decidedly
 non-trivial, as lack of threading is already an issue in contention
 between the GUI and audio processing.

 Without some support for parallelism, PD could be going as fast as it
 will ever go -- the trend seems to be that CPU speeds will not be
 climbing much (at least not dramatically like they have until now),
 and
 increasing numbers of cores will be the path to greater speed and
 power.

 Is there any hope in this direction?


 Phil Stone



 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - http://lists.puredata.info/
 listinfo/pd-list
   

 
 

 If you are not part of the solution, you are part of the problem.



 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list

 

 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list
   


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Tim Blechmann
On Wed, 2007-05-30 at 12:13 +0200, Niklas Klügel wrote:
> > I think it depends on the application for the most part, we can't
> > get a generic speedup from using multiple cores (forgive me if wrong)
> > that would apply to every single pd program. but some types of
> > computations such as large ffts can be performed faster when
> > distributed to different cores, in which case, the code for the fft
> > has to be parallelized a priori.  Plus, the memory is tricky.  You can
> > have a memory access bottleneck, when using a shared memory resource
> > between multiple processors.
> > It's definitely a problem that is worth solving, but I'm not
> > suggesting to do anything about it soon.  It sounds like something
> > that would require a complete top-down re-design to be successful.
> > yikes
> >
> > Chuck
>
> I once wrote such a toolset that automatically scales up
> with multiple threads throughout the whole network. it worked
> by detecting cycles in the graph and splits of the signals while
> segmenting the graph in autonomous sequential parts and essentially
> adding some smart and lightweight locks everywhere the signals
> split or merged. it even reassigned threads on the lock-level to
> balance the workload in the graph and prevent deadlocks.
> the code is/was around 2.5k lines of c++ code and a bloody mess :)
> so, i don't know much about the internals of pd but it'd probably be
> possible. 

detaching ffts (i.e. canvases with larger blocksizes than 64) should be
rather trivial ... 

distributing a synchronous dsp graph to several threads is not trivial,
especially when it comes to a huge number of nodes. for small numbers of
nodes the approach of jackdmp, using a dynamic dataflow scheduling, is
probably usable, but when it comes to huge dsp graphs, the
synchronization overhead is probably too big, so the graph would have to
be split into parallel chunks which are then scheduled ...
and of course, an implementation has to be lockfree, if it should be
usable in low-latency environments ...

cheers, tim

--
[EMAIL PROTECTED]    ICQ: 96771783
http://tim.klingt.org

Your mind will answer most questions if you learn to relax and wait
for the answer.
  William S. Burroughs


signature.asc
Description: This is a digitally signed message part
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Damian Stewart
Charles Henry wrote:
 I think it depends on the application for the most part, we can't
 get a generic speedup from using multiple cores (forgive me if wrong)
 that would apply to every single pd program. but some types of
 computations such as large ffts can be performed faster when
 distributed to different cores, in which case, the code for the fft

in the general case of this example (forking threads when appropriate to 
calculate single dsp units) you end up having to have a barrier at the end 
of each fork to collect up all the results and then pass them on to the 
next unit in the dsp chain.

now, i don't know any of the details of the pd rendering engine, but it 
would seem to me that the best way to deal with multithreading would be 
based around a particular algorithm for sorting the dsp chain. we calculate 
some kind of dependency tree with nodes at each point where two or more 
audio lines meet, then, in some kind of a reverse breadth-first traversal, 
process non-dependent edges in parallel.

say we have a patch like this:

   [phasor~]
   |
[sig~ 1] [osc~ 20][*~ 200]
|||
[+~ 5]   [osc~] [sig~ 10] [osc~]
|  __|  |  ___|
|  ||  |
[*~][*~]
|  _|
|  |
[+~]
|\
| [*~ 2]
|/
[dac 1]

so we build an acyclic dependency graph like this, branching wherever a 
signal line splits or merges:

[dac]
|\
| 0
|/
1
|  \
2   3
|\  |\
4 5 6 7

in this case the edge 2-4 is the partial chain [sig~ 1]--[+~ 5], edge 2-5 
is the partial chain [osc~ 20]--[osc~], and so on.

so: we need a queue of partial chains, represented by edges of the 
dependency tree, and a collection of worker threads which pull partial 
chains off the queue and process them.

using the example above, we push edges 2-4, 2-5, 3-6, and 3-7 on to our 
process queue and process them in parallel. once both 2-4 and 2-5 have been 
processed, ie all dependencies for node 2 have been satisfied, we can 
push 1-2 on to our process queue. once both 3-6 and 3-7 have been 
processed, we can push 1-3 on to our process queue. once both 1-2 and 1-3 
have been processed, we push [dac]-1 and 0-1 on to our queue. once 0-1 is 
done we push [dac]-0 on to our queue.

the only bits of shared memory here are the partial chain queue and the 
buffers at each node where splitting or merging has to happen.
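
a sketch of that queue in C++ (invented types, just to pin the idea
down):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

struct PartialChain {
    std::function<void()> process;  // run the objects along this edge
    PartialChain* parent;           // the chain that depends on us
    int unmet_deps;                 // child edges not yet processed
};

std::queue<PartialChain*> ready;    // the shared partial-chain queue
std::mutex qm;
std::condition_variable qcv;

void push_ready(PartialChain* c) {  // seed this with the leaf edges
    { std::lock_guard<std::mutex> lk(qm); ready.push(c); }
    qcv.notify_one();
}

void worker() {                     // body of each worker thread
    for (;;) {
        PartialChain* c;
        {
            std::unique_lock<std::mutex> lk(qm);
            qcv.wait(lk, [] { return !ready.empty(); });
            c = ready.front(); ready.pop();
        }
        c->process();
        // satisfy the parent's dependency; queue it once complete
        std::lock_guard<std::mutex> lk(qm);
        if (c->parent && --c->parent->unmet_deps == 0) {
            ready.push(c->parent);
            qcv.notify_one();
        }
    }
}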

does this make sense?

hmm. methinks this should be on pd-dev. someone care to cc? i'm not 
subscribed to pd-dev atm.

-- 
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Damian Stewart
Niklas Klügel wrote:

 I once wrote such a toolset that automatically scales up
 with multiple threads throughout the whole network. it worked
 by detecting cycles in the graph and splits of the signals while
 segmenting the graph in autonomous sequential parts and essentially
 adding some smart and lightweight locks everywhere the signals
 split or merged. it even reassigned threads on the lock-level to

yeah, so kind of what i said...

-- 
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Damian Stewart
Tim Blechmann wrote:

 and of course, an implementation has to be lockfree, if it should be
 usable in low-latency environments ...

mm? why so?

-- 
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Tim Blechmann
On Wed, 2007-05-30 at 13:59 +0200, Damian Stewart wrote:
 Tim Blechmann wrote:
 
  and of course, an implementation has to be lockfree, if it should be
  usable in low-latency environments ...
 
 mm? why so?

if you work with 64 samples at 44100 Hz, the audio hardware will send
you an interrupt roughly every 1.45 ms ... on modern operating systems, you
have a scheduler granularity of 1000 Hz, so depending on the dispatcher of
your os, a suspend/resume cycle can take up to 1 ms ... 

basically, blocking synchronization and low-latency audio are mutually
exclusive ...
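
(for concreteness, the arithmetic behind those numbers:

    64 samples / 44100 Hz      ~= 1.45 ms per block
    one 1000 Hz scheduler tick  = 1.00 ms

so a single suspend/resume can eat most of a block's deadline.)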

tim 

--
[EMAIL PROTECTED]    ICQ: 96771783
http://tim.klingt.org

A paranoid is a man who knows a little of what's going on.
  William S. Burroughs


signature.asc
Description: This is a digitally signed message part
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Tim Blechmann
 say we have a patch like this:
 
[phasor~]
|
 [sig~ 1] [osc~ 20][*~ 200]
 |||
 [+~ 5]   [osc~] [sig~ 10] [osc~]
 |  __|  |  ___|
 |  ||  |
 [*~][*~]
 |  _|
 |  |
 [+~]
 |\
 | [*~ 2]
 |/
 [dac 1]
 
 so we build an acyclic dependency graph like this, branching wherever a 
 signal line splits or merges:
 
 [dac]
 |\
 | 0
 |/
 1
 |  \
 2   3
 |\  |\
 4 5 6 7
 
 in this case the edge 2-4 is the partial chain [sig~ 1]--[+~ 5], edge 2-5 
 is the partial chain [osc~ 20]--[osc~], and so on.
 
 so: we need a queue of partial chains, represented by edges of the 
 dependency tree, and a collection of worker threads which pull partial 
 chains off the queue and process them.
 
 using the example above, we push edges 2-4, 2-5, 3-6, and 3-7 on to our 
 process queue and process them in parallel. once both 2-4 and 2-5 have been 
 processed, ie all dependencies for node 2 have been satisfied, we can 
 push 1-2 on to our process queue. once both 3-6 and 3-7 have been 
 processed, we can push 1-3 on to our process queue. once both 1-2 and 1-3 
 have been processed, we push [dac]-1 and 0-1 on to our queue. once 0-1 is 
 done we push [dac]-0 on to our queue.
 
 the only bits of shared memory here are the partial chain queue and the 
 buffers at each node where splitting or merging has to happen.
 
 does this make sense?

more or less:
both pd and nova compile the dsp graph to a chain (a topologically sorted
graph), which is then executed by a rate-monotonic scheduler driven by
the audio hardware. this is done for performance reasons (especially
caching!) ... afaict, the performance benefits over a dynamic scheduling
are quite significant ... for a dual-processor architecture, maintaining
a process queue is probably too much of an overhead ...

and of course, instead of using shared memory, the target port would
have to fetch the data from the source ports in order to avoid
locks ...
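
a C++ sketch of that compile-to-chain step (invented types; both pd and
nova have their own real code for this):

#include <functional>
#include <queue>
#include <vector>

struct DspNode {
    std::function<void()> perform;  // e.g. the inner loop of osc~
    std::vector<int> outs;          // downstream node indices
    int indegree = 0;               // upstream nodes not yet scheduled
};

// kahn-style topological sort: flatten the dsp graph into one ordered
// chain of perform routines, walked once per 64-sample block
std::vector<int> compile_chain(std::vector<DspNode>& g) {
    std::queue<int> q;
    for (int i = 0; i < (int)g.size(); ++i)
        if (g[i].indegree == 0) q.push(i);
    std::vector<int> chain;
    while (!q.empty()) {
        int n = q.front(); q.pop();
        chain.push_back(n);
        for (int m : g[n].outs)
            if (--g[m].indegree == 0) q.push(m);
    }
    return chain;
}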

tim 

--
[EMAIL PROTECTED]    ICQ: 96771783
http://tim.klingt.org

The aim of education is the knowledge, not of facts, but of values
  William S. Burroughs


signature.asc
Description: This is a digitally signed message part
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Chris McCormick
On Wed, May 30, 2007 at 01:45:00PM +0200, Tim Blechmann wrote:
 detaching ffts (i.e. canvases with larger blocksizes than 64) should be
 rather trivial ... 
 
 distributing a synchronous dsp graph to several threads is not trivial,
 especially when it comes to a huge number of nodes. for small numbers of
 nodes the approach of jackdmp, using a dynamic dataflow scheduling, is
 probably usable, but when it comes to huge dsp graphs, the
 synchronization overhead is probably too big, so the graph would have to
 be split into parallel chunks which are then scheduled ...
 and of course, an implementation has to be lockfree, if it should be
 usable in low-latency environments ...

The logical extension of the current trend in processors (Core 2 Duo,
Cell processor) seems to be greater and greater parallelism. Maybe this
is completely naïve, but once we reach the situation where we have more
cores/vector processors than we do nodes in our graph, can't we just
assign one node to each processor and wait for the slowest one to
finish, given that even the slowest one computes much, much faster than
our smallest blocksize requires? I don't think we are too far off from
having 4096 processors on one chip.

Speaking of that: sequential programming is going to start sucking really
badly when that happens. Concurrent languages like Erlang, Parallel
Haskell, and Puredata (if it adapts fast enough) are going to rule.

Best,

Chris.

---
http://mccormick.cx


signature.asc
Description: Digital signature
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Niklas Klügel
Tim Blechmann wrote:
> On Wed, 2007-05-30 at 12:13 +0200, Niklas Klügel wrote:
> > > I think it depends on the application for the most part, we can't
> > > get a generic speedup from using multiple cores (forgive me if wrong)
> > > that would apply to every single pd program. but some types of
> > > computations such as large ffts can be performed faster when
> > > distributed to different cores, in which case, the code for the fft
> > > has to be parallelized a priori.  Plus, the memory is tricky.  You can
> > > have a memory access bottleneck, when using a shared memory resource
> > > between multiple processors.
> > > It's definitely a problem that is worth solving, but I'm not
> > > suggesting to do anything about it soon.  It sounds like something
> > > that would require a complete top-down re-design to be successful.
> > > yikes
> > >
> > > Chuck
> >
> > I once wrote such a toolset that automatically scales up
> > with multiple threads throughout the whole network. it worked
> > by detecting cycles in the graph and splits of the signals while
> > segmenting the graph in autonomous sequential parts and essentially
> > adding some smart and lightweight locks everywhere the signals
> > split or merged. it even reassigned threads on the lock-level to
> > balance the workload in the graph and prevent deadlocks.
> > the code is/was around 2.5k lines of c++ code and a bloody mess :)
> > so, i don't know much about the internals of pd but it'd probably be
> > possible.
>
> detaching ffts (i.e. canvases with larger blocksizes than 64) should be
> rather trivial ...
>
> distributing a synchronous dsp graph to several threads is not trivial,
> especially when it comes to a huge number of nodes. for small numbers of
> nodes the approach of jackdmp, using a dynamic dataflow scheduling, is
> probably usable, but when it comes to huge dsp graphs, the
> synchronization overhead is probably too big, so the graph would have to
> be split into parallel chunks which are then scheduled ...
   
true, i didn't try big graphs, so i can't really say how it would behave.
it was more a fun project to see if it was doable. at that time i had
the impression that the locking and the re-assignment of threads
was quite efficient and done only on demand, if the graph
has more sequential parts than the number of created threads;
i am curious how it can be achieved in a lock-free way.

about the issues of explicitly threading parts of the graph (that came
up in the discussion later on), i must say i don't get why you would want
to do it. seeing how the numbers of cores are about to increase, i'd say
that it is counterproductive in relation to the technological development
of hardware, with the software running on top of it lagging behind, as
well as the steady implicit maintenance of the software involved. from my
point of view a graphical dataflow language has the perfect semantics to
express the parallelism of a program in an intuitive way. therefore i'd
say that rather than adding constructs for explicit parallelism to the
language that is able to express them anyhow, adding constructs for
explicit serialization of a process makes more sense.
maybe i'm talking nonsense here, please correct me.

so long...
Niklas


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-30 Thread Phil Stone
Niklas,

If it could be done the way you suggest (automatically segmenting the 
DSP graph), there would be many advantages, the biggest of which is that the 
user gets full DSP scalability for free -- no maintenance, not even any 
need to be aware of the parallelism that is taking place.

The only disadvantage I can see is that it would only apply to DSP 
tasks.  That's not much of a disadvantage, considering that PD is 
generally DSP-bound.

But, can it really be done?  Explicit parallelism is a hack compared to 
what you and Tim describe.


Phil




Niklas Klügel wrote:
 Tim Blechmann wrote:
   
 On Wed, 2007-05-30 at 12:13 +0200, Niklas Klügel wrote:
   
 
 I think it depends on the application for the most part, we
   
 
 can't
 
   
 get a generic speedup from using multiple cores (forgive me if
   
 
 wrong)
 
   
 that would apply to every single pd program. but some types of
 computations such as large ffts can be performed faster when
 distributed to different cores, in which case, the code for the fft
 has to be parallelized a priori.  Plus, the memory is tricky.  You
   
 
 can
 
   
 have a memory access bottleneck, when using a shared memory resource
 between multiple processors.
 It's definitely a problem that is worth solving, but I'm not
 suggesting to do anything about it soon.  It sounds like something
 that would require a complete top-down re-design to be successful.
 yikes

 Chuck

   
   
 
 I once wrote such a toolset that automatically scales up
 with multiple threads throughout the whole network. it worked
 by detecting cycles in the graph and splits of the signals while
 segmenting the graph into autonomous sequential parts, essentially
 adding some smart and lightweight locks everywhere the signals
 split or merged. it even reassigned threads on the lock level to
 balance the workload in the graph and prevent deadlocks.
 the code is/was around 2.5k lines of c++ code and a bloody mess :)
 so, i don't know much about the internals of pd, but it'd probably
 be possible.
 
   
 detaching ffts (i.e. canvases with larger blocksizes than 64) should be
 rather trivial ...

 distributing a synchronous dsp graph to several threads is not trivial,
 especially when it comes to a huge number of nodes. for small numbers of
 nodes the approach of jackdmp, using dynamic dataflow scheduling, is
 probably usable, but when it comes to huge dsp graphs the
 synchronization overhead is probably too big, so the graph would have to
 be split into parallel chunks which are then scheduled ...
   
 
 true, i didn't try big graphs, so i can't really say how it would behave.
 it was more a fun project to see if it was doable. at the time i had
 the impression that the locking and the re-assignment of threads
 were quite efficient and happened only on demand, when the graph
 had more sequential parts than the number of created threads;
 i am curious how this could be achieved in a lock-free way.

 about the issue of explicitly threading parts of the graph (which came
 up in the discussion later on), i must say i don't get why you would want
 to do it. seeing how the number of cores is about to increase, i'd say it
 is counterproductive: the software would constantly lag behind the
 technological development of the hardware and need steady maintenance to
 keep up. from my point of view a graphical dataflow language has the
 perfect semantics to express the parallelism of a program in an intuitive
 way. therefore i'd say that rather than adding constructs for explicit
 parallelism to a language that can express it anyway, adding constructs
 for explicit serialization of a process makes more sense.
 maybe i'm talking nonsense here, please correct me.

 so long...
 Niklas


 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list

   


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-29 Thread Damian Stewart
Chris McCormick wrote:

 Yeah, I agree completely on all counts. Sometimes really great software
 comes out of forks. DesireData looks really interesting, and I know that
 nova isn't a fork, but it looks interesting too. Can't wait until some
 of these cool bits of software reach maturity (same goes for Pd)!

i've never been able to get DesireData to work...

-- 
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-29 Thread Phil Stone
This has been a fascinating thread about the direction of PD.

I've been thinking about parallelism and PD as multi-core processors 
become common.  How hard would it be to make PD able to take advantage 
of parallel architecture?  I'm guessing that it is decidedly 
non-trivial, as the lack of threading already causes contention 
between the GUI and audio processing.

Without some support for parallelism, PD could be going as fast as it 
will ever go -- the trend seems to be that CPU speeds will not be 
climbing much (at least not dramatically like they have until now), and 
increasing numbers of cores will be the path to greater speed and power. 

Is there any hope in this direction?


Phil Stone



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-29 Thread shift8
it works, but you need to be able to recognize what additional
dependencies are needed for your machine, or what code modifications your
distro needs (different versions of gcc have different ideas of what
constitutes a build error, and different versions of linked-in external shared
libs are a big one too) - generally this is discovered through
examining compile-time errors and runtime errors...

it takes some work to get a functional build, but that is the nature of
dev code, especially dev code from source repositories under active
development.

the currently implemented features are very compelling if you can get
past the hurdles of getting a build, and all of the built-in objects are
functional so you can do some patching with it.

i'd say give it another try - a good and compelling way to get knowledge
of gcc, linking, etc. etc. too.

the fine folks on #desiredata are very helpful for people attempting
builds.

regards - 
star

On Tue, 2007-05-29 at 10:35 +0200, Damian Stewart wrote:
 Chris McCormick wrote:
 
  Yeah, I agree completely on all counts. Sometimes really great software
  comes out of forks. DesireData looks really interesting, and I know that
  nova isn't a fork, but it looks interesting too. Can't wait until some
  of these cool bits of software reach maturity (same goes for Pd)!
 
 i've never been able to get DesireData to work...
 
-- 
Mechanize something idiosyncratic.



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-29 Thread Hans-Christoph Steiner

That is a tough problem.  On a basic level, Pd is ready right now for  
two processors since it uses two separate processes: pd and pd-gui.   
But the pd process does the heavy lifting, so it's a bit uneven.

As for taking advantage of multiple cores, that is a lot more  
complicated.  Max/MSP does have some support for threading  
(Overdrive mode), but it seems to me that it is a hack.  It does  
work, but it often leads to programs that are very hard to debug,  
since it is difficult to handle the non-determinacy of the situation.
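
To make the two-process point concrete, here is a minimal sketch of an  
engine/GUI split over a socketpair -- not Pd's actual code, and the  
message format is made up -- but because they are separate processes,  
the OS is free to put each one on its own core:

/* toy engine/GUI split in two processes - NOT Pd's actual code */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                     /* "gui" process             */
        close(sv[0]);
        const char *msg = "bang;\n";    /* user clicked a bang       */
        write(sv[1], msg, strlen(msg));
        close(sv[1]);
        return 0;
    }
    close(sv[1]);                       /* "engine" process: owns    */
    char buf[64];                       /* the dsp, reads gui events */
    ssize_t n = read(sv[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("engine got: %s", buf);  /* dispatch to the patch     */
    }
    close(sv[0]);
    waitpid(pid, NULL, 0);
    return 0;
}

Pd does essentially this over a localhost TCP socket, with the GUI side  
written in Tcl/Tk.  The sketch also shows why it is uneven: all of the  
DSP heavy lifting stays inside one of the two processes.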

.hc

On May 29, 2007, at 1:32 PM, Phil Stone wrote:

 This has been a fascinating thread about the direction of PD.

 I've been thinking about parallelism and PD as multi-core processors
 become common.  How hard would it be to make PD able to take advantage
 of parallel architecture?  I'm guessing that it is decidedly
 non-trivial, as the lack of threading already causes contention
 between the GUI and audio processing.

 Without some support for parallelism, PD could be going as fast as it
 will ever go -- the trend seems to be that CPU speeds will not be
 climbing much (at least not dramatically like they have until now),  
 and
 increasing numbers of cores will be the path to greater speed and  
 power.

 Is there any hope in this direction?


 Phil Stone



 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - http://lists.puredata.info/ 
 listinfo/pd-list



 


If you are not part of the solution, you are part of the problem.



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-29 Thread Charles Henry
I think it depends on the application for the most part; we can't
get a generic speedup from using multiple cores (forgive me if wrong)
that would apply to every single pd program. But some types of
computations, such as large ffts, can be performed faster when
distributed to different cores, in which case the code for the fft
has to be parallelized a priori.  Plus, the memory is tricky.  You can
have a memory access bottleneck when using a shared memory resource
between multiple processors.
It's definitely a problem that is worth solving, but I'm not
suggesting to do anything about it soon.  It sounds like something
that would require a complete top-down re-design to be successful.
yikes
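
To make the fft point concrete, here is a toy sketch of parallelizing a
priori: one Cooley-Tukey split, with the two half-size transforms computed
on separate threads (the halves are naive O(n^2) DFTs for brevity; this is
not pd's actual fft code):

/* toy a-priori-parallelized FFT - one decimation-in-time split */
#include <complex.h>
#include <math.h>
#include <pthread.h>
#include <stdio.h>

#define N 8

typedef struct { const double complex *in; double complex *out; } half_t;

/* naive DFT over the N/2 even- or odd-indexed samples of the input */
static void *half_dft(void *p)
{
    half_t *h = p;
    for (int k = 0; k < N/2; k++) {
        double complex acc = 0;
        for (int n = 0; n < N/2; n++)
            acc += h->in[2*n] * cexp(-2*I*M_PI*k*n/(N/2));
        h->out[k] = acc;
    }
    return NULL;
}

int main(void)
{
    double complex x[N], E[N/2], O[N/2], X[N];
    for (int n = 0; n < N; n++)          /* test signal: one cosine  */
        x[n] = cos(2*M_PI*n/N);

    half_t ev = {x, E}, od = {x + 1, O};
    pthread_t te, to;
    pthread_create(&te, NULL, half_dft, &ev);  /* may run on core A  */
    pthread_create(&to, NULL, half_dft, &od);  /* may run on core B  */
    pthread_join(te, NULL);
    pthread_join(to, NULL);

    for (int k = 0; k < N/2; k++) {      /* serial combine, twiddles */
        double complex t = cexp(-2*I*M_PI*k/N) * O[k];
        X[k]       = E[k] + t;
        X[k + N/2] = E[k] - t;
    }
    for (int k = 0; k < N; k++)
        printf("X[%d] = %5.2f%+5.2fi\n", k, creal(X[k]), cimag(X[k]));
    return 0;
}

The combine step stays serial, which is part of the synchronization tax;
with naive halves you would never win, but with real O(n log n) halves
on big blocks the split can pay off.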

Chuck

On 5/29/07, Hans-Christoph Steiner [EMAIL PROTECTED] wrote:

 That is a tough problem.  On a basic level, Pd is ready right now for
 two processors since it uses two separate processes: pd and pd-gui.
 But the pd process does the heavy lifting, so it's a bit uneven.

 As for taking advantage of multiple cores, that is a lot more
 complicated.  Max/MSP does have some support for threading
 (Overdrive mode), but it seems to me that it is a hack.  It does
 work, but it often leads to programs that are very hard to debug,
 since it is difficult to handle the non-determinacy of the situation.

 .hc

 On May 29, 2007, at 1:32 PM, Phil Stone wrote:

  This has been a fascinating thread about the direction of PD.
 
  I've been thinking about parallelism and PD as multi-core processors
  become common.  How hard would it be to make PD able to take advantage
  of parallel architecture?  I'm guessing that it is decidedly
  non-trivial, as the lack of threading already causes contention
  between the GUI and audio processing.
 
  Without some support for parallelism, PD could be going as fast as it
  will ever go -- the trend seems to be that CPU speeds will not be
  climbing much (at least not dramatically like they have until now),
  and
  increasing numbers of cores will be the path to greater speed and
  power.
 
  Is there any hope in this direction?
 
 
  Phil Stone
 
 
 
  ___
  PD-list@iem.at mailing list
  UNSUBSCRIBE and account-management - http://lists.puredata.info/
  listinfo/pd-list



 
 

 If you are not part of the solution, you are part of the problem.



 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Kyle Klipowicz
I think that the benevolent dictatorship of Pd leads to so much
development anarchy, it's a wonder any coordination happens at all!

All jokes aside, I'm perplexed by the sociology/ethnology of open
source projects, because there is so little stratified structure.

In order to channel the decision-making process, there must be at
least SOME kind of hierarchy to get things moving. Maybe this (plus
tons of cash flow) is how corporations are able to leverage the
development that they command.

Just some pre-snooze thoughts mulling in my head...Thoughts?

~Kyle

On 5/27/07, Hans-Christoph Steiner [EMAIL PROTECTED] wrote:

 That's an interesting talk, they point out a number of key issues
 that have affected us as a community.  I think the idea of poisonous
 people is a useful one, but I would take the focus away from
 people and talk more about poisonous activities.  While there are
 definitely some people who have been more poisonous than others, I
 think that probably every single Pd developer at some point has been
 poisonous.  I know I have and I have been thinking a fair amount
 about how to stop myself from doing that.  I guess the easiest thing
 I have been trying to do is to spend less time arguing in email and
 more time coding.

 Another point they talk about is having a focus.  I think Pd suffers
 from the lack of focus.  As a software project, it has an unusual
 structure, which makes it difficult to clearly outline the focus.
 That's something I am planning on working on in the coming months,
 trying to figure out my own focus in all this.  Pd is quite a bit
 different here also since it covers a very wide range of topics, so
 there isn't really the possibility of a strong focus like with
 subversion.

 Hopefully we can discuss this more at the Pd Convention.  It's always
 many times more productive to speak face-to-face.  Plus I think it's
 many times more fun than sitting alone behind the laptop...

 .hc

 On May 25, 2007, at 6:37 PM, marius schebella wrote:

  victor wrote:
  summarizing: what is the future of PureData?
 
  Before I try to answer that question, please keep in mind that you
  cannot compare an open source software project to any company driven
  proprietary software. Lots of people have been working very hard to
  bring Pd to the current state and probably 99 times more people use it
  in their work, and I saw some great artwork done with it. Seriously, I
  don't know so many other art software packages of that size. (I am
  thinking of maybe blender, processing, gimp, audacity...)
 
  But let's talk about the future. Btw there is a nice talk about open
  source software projects, which was given at google talks.
  http://video.google.com/videoplay?docid=-4216011961522818645
 
  To me it seems there is not really a clear direction of what Pd should
  be. Speaking as a user I only can speculate, but most probably none of
  the people who have contributed Pd code have ever agreed on a certain
  featurelist of a final version. Or whether there should be a v 1.0
  at all.
 
  But that does not mean that there is no progress. Regarding the social
  aspect there will be the second puredata conference in late August in
  Montreal. People are doing a lot of work to get this running. Then there
  are the summer of code projects http://puredata.org/dev/summer-of-
  code.
  Other people are working on documentation and tutorials and others try
  to integrate all the different libraries into one release.
 
  btw. it is too late for this year's prix ars electronica, but since
  there is a category digital community Pd definitely should go for
  that
  next year...
 
  marius.
 
  ___
  PD-list@iem.at mailing list
  UNSUBSCRIBE and account-management - http://lists.puredata.info/
  listinfo/pd-list




 
 

 You can't steal a gift. Bird gave the world his music, and if you can
 hear it, you can have it. - Dizzy Gillespie




 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list



-- 
-

 -
  - --
http://perhapsidid.wordpress.com

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Tim Blechmann
On Sun, 2007-05-27 at 23:39 -0400, Chris McCormick wrote:
 Also, whenever somebody's patch
 is not accepted by Miller they often decide to fork Pd. In other open
 source projects it is very normal for the project maintainer to drop
 patches with very little info, or even completely silently. When this
 happens in Pd development, people sometimes get antagonistic and/or
 frustrated 

forking is something that's happening in other free software
communities as well, and is usually a way to speed up the development
process if the maintainer becomes too conservative ... the history of
gcc shows a successful fork ...

but there are always two sides ... the developer doing the fork is
losing the support of the community, and the community is losing the
support of the developer ...

it's good to see this discussion on the pd-list, though ... 

tim

--
[EMAIL PROTECTED]    ICQ: 96771783
http://tim.klingt.org

I had nothing to offer anybody except my own confusion
  Jack Kerouac


signature.asc
Description: This is a digitally signed message part
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Roman Haefeli
On Sun, 2007-05-27 at 23:39 -0400, Chris McCormick wrote:

 In the Pd sources under src/notes.txt there is a list of all of the
 things that Miller wants to add/change in Pd. 

thanks for mentioning this. i didn't know about this file. many things
i desire are already on this list (whereas others are not, of
course :-) )

roman






___ 
The early bird catches the worm. Get the new Yahoo! Mail here: 
http://mail.yahoo.de


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Roman Haefeli

On Sun, 2007-05-27 at 17:59 -0400, Hans-Christoph Steiner wrote:
 Another point they talk about is having a focus.  I think Pd suffers  
 from the lack of focus.  As a software project, it has an unusual  
 structure, which makes it difficult to clearly outline the focus.   
 That's something I am planning on working on in the coming months,  
 trying to figure out my own focus in all this.  Pd is quite a bit  
 different here also since it covers a very wide range of topics, so  
 there isn't really the possibility of a strong focus like with  
 subversion.

hi hans

yeah, a software project definitely needs a focus. for myself, i always
believed that pd's focus is audio, and that is also the reason why i
came to pd. and when having a look at src/notes.txt i still think pd's
focus is on audio. so, i am not quite sure if pd really lacks a focus.
which, on the other hand, doesn't mean that pd can't be used in
outside-of-audio contexts - but then one needs to use externals (or a dev
needs to write an external first). so, externals are focussed on
'specialized' - non-audio - applications. this is my subjective view of
things.
is that what you meant by a 'focus'?

roman  



___ 
Phone calls at no extra cost from PC to PC: http://messenger.yahoo.de


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Damian Stewart
Roman Haefeli wrote:
 is that what you meant by a 'focus'?

for me, the biggest issue is with the UI. almost none of the points in 
src/notes.txt deal with the GUI, and certainly there is no talk at all of 
gui improvements.

to me the engine works fine - i mean, pd crashes sometimes but this is 
usually due to missing or broken externals rather than anything in the 
engine.

it is the gui that needs work. i would like to work on this but i don't 
know where to start.

-- 
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Hans-Christoph Steiner

On May 28, 2007, at 10:29 AM, Damian Stewart wrote:

 Roman Haefeli wrote:
 is that what you meant by a 'focus'?

 for me, the biggest issue is with the UI. almost none of the points  
 in src/notes.txt deal with the GUI, and certainly there is no talk  
 at all of gui improvements.

 to me the engine works fine - i mean, pd crashes sometimes but this  
 is usually due to missing or broken externals rather than  
 anything in the engine.

 it is the gui that needs work. i would like to work on this but i  
 don't know where to start.

One thing that would be a great improvement, would not touch much  
other code, and be likely to be accepted would be an overhaul of the  
preference panels.  IMHO, there should be just one preference window,  
with separate tabs for each section.  While this is all Tcl/Tk code,  
you don't really need to know much Tcl in order to do this.  The Tk  
stuff works like most standard GUI layout stuff, and the list can  
help with the Tcl quirks.

Another thing that is always useful is making better help patches.   
Just submit them to the patch tracker so they can be reviewed.

.hc


 -- 
 damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
 frey | live art with machines | http://www.frey.co.nz



 


Computer science is no more related to the computer than astronomy is  
related to the telescope.  -Edsger Dijkstra



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Hans-Christoph Steiner


A lot of the GUI could be improved without major changes to the rest of  
Pd.  At the very least, it could be included in Pd-extended.  Once this  
version of Pd-extended is released, I still have to turn all the GUI  
tweaks I did into patches, and hopefully Miller will accept them.


.hc

On May 28, 2007, at 11:11 AM, Kevin McCoy wrote:

The gui needs work - do you mean we need more/better looking gui  
objects?  When I was working on OS X, I couldn't really use very  
many gui objects at once because of Apple's crappy closed  
implementation of tcl/tk; the lag was terrible.  Pd devs can't  
really do anything about that (though it is a huge problem).  A  
significant portion of pd users are on OS X.


Pd's gui definitely does need work, but without a clear roadmap it  
will be hard to say what priority that is, right?  Watching that  
google talk has me thinking about all kinds of things.


Kevin

On 5/28/07, Damian Stewart [EMAIL PROTECTED] wrote:
Roman Haefeli wrote:
 is that what you meant by a 'focus'?

for me, the biggest issue is with the UI. almost none of the points in
src/notes.txt deal with the GUI, and certainly there is no talk at all of
gui improvements.

to me the engine works fine - i mean, pd crashes sometimes but this is
usually due to missing or broken externals rather than anything in the
engine.

it is the gui that needs work. i would like to work on this but i don't
know where to start.

--
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - http://lists.puredata.info/ 
listinfo/pd-list




--



http://pocketkm.blogspot.com
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - http://lists.puredata.info/ 
listinfo/pd-list




 



  http://at.or.at/hans/


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-28 Thread Chris McCormick
On Mon, May 28, 2007 at 11:44:22AM +0200, Tim Blechmann wrote:
 On Sun, 2007-05-27 at 23:39 -0400, Chris McCormick wrote:
  Also, whenever somebody's patch
  is not accepted by Miller they often decide to fork Pd. In other open
  source projects it is very normal for the project maintainer to drop
  patches with very little info, or even completely silently. When this
  happens in Pd development, people sometimes get antagonistic and/or
  frustrated 
 
 forking is something that's happening in other free software
 communities as well, and is usually a way to speed up the development
 process if the maintainer becomes too conservative ... the history of
 gcc shows a successful fork ...
 
 but there are always two sides ... the developer doing the fork is
 losing the support of the community, and the community is losing the
 support of the developer ...
 
 it's good to see this discussion on the pd-list, though ... 

Yeah, I agree completely on all counts. Sometimes really great software
comes out of forks. DesireData looks really interesting, and I know that
nova isn't a fork, but it looks interesting too. Can't wait until some
of these cool bits of software reach maturity (same goes for Pd)!

Lovin' the Free world,

Chris.

---
http://mccormick.cx


signature.asc
Description: Digital signature
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-27 Thread Damian Stewart
victor wrote:

 summarizing: what is the future of PureData?

this is an interesting question. it seems from my understanding that Miller 
has his own roadmap (hence pd 0.40 vs 0.39) and the development is not 
particularly 'open' in this sense.

i feel if there were some defined goals that everyone knew about then 
someone like myself could jump in and start programming something. but 
there doesn't seem to be such a roadmap, and so we put up with pd's 
'quirks' because there doesn't seem to be much point in fixing/changing 
them if our fixes/changes aren't going to make it into the next version 
anyway, since what is planned or desired for the next version is so closed.

-- 
damian stewart | +44 7854 493 796 | [EMAIL PROTECTED]
frey | live art with machines | http://www.frey.co.nz

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-27 Thread Hans-Christoph Steiner

That's an interesting talk, they point out a number of key issues  
that have affected us as a community.  I think the idea of poisonous  
people is a useful one, but I would take the focus away from  
people and talk more about poisonous activities.  While there are  
definitely some people who have been more poisonous than others, I  
think that probably every single Pd developer at some point has been  
poisonous.  I know I have and I have been thinking a fair amount  
about how to stop myself from doing that.  I guess the easiest thing  
I have been trying to do is to spend less time arguing in email and  
more time coding.

Another point they talk about is having a focus.  I think Pd suffers  
from the lack of focus.  As a software project, it has an unusual  
structure, which makes it difficult to clearly outline the focus.   
That's something I am planning on working on in the coming months,  
trying to figure out my own focus in all this.  Pd is quite a bit  
different here also since it covers a very wide range of topics, so  
there isn't really the possibility of a strong focus like with  
subversion.

Hopefully we can discuss this more at the Pd Convention.  It's always  
many times more productive to speak face-to-face.  Plus I think it's  
many times more fun than sitting alone behind the laptop...

.hc

On May 25, 2007, at 6:37 PM, marius schebella wrote:

 victor wrote:
 summarizing: what is the future of PureData?

 Before I try to answer that question, please keep in mind that you
 cannot compare an open source software project to any company driven
 proprietary software. Lots of people have been working very hard to
 bring Pd to the current state and probably 99 times more people use it
 in their work, and I saw some great artwork done with it. Seriously, I
 don't know so many other art software packages of that size. (I am
 thinking of maybe blender, processing, gimp, audacity...)

 But let's talk about the future. Btw there is a nice talk about open
 source software projects, which was given at google talks.
 http://video.google.com/videoplay?docid=-4216011961522818645

 To me it seems there is not really a clear direction of what Pd should
 be. Speaking as a user I only can speculate, but most probably none of
 the people who have contributed Pd code have ever agreed on a certain
 featurelist of a final version. Or whether there should be a v 1.0  
 at all.

 But that does not mean that there is no progress. Regarding the social
 aspect there will be the second puredata conference in late August in
 Montreal. People are doing a lot of work to get this running. Then there
 are the summer of code projects http://puredata.org/dev/summer-of- 
 code.
 Other people are working on documentation and tutorials and others try
 to integrate all the different libraries into one release.

 btw. it is too late for this year's prix ars electronica, but since
 there is a category digital community Pd definitely should go for  
 that
 next year...

 marius.

 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - http://lists.puredata.info/ 
 listinfo/pd-list




 


You can't steal a gift. Bird gave the world his music, and if you can  
hear it, you can have it. - Dizzy Gillespie




___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-27 Thread Chris McCormick
On Sat, May 26, 2007 at 04:53:14PM +0200, Damian Stewart wrote:
 summarizing: what is the future of PureData?
 
 this is an interesting question. it seems from my understanding that Miller 
 has his own roadmap (hence pd 0.40 vs 0.39) and the development is not 
 particularly 'open' in this sense.
 
 i feel if there were some defined goals that everyone knew about then 
 someone like myself could jump in and start programming something. but 
 there doesn't seem to be such a roadmap, and so we put up with pd's 
 'quirks' because there doesn't seem to be much point in fixing/changing 
 them if our fixes/changes aren't going to make it into the next version 
 anyway, since what is planned or desired for the next version is so closed.

In the Pd sources under src/notes.txt there is a list of all of the
things that Miller wants to add/change in Pd. For some reason it seems
like almost nobody who contributes to Pd actually looks at this file
(sorry if this remark is inflammatory!). Also, whenever somebody's patch
is not accepted by Miller they often decide to fork Pd. In other open
source projects it is very normal for the project maintainer to drop
patches with very little info, or even completely silently. When this
happens in Pd development, people sometimes get antagonistic and/or
frustrated (and occasionally claim that there is a grand conspiracy
against them). It's a great pity because some really great code has never
become useful to all of us because of this. In any case I think that Hans'
solution might be the best one; more code, less talk.

If you're thinking of contributing code, one thing to notice is that
the patches that seem to get accepted are those that are conservative:
patches that don't rock the Pd boat too much, change it too drastically,
or break backwards compatibility. It seems like Miller favours incremental
change over drastic overhauls, which is probably a good thing for a
software project with so many active users.

If you want to influence the direction of Pd, then 'slow and steady wins
the race' is the maxim of the day.

Best,

Chris.

---
http://mccormick.cx

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-26 Thread Thomas Mayer
marius schebella wrote:
 To me it seems there is not really a clear direction of what Pd should 
 be. Speaking as a user I only can speculate, but most probably none of 
 the people who have contributed Pd code have ever agreed on a certain 
 featurelist of a final version. Or whether there should be a v 1.0 at all.

Especially in free software, numbering is quite different from
proprietary software. Just think of anything without RC or testing or
unstable or experimental in the version as a reliable version despite
its numbering. And read changelogs before updating the software, but
that applies to proprietary software as well (but with free software
it's usually easier to go back to former versions as there tend to be
older versions around somewhere).

 btw. it is too late for this year's prix ars electronica, but since 
 there is a category digital community Pd definitely should go for that 
 next year...

That's definitely an option. How about collecting ideas for the
application in a wiki, starting now?

cu Thomas
-- 
Prisons are needed only to provide the illusion that courts and police
are effective. They're a kind of job insurance.
(Leto II. in: Frank Herbert, God Emperor of Dune)
http://thomas.dergrossebruder.org/

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-26 Thread Frank Barknecht
Hallo,
Kyle Klipowicz hat gesagt: // Kyle Klipowicz wrote:

 Pd 1.0 in 2010 maybe? Heh.

Taking into account that Pd was started around 1996, assuming a
version number of 0.00 then, it now has reached 0.41 in 2007 and thus
will reach 1.0 in 2022.

A different calculation is less optimistic: As the current speed of
version numbers is about 0.01 per annum, it will take another 59 years
to reach 1.0, which is 2066.

Or Pd may do as Ardour did: Just jump from 0.99 to 2.0 and leave
out 1.0 entirely.
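
A throwaway sketch of both extrapolations, in C for want of a Pd patch
(the 1996/0.00 starting point is my assumption, not an official release
date):

/* extrapolating Pd's version number; assumes 0.00 in 1996, 0.41 in 2007 */
#include <stdio.h>

int main(void)
{
    double v0 = 0.00, v1 = 0.41;
    int    y0 = 1996, y1 = 2007;

    /* average speed so far: about 0.037 version numbers per annum */
    double rate = (v1 - v0) / (y1 - y0);
    printf("average rate: 1.0 in %d\n", (int)(y1 + (1.0 - v1) / rate));

    /* current speed: about 0.01 per annum */
    printf("recent rate:  1.0 in %d\n", (int)(y1 + (1.0 - v1) / 0.01));
    return 0;
}

It prints 2022 and 2066, matching the guesses above.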

Ciao
-- 
 Frank Barknecht _ __footils.org_ __goto10.org__

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-26 Thread nick weldin
Maybe  someone could make a pd patch to dynamically calculate this so 
that we can be sure to be ready for the arrival of ver1.0  ;)

Taking into account that Pd was started around 1996, assuming a
version number of 0.00 then, it now has reached 0.41 in 2007 and thus
will reach 1.0 in 2022.

A different calculation is less optimistic: As the current speed of
version numbers is about 0.01 per annum, it will take another 59 years
to reach 1.0, which is 2066.

Or Pd may do as Ardour did: Just jump from 0.99 to 2.0 and leave
out 1.0 entirely.

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-26 Thread Kyle Klipowicz
Could that be called a ProphePd?

~Kyle

On 5/26/07, nick weldin [EMAIL PROTECTED] wrote:
 Maybe  someone could make a pd patch to dynamically calculate this so
 that we can be sure to be ready for the arrival of ver1.0  ;)

 Taking into account that Pd was started around 1996, assuming a
 version number of 0.00 then, it now has reached 0.41 in 2007 and thus
 will reach 1.0 in 2022.
 
 A different calculation is less optimistic: As the current speed of
 version numbers is about 0.01 per annum, it will take another 59 years
 to reach 1.0, which is 2066.
 
 Or Pd may do as Ardour did: Just jump from 0.99 to 2.0 and leave
 out 1.0 entirely.

 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list



-- 
-

 -
  - --
http://perhapsidid.wordpress.com

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-26 Thread Kyle Klipowicz
Very nice dream indeed! Thanks for sharing, that would be a great
future. I really am seeing game development becoming the new film in
terms of art and entertainment.

~Kyle

On 5/26/07, Andy Farnell [EMAIL PROTECTED] wrote:

 The little dog in the picture is saying woof..woof..  your card stacking
 method is suboptimally stable...woof

  do you think that gem is a mature project for video creation? why not? why
  is the evolution slow? or simply, why is there no object to save
  pretty videos in open formats? (same question for pdp).

 Pd has grown big: sound, video, physical robotics,
 installations, with Pd as a general-purpose cyberjunction. Big must
 move at a fairly slow pace to stay coherent.

  What is the future of PureData?

 As a crazy hypothetical, hmmm, I wish for Blender to marry Puredata as its
 games sound engine and make a set of [world] object babies...
 ...that have hooks called from game object events. :) When you see a video link
 on a viewer in game, Pd is handling all that; when two objects collide,
 the sound you get is Pd handling it... Pd as an embedded realtime signals and
 interfacing layer in a games engine, sort of thing. /end dream~~

  Maybe the extended version is a valuable effort to have an open platform
  for creatives. Then what is the sense of desiredata? (
  https://devel.goto10.org/desiredata)

 It's way open, yeah, and such is its charm: a great dev environment for
 so many ideas, not necessarily a final component or end in
 itself yet, for me. Beyond prototyping in Pd and rebuilding by
 hand, I'd like to see Pd output more reusable code and integrate
 with other things as well as itself on a lower level. Like being
 able to compile new externals from inside Pd itself (a segfault-
 proof combo of compiler, text editor and active object), or converting
 the DSP parts of a patch to a plugin for another system - LV2 or
 VST or something - no longer requiring Pd. So in summary I'd
 like to see Pd mature as a component in a wider development
 environment, which I think it surely will.


 ANdy











 On Fri, 25 May 2007 15:28:05 +0200
 victor  [EMAIL PROTECTED] wrote:

  Hi, I love pd :) but after many years since its creation, why does it
  sometimes seem like a castle of cards?
  http://teacher.scholastic.com/max/castle/img/cardcast.jpg
 
  After reading on this list the typical question pix_video doesn't work, I'm
  thinking about pd
 
  do you think that gem is a mature project for video creation? why not? why
  is the evolution slow? or simply, why is there no object to save
  pretty videos in open formats? (same question for pdp).
 
  Maybe the extended version is a valuable effort to have an open platform
  for creatives. Then what is the sense of desiredata? (
  https://devel.goto10.org/desiredata)
 
  summarizing: what is the future of PureData?
 
  thanks
 


 --
 Use the source

 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list



-- 
-

 -
  - --
http://perhapsidid.wordpress.com

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-25 Thread Charles Henry
 summarizing: what is the future of PureData?

or How about this question:  what does the roadmap look like for Pd v 1.0?

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-25 Thread Kyle Klipowicz
Pd 1.0 in 2010 maybe? Heh.

~K

On 5/25/07, Charles Henry [EMAIL PROTECTED] wrote:
  summarizing: what is the future of PureData?

 or How about this question:  what does the roadmap look like for Pd v 1.0?

 ___
 PD-list@iem.at mailing list
 UNSUBSCRIBE and account-management - 
 http://lists.puredata.info/listinfo/pd-list



-- 
-

 -
  - --
http://perhapsidid.wordpress.com

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-25 Thread Hans-Christoph Steiner


On May 25, 2007, at 9:28 AM, victor wrote:

Hi, I love pd :) but after many years since its creation, why does it  
sometimes seem like a castle of cards?

http://teacher.scholastic.com/max/castle/img/cardcast.jpg

After reading on this list the typical question pix_video doesn't  
work, I'm thinking about pd


do you think that gem is a mature project for video creation? why  
not? why is the evolution slow? or simply, why is there no  
object to save pretty videos in open formats? (same question for pdp).


A lot of the time, the code is there, it's just hard to package it  
into something that works for everyone.  There are a lot of patent  
issues with the various codecs, and some of the libs can be difficult  
to work with.


Maybe the extended version is a valuable effort to have an open  
platform for creatives.


My main goal with Pd-extended was to make a common platform to  
distribute all of the great code for Pd.  By having a standard  
platform across all OS's, it means that we can all spend less time  
setting up Pd and wrestling with different configurations.  It is not  
meant to be a branch at all.  For example, it would be pretty  
straightforward to make a desiredata-extended, which would have all  
the same libs, but installed into desiredata instead of Pd.  The  
bottom line is that Pd-extended is a distro, not a branch/fork.  Like  
Debian is a distro of many packages, and you can use many different  
kernels with that distro (Linux, FreeBSD, HURD).



Then what is the sense of desiredata? (https://devel.goto10.org/ 
desiredata)


AFAICT, desiredata is focusing on the editing environment rather than  
new libraries or new functionality for the language.  They have a bunch of  
interesting ideas that could make patching much more fluid and  
satisfying.


.hc



summarizing: what is the future of PureData?

thanks
___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - http://lists.puredata.info/ 
listinfo/pd-list




 



As we enjoy great advantages from inventions of others, we should be  
glad of an opportunity to serve others by any invention of ours; and  
this we should do freely and generously. - Benjamin Franklin



___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-25 Thread marius schebella
victor wrote:
 summarizing: what is the future of PureData?

Before I try to answer that question, please keep in mind that you 
cannot compare an open source software project to any company driven 
proprietary software. Lots of people have been working very hard to 
bring Pd to the current state and probably 99 times more people use it 
in their work, and I saw some great artwork done with it. Seriously, I 
don't know so many other art software packages of that size. (I am 
thinking of maybe blender, processing, gimp, audacity...)

But let's talk about the future. Btw there is a nice talk about open 
source software projects, which was given at google talks.
http://video.google.com/videoplay?docid=-4216011961522818645

To me it seems there is not really a clear direction of what Pd should 
be. Speaking as a user I only can speculate, but most probably none of 
the people who have contributed Pd code have ever agreed on a certain 
featurelist of a final version. Or whether there should be a v 1.0 at all.

But that does not mean that there is no progress. Regarding the social 
aspect there will be the second puredata conference in late August in 
Montreal. People are doing a lot of work to get this running. Then there 
are the summer of code projects http://puredata.org/dev/summer-of-code. 
Other people are working on documentation and tutorials and others try 
to integrate all the different libraries into one release.

btw. it is too late for this year's prix ars electronica, but since 
there is a category digital community Pd definitely should go for that 
next year...

marius.

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-25 Thread Roman Haefeli
On Fri, 2007-05-25 at 15:53 -0500, Kyle Klipowicz wrote:
 Pd 1.0 in 2010 maybe? Heh.
 
 ~K
 

you are an optimist.

roman


 On 5/25/07, Charles Henry [EMAIL PROTECTED] wrote:
   summarizing: what is the future of PureData?
 
  or How about this question:  what does the roadmap look like for Pd v 1.0?
 
  ___
  PD-list@iem.at mailing list
  UNSUBSCRIBE and account-management - 
  http://lists.puredata.info/listinfo/pd-list
 
 
 






___ 
The early bird catches the worm. Get the new Yahoo! Mail here: 
http://mail.yahoo.de


___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list


Re: [PD] puredata evolution

2007-05-25 Thread Andy Farnell

The little dog in the picture is saying woof..woof..  your card stacking
method is suboptimally stable...woof

 do you think that gem is a mature project for video creation? why not? why
 is the evolution slow? or simply, why is there no object to save
 pretty videos in open formats? (same question for pdp).

Pd has grown big: sound, video, physical robotics, 
installations, with Pd as a general-purpose cyberjunction. Big must
move at a fairly slow pace to stay coherent. 

 What is the future of PureData?

As a crazy hypothetical, hmmm, I wish for Blender to marry Puredata as its
games sound engine and make a set of [world] object babies...
...that have hooks called from game object events. :) When you see a video link
on a viewer in game, Pd is handling all that; when two objects collide,
the sound you get is Pd handling it... Pd as an embedded realtime signals and
interfacing layer in a games engine, sort of thing. /end dream~~

 Maybe the extended version is a valuable effort to have an open platform
 for creatives. Then what is the sense of desiredata? (
 https://devel.goto10.org/desiredata)

It's way open, yeah, and such is its charm: a great dev environment for
so many ideas, not necessarily a final component or end in
itself yet, for me. Beyond prototyping in Pd and rebuilding by
hand, I'd like to see Pd output more reusable code and integrate
with other things as well as itself on a lower level. Like being
able to compile new externals from inside Pd itself (a segfault-
proof combo of compiler, text editor and active object), or converting
the DSP parts of a patch to a plugin for another system - LV2 or
VST or something - no longer requiring Pd. So in summary I'd
like to see Pd mature as a component in a wider development 
environment, which I think it surely will. 
 

ANdy











On Fri, 25 May 2007 15:28:05 +0200
victor  [EMAIL PROTECTED] wrote:

 Hi, I love pd :) but after many years since its creation, why does it
 sometimes seem like a castle of cards?
 http://teacher.scholastic.com/max/castle/img/cardcast.jpg
 
 After reading on this list the typical question pix_video doesn't work, I'm
 thinking about pd
 
 do you think that gem is a mature project for video creation? why not? why
 is the evolution slow? or simply, why is there no object to save
 pretty videos in open formats? (same question for pdp).
 
 Maybe the extended version is a valuable effort to have an open platform
 for creatives. Then what is the sense of desiredata? (
 https://devel.goto10.org/desiredata)
 
 summarizing: what is the future of PureData?
 
 thanks
 


-- 
Use the source

___
PD-list@iem.at mailing list
UNSUBSCRIBE and account-management - 
http://lists.puredata.info/listinfo/pd-list