Re: [PD] optimizing big patches

2011-01-12 Thread Mathieu Bouchard

On Wed, 12 Jan 2011, Frank Barknecht wrote:

On Tue, Jan 11, 2011 at 01:25:33PM -0500, Mathieu Bouchard wrote:

OTOH, another way to deal with a slow interpreter is to pass fewer,
bigger messages to objects that do more work at once. This is much of
the original idea for creating GridFlow.

It's also the idea behind the BSP approach I described in my LAC2010 paper:


It would be appropriate to pick a more descriptive name than Blocked
Signal Processing, because that sounds a lot like what Pd's DSP already
does all of the time, and it doesn't say how what you make it do differs
from what Pd already does.


http://markmail.org/message/xnerkchl24j6p42k where calculations you'd 
typically do in message space are made with signal objects.


So, doesn't this mean that they have to be done at a rate that is a power
of two times the audio sampling rate?




Re: [PD] optimizing big patches

2011-01-11 Thread Mathieu Bouchard

On Mon, 10 Jan 2011, Ludwig Maes wrote:

I always felt message passing was unnecessarily expensive, but I didn't
realise it was that expensive! I seriously think it would be good to have
a Pd front end for gcc; a few of us should take the time to learn GIMPLE
and implement a "compile" menu item to compile patches/subpatches.


There are a lot of possible ways to compile patches without having to deal
with generating and running machine code. I'm sure you can triple the speed
of a lot of patches in this manner, and I wouldn't be surprised to get
tenfold improvements in some cases.


OTOH, another way to deal with a slow interpreter is to pass fewer,
bigger messages to objects that do more work at once. This is much of the
original idea for creating GridFlow.
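
To make this idea concrete, here is a minimal sketch (purely illustrative,
not GridFlow code) of an external that buffers 64 incoming floats and
forwards them as a single list message, so the downstream object is called
once per 64 values instead of once per value. The object name "batcher"
and the batch size are invented; the API calls (SETFLOAT, outlet_list,
class_addfloat, ...) are the standard ones from m_pd.h.

#include "m_pd.h"

#define BATCHSIZE 64

static t_class *batcher_class;

typedef struct _batcher {
    t_object  x_obj;
    t_atom    x_buf[BATCHSIZE];  /* accumulated values */
    int       x_n;               /* how many are buffered so far */
    t_outlet *x_out;
} t_batcher;

static void batcher_float(t_batcher *x, t_floatarg f)
{
    SETFLOAT(&x->x_buf[x->x_n], f);
    if (++x->x_n == BATCHSIZE)   /* one outlet_list call instead of 64 */
    {
        outlet_list(x->x_out, &s_list, BATCHSIZE, x->x_buf);
        x->x_n = 0;
    }
}

static void *batcher_new(void)
{
    t_batcher *x = (t_batcher *)pd_new(batcher_class);
    x->x_n = 0;
    x->x_out = outlet_new(&x->x_obj, &s_list);
    return (void *)x;
}

void batcher_setup(void)
{
    batcher_class = class_new(gensym("batcher"), (t_newmethod)batcher_new,
        0, sizeof(t_batcher), CLASS_DEFAULT, 0);
    class_addfloat(batcher_class, (t_method)batcher_float);
}

The receiving object then does its work on 64 atoms per call, which is
the amortization the paragraph above describes.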




Re: [PD] optimizing big patches

2011-01-11 Thread András Murányi
2011/1/11 Mathieu Bouchard ma...@artengine.ca

 On Mon, 10 Jan 2011, Ludwig Maes wrote:

  I always felt message passing was unnecessarily expensive, but I didn't
 realise it was that expensive! I seriously think it would be good to have
 a Pd front end for gcc; a few of us should take the time to learn GIMPLE
 and implement a "compile" menu item to compile patches/subpatches.


 There are a lot of possible ways to compile patches without having to deal
 with generating and running machine code. I'm sure you can triple the speed
 of a lot of patches in this manner, and I wouldn't be surprised to get
 tenfold improvements in some cases.


That sounds too cool! Is there a way which includes graphical objects as
well?

Andras



 OTOH, another way to deal with a slow interpreter is to pass fewer, bigger
 messages to objects that do more work at once. This is much of the original
 idea for creating GridFlow.




Re: [PD] optimizing big patches

2011-01-11 Thread Mathieu Bouchard

On Tue, 11 Jan 2011, András Murányi wrote:

2011/1/11 Mathieu Bouchard ma...@artengine.ca
There are a lot of possible ways to compile patches without having to
deal with generating and running machine code. I'm sure you can triple
the speed of a lot of patches in this manner, and I wouldn't be surprised
to get tenfold improvements in some cases.


That sounds too cool! Is there a way which includes graphical objects as 
well?


I'm only thinking about accelerating the message-passing (outlet_anything,
typedmess, zgetfn, etc.), that's all. Any graphical accelerations (sys_vgui)
are a separate matter, and I think a patch-compiler is quite irrelevant
there. The non-graphical part of the object would still be optimised to
the same level, though.
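
For instance, here is a rough sketch (invented names, not an actual
patch-compiler) of the kind of shortcut that is meant: resolve the method
pointer once with zgetfn() and call it directly on every subsequent
message, instead of repeating the symbol lookup that typedmess() performs
for each message. It assumes the target class has registered a "reset"
method with no arguments via class_addmethod(); "reset" is just a
placeholder selector.

#include "m_pd.h"

typedef struct _compiledsend {
    t_pd    *target;   /* the object we keep sending "reset" to */
    t_gotfn  fn;       /* cached method pointer, resolved once */
} t_compiledsend;

static void compiledsend_bind(t_compiledsend *c, t_pd *target)
{
    c->target = target;
    c->fn = zgetfn(target, gensym("reset"));    /* one lookup, up front */
}

static void compiledsend_trigger(t_compiledsend *c)
{
    if (c->fn)
        (*c->fn)(c->target);                    /* direct call, no lookup */
    else                                        /* not found: normal path */
        pd_typedmess(c->target, gensym("reset"), 0, 0);
}

A patch compiler could do this kind of resolution for every wire at load
time, which is where much of the hoped-for speedup would come from.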




Re: [PD] optimizing big patches

2011-01-11 Thread Frank Barknecht
Hi,

On Tue, Jan 11, 2011 at 01:25:33PM -0500, Mathieu Bouchard wrote:
 OTOH, another way to deal with a slow interpreter is to pass fewer,
 bigger messages to objects that do more work at once. This is much of
 the original idea for creating GridFlow.

It's also the idea behind the BSP approach I described in my LAC2010 paper:
http://markmail.org/message/xnerkchl24j6p42k where calculations you'd typically
do in message space are made with signal objects.
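
To make the trade-off concrete, here is a minimal sketch of a signal
object (a hypothetical [runsum~], not an example from the paper) whose
perform routine handles a whole DSP block per call; doing a calculation
this way pays the per-call overhead once per block of samples, whereas
the message-domain equivalent pays the full dispatch cost for every
single value.

#include "m_pd.h"

static t_class *runsum_tilde_class;

typedef struct _runsum_tilde {
    t_object x_obj;
    t_sample x_acc;   /* running total, kept at signal rate */
    t_float  x_f;     /* needed by CLASS_MAINSIGNALIN */
} t_runsum_tilde;

static t_int *runsum_tilde_perform(t_int *w)
{
    t_runsum_tilde *x = (t_runsum_tilde *)(w[1]);
    t_sample *in  = (t_sample *)(w[2]);
    t_sample *out = (t_sample *)(w[3]);
    int n = (int)(w[4]);
    t_sample acc = x->x_acc;
    while (n--)                    /* the whole block in one call */
        *out++ = (acc += *in++);
    x->x_acc = acc;
    return (w + 5);
}

static void runsum_tilde_dsp(t_runsum_tilde *x, t_signal **sp)
{
    dsp_add(runsum_tilde_perform, 4, x,
        sp[0]->s_vec, sp[1]->s_vec, (t_int)sp[0]->s_n);
}

static void *runsum_tilde_new(void)
{
    t_runsum_tilde *x = (t_runsum_tilde *)pd_new(runsum_tilde_class);
    x->x_acc = 0;
    outlet_new(&x->x_obj, &s_signal);
    return (void *)x;
}

void runsum_tilde_setup(void)
{
    runsum_tilde_class = class_new(gensym("runsum~"),
        (t_newmethod)runsum_tilde_new, 0, sizeof(t_runsum_tilde),
        CLASS_DEFAULT, 0);
    CLASS_MAINSIGNALIN(runsum_tilde_class, t_runsum_tilde, x_f);
    class_addmethod(runsum_tilde_class, (t_method)runsum_tilde_dsp,
        gensym("dsp"), A_CANT, 0);
}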

Ciao


Re: [PD] optimizing big patches

2011-01-10 Thread Ludwig Maes
Wow, I always felt message passing was unnecessarily expensive, but I
didn't realise it was that expensive! I seriously think it would be good
to have a Pd front end for gcc; a few of us should take the time to learn
GIMPLE and implement a "compile" menu item to compile patches/subpatches.

2011/1/10 Mathieu Bouchard ma...@artengine.ca:
 On Sun, 9 Jan 2011, Pedro Lopes wrote:

 I guess the question could go a bit further: how can we devise a profiling
 system for a dataflow programming environment?

 I made two or three of those... GridFlow had several incarnations of such a
 thing but it only worked for GridFlow's own objects.

 Then I made one for the whole of Pd, and it's somewhere in the DesireData
 branch, but it caused occasional crashes for mysterious reasons, and no-one
 else looked at the code.

 Here's a screenshot of the latter:

  http://artengine.ca/desiredata/gallery/simple-benchmark.png

 You can see that [cos] is more than twice as slow as [*], but [t f f] minus
 those two is also a lot, and that's the cost of message-passing, because
 [t f f] doesn't do any processing of its own. And so on... the top number is
 the total time for the first message to return (every message passed down a
 wire is followed later by an opposite movement back up once the job is
 done... that's the stack).

 GridFlow's profiler instead had a menu in Pd's main window with a "reset"
 and a "dump"; the latter would print non-cumulative measurements (i.e. not
 including the time spent in the objects that messages were sent to) in the
 console (or in the terminal, back when there wasn't a console), sorted by
 decreasing importance.

 Ideally we would have both cumulative and non-cumulative figures, because
 neither is nearly as useful as both together.






[PD] optimizing big patches

2011-01-09 Thread ronni montoya
Hello, one question: if I have a big patch with a lot of subpatches,
is it possible to know which parts of the whole patch are consuming
more CPU resources?

Is something like this possible?


thanks


R.



Re: [PD] optimizing big patches

2011-01-09 Thread João Pais
Take them out and see the changes, or copy them into a new patch and
close the rest.



Hello, one question: if I have a big patch with a lot of subpatches,
is it possible to know which parts of the whole patch are consuming
more CPU resources?

Is something like this possible?


thanks


R.






Re: [PD] optimizing big patches

2011-01-09 Thread Michael Matthews

On 1/9/2011 2:09 PM, ronni.mont...@gmail.com wrote:

Hello, one question: if I have a big patch with a lot of subpatches,
is it possible to know which parts of the whole patch are consuming
more CPU resources?

Is something like this possible?


   One way you could do this would be to open each subpatch individually
   and then use the load meter (Media -> Load Meter) to see what the CPU
   load for each subpatch is, though this may take a while if you have a
   lot of subpatches.  You could also estimate it by looking at where
   most of your audio/DSP processing is happening; those parts are most
   likely to be the most CPU-intensive parts of your patch.


thanks


R.



Re: [PD] optimizing big patches

2011-01-09 Thread Bastiaan van den Berg
 On 1/9/2011 2:09 PM, ronni.mont...@gmail.com wrote:

 Hello, one question: if I have a big patch with a lot of subpatches,
 is it possible to know which parts of the whole patch are consuming
 more CPU resources?

 Is something like this possible?


This sounds great. What about spawning each subpatch in a new thread and
having a simple process monitor in the GUI?
That would also take good advantage of the newer processor designs.

--
buZz


Re: [PD] optimizing big patches

2011-01-09 Thread Mathieu Bouchard

On Sun, 9 Jan 2011, Pedro Lopes wrote:

I guess the question could go a bit further: how can we devise a
profiling system for a dataflow programming environment?


I made two or three of those... GridFlow had several incarnations of such 
a thing but it only worked for GridFlow's own objects.


Then I made one for the whole of Pd, and it's somewhere in the DesireData 
branch, but it caused occasional crashes for mysterious reasons, and 
no-one else looked at the code.


Here's a screenshot of the latter:

  http://artengine.ca/desiredata/gallery/simple-benchmark.png

You can see that [cos] is more than twice as slow as [*], but [t f f]
minus those two is also a lot, and that's the cost of message-passing,
because [t f f] doesn't do any processing of its own. And so on... the
top number is the total time for the first message to return (every
message passed down a wire is followed later by an opposite movement
back up once the job is done... that's the stack).


GridFlow's profiler instead had a menu in Pd's main window with a "reset"
and a "dump"; the latter would print non-cumulative measurements (i.e.
not including the time spent in the objects that messages were sent to)
in the console (or in the terminal, back when there wasn't a console),
sorted by decreasing importance.


Ideally we would have both cumulative and non-cumulative figures, because 
neither is nearly as useful as both together.
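
For what it's worth, here is a very rough sketch (not the DesireData or
GridFlow code; the names and the bookkeeping are invented, and timing
uses POSIX clock_gettime()) of how a single timed dispatch could keep
both figures at once: "cumulative" includes everything the object
triggers downstream, while "self" subtracts the time its callees
reported.

#include <time.h>
#include "m_pd.h"

#define PROF_MAXDEPTH 256

typedef struct _profentry {
    double cumulative;   /* time in this object plus all downstream work */
    double self;         /* cumulative minus time charged to callees */
} t_profentry;

static double prof_child_time[PROF_MAXDEPTH];  /* one slot per nesting level */
static int prof_depth = 0;

static double prof_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + 1e-9 * ts.tv_nsec;
}

/* dispatch one message through Pd's normal path, but time it; assumes
   every send is routed through prof_send so nested calls land here too */
static void prof_send(t_profentry *e, t_pd *target,
    t_symbol *sel, int argc, t_atom *argv)
{
    double t0, dt;
    if (prof_depth >= PROF_MAXDEPTH - 1)       /* too deep: just dispatch */
    {
        pd_typedmess(target, sel, argc, argv);
        return;
    }
    t0 = prof_now();
    prof_child_time[++prof_depth] = 0;         /* callees accumulate here */
    pd_typedmess(target, sel, argc, argv);     /* the real message dispatch */
    dt = prof_now() - t0;
    prof_depth--;
    e->cumulative += dt;                       /* includes downstream work */
    e->self += dt - prof_child_time[prof_depth + 1];
    prof_child_time[prof_depth] += dt;         /* charge our time to caller */
}

Hooking something like this into every wire is the intrusive part; the
arithmetic itself is just a stack of per-depth child-time counters.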




Re: [PD] optimizing big patches

2011-01-09 Thread Mathieu Bouchard

On Mon, 10 Jan 2011, Bastiaan van den Berg wrote:

This sounds great. What about spawning each subpatch in a new thread
and having a simple process monitor in the GUI? That would also take
good advantage of the newer processor designs.


Pd's execution order model (messages are evaluated depth-first,
deterministically, one at a time) forbids launching threads transparently
in any sort of meaningful way at the patching level.

