Hi all,

On Tue, May 10, 2011 at 5:51 PM, Derek M Jones <[email protected]> wrote:
> Markus
>
>> I imagine that your tool can benefit from distribution of tasks to various
>> threads more than it might lose because of synchronisation costs for
>> parallelisation.
>> How are the chances to reuse any thread pool implementation?
>
> I think Coccinelle should concentrate on doing one thing well
> and not get side tracked into parallelization of workloads.
>
> What is needed is a tool that divides up the work and dispatches
> it to multiple instances of Coccinelle.
There are options for that:

concurrency
-----------------------------------------------------------------------
-index    the processor to use for this run of spatch
-max      the number of processors available

You can look at tools/distributed/spatch_linux.c for an example of how
to use them.

Alternatively, I use these options in the Makefile generated by
Herodotos. That is another way to dispatch work.
http://coccinelle.lip6.fr/herodotos/herodotos.php

I then run make -j X to get X jobs in parallel.

> I have been reading about how people have been experimenting with
> Amazon's cloud computing service. It looks relatively cheap and
> at some point I will investigate it.
>
> I am wondering whether to split my eventual 100'ish scripts
> across multiple nodes or to split the source across the nodes.
> I guess the 'traditional' Google approach is to split up the data.
>
> --
> Derek M. Jones         tel: +44 (0) 1252 520 667
> Knowledge Software Ltd mailto:[email protected]
> Source code analysis   http://www.knosof.co.uk
> _______________________________________________
> Cocci mailing list
> [email protected]
> http://lists.diku.dk/mailman/listinfo/cocci
> (Web access from inside DIKUs LAN only)

--
Nicolas Palix
http://sardes.inrialpes.fr/~npalix/
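For concreteness, the -index/-max dispatch mentioned in the reply could be sketched as a small shell loop that launches one spatch instance per processor. This is only an illustration: the processor count, rule.cocci, and the linux/ directory are made-up values, and the loop just prints the commands it would run rather than executing spatch (in practice you would run each command in the background and then wait for them all).

```shell
#!/bin/sh
# Hypothetical sketch: fan out one spatch run per processor.
# MAX, rule.cocci, and linux/ are illustrative, not taken from the thread.
MAX=4
i=0
while [ "$i" -lt "$MAX" ]; do
    # Instance i handles its own 1/MAX share of the source files;
    # echo the command instead of running it, so the sketch is self-contained.
    echo spatch -index "$i" -max "$MAX" -sp_file rule.cocci -dir linux/
    i=$((i + 1))
done
```

Appending `&` to each command and adding a final `wait` would run the instances concurrently, which is essentially what tools/distributed/spatch_linux.c and the Herodotos-generated Makefile automate.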
