On Tuesday 28 of August 2012 16:34:10 erik quanstrom wrote:
> my knee-jerk reaction to my own question is that making it easier
> and more natural to parallelize dataflow.  a pipeline is just a really
> low-level way to talk about it.  the standard
>       grep x *.[ch]
> forces all the *.[ch] to be generated before 1 instance of grep runs on
> whatever *.[ch] evaluates to be.
> 
> but it would be okay for almost every use of this if *.[ch] were generated
> in parallel with any number of grep's being run.


(in Linux terms, sorry!)

you can get close with find|xargs -- xargs runs the command once for every -L <number> 
lines of input, and GNU xargs can even parallelize the invocations itself via -P <number>.


find . -name '*.[ch]' | xargs -L 8 grep REGEX

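if you want the parallelism in one place and filenames with odd characters handled, a 
rough GNU-flavoured variant (the -P 4 worker count and REGEX are just placeholders):


find . -name '*.[ch]' -print0 | xargs -0 -n 8 -P 4 grep -H REGEX


-print0/-0 keep whitespace in filenames intact, -P runs up to four greps at once, and 
-H forces the filename prefix even when an invocation ends up with a single file.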

-- 
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write 
you a short one. -- Blaise Pascal
