Andrei:

Overall it looks nice enough.

> * I think it does make sense to evaluate a parallel map lazily by using 
> a finite buffer. Generally map looks the most promising so it may be 
> worth investing some more work in it to make it "smart lazy".

I agree.
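To make "smart lazy" concrete, what I have in mind is a map whose result is a lazy range that keeps only a bounded number of evaluated elements in flight, so it composes with other lazy ranges without materializing the whole output. A rough usage sketch, written as if map were already the lazy buffered variant on the global taskPool (the names and the way the buffer size gets set are just my assumptions, not the current module):

import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main() {
    // A long lazy source range; nothing is computed here yet.
    auto source = iota(1, 1_000_001);

    // A lazy parallel map: the workers fill a finite buffer of results
    // ahead of the consumer instead of allocating the whole output array.
    auto squares = taskPool.map!"a * a"(source);

    // Only as much work is done as the consumer actually pulls.
    foreach (x; squares) {
        if (x > 10_000) {
            writeln(x);
            break;
        }
    }
}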

Some more notes:

Later, a parallel max() and min() (that work on sequences) may be good additions.
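Until then, they can be built on top of the parallel reduce, because max and min are associative. A minimal sketch (using the global taskPool; that passing std.algorithm's max/min aliases straight to reduce works is my assumption):

import std.algorithm : max, min;
import std.parallelism : taskPool;
import std.range : iota;
import std.array : array;
import std.stdio : writeln;

void main() {
    auto nums = iota(1, 1_000_001).array();

    // max and min are associative, so a parallel reduce computes them
    // correctly until dedicated parallel max()/min() functions exist.
    auto biggest  = taskPool.reduce!max(nums);
    auto smallest = taskPool.reduce!min(nums);
    writeln(smallest, " ", biggest);
}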

The syntax for the parallel foreach is not the best conceivable, but for now 
it's OK:
foreach (i; pool.parallel(iota(squares.length), 100)) {
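For reference, this is the whole shape of such a loop; a minimal self-contained sketch (I use the global taskPool as the pool, and the array length and the 100 work-unit size are arbitrary):

import std.parallelism : taskPool;
import std.range : iota;

void main() {
    auto pool = taskPool;
    auto squares = new double[10_000];

    // Each worker thread receives chunks of 100 indexes at a time and
    // writes into its own slots of the shared array.
    foreach (i; pool.parallel(iota(squares.length), 100)) {
        squares[i] = i * i;
    }
}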

This is about reduce; unfortunately there is no @associative annotation, and 
it's not easy for a compiler to enforce that the operator is associative:
> Because this operation is being carried out in parallel, fun must be 
> associative. 
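To show the pitfall: "a + b" is associative, so the serial and parallel reduce agree, while "a - b" is not, and the parallel result then depends on how the range gets split, with nothing catching the mistake at compile time. A small sketch (std.algorithm.reduce for the serial version, the global taskPool for the parallel one):

import std.algorithm : reduce;
import std.parallelism : taskPool;
import std.range : iota;
import std.array : array;
import std.stdio : writeln;

void main() {
    auto nums = iota(1, 10_001).array();

    // Associative operator: both print 50005000.
    writeln(reduce!"a + b"(nums));
    writeln(taskPool.reduce!"a + b"(nums));

    // Non-associative operator: the parallel answer can differ from the
    // serial one, and no annotation or type check flags the problem.
    writeln(reduce!"a - b"(nums));
    writeln(taskPool.reduce!"a - b"(nums));
}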

I think a parallel map() has to require the mapping function to be pure. It's 
not a strict requirement (and it means you can't add debug prints or logging 
inside the mapped function), but logically it looks better.
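In code the idea is that only functions like the one below would be accepted; this is just a sketch of the proposed restriction, not something the module enforces now (I reuse the pool.map call shape from the example in the next note):

import std.parallelism : taskPool;
import std.range : iota;
import std.array : array;

// A pure mapping function: no global state is touched and no I/O is
// possible, so running many instances at once in different threads is safe.
pure double halfSquare(int x) {
    return 0.5 * x * x;
}

void main() {
    auto pool = taskPool;
    auto numbers = iota(1, 1_001).array();
    auto results = new double[numbers.length];

    // Under the proposed rule only pure callables like halfSquare would
    // compile here; a function that calls writeln() would be rejected.
    pool.map!halfSquare(numbers, 100, results);
}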

A more general note, which applies to the map() of std.algorithm too:
pool.map!("a * a", "-a")(numbers, 100, results);
In general, accepting multiple functions in map() doesn't look like a good 
idea to me. Instead of multiple functions I prefer (and I think it's more 
useful) a single function plus optionally multiple iterables, as in Python 
and dlibs1. This is useful for functions that take more than one argument 
(a D sketch follows the Python example):
>>> s = [1, 2, 3]
>>> p = [2, 4, 6]
>>> map(pow, s, p)
[1, 16, 729]
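In D the same effect is already expressible by zipping the ranges and mapping one function over the pairs; a small serial sketch with std.range.zip and std.algorithm.map (a parallel map could take the zipped range the same way):

import std.algorithm : map;
import std.math : pow;
import std.range : zip;
import std.stdio : writeln;

void main() {
    auto s = [1, 2, 3];
    auto p = [2, 4, 6];

    // One function, several input ranges: zip pairs the elements up and
    // the lambda unpacks them, like Python's map(pow, s, p).
    auto powers = zip(s, p).map!(t => pow(t[0], t[1]));
    writeln(powers);  // [1, 16, 729]
}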

Bye,
bearophile
