On Saturday, 22 September 2018 at 02:26:41 UTC, Chris Katko wrote:
On Saturday, 22 September 2018 at 02:13:58 UTC, Chris Katko wrote:
On Friday, 21 September 2018 at 12:15:59 UTC, Ali Çehreli wrote:
On 09/21/2018 12:25 AM, Chris Katko wrote:
[...]

You can use a free-standing function as a workaround, which is included in the following chapter that explains most of std.parallelism:

  http://ddili.org/ders/d.en/parallelism.html

That chapter is missing e.g. the newly-added fold():

  https://dlang.org/phobos/std_parallelism.html#.TaskPool.fold

Ali

Okay... so I've got it running. The problem is, it uses tons of RAM, proportional to the size of the working set.

import std.math : sqrt;
import std.random : uniform;

// Reducer for the parallel reduce: just adds two partial sums.
T test(T)(T x, T y)
{
    return x + y;
}

// One Monte Carlo sample: 1 if the random point lands inside the
// unit circle, 0 otherwise. The parameter is an unused index; it
// only exists so amap/map have something to pass in.
double monte(T)(T x)
{
    double v = uniform(-1F, 1F);
    double u = uniform(-1F, 1F);
    if (sqrt(v*v + u*u) < 1.0)
        return 1;
    else
        return 0;
}

auto taskpool = new TaskPool();
sum = taskpool.reduce!(test)(
    taskpool.amap!monte(
        iota(num)
    )
);
taskpool.finish(true);

num = 1,000,000 becomes ~8 MB
num = 10,000,000 becomes ~80 MB
num = 100,000,000 I can't even run, because it dies with "Exception: Memory Allocation failed"
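
That scaling lines up with amap eagerly storing every result: num doubles at 8 bytes apiece. A quick sanity check of the arithmetic (my own back-of-the-envelope, not anything from the docs):

import std.stdio;

void main()
{
    // If amap allocates one double per element up front, the
    // results buffer alone accounts for the numbers above.
    foreach (n; [1_000_000, 10_000_000, 100_000_000])
        writefln("%s results x %s bytes = ~%s MB",
                 n, double.sizeof, n * double.sizeof / 1_000_000);
}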

Also, when I don't call .finish(true) at the end, the program just sits there forever after the work completes, as if one of the threads won't terminate, and I have to Ctrl-C it. But the docs and examples don't seem to indicate I should need that...
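
If I'm reading the docs right, the worker threads in a TaskPool you construct yourself are non-daemon (unlike the global taskPool), so the process can't exit until finish() or stop() is called. On that assumption, a scope(exit) guard seems like the tidy way to handle it:

import std.parallelism;

void main()
{
    auto taskpool = new TaskPool();
    // Non-daemon workers keep the process alive, so make sure
    // they get shut down no matter how we leave this scope.
    scope(exit) taskpool.finish(true);

    // ... parallel work goes here ...
}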

So I looked into it. It's amap that explodes in RAM.

Per the docs, amap has "less overhead but more memory usage," while map has more overhead but less memory usage and "avoids the need to keep all results in memory."
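
If I'm reading the signature right, map also takes an optional buffer size that caps how many evaluated results it keeps around at once, something like this (the 10_000 is just an illustrative value, and I'm assuming the second parameter is bufSize):

    // Buffer at most ~10_000 evaluated results at a time.
    auto results = taskpool.map!(monte)(iota(num), 10_000);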

But if I make that call with map... it doesn't compile! I get:

Error: no [] operator overload for type std.parallelism.TaskPool.map!(monte).map!(Result).map.Map

Simply changing amap to map here:

    sum = taskpool.reduce!(test)(
        taskpool.map!(monte)(range)
    );
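
My guess is that TaskPool.reduce indexes its input (hence the [] complaint), so it wants a random-access range, while TaskPool.map only hands back an input range. One workaround that seems to keep memory flat, assuming I've got that right, is to do the final reduction serially over the lazy parallel map with std.algorithm's sum:

import std.algorithm.iteration : sum;
import std.math : sqrt;
import std.parallelism;
import std.random : uniform;
import std.range : iota;

double monte(T)(T x)  // same sampler as above
{
    double v = uniform(-1F, 1F);
    double u = uniform(-1F, 1F);
    return sqrt(v*v + u*u) < 1.0 ? 1.0 : 0.0;
}

void main()
{
    enum num = 10_000_000;
    auto taskpool = new TaskPool();
    scope(exit) taskpool.finish(true);

    // map is lazy and only buffers a bounded window of results,
    // so memory stays flat; sum then drains the stream serially.
    // (uniform uses a thread-local generator, so the parallel
    // calls shouldn't stomp on each other.)
    double total = taskpool.map!monte(iota(num)).sum;
}

The reduction step gives up parallelism that way, but for a cheap + over doubles, the mapping is where all the work is anyway.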
