On 11/14/2012 10:17 PM, Jonathan M Davis wrote:
> I would point out though that given how expensive disk writes are, unless
> you're doing a lot of work within the parallel foreach loop, there's a good
> chance that it would be more efficient to use std.concurrency and pass the
> writes to another thread to do the writing. And the loop itself should still
> be able to be a parallel foreach, so you wouldn't have to change much
> otherwise. But with the synchronized block, you'll probably end up with each
> thread spending a lot of its time waiting on the lock, which will end up
> making the whole thing effectively single-threaded.
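For what it's worth, the writer-thread idea described above might look roughly like this. This is just a minimal sketch: the calculation is a placeholder, and writing to stdout stands in for whatever the real output destination is.

```d
import std.concurrency;
import std.parallelism;
import std.range;
import std.stdio;

// The writer thread owns the output; workers just send it messages,
// so no lock is held while the expensive calculations run.
void writer()
{
    bool done = false;
    while (!done)
    {
        receive(
            (int i, real r) { writefln("case %s: %s", i, r); },
            (bool stop) { done = stop; }
        );
    }
}

void main()
{
    enum N = 16;
    auto tid = spawn(&writer);

    foreach (i; parallel(iota(0, N + 1)))
    {
        real r = i * i; // placeholder for the real calculation
        send(tid, i, r); // hand the result off instead of writing it here
    }

    send(tid, true); // workers are done; tell the writer to stop
}
```

The parallel foreach doesn't change shape at all; the only difference is that each iteration ends with a cheap send instead of a synchronized disk write.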

Do you mean that all the synchronized {} blocks have to complete before the threads can be terminated?

In the end the solution I came to was something like this:

    enum N = 16;   // number of cases
    shared real[N + 1] results;

    foreach (i; parallel(iota(0, N + 1)))
    {
        // ... do a lot of calculation ...
        results[i] = computeCase(i); // stand-in for the result of the calculation
    }

    // and now at the end we write out all the data

... which seems to work, although I'm not 100% confident about its safety.
