I think we are in the usual situation where determinism is clearly
possible but has a cost.
Parallel line-delta will work something like this:
- two processors independently remove some chunk of the file
- we merge the two results and check whether the merged file is still
  interesting
The overhead of re-running the interestingness test on each merged result
is the problem; tuning will be needed to make this work well.
I was planning to just use the UNIX merge utility, to keep things simple.
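For concreteness, here is a rough Python sketch of a single parallel step as
I imagine it; is_interesting(), test.sh, and the chunk-selection policy are
placeholders rather than real code, and the three-way merge is done by
shelling out to merge(1):

#!/usr/bin/env python3
# Rough sketch of one parallel line-delta step: two workers independently
# drop a chunk of lines, the two variants are combined with the RCS
# merge(1) utility (a diff3-based three-way merge), and the merged file
# is re-checked with the interestingness test before being accepted.

import os
import shutil
import subprocess
import tempfile

def is_interesting(path):
    # Placeholder: run the user's interestingness test on `path` and
    # report whether the file still triggers the behavior of interest.
    return subprocess.run(["./test.sh", path]).returncode == 0

def remove_chunk(lines, start, length):
    # One worker's action: delete `length` lines starting at `start`.
    return lines[:start] + lines[start + length:]

def write_tmp(lines):
    fd, path = tempfile.mkstemp(suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.writelines(lines)
    return path

def try_parallel_step(orig_path, chunk_a, chunk_b):
    with open(orig_path) as f:
        lines = f.readlines()

    # Two "processors", each removing its own chunk of the file.
    variant_a = write_tmp(remove_chunk(lines, *chunk_a))
    variant_b = write_tmp(remove_chunk(lines, *chunk_b))

    # Three-way merge of the two variants against the common original.
    # "merge -p" writes the merged text to stdout; a nonzero exit means
    # conflicts or trouble, in which case we skip the merge entirely.
    proc = subprocess.run(["merge", "-p", variant_a, orig_path, variant_b],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return False

    merged = write_tmp(proc.stdout.splitlines(keepends=True))

    # This is the extra cost: every merged result needs another run of
    # the interestingness test before it can replace the current file.
    if is_interesting(merged):
        shutil.copy(merged, orig_path)
        return True
    return False

A worker pool would call try_parallel_step() repeatedly with different
chunk pairs, falling back to accepting just one of the variants whenever
the merge conflicts or the merged file fails the test.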
John
On 06/27/2012 08:35 AM, Eric Eide wrote:
John> But I might try to parallelize at least the line_reduce pass
John> across cores. This doesn't sound hard at all, and the speedup
John> should not be much worse than linear.
I wonder if this inherently converges to a unique solution, or if one
would need to treat all of the parallelism as speculative.
High-quality output is good. Deterministic output is also good :-).
Eric.