Hi Keean,
Keean Schupke wrote:
Jost Berthold wrote:
In order to force the *complete* evaluation of your result, you could use Evaluation Strategies. Strategies are a concept introduced for increasing parallelism in Glasgow parallel Haskell. Parallelism and lazy evaluation are in a way contrary aims, since you want your parallel evaluation to start ASAP, even if there is no actual demand on its result.
I think this is not quite right... surely you don't want to run the function
(even in parallel) if there is _no_ demand on its result.
In the given context, I just wanted to point readers to the cited paper as a nice solution for a concrete problem (not related to parallelism at all).
Just to explain a bit further what I wanted to summarize in one sentence in the first mail (thoughts behind the word "actual" above):
You are right: of course, a sub-result which is not needed at all ("no demand") should not be evaluated. However, there are cases where a programmer wants to point at some sub-result to say "Hey, you will need this in a minute" / "Please compute this sub-result on a different machine [if any is idle]" (semi-explicit or explicit parallelism), which is the state of the art in today's parallel Haskell extensions.
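To make the semi-explicit case concrete, here is a minimal sketch using par and pseq from Control.Parallel (expensiveA and expensiveB are made-up placeholders, not anything from the discussion above):

    import Control.Parallel (par, pseq)

    -- Two hypothetical expensive computations, just for illustration.
    expensiveA, expensiveB :: Integer -> Integer
    expensiveA n = sum     [1 .. n]
    expensiveB n = product [1 .. n]

    -- 'par' says "you will need a in a minute, start it if a core is idle";
    -- 'pseq' makes sure b is evaluated before the two results are combined.
    combined :: Integer -> Integer
    combined n = a `par` (b `pseq` (a + b))
      where
        a = expensiveA n
        b = expensiveB n

The annotations only advise the runtime system; the meaning of combined is unchanged.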
Simplistic classical example: a naive parallel map, one process per element. If the main function needs the list, it will create demand for the elements *one after the other*, which leads to purely sequential execution.
    let neededList = map f input            -- computation, looks sequential
                       `using` parList rnf  -- strategy: completely evaluate elements in parallel
    in  mungeForResult neededList
Without the strategy, the list would be processed element by element, since this is how lists are consumed in Haskell. Strategies are a way to say "evaluate this in parallel, to this degree" without interfering with the computation you specify. This separation of the computation from its degree of evaluation is exactly the point of the cited paper.
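Spelled out as a complete module, the fragment above looks roughly like this (a sketch against the current Control.Parallel.Strategies API, where rdeepseq plays the role of the paper's rnf; f, input and mungeForResult are made-up placeholders):

    import Control.Parallel.Strategies (using, parList, rdeepseq)

    -- Placeholder worker, input and consumer, only to keep the sketch self-contained.
    f :: Int -> Int
    f x = x * x

    input :: [Int]
    input = [1 .. 10000]

    mungeForResult :: [Int] -> Int
    mungeForResult = sum

    main :: IO ()
    main =
      let neededList = map f input                 -- the computation, unchanged
                         `using` parList rdeepseq  -- evaluate all elements in parallel
      in  print (mungeForResult neededList)

Compiled with -threaded and run with +RTS -N, the elements are evaluated as parallel sparks instead of one after the other.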
The compiler will know at compile time whether the result is required in a lot of cases... the only ones that cause problems are where program flow depends on IO actions. In that case you implement speculative execution (like a modern CPU does): if you don't know whether a function result will be required and you have a free execution unit, you run the function. If at a later time it becomes apparent that the results of some running functions will not be required, you kill them, throw away the data, and free the execution unit to do something more useful.
Yes: the compiler could do a strictness analysis and hopefully (it being a safe, i.e. conservative, analysis) tell whether "neededList" is needed by "mungeForResult". In the case of algebraic data structures (like lists), things get a bit more complex (different degrees of strictness); special data-parallel language concepts weave such an automatism into the data structures they target. But often the programmer knows very well whether this is the case, and she may explicitly opt for speculative evaluation, or not. Whether it is explicit or "implemented" (which means, in a way, automatic): garbage collection in a parallel runtime system is able to detect unneeded results and will stop the corresponding computations.
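In GHC terms, that last point can be sketched like this (a made-up example: maybeNeeded is sparked speculatively with par, and if no demand ever arrives, the garbage collector simply discards the spark; nothing has to be killed by hand):

    import Control.Parallel (par)

    -- 'maybeNeeded' is sparked speculatively; whether it is ever demanded
    -- depends on the run-time flag 'useIt'.
    speculative :: Bool -> Integer -> Integer
    speculative useIt n =
      let maybeNeeded = sum [1 .. n]             -- hypothetical expensive sub-result
          result | useIt     = maybeNeeded + 1   -- demand arrives: the spark's work is reused
                 | otherwise = 0                 -- no demand: the spark is garbage collected
      in  maybeNeeded `par` result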
Cheers,
Jost