The conclusions were:
- in many grammars, sufficient parallelism can be detected using global flow analysis techniques
- when building parallel implementations, we get nice speedups that scale with the number of processors
- unfortunately, the communication overhead makes you lose speed, so in the end you are not much better off; in effect the nice speedups are on the overhead you introduced, which brings you back where you started
Another point to bear in mind is that the Dutch government, twenty years ago, funded a quite large project aiming at parallel implementation of functional programs. The main result of this project is a quite good sequential implementation of the functional language Clean.
Even further back, in the sixties, Burroughs (yes, the company with that beautiful Algol-60-based specialised hardware and operating system) spent a lot of money trying to construct parallel reduction machines (as did many others later, especially in Japan), trying to exploit all this implicitly available parallelism. All these projects failed because Moore's law overtook their results.
Doaitse Swierstra
On 20 Jan 2005, at 3:13, Ben Lippmeier wrote:
I thought the "lazy functional languages are great for implicit parallelism" thing died out some time ago - at least as far as running the programs on conventional hardware is concerned.
Designing an algorithm that breaks apart a "sequential" lazy program into parallel chunks of the appropriate size is **HARD** (with double asterisks). The time and space behavior of a lazy program is complex enough for the _programmer_ to reason about, let alone an automated analysis, which has no knowledge of what the program is actually trying to do.
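To make that concrete, here is a tiny made-up example (the names are mine): under lazy evaluation only five applications of 'slow' are ever demanded, so an analysis that blindly sparks every list element performs unbounded speculative work that the sequential program would have skipped entirely.

    firstFive :: [Int] -> [Int]
    firstFive xs = take 5 (map slow xs)
      where
        -- 'slow' stands in for some costly computation
        slow n = n + length (replicate 1000000 ())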
I think a more likely approach lies in the direction of the so-called "parallel strategies". If you haven't already, I would strongly suggest reading: Algorithm + Strategy = Parallelism, P. W. Trinder et al., Journal of Functional Programming, 1998.
You can get this paper from Simon Peyton Jones's homepage.
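For a flavour of what strategies look like, here is a minimal sketch (assuming GHC with the 'parallel' package, compiled with -threaded; the chunk size is an arbitrary choice of mine): the algorithm is written sequentially, and the parallelism is layered on separately as a strategy.

    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- Sequential algorithm; 'using' bolts the parallelism on afterwards.
    sumEuler :: Int -> Int
    sumEuler n = sum (map euler [1 .. n] `using` parListChunk 100 rdeepseq)
      where
        euler k = length [ m | m <- [1 .. k], gcd m k == 1 ]

Run with +RTS -N to spread the chunks over the available cores.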
Also, at the end of Hans-Wolfgang Loidl's thesis he develops a granularity analysis for a Haskell subset - one of the first steps in any kind of implicit parallelism. It's a pretty good effort, but at the end of it all it still relies on a pre-existing table of information about recursive functions. I think that these kinds of analyses tend to suffer from uncomputability problems more than most.
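As an illustration of what such an analysis has to get right, here is the usual hand-written workaround (a sketch only; the cutoff is picked by hand, which is exactly the information a granularity analysis would have to infer):

    import Control.Parallel (par, pseq)

    -- Spark subproblems only while they are big enough to pay for the
    -- spark overhead; below the cutoff, stay sequential.
    pfib :: Int -> Int -> Integer
    pfib cutoff n
      | n <= cutoff = nfib n
      | otherwise   = x `par` (y `pseq` x + y)
      where
        x = pfib cutoff (n - 1)
        y = pfib cutoff (n - 2)

    -- Plain sequential Fibonacci used below the cutoff.
    nfib :: Int -> Integer
    nfib n | n < 2     = 1
           | otherwise = nfib (n - 1) + nfib (n - 2)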
If you've still got your heart set on implicit parallelism, then there's a (very slow) simulator you might want to poke around with. I wrote it based around Clem Baker-Finch's "Abstract machine for parallel lazy evaluation", which supports fully speculative implicit parallelism.
There's a link to it on my homepage at http://cs.anu.edu.au/people/Ben.Lippmeier
Keean Schupke wrote:
I have to say I disagree... I feel Haskell is highly suited to implicit parallel execution... The key to "implicit" parallelisation is that it is implicit, not explicit, so the programmer should feel like they are programming a sequential language. If we can assume little memory-access penalty for threads running on other CPUs (a shared-cache model), it seems to be a matter of putting locks on the right structures and allowing any worker thread to take the next function ready to run from the scheduler.
Keean.
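For what it's worth, here is a toy rendering of the scheme Keean describes (all names and the job representation are mine): a shared queue guarded by locks (Chan is built on MVars) and worker threads that each take the next piece of ready work.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (forM_, forever, replicateM)

    main :: IO ()
    main = do
        jobs    <- newChan                -- MVar-locked shared work queue
        results <- newChan
        forM_ [1 .. 4 :: Int] $ \_ ->
            forkIO $ forever $ do
                job <- readChan jobs      -- take the next ready thunk
                writeChan results $! job  -- force it, report the result
        forM_ [1 .. 100 :: Int] $ \n -> writeChan jobs (n * n)
        total <- sum <$> replicateM 100 (readChan results)
        print (total :: Int)

Compiled with -threaded and run with +RTS -N, this puts the workers on separate cores; the hard part Ben points out above, deciding which thunks are worth queueing at all, is exactly what this sketch leaves out.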
