"Satnam Singh" <[EMAIL PROTECTED]> writes:
> I'm trying to find out about existing work on implicit parallel functional
> programming. I see that the Glasgow Haskell compiler has a parallel mode
> which can be used with PVM and there is interesting work with pH at MIT. Does
anyone know of any ot…
Bjorn Lisper wrote:
It depends on what you compare with. Multicore CPUs will probably have
cores that are simpler than current processor cores, which means you will
want to have some parallelism. Cf. a superscalar processor, which in a
sense really is a parallel machine, but where you add some comp…
[EMAIL PROTECTED] wrote:
Yes, today we have two processors on a chip, and uniprocessor
speed bumps are unlikely to overshadow the effort of parallelism
like they did 20 years ago. But we are also beginning to see
applications requiring thousands of machines to run. The so-called
grid computing maybe…
Keean Schupke:
>>A guess is that the first generation will support a shared memory model much
>>like the SMPs of today (shared main memory with on-chip cache(s), or some
>>other kind of local memory/memories). Here, I think implicit parallelism in
>>functional languages can be a win in some situations.
I'd like to add another meaning to running things in a distributed
way, i.e., scalability. Implicit parallelism should help the
application scale automatically as the number of nodes in the
cluster grows.
> But in the worst case it's just a sequential computation, so any gain
> from parallelism is still a gain...
The trade-offs involved look like they'd be very complicated in
practice. For instance, considering speculative execution on SMT /
multi-core environments:
- The mechanisms used to enab…
> First, there is a claim that functional languages facilitate parallel
> execution, which is undeniably true if the implementation is something
> like that of Haskell (no assignment statements mean no memory contention,
> etc.).
Careful here... no assignments in the source language doesn't
trans…
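The caveat above is well taken: purity guarantees that two subexpressions *may* be evaluated in parallel without races, but it says nothing about whether doing so is profitable on real hardware. A minimal sketch of the semi-explicit style GHC supports, using the `par` and `pseq` spark primitives from `GHC.Conc` (in base; compile with `-threaded` and run with `+RTS -N` to actually use multiple cores):

```haskell
import GHC.Conc (par, pseq)

-- Because parFib is pure, sparking the two recursive calls cannot
-- race: there are no assignments to contend on. `a `par` ...` hints
-- that a may be evaluated in parallel; `b `pseq` ...` forces b first
-- so the current thread does not just evaluate a itself.
parFib :: Int -> Integer
parFib n
  | n < 2     = fromIntegral n
  | otherwise = a `par` (b `pseq` (a + b))
  where
    a = parFib (n - 1)
    b = parFib (n - 2)

main :: IO ()
main = print (parFib 20)
```

Note that this is annotation, not implicit parallelism: the programmer still decides where the sparks go, and for small `n` the spark overhead easily exceeds the work.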
Bjorn Lisper wrote:
A guess is that the first generation will support a shared memory model much
like the SMPs of today (shared main memory with on-chip cache(s), or some
other kind of local memory/memories). Here, I think implicit parallelism in
functional languages can be a win in some situations. Thi…
I'd like to add my two cents worth in this debate...
I think the original poster was considering the standard multicore processors
soon to come, which can be expected to eventually overtake the processor
market. The answer depends a lot on what shape these processors will have:
A guess is that the…
| > I thought the "lazy functional languages are great for implicit
| > parallelism" thing died out some time ago - at least as far as
| > running the programs on conventional hardware is concerned.
Some quick thoughts.
1. Like Ben L, I don't believe in totally-automated parallelism from
lazy FP…
Lennart Augustsson and Thomas Johnsson got some encouraging results
fifteen years ago with their nu-G-machine. They compiled Lazy ML for a
shared memory multiprocessor, and benchmarked against the sequential LML
compiler, the precursor of hbc and at that time the best compiler for a
lazy functi…
In 1989 my first Ph.D. student, Matthijs Kuiper, defended his thesis
"Parallel Attribute Grammar Evaluation". One might see AGs as a limited
form of functional programming.
The conclusions were:
- in many grammars sufficient parallelism can be detected using global
flow analysis techniques
- w…
Ben Lippmeier wrote:
I thought the "lazy functional languages are great for implicit
parallelism" thing died out some time ago - at least as far as running
the programs on conventional hardware is concerned.
Designing an algorithm that breaks apart a "sequential" lazy program
into parallel chun…
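Ben's point is that automatically carving a lazy program into well-sized parallel chunks is the hard part; in practice the chunking is usually annotated by hand. A sketch of such a combinator, again using the spark primitives from `GHC.Conc` (the name `parMap'` is my own, not a library function):

```haskell
import GHC.Conc (par, pseq)

-- Spark the head element while recursing down the spine, so the
-- f x applications can be evaluated in parallel. Laziness alone
-- would delay each f x until demanded; the `par` annotation makes
-- the potential parallelism explicit.
parMap' :: (a -> b) -> [a] -> [b]
parMap' _ []     = []
parMap' f (x:xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x
    ys = parMap' f xs

main :: IO ()
main = print (sum (parMap' (* 2) [1 .. 100 :: Int]))
```

Even here the granularity problem remains: one spark per element is only worthwhile if `f` does enough work, which is exactly the judgment a fully implicit system would have to make on its own.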