Re: [Haskell-cafe] Re: Hardware

2007-06-05 Thread Dan Piponi
On 6/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: >> it seems that we are now moving right in this direction with GPUs. They are no good: GPUs have no synchronisation between them, which is needed for graph reduction. GPUs are intrinsically parallel devices and might work very well for par
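A minimal Haskell sketch of the synchronisation in question, assuming GHC's Control.Parallel; the names and the workload are illustrative only. Both branches share one thunk, so whichever spark reaches it second must wait for the result the other is computing, which is the kind of inter-unit synchronisation graph reduction relies on:

import Control.Parallel (par, pseq)

-- Graph-reduction style sharing: 'a' and 'b' both depend on the same
-- thunk 'shared'.  Whichever evaluator gets there second blocks until
-- the first has produced the value.
sharedWork :: Int -> Int
sharedWork n = a `par` (b `pseq` (a + b))
  where
    shared = sum [1 .. n]   -- thunk shared by both branches
    a      = shared + 1
    b      = shared * 2

main :: IO ()
main = print (sharedWork 1000000)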

Re: [Haskell-cafe] Re: Hardware

2007-06-05 Thread [EMAIL PROTECTED]
> it seems that we are now moving right in this direction with GPUs. I was just thinking that GPUs might make a good target for a reduction language like Haskell. They are hugely parallel, and they have the commercial momentum to keep them current. It also occurred to me that the Cell processor (

Re: [Haskell-cafe] Re: Hardware

2007-06-05 Thread [EMAIL PROTECTED]
But a more efficient computational model exists. If a CPU consists of a huge number of execution engines that synchronise their operations only when one unit uses results produced by another, then we get a processor with a huge level of natural parallelism, one that is friendlier to FP programs. It seems that
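A hedged software sketch of that model, using GHC's Control.Parallel.Strategies (the names and the workload are made up for illustration): the two units run with no explicit coordination, and synchronisation happens only where one result is actually consumed.

import Control.Parallel.Strategies (rpar, rseq, runEval)

-- A stand-in for the job a single execution unit would perform.
workUnit :: Int -> Int
workUnit n = sum [1 .. n]

-- Two work units are sparked with no coordination between them; the
-- only synchronisation is where the combining step needs their
-- results, mirroring a data-dependency-driven processor.
dependencyDriven :: Int -> Int -> Int
dependencyDriven x y = runEval $ do
  a <- rpar (workUnit x)   -- begins evaluating in parallel
  b <- rpar (workUnit y)   -- likewise, no explicit synchronisation
  _ <- rseq a              -- wait only because (+) needs 'a'
  _ <- rseq b              -- ... and 'b'
  return (a + b)

main :: IO ()
main = print (dependencyDriven 1000000 2000000)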

[Haskell-cafe] Re: Hardware

2007-06-04 Thread Al Falloon
Bulat Ziganshin wrote: > it seems that we are now moving right in this direction with GPUs. I was just thinking that GPUs might make a good target for a reduction language like Haskell. They are hugely parallel, and they have the commercial momentum to keep them current. It also occurred to me that
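For a hedged illustration of what such a target can look like in practice: the later accelerate library embeds flat array kernels of exactly this shape in Haskell and can compile them for GPUs. The sketch below runs the kernel with accelerate's reference interpreter; the function name and input are invented for the example.

import qualified Data.Array.Accelerate as A
import Data.Array.Accelerate (Z(..), (:.)(..))
import Data.Array.Accelerate.Interpreter (run)

-- An element-wise kernel with no dependencies between elements:
-- the shape of computation a GPU back end handles well.
squareAll :: A.Acc (A.Vector Int) -> A.Acc (A.Vector Int)
squareAll = A.map (\x -> x * x)

main :: IO ()
main = print (run (squareAll (A.use input)))
  where
    input = A.fromList (Z :. 10) [0 .. 9]   -- a ten-element input vector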

[Haskell-cafe] Re: Hardware

2007-06-02 Thread Jon Fairbairn
"Neil Davies" <[EMAIL PROTECTED]> writes: > Bulat > > That was done to death as well in the '80s - data-flow architectures > where execution was data-availability driven. The issue becomes > one of getting the most out of the available silicon area. Unfortunately > with very small amounts of comp

[Haskell-cafe] Re: Hardware

2007-06-02 Thread Jon Fairbairn
"Claus Reinke" <[EMAIL PROTECTED]> writes: > > either be slower than mainstream hardware or would be > > overtaken by it in a very short space of time. > > I'd like to underline the latter of these two points, and I'm > impressed that you came to that conclusion as early as the > eighties. Well, S

Re: [Haskell-cafe] Re: Hardware

2007-06-02 Thread Neil Davies
Bulat, That was done to death as well in the '80s: data-flow architectures, where execution was data-availability driven. The issue becomes one of getting the most out of the available silicon area. Unfortunately, with very small amounts of computation per work unit you: a) spend a lot of time/ar
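The same granularity issue shows up in software-level parallelism. A hedged sketch using GHC's Control.Parallel.Strategies (the chunk size and workload are arbitrary): amortise the per-unit overhead by grouping many small work items into one spark.

import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Sparking every tiny item spends more on bookkeeping than on useful
-- work, so evaluate the list in chunks, one spark per chunk.  The
-- chunk size of 1000 is only an example.
chunkedSquares :: [Int] -> Int
chunkedSquares xs = sum (map square xs `using` parListChunk 1000 rdeepseq)
  where
    square n = n * n   -- stand-in for a small per-item computation

main :: IO ()
main = print (chunkedSquares [1 .. 100000])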

Re: [Haskell-cafe] Re: Hardware

2007-06-02 Thread Bulat Ziganshin
Hello Jon, Friday, June 1, 2007, 11:17:07 PM, you wrote: > (we had the possibility of funding to make something). We > had lots of ideas, but after much arguing back and forth the > conclusion we reached was that anything we could do would > either be slower than mainstream hardware or would be

Re: [Haskell-cafe] Re: Hardware

2007-06-01 Thread Claus Reinke
> either be slower than mainstream hardware or would be overtaken by it in a very short space of time. I'd like to underline the latter of these two points, and I'm impressed that you came to that conclusion as early as the eighties. I'm not into hardware research myself, but while I was working

[Haskell-cafe] Re: Hardware

2007-06-01 Thread Jon Fairbairn
Andrew Coppin <[EMAIL PROTECTED]> writes: > OK, so... If you were going to forget everything we humans > know about digital computer design - the von Neumann > architecture, the fetch/decode/execute loop, the whole > shooting match - and design a computer *explicitly* for the > purpose of executing