There's an ongoing effort to use techniques from Lava (the Haskell-
based hardware description language) to target GPUs. Joel Svensson
[1] wrote his Master's thesis on this and is continuing the work in
his PhD, so if you ask kindly he might tell you more about it or
send you the thesis.
[1] http://www.chalmers.se/cse/EN/people/svensson-joel
On 12 mar 2008, at 22.54, Dan Piponi wrote:
On Wed, Mar 12, 2008 at 2:33 PM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
Hanging around here, you really feel like you're at the cutting edge
of... something... heh.
Another approach is not to target a CUDA back end from Haskell
itself, but to write an array library that builds up computations
which can then be run on a CUDA (or other) back end. My first
real-world programming job was APL [1] based. APL (and its offspring) is a functional-ish
programming language that manipulates arrays using a relatively small
number of primitives, most of which probably map nicely to CUDA
hardware because of the potential for data parallelism. Despite the
write-only nature of APL source code, and Dijkstra's negative
comments about the language, the expressivity of APL for numerical
work is unbelievable. I would love to see some of those ideas somehow brought
into Haskell as a library.
[1] http://en.wikipedia.org/wiki/APL_%28programming_language%29
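To make the "library that builds computations" idea concrete, here is a minimal, hypothetical sketch (not any existing library): array combinators construct an expression tree instead of computing eagerly, and the same tree that a reference interpreter evaluates on the CPU could instead be handed to a CUDA code generator. All names here are made up for illustration.

```haskell
module Main where

-- Scalar expressions over a single implicit element variable,
-- i.e. the body of an elementwise kernel.
data Exp = Var | Const Int | Add Exp Exp | Mul Exp Exp
  deriving Show

-- Array programs: these constructors *describe* computations
-- (a deep embedding) rather than performing them.
data Arr
  = Use [Int]     -- literal input array
  | Map Exp Arr   -- apply a scalar expression to each element
  deriving Show

-- Evaluate a scalar expression at one element.
evalE :: Int -> Exp -> Int
evalE x Var       = x
evalE _ (Const n) = n
evalE x (Add a b) = evalE x a + evalE x b
evalE x (Mul a b) = evalE x a * evalE x b

-- Reference interpreter: runs the tree on the CPU. A GPU back end
-- would instead walk the same Arr tree and emit CUDA kernels.
run :: Arr -> [Int]
run (Use xs)  = xs
run (Map f a) = map (\x -> evalE x f) (run a)

main :: IO ()
main = print (run (Map (Add (Mul Var (Const 2)) (Const 1)) (Use [1, 2, 3])))
```

The point of the deep embedding is that `Map` nodes are visible as data, so a back end can fuse or parallelise them; the data-parallel APL-style primitives (map, zip, reduce, scan) all fit this shape.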
--
Dan
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe