Hello Manuel,
Monday, February 8, 2010, 4:21:59 AM, you wrote:
> I had a student implementing an LLVM backend for GHC last year. You can find
> the details at
What's the status of the port? Can it compile full-scale programs, like
darcs or GHC?
--
Best regards,
Bulat
Scott Michel wrote:
Are you also planning an LLVM backend for GHC, in a general sense, or just
for the accelerated work you're doing? It seems to me that GHC itself could
be well served by an LLVM backend, especially if one relies on the JIT
mode. That could help identify code paths in the core and runtime that are in
Felipe Lessa:
>> I would suggest that any GSoC project in this space should be based
>> on D.A.Accelerate (rather than DPH), simply because the code base is
>> much smaller and more accessible. There is not much point in
>> writing a CUDA backend, as we already have a partially working one
>> that
On Thu, Feb 04, 2010 at 11:31:53AM +1100, Manuel M T Chakravarty wrote:
> It's really only two things, as the GPU monad from the cited paper
> has been superseded by Data.Array.Accelerate — ie, the latter is a
> revision of the former. So, the code from the cited paper will
> eventually be released
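For readers who have not seen Data.Array.Accelerate, the style that replaced the
GPU monad looks roughly like this: a collective array computation is an ordinary
expression of type Acc, which a backend then compiles and runs as a whole. This is
only a sketch against the released accelerate API, so names and details may differ
from the in-progress code mentioned above.

import qualified Data.Array.Accelerate as A

-- Scale every element of a vector: no monad, just an embedded array
-- expression that a backend can compile for the GPU (or interpret).
scale :: A.Exp Float -> A.Acc (A.Vector Float) -> A.Acc (A.Vector Float)
scale k = A.map (* k)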
To: Brad Larsen
Subject: Re: DPH and CUDA status
You're quite correct. I should say supplant for normal uses. The
OpenCL drivers are built on top of CUDA, and they intend CUDA to
continue to be available, but OpenCL is more portable and thus
something that we should probably target at some point.
Donnie Jones wrote:
Hello Felipe,
I copied this email to Sean Lee & Manuel M T Chakravarty, as they
worked on Haskell+CUDA; maybe they can comment on the current status?
Here's their paper...
GPU Kernels as Data-Parallel Array Computations in Haskell
http://www.cse.unsw.edu.au/~chak/papers/gpugen.pdf
Hope that helps
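To make the paper's title concrete, here is a small, self-contained example of a
data-parallel array computation written against the released Data.Array.Accelerate
API and run with the reference interpreter. The CUDA backend discussed in this
thread should expose an analogous run function, though the module name below is
only an assumption.

import Prelude hiding (zipWith)
import Data.Array.Accelerate
import qualified Data.Array.Accelerate.Interpreter as Interp
-- import qualified Data.Array.Accelerate.CUDA as CUDA  -- assumed name of the CUDA backend

-- Dot product expressed with collective array operations; 'use' embeds
-- ordinary Haskell arrays into the Accelerate world.
dotp :: Vector Float -> Vector Float -> Acc (Scalar Float)
dotp xs ys = fold (+) 0 (zipWith (*) (use xs) (use ys))

main :: IO ()
main = do
  let n  = 1000 :: Int
      xs = fromList (Z :. n) [1 ..]
      ys = fromList (Z :. n) [1 ..]
  -- Run with the reference interpreter; a GPU backend would swap in its own 'run'.
  print (Interp.run (dotp xs ys))

The point is that the same Acc term can be handed to different backends; nothing
in dotp itself mentions CUDA.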
Jeff Heard wrote:
IIRC, there has been work done by Manuel and his team; I'm sure
he'll chime in on that. One thing, though, is that CUDA is being
supplanted by OpenCL in the next few years, and OpenCL can handle data
parallelism on multicore CPUs as well as GPUs with the same code.
It's a little more flexible overall than CUDA.
Hello,
Recently I tried to look for the status of Data Parallel Haskell
with a CUDA backend. I've found [1], which mentions [2], saying
that this is difficult and that work was being done. That was almost
two years ago. Was any progress made since then or is the work
stalled?
About GSoC, I wonder if