I'm not currently a PDL (or any vector-numeric) user, but I sometimes
find myself in discussions about number crunching and dynamic
languages, and I point people to PDL.

Recently I was looking for ways to exploit a GPU from Perl, and I
spent some time searching to find out how PDL can do that, and also
how PDL uses threading and multi-machine clusters. What I'd like is
for you knowledgeable users to check my "research" and point me to
state-of-the-art PDL, so I can pass it on next time someone asks.

I found this post about tying CUDA into PDL via CUDA::Minimal
http://blogs.perl.org/users/david_mertens/2011/06/cuda-and-the-perl-data-language.html
- and I wondered how well PDL can divide a task among workstations,
CPU cores, and GPUs.

It seems that you can feed PDL data (or slices of your piddles) to
the GPU using CUDA::Minimal, and let the GPU work on that data via
Inline C. CUDA::Minimal is still being updated, at least as recently
as a month ago. On the other hand, it's still listed as "version
0.01", and PDL-to-GPU transfer isn't handled by PDL itself: the
programmer has to explicitly move piddle data to and from the
graphics card, and write C to manipulate it there. Is there a "more
transparent" way to have PDL use a GPU?
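If I've read David's post correctly, the explicit workflow looks
roughly like the sketch below. This is untested (I don't have a CUDA
box handy), and the kernel wrapper `run_my_kernel` is a made-up name
standing in for whatever you'd write with Inline C as the post
describes - only MallocFrom, Transfer, and Free come from
CUDA::Minimal's documented interface.

    # Sketch of the explicit PDL <-> GPU round trip via CUDA::Minimal.
    # Untested; assumes a working CUDA toolchain.
    use PDL;
    use CUDA::Minimal;

    my $data = sequence(1024);          # a piddle on the host

    # Allocate device memory and copy the piddle's raw data over.
    my $dev_ptr = MallocFrom(${ $data->get_dataref });

    # run_my_kernel() is hypothetical: an Inline-C wrapper that
    # launches your CUDA kernel on $dev_ptr.
    # run_my_kernel($dev_ptr, $data->nelem);

    # Copy the results back into a piddle and tell PDL its
    # underlying data changed.
    my $results = zeroes($data->dims);
    Transfer($dev_ptr => ${ $results->get_dataref });
    $results->upd_data;

    Free($dev_ptr);

So the data movement is entirely manual, which is what prompted my
question about a more transparent route.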

As for multi-threading, that seems to have been built in since at
least 2000 via add_threading_magic, if you don't mind adding it
manually where you want it. In 2011 there was a development branch to
add threading automatically - is auto-threading part of the main
distribution now? http://pdl.perl.org/PDLdocs/ParallelCPU.html shows
how to enable auto-threading; if I were to install PDL on a Linux box,
would it "just work" as described there?
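From my reading of that page, enabling it looks as simple as the
sketch below - though I haven't run this, and I'm assuming the PDL
build has POSIX-thread support compiled in (apparently the default on
most Linux builds):

    # Sketch of PDL auto-threading per the ParallelCPU docs (untested).
    use PDL;

    PDL::set_autopthread_targ(4);   # aim for 4 pthreads
    PDL::set_autopthread_size(1);   # only split piddles >= 1 MB

    my $a = random(1000, 1000);
    my $b = $a * $a;                # element-wise op; should auto-split

    print "pthreads actually used: ", PDL::get_autopthread_actual(), "\n";

The docs also mention a PDL_AUTOPTHREAD_TARG environment variable as
an alternative to the in-code call, if I understood correctly.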

Lastly, are there PDL setups that submit large tasks to a network of
servers? I'm not sure how that would look and work, but if you do
that, let me know.

Thanks

-y

_______________________________________________
Perldl mailing list
[email protected]
http://mailman.jach.hawaii.edu/mailman/listinfo/perldl
