Hi Petr,
 
Your question is completely justified! Yes, I believe GPipe is a good 
foundation for such GPGPU computations (general-purpose GPU programming), 
giving you easy access to data parallelism.
 
One way of doing this is to start with one or more equally sized textures 
that hold your input (one-component depth textures preserve the most floating-
point precision). You then rasterize a quad that covers the entire 
framebuffer, and for each fragment you point-sample these input textures and 
perform some computation on the samples. The resulting framebuffer can then 
be converted to an ordinary list in main memory, or into another texture to 
be used as input in another pass.
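
To make that concrete, here is a tiny CPU-side model of the idea in plain 
Haskell. This is not GPipe code: a "texture" is modelled as a lookup function 
over pixel coordinates, and a pass just samples its inputs at every output 
pixel.

> -- A point-sampled 1-component "texture", modelled as a lookup function.
> type Tex = (Int, Int) -> Float
>
> -- One full-screen pass: at every output pixel, sample two input
> -- textures and combine the samples.
> pass :: (Float -> Float -> Float) -> Tex -> Tex -> Tex
> pass f a b p = f (a p) (b p)
>
> -- Passes chain: the output of one is an input texture of the next.
> chained :: Tex -> Tex -> Tex
> chained a b = pass (*) (pass (+) a b) a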
 
To gather data, e.g. to sum all the values in a texture, you could write a 
fragment program over a half-sized quad in which you sample two halves of the 
texture, add them, and write the result to a half-sized texture. If you 
repeat this log2(texture size) times, you end up with a single value that is 
the sum of all the values.
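
The same reduction, again sketched on the CPU rather than in GPipe: each 
recursive step corresponds to one render pass into a half-sized texture. A 
non-empty, power-of-two input length is assumed, just as the texture sizes 
would be.

> -- Pairwise reduction: halve the data each step by adding the two
> -- halves together; after log2 n steps a single value remains.
> reduceSum :: [Float] -> Float
> reduceSum [x] = x
> reduceSum xs  = reduceSum (zipWith (+) lo hi)
>   where (lo, hi) = splitAt (length xs `div` 2) xs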
 
All texture loading and retrieval functions in GPipe are in IO, but they are 
safe to wrap in unsafePerformIO as long as you guarantee that the Ptrs are 
used safely. So I think it's easy to modularize GPGPU computations in GPipe 
and, for instance, create something like:
> simpleGPUmap :: (Fragment Float -> Fragment Float) -> [Float] -> [Float]
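
As a minimal sketch of the wrapping pattern only: here a hypothetical 
fakeGpuPass (a CPU round-trip through a Ptr) stands in for the real GPU pass, 
and plain Float replaces Fragment Float so the sketch is self-contained.

> import Foreign.Marshal.Array (peekArray, withArray)
> import System.IO.Unsafe (unsafePerformIO)
>
> -- Hypothetical stand-in for a GPU pass: "upload" the mapped data
> -- through a Ptr and read it back. It is referentially transparent,
> -- which is what makes the unsafePerformIO wrapper below safe.
> fakeGpuPass :: (Float -> Float) -> [Float] -> IO [Float]
> fakeGpuPass f xs = withArray (map f xs) (peekArray (length xs))
>
> simpleMap :: (Float -> Float) -> [Float] -> [Float]
> simpleMap f = unsafePerformIO . fakeGpuPass f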
 
So, why don't you have a go at it? It might just turn out to be a pretty 
useful library... ;)
 
 
Cheers
/Tobias
 
 
 
> Date: Wed, 7 Oct 2009 16:47:15 +0200
> From: d...@pudlak.name
> To: tobias_bexel...@hotmail.com
> CC: haskell-cafe@haskell.org
> Subject: Re: [Haskell] ANNOUNCE: GPipe-1.0.0: A functional graphics API for 
> programmable GPUs
> 
> Hi Tobias,
> 
> (I'm completely new to GPU programming, so my question may be completely
> stupid or unrelated. Please be patient :-).)
> 
> Some time ago I needed to perform some large-scale computations
> (searching for first-order logic models) and a friend told me that GPUs
> can be used to perform many simple computations in parallel. Could GPipe
> be used for such a task? I.e. to program some non-graphical,
> parallelized algorithm, which could be run on a GPU cluster?
> 
> Thanks for your answer,
> 
> Petr
> 
> On Sun, Oct 04, 2009 at 08:32:56PM +0200, Tobias Bexelius wrote:
> > I'm proud to announce the first release of GPipe-1.0.0: A functional
> > graphics API for programmable GPUs.
> > 
> > GPipe models the entire graphics pipeline in a purely functional, immutable
> > and typesafe way. It is built on top of the programmable pipeline (i.e.
> > non-fixed-function) of OpenGL 2.1 and uses features such as vertex buffer
> > objects (VBOs), texture objects and GLSL shader code synthesis to create
> > fast graphics programs. Buffers, textures and shaders are cached internally
> > to ensure fast frame rates, and GPipe is also capable of managing multiple
> > windows and contexts. By creating your own instances of GPipe's classes,
> > it's possible to use additional datatypes on the GPU.
> > 
> > You'll need full OpenGL 2.1 support, including GLSL 1.20, to use GPipe.
> > Thanks to OpenGLRaw, you may still build GPipe programs on machines lacking
> > this support.
> > 
> > The package, including full documentation, can be found at:
> > http://hackage.haskell.org/package/GPipe-1.0.0
> > 
> > Of course, you may also install it with:
> > cabal install gpipe
> > 
> > 
> > Cheers!
> > Tobias Bexelius
> > 

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe