On Mon, Jun 16, 2008 at 03:05:53PM -0500, Jeremy Blosser wrote:
> I'm sure that a good FFT library already exists for GPGPU applications,
> so it should be a matter of leveraging the existing FFT libraries in
> the LL tests right?
You're pretty much wrong here, TTBOMK. There is CUDA code for doing FFTs
(or at least nVidia claim they've written such code; I haven't checked
whether it's public yet), but nothing that is tuned for multi-megabyte
FFTs at all.

> Assuming a double-precision GPU FFT is written, then you could at
> least take advantage of the GPU along with the CPUs with a minimal
> amount of effort. Even if the GPU FFT library isn't very efficient and
> gets half the speed of the theoretical output of the GPU -- well, that
> is still equivalent to a quad-core Xeon (assuming the 8-core Xeon
> quote from nVidia is accurate) for a fairly minimal programming
> effort.

You'd probably be better off testing separate numbers on the GPU and the
CPU; getting the data flow right here without having one side end up as
the bottleneck is typically a hard problem.

/* Steinar */
-- 
Homepage: http://www.sesse.net/

_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
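[For context: the FFT work being discussed is the large-integer squaring at the heart of each Lucas-Lehmer iteration. The following is a minimal NumPy sketch of that idea; the function names `fft_square` and `to_int` are invented for illustration, and real LL software uses a much more sophisticated irrational-base DWT rather than this naive digit-vector scheme. A GPU FFT library would slot in at the `np.fft` calls.]

```python
# Hypothetical sketch: squaring a large integer via FFT convolution,
# the operation an LL test performs once per iteration. Uses NumPy's
# CPU FFT; double precision limits how large this naive version can go.
import numpy as np

def fft_square(digits, base=10):
    """Square a number given as a little-endian digit list."""
    n = len(digits)
    size = 1
    while size < 2 * n:          # pad so the cyclic convolution is
        size *= 2                # actually a linear convolution
    f = np.fft.rfft(np.array(digits, dtype=np.float64), size)
    sq = np.fft.irfft(f * f, size)
    coeffs = np.rint(sq).astype(np.int64)  # round away FP noise
    # propagate carries back into single digits
    out, carry = [], 0
    for c in coeffs:
        carry += int(c)
        out.append(carry % base)
        carry //= base
    while carry:
        out.append(carry % base)
        carry //= base
    while len(out) > 1 and out[-1] == 0:   # strip high-order zeros
        out.pop()
    return out

def to_int(digits, base=10):
    """Rebuild the Python integer from a little-endian digit list."""
    return sum(d * base**i for i, d in enumerate(digits))
```

For example, squaring 1234 (digits `[4, 3, 2, 1]` little-endian) via `fft_square` and converting back with `to_int` yields 1522756. The data-flow point above follows from this shape: every iteration needs the full multi-megabyte digit vector, so splitting one number's FFT across GPU and CPU forces a large transfer per iteration, while giving each device its own exponent keeps the data resident.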
