Tracy Harms wrote:
These, and the courting of the scientific market, all
suggest to me that NVIDIA recognizes the value of such
floating-point support.  I conjecture that the answer
to your question is 'yes'.

A few years ago, I read an article saying that graphics-card GPUs use single precision because double precision is not required for their applications; their main concern is how many polygons can be drawn per second. So I'm curious whether things have changed.

Just now I also googled "NVIDIA support IEEE double precision" and found a reference, "A Detailed Study of the Numerical Accuracy of GPU-Implemented Math Functions", at
http://rogue.colorado.edu/draco/abstract.php?pub=gpgpu06-math.pub&paper_dir=papers

<quote>
Modern programmable GPUs have demonstrated their ability to significantly accelerate certain important classes of non-graphics applications; however, GPUs' slipshod support for floating-point arithmetic severely limits their usefulness for general-purpose computing. Current GPUs do not support double-precision computation and their single-precision support glosses over important aspects of the IEEE-754 floating-point standard.
</quote>
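As an aside, the gap the quote alludes to between single and double precision is easy to demonstrate. Here is a small Python sketch (my own illustration, not from the paper) that uses the standard `struct` module to round a value to IEEE-754 single precision:

```python
import struct

def to_single(x):
    # Round a Python float (IEEE-754 double) to single precision
    # by packing it as a 32-bit float and unpacking it again.
    return struct.unpack('f', struct.pack('f', x))[0]

eps = 2.0 ** -24  # below single-precision machine epsilon (2**-23)

# Double precision (53-bit significand) keeps the small term:
print(1.0 + eps == 1.0)             # False

# Single precision (24-bit significand) rounds it away:
print(to_single(1.0 + eps) == 1.0)  # True
```

Any GPU computation whose intermediate values depend on increments near or below 2^-24 will silently lose them in single precision, which is exactly the kind of problem the paper studies.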

--
regards,
bill
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
