On Thu, 3 Jun 2010, Ravi Varadhan wrote:
> Hi All,
> I have been reading about general-purpose GPU (graphics processing unit)
> computing for computational statistics. I know very little about this, but
> I read that GPUs currently cannot handle double-precision floating point
That has not been so for a while, and the latest ones are quite fast at it.
> and also that they are not necessarily IEEE compliant. However, I am not
> sure what the practical impact of this limitation is likely to be on
> computational statistics problems (e.g. optimization, multivariate analysis,
> MCMC, etc.).
> What are the main obstacles that are likely to prevent widespread use of
> this technology in computational statistics?
Developing highly parallel algorithms that can exploit the
architecture. That is not just an issue in statistics; see e.g.
http://www.microway.com/pdfs/TeslaC2050-Fermi-Performance.pdf
(A Tesla C2050 is the latest-generation GPU -- shipping within the
last month.)
> Can algorithms be coded in R to take advantage of the GPU
> architecture to speed up computations? I would appreciate hearing
> from R sages about their views on the usefulness of general-purpose
> GPU computing for computational
> statistics. I would also like to hear views on the future of
> GPGPU -- i.e. is it here to stay, or is it just a gimmick that will
> quietly disappear into oblivion?
They need a lot of programming work to use, and the R packages
currently attempting to use them (cudaBayesreg and gputools) are very
specialized. It seems likely that they will remain a niche area, in
much the same way that enhanced BLAS are -- there are problems for
which the latter can make a big difference, but they are far from
universally useful.
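To illustrate the kind of specialized use those packages support, a minimal sketch with gputools might look like the following. This assumes a CUDA-capable GPU and a working gputools installation, neither of which most readers will have; gpuMatMult is gputools' GPU-backed matrix multiply.

```r
## Sketch only: requires a CUDA-capable GPU and the gputools package.
library(gputools)

set.seed(1)
n <- 2000
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)

## CPU matrix multiply, for reference
t_cpu <- system.time(C_cpu <- A %*% B)

## GPU matrix multiply via gputools (CUBLAS underneath)
t_gpu <- system.time(C_gpu <- gpuMatMult(A, B))

## The results should agree up to floating-point tolerance;
## on single-precision hardware the discrepancy is noticeably larger.
max(abs(C_cpu - C_gpu))
```

The speed-up, if any, depends heavily on the problem size: for small matrices the cost of copying data to and from the GPU dominates, which is one reason such tools remain niche.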
We've been here several times before: when I was on UK national
supercomputing committees in the 1980s and 90s there were several
similar contenders (SIMD arrays, Inmos Transputers ...) and all faded
away.
That is not to say that general purpose parallelism is not going to be
central, as we each get (several) machines with many CPU cores. But
that sort of parallelism is likely to be exploited in different ways
from that of GPUs.
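For that multi-core kind of parallelism, R already has usable tools; a minimal sketch using mclapply (which originated in the multicore package and was later absorbed into R's bundled parallel package, whose interface is used here) might look like:

```r
## Sketch: fork-based parallelism across CPU cores.
## Note: mclapply forks on Unix-alikes; on Windows it falls back to
## serial evaluation (mc.cores is effectively 1 there).
library(parallel)

slow_square <- function(i) {
  Sys.sleep(0.01)  # stand-in for real per-task work
  i^2
}

res_serial   <- lapply(1:8, slow_square)
res_parallel <- mclapply(1:8, slow_square, mc.cores = 2)

identical(res_serial, res_parallel)  # TRUE: same results, computed in parallel
```

The point is that exploiting several general-purpose cores needs only a drop-in replacement for lapply, whereas exploiting a GPU requires restructuring the algorithm itself.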
> Thanks very much.
> Best regards,
> Ravi.
> ----------------------------------------------------------------------------
> Ravi Varadhan, Ph.D.
> Assistant Professor,
> Center on Aging and Health,
> Johns Hopkins University School of Medicine
> (410) 502-2619
> rvarad...@jhmi.edu
> http://www.jhsph.edu/agingandhealth/People/Faculty_personal_pages/Varadhan.html
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Brian D. Ripley, rip...@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595