Hi, all--
 
I wanted to start a (new) thread on R speed/benchmarking.  There is a
nice R benchmarking overview at
http://www.sciviews.org/other/benchmark.htm, along with a free script so
you can see how your machine stacks up.
 
Looks like R is substantially faster than S-plus.
 
My problem is this: with 512 MB of RAM and an overclocked AMD Athlon XP
1800+ (running at 588 SPECfp 2000), it still takes me 30 minutes to run
affy's expresso() over 120 .cel files of roughly 4 MB each.  Running
svm() with more than 500 genes and 150 samples also takes a mighty long
time.
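 
For concreteness, the pipeline I'm timing looks roughly like the sketch
below.  The method names and class labels are placeholders (an rma-style
expresso chain and a linear SVM from the e1071 package), not my exact
script:

    library(affy)    # Bioconductor package for reading/preprocessing .cel files
    library(e1071)   # provides svm()

    ## Read every .cel file in the working directory into one AffyBatch
    abatch <- ReadAffy()

    ## Background-correct, normalize, and summarize to expression values
    eset <- expresso(abatch,
                     bgcorrect.method = "rma",
                     normalize.method = "quantiles",
                     pmcorrect.method = "pmonly",
                     summary.method   = "medianpolish")

    ## SVM step: ~150 samples by 500+ genes; these labels are made up
    X <- t(exprs(eset))                      # samples in rows, genes in columns
    y <- factor(rep(c("case", "control"), length.out = nrow(X)))
    fit <- svm(X, y, kernel = "linear", cross = 10)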
 
Questions:
1) Would adding RAM or a faster processor improve performance more?
2) Is it possible to run R on a cluster without rewriting my high-level
code?  (See the sketch after these questions for the sort of thing I
mean.)
3) What are we going to do when we start collecting terabytes of array
data to analyze?  There will come a "breaking point" at which desktop
systems simply can't run these analyses fast enough.  What then?
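 
On question 2, one route I've read about (but haven't tried, so this is
only a guess at what's possible) is the snow package, which lets you
swap lapply() for parLapply() across a set of R worker processes.  A
minimal sketch, with the worker count and the work function made up:

    library(snow)   # simple cluster interface for R

    ## Two workers on the local machine; on a real cluster you would
    ## pass a vector of host names instead of a number.
    cl <- makeCluster(2, type = "SOCK")

    ## Stand-in for a per-sample analysis step (one task per chip, say)
    heavy.step <- function(i) mean(rnorm(1e6))
    results <- parLapply(cl, 1:120, heavy.step)

    stopCluster(cl)

The appeal is that the high-level structure of the script stays the
same; only the apply-style calls change.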
 
Michael Benjamin, MD
Winship Cancer Institute
Emory University,
Atlanta, GA
 
 
