Discussions about R's capabilities sometimes seem to suggest that for optimization one may want to use some other software. As I've been working (mainly with Ravi Varadhan) to try to improve what is now called optim(), I needed to test some new methods, including a variant of CG. Coding only in R on a fairly vanilla 3 GHz PC, I was surprised that a generalized Rosenbrock function in n = 50000 parameters solved in less than 2 minutes. It seems a lot of my experience in building the routines that appear in optim() -- which were written on a machine with 8K (that's K) bytes for program AND data -- is really not valid any more.

This has prompted us to try (and it is not simple) to set up some infrastructure to get good (well, better?) measures of performance for optimization and nonlinear parameter estimation. Complications include the variety of environments and configurations, as well as the coding of test functions, choices of how to provide gradient information, etc. Contact me off-line if you are interested in this, as we would like it to be relatively easy to use and share. That can come only through communication, and we want to have lots of "real" tests, which take a fair bit of effort to set up in a standardized way.
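For concreteness, here is roughly the sort of test I mean. This is a sketch, not the exact code we ran: the starting point and iteration limit are arbitrary choices for illustration, and it calls the stock CG in optim() rather than the experimental variant mentioned above.

## Generalized (extended) Rosenbrock function, vectorized,
## with its analytic gradient.
grosen <- function(x) {
  n <- length(x)
  sum(100 * (x[2:n] - x[1:(n-1)]^2)^2 + (1 - x[1:(n-1)])^2)
}
grosen.g <- function(x) {
  n <- length(x)
  g <- numeric(n)
  t1 <- x[2:n] - x[1:(n-1)]^2
  g[1:(n-1)] <- -400 * x[1:(n-1)] * t1 - 2 * (1 - x[1:(n-1)])
  g[2:n] <- g[2:n] + 200 * t1
  g
}

n <- 50000
x0 <- rep(pi/4, n)   # an arbitrary start, purely for illustration
res <- optim(x0, grosen, grosen.g, method = "CG",
             control = list(maxit = 100000))  # default maxit is far too small here

Supplying the analytic gradient matters at this scale: without it, optim() falls back on a finite-difference approximation that costs n extra function evaluations per gradient.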

However, the main message here is a request, one reasonably put to me by another R worker: that people be much more cautious with conjectures. Whatever the task or tool, we can and should check our opinions by measuring before we give advice.
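As a trivial illustration (the functions and sizes here are made up purely to make the point), replacing a conjecture with a measurement often takes only a line or two:

## A toy comparison: sum of squares via an explicit loop vs. the
## vectorized form -- measure rather than guess which is faster.
f_loop <- function(x) { s <- 0; for (xi in x) s <- s + xi^2; s }
f_vec  <- function(x) sum(x^2)
x <- runif(1e6)
system.time(for (k in 1:100) f_loop(x))
system.time(for (k in 1:100) f_vec(x))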

JN
