On Aug 4, 2010 8:13pm, Chris Howden <ch...@trickysolutions.com.au> wrote:
> Hi Chris,

> If you want good predictive ability, which is exactly what you do want when
> using a model for prediction, then why not use its predictive ability as a
> model selection criterion?

Because this will typically lead to overfitting the data, i.e. getting a great
fit to the 'training' set but then doing miserably on future data. Unless you
do something like splitting the data set into a training set and a validation
set, or using cross-validation (a more sophisticated version of the same idea),
just finding the model with the best predictive capability on a specified
data set will *not* in general give you a good model. That's why approaches
such as AIC, adjusted R^2, and so forth include a penalty for model
complexity.
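
To make that concrete, here is a minimal sketch (not from the original post;
the data, candidate formulas, and fold count are all invented for
illustration) of choosing between two candidate models by 5-fold
cross-validated RMSE rather than by in-sample fit:

## simulated data: the true relationship is linear in x
set.seed(1)
n <- 100
d <- data.frame(x = runif(n))
d$y <- 2 + 3 * d$x + rnorm(n)

## candidate models: the true one and an overly flexible polynomial
forms <- list(simple = y ~ x,
              complex = y ~ poly(x, 8))

## assign each observation to one of k roughly equal folds
k <- 5
fold <- sample(rep(seq_len(k), length.out = n))

## for each candidate, fit on k-1 folds and score on the held-out fold
cv_rmse <- sapply(forms, function(f) {
  err <- sapply(seq_len(k), function(i) {
    fit  <- lm(f, data = d[fold != i, ])
    pred <- predict(fit, newdata = d[fold == i, ])
    mean((d$y[fold == i] - pred)^2)
  })
  sqrt(mean(err))
})

cv_rmse  # held-out RMSE; the complex model typically does no better, or worse

The complex model usually wins on in-sample fit but not on the held-out
folds, which is the overfitting point above. For comparison,
AIC(lm(y ~ x, data = d), lm(y ~ poly(x, 8), data = d)) applies the
complexity penalty directly, without any resampling.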

Unless I'm missing something really obvious, in which case I apologize.

Ben Bolker

