Mats, Leonid,

Thanks for your definitions. I think I prefer the one provided by Mats, but he doesn't say what his test for goodness-of-fit would be.

Leonid already assumes that convergence/covariance are diagnostic, so his definition doesn't help at all as an independent definition of overparameterization. Correlation of random effects is often a very important part of a model -- especially for future predictions -- so I don't see that as a useful test, unless you restrict it to pathological values, e.g. |correlation| > 0.9? Even with very high correlations I sometimes leave them in the model, because setting the covariance to zero often produces quite a big worsening of the OBJ.
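As a rough sketch of what such a check might look like (the file name etas.csv, the column names ETA1/ETA2 and the 0.9 cutoff below are only placeholders, not a recommended recipe):

    # Sketch only: screen the correlation of empirical Bayes estimates (ETAs)
    # for pathological values such as |r| > 0.9. File and column names are
    # hypothetical; use the ETA table written out by your own run.
    import pandas as pd

    etas = pd.read_csv("etas.csv")           # one row per subject
    r = etas["ETA1"].corr(etas["ETA2"])      # Pearson correlation of two random effects

    if abs(r) > 0.9:
        print(f"|r| = {abs(r):.2f} -- possibly pathological; try fixing the covariance to zero")
    else:
        print(f"|r| = {abs(r):.2f} -- keep the covariance and judge it by the OBJ change")

Even for a flagged pair, it is the OBJ change when the covariance is fixed to zero that I would actually act on.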

My own view is that "overparameterization" is not a black and white entity. Parameters can be estimated with decreasing degrees of confidence depending on many things, such as the design and the adequacy of the model. Parameter confidence intervals (preferably by bootstrap) are the way I would evaluate how well parameters are estimated. During model development I usually rely on OBJ changes alone, turning to a VPC and bootstrap confidence intervals once I seem to have extracted all I can from the data. The VPC and CIs may well prompt further model development, and the cycle continues.
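For what it's worth, the bootstrap interval itself is trivial to compute once the replicate estimates are in hand; a minimal sketch (the file bootstrap_results.csv and the column CL are just placeholders for the output of whatever bootstrap tool you use, e.g. PsN):

    # Sketch only: percentile bootstrap confidence interval from per-replicate
    # parameter estimates. File and column names are hypothetical.
    import numpy as np
    import pandas as pd

    boot = pd.read_csv("bootstrap_results.csv")        # one row per bootstrap replicate
    lo, hi = np.percentile(boot["CL"], [2.5, 97.5])    # 95% percentile interval
    print(f"CL 95% CI: {lo:.3g} to {hi:.3g}")

    # A very wide interval, or one spanning implausible values, is one symptom
    # that the parameter is poorly supported by the data.

For the OBJ changes, the usual likelihood ratio yardstick applies: OBJ differences behave approximately like chi-square, so a drop of about 3.84 for one extra parameter corresponds to p < 0.05.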

Nick


Leonid Gibiansky wrote:
Hi Nick,

I am not sure how you build your models, but I use convergence, relative standard errors, the correlation matrix of parameter estimates (reported by the covariance step), and the correlation of random effects quite extensively when I decide whether I need extra compartments, extra random effects, nonlinearity in the model, etc. For me they are very useful as diagnostics of over-parameterization. That is direct evidence (proof?) that they are useful :)
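As a rough illustration of this kind of screening (the estimates, standard errors, correlation matrix and cutoffs below are made up for the example, not taken from any real run):

    # Sketch only: flag imprecise estimates and highly correlated estimate pairs
    # from the covariance step output. All values and thresholds are illustrative.
    import numpy as np

    theta    = np.array([10.0, 55.0, 0.8])          # point estimates (hypothetical)
    se       = np.array([1.2, 30.0, 0.05])          # standard errors (hypothetical)
    corr_est = np.array([[1.0, 0.2, 0.96],          # correlation matrix of estimates
                         [0.2, 1.0, 0.1],
                         [0.96, 0.1, 1.0]])

    rse = 100 * se / np.abs(theta)                  # relative standard error, %
    for i, r in enumerate(rse):
        if r > 50:                                  # a common rule-of-thumb cutoff
            print(f"THETA{i+1}: RSE {r:.0f}% -- imprecisely estimated")

    # large off-diagonal correlations suggest two parameters carry overlapping information
    hi_pairs = [(i + 1, j + 1, corr_est[i, j])
                for i in range(len(theta)) for j in range(i + 1, len(theta))
                if abs(corr_est[i, j]) > 0.95]
    print("highly correlated estimate pairs:", hi_pairs)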

For new modelers who are just starting to learn how to do this, or have limited experience, or run into problems along the way, I would advise paying careful attention to these issues, since they often help me detect problems. You seem to disagree with me; that is fine, I am not trying to impose my way of doing the analysis on you or anybody else. This is just advice: you (and others) are free to use it or ignore it :)

Thanks
Leonid


Mats Karlsson wrote:
<<I would say that if you can remove parameters/model components without
detriment to goodness-of-fit then the model is overparameterized. >>

--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holf...@auckland.ac.nz tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
