Nick,

I too would use OFV as the most important goodness-of-fit diagnostic when
comparing models, especially when deeming something to be redundant. If
adding a component doesn't reduce OFV, I see no reason to include it (I
think we're agreeing on something!). However, you write:

"Small (5-10) changes in OBJ are not of much interest. A change of OBJ of
at least 50 is usually needed to detect anything of practical importance."

Today we use population methods for everything from very rich pop PK
meta-analyses to very sparsely informative survival data sets. Using OFV
as a measure of goodness-of-fit is central, as is looking at the risk
that a component improved the fit by chance, but I would not use OFV as
a measure of clinical importance.
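
For reference, a minimal sketch (in Python) of how I think about that
chance-improvement risk, assuming the drop in OFV between nested models
is approximately chi-square distributed with df equal to the number of
added parameters; delta_ofv_p_value is just an illustrative name:

    # Risk that a given drop in OFV (-2 log likelihood) arises by chance,
    # under the usual chi-square asymptotics for nested models.
    from scipy.stats import chi2

    def delta_ofv_p_value(delta_ofv, n_added_params):
        return chi2.sf(delta_ofv, df=n_added_params)

    print(delta_ofv_p_value(3.84, 1))   # ~0.05 for one added parameter
    print(delta_ofv_p_value(10.83, 1))  # ~0.001

So even a "small" drop of 3.84 for one added parameter is significant at
p ~ 0.05; that tells us the improvement is unlikely to be chance, not
that it matters clinically.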

Best regards,
Mats

Mats Karlsson, PhD
Professor of Pharmacometrics
Dept of Pharmaceutical Biosciences
Uppsala University
Box 591
751 24 Uppsala Sweden
phone: +46 18 4714105
fax: +46 18 471 4003


-----Original Message-----
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On
Behalf Of Nick Holford
Sent: Tuesday, August 25, 2009 12:14 AM
To: nmusers
Subject: Re: [NMusers] What does convergence/covariance show?

Mats, Leonid,

Thanks for your definitions. I think I prefer that provided by Mats but 
he doesn't say what his test for goodness-of-fit might be.

Leonid already assumes that convergence/covariance are diagnostic, so it 
doesn't help at all with an independent definition of 
overparameterization. Correlation of random effects is often a very 
important part of a model -- especially for future predictions -- so I 
don't see that as a useful test -- unless you restrict it to pathological 
values, e.g. |correlation| > 0.9? Even with very high correlations I 
sometimes leave them in the model because setting the covariance to zero 
often worsens the OBJ considerably.
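
To make the pathological-value idea concrete, here is a small sketch
with made-up OMEGA estimates: the correlation implied by a 2x2 OMEGA
block, plus the OBJ-based check on removing the covariance (delta_obj
is a hypothetical value):

    import math
    from scipy.stats import chi2

    # Hypothetical 2x2 OMEGA block (made-up numbers, for illustration).
    omega_11 = 0.09   # variance of ETA(1)
    omega_22 = 0.16   # variance of ETA(2)
    omega_21 = 0.115  # covariance of ETA(1) and ETA(2)

    corr = omega_21 / math.sqrt(omega_11 * omega_22)
    print(corr)  # ~0.96, past a |correlation| > 0.9 threshold

    # Removing the covariance fixes one parameter, so compare the OBJ
    # increase against chi-square with 1 df.
    delta_obj = 12.3  # hypothetical OBJ worsening with covariance set to 0
    print(chi2.sf(delta_obj, df=1))  # ~0.0005: the data support keeping it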

My own view is that "overparameterization" is not a black and white 
entity. Parameters can be estimated with decreasing degrees of 
confidence depending on many things, such as the design and the adequacy 
of the model. Parameter confidence intervals (preferably by bootstrap) 
are the way I would evaluate how well parameters are estimated. I 
usually rely on OBJ changes alone during model development, with a VPC 
and bootstrap confidence interval when I seem to have extracted all I 
can from the data. The VPC and CIs may well prompt further model 
development, and the cycle continues.
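
A minimal sketch of the percentile bootstrap I have in mind, resampling
subjects with replacement; refit() is a hypothetical stand-in for
re-estimating the model (e.g. a NONMEM run) on each resampled data set:

    # Nonparametric (case-resampling) bootstrap CI for a scalar parameter.
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(subject_ids, refit, n_boot=1000, alpha=0.05):
        estimates = []
        for _ in range(n_boot):
            sample = rng.choice(subject_ids, size=len(subject_ids),
                                replace=True)
            estimates.append(refit(sample))  # re-estimate on the resample
        return np.percentile(estimates,
                             [100 * alpha / 2, 100 * (1 - alpha / 2)])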

Nick

 
Leonid Gibiansky wrote:
> Hi Nick,
>
> I am not sure how you build the models but I am using convergence, 
> relative standard errors, correlation matrix of parameter estimates 
> (reported by the covariance step), and correlation of random effects 
> quite extensively when I decide whether I need extra compartments, 
> extra random effects, nonlinearity in the model, etc. For me they are 
> very useful as diagnostic of over-parameterization. This is the direct 
> evidence (proof?) that they are useful :)
>
> For new modelers who are just starting to learn how to do it, or have 
> limited experience, or have problems on the way, I would advise paying 
> careful attention to these issues, since they often help me to detect 
> problems. You seem to disagree with me; that is fine, I am not trying 
> to impose my way of doing the analysis on you or anybody else. This is 
> just advice: you (and others) are free to use it or ignore it :)
>
> Thanks
> Leonid 


Mats Karlsson wrote:
> <<I would say that if you can remove parameters/model components without
> detriment to goodness-of-fit then the model is overparameterized. >>
>   

-- 
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holf...@auckland.ac.nz tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
