Chuanpu,

I noticed later that your email had a reference to your work. I have not had a chance to read it. When I have done so, I will see whether it changes my viewpoint and get back to you via nmusers.

Best wishes,

Nick

Hu, Chuanpu [CNTUS] wrote:
Hi Nick,

With respect to this:
"If you choose an Emax model you may still have a biased prediction but
it will be a better prediction than one from a linear model. In the
interpolation range of predictions the Emax model will still do better.
I cannot see how it can do worse than the linear model (assuming the
model passes other tests of plausibility and the VPC looks OK)."

Our previously mentioned simulations showed exactly the opposite in certain situations - i.e., when the power is low. The Emax model predicted worse because of instability, even though it was the "true" model.
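
A minimal sketch of the kind of simulation being described, under illustrative assumptions (the design, parameter values, noise level and error metric below are invented for this sketch, not taken from the study mentioned): simulate small, noisy datasets from a "true" Emax model, fit both an Emax and a linear model to each replicate, and compare their prediction errors at a reference concentration. With a sparse, noisy design the Emax fit can become unstable and its prediction error can exceed that of the simpler linear fit.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
emax_true, ec50_true, sigma = 100.0, 50.0, 15.0   # illustrative values only
conc = np.array([2.0, 5.0, 10.0, 20.0])           # sparse, low-information design
c_ref = 30.0                                      # prediction checked at this concentration
e_ref = emax_true * c_ref / (ec50_true + c_ref)

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

err_emax, err_lin = [], []
for _ in range(500):
    y = emax_model(conc, emax_true, ec50_true) + rng.normal(0, sigma, conc.size)
    slope = np.sum(conc * y) / np.sum(conc ** 2)  # least-squares line through the origin
    try:
        p, _ = curve_fit(emax_model, conc, y, p0=[100.0, 50.0], maxfev=5000)
    except RuntimeError:
        continue                                  # failed Emax fit: skip this replicate
    err_emax.append(emax_model(c_ref, *p) - e_ref)
    err_lin.append(slope * c_ref - e_ref)

# Compare root-mean-square prediction errors of the two fits.
print("RMSE, Emax fit  :", np.sqrt(np.mean(np.square(err_emax))))
print("RMSE, linear fit:", np.sqrt(np.mean(np.square(err_lin))))
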
Chuanpu


-----Original Message-----
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Nick Holford
Sent: Tuesday, August 25, 2009 5:02 PM
To: nmusers
Subject: Re: [NMusers] What does convergence/covariance show?

Ken,

I seem to be having trouble explaining myself these days. I don't usually have to start every email with "I did not say" but here we go again!

I did not say that I recognize models are overparameterised. That implies a dichotomy between well parameterised and overparameterised. I tried to say earlier that there is a continuous scale of 'goodness' of estimation (usually quantified by the standard error). So I don't accept the notion of a model being overparameterised or not when one is talking about estimability of identifiable parameters.

Similarly there is no dichotomy between the responses that a linear and an Emax pharmacodynamic model are trying to describe. Pharmacology and biology tell us that the linear model is just an approximation to an Emax model. If the OFV drops 'reasonably' and at least some of the concentrations are close to or greater than the EC50 with an Emax model then I would stick with it. It doesn't matter to me that the Emax and EC50 are individually poorly estimated (it is rather rare to be interested in the parameter by itself).
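
A minimal numerical illustration of the approximation argument, assuming a simple Emax relationship E = Emax*C/(EC50 + C) with invented parameter values: well below the EC50 the curve is indistinguishable from a straight line with slope Emax/EC50, which is why a linear fit can look adequate within a narrow, low concentration range.

import numpy as np

emax, ec50 = 100.0, 50.0                       # illustrative values only
conc = np.array([1.0, 5.0, 10.0, 50.0, 200.0])

e_full = emax * conc / (ec50 + conc)           # Emax model
e_lin = (emax / ec50) * conc                   # low-concentration linear approximation

for c, e1, e2 in zip(conc, e_full, e_lin):
    print(f"C = {c:6.1f}   Emax model = {e1:6.1f}   linear = {e2:6.1f}")
# The two agree closely well below the EC50 and diverge as C approaches
# and exceeds the EC50.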

The usual purpose of the model is to predict the effect over a range of concentrations. If you choose a linear model because your subjective impression is that the model is "overparameterised" due to large standard errors then you can be certain that any extrapolation will overpredict the size of the effect. If you choose an Emax model you may still have a biased prediction but it will be a better prediction than one from a linear model. In the interpolation range of predictions the Emax model will still do better. I cannot see how it can do worse than the linear model (assuming the model passes other tests of plausibility and the VPC looks OK).
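
A hedged sketch of the extrapolation point, under invented assumptions (the "true" parameter values, design and noise are made up for illustration): fit a linear and an Emax model to data observed only well below the EC50, then compare their predictions at higher concentrations. The linear prediction keeps rising without bound, whereas the Emax prediction is bounded by the fitted Emax even when Emax and EC50 are individually poorly estimated.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
emax_true, ec50_true = 100.0, 50.0                 # illustrative "true" values
conc_obs = np.linspace(1.0, 20.0, 30)              # observed range only, well below EC50
effect_obs = emax_true * conc_obs / (ec50_true + conc_obs) + rng.normal(0, 2, conc_obs.size)

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

def linear_model(c, slope):
    return slope * c

p_emax, _ = curve_fit(emax_model, conc_obs, effect_obs, p0=[50.0, 10.0], maxfev=10000)
p_lin, _ = curve_fit(linear_model, conc_obs, effect_obs, p0=[1.0])

c_new = np.array([50.0, 100.0, 500.0])             # beyond the observed concentrations
print("true effect:", emax_model(c_new, emax_true, ec50_true))
print("Emax fit   :", emax_model(c_new, *p_emax))
print("linear fit :", linear_model(c_new, *p_lin))  # grows without bound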

Thanks for your 2c!

Nick

Ken Kowalski wrote:
Nick,

It sounds like you do recognize that models are often over-parameterized by your statements:

"It is quite common to find that the estimates EC50 and Emax are highly correlated (I assume SLOP=EMAX/EC50). It would also be common to find that the random effects of EMAX and EC50 are also correlated. That is expected given the limitations of most pharmacodynamic designs."


When EC50 and Emax are highly correlated I think you will find that a simplified linear model will fit the data just as well with no real impact on goodness-of-fit (e.g., OFV). If we only observe concentrations in the linear range of an Emax curve because of a poor design then it is no surprise that a linear model may perform as well as an Emax model within the range of our data. If the design is so poor in information content regarding the Emax relationship because of too narrow a range of concentrations, this will indeed lead to convergence and COV step failures in fitting the Emax model.

Your statement that you would be unwilling to accept the linear model in this setting really speaks to the plight of the mechanistic modeler. It is important to note that an over-parameterized model does not mean that the model is mis-specified. A model can be correctly specified but still be over-parameterized because the data/design simply will not support estimation of all the parameters in the correctly specified model. The mechanistic modeler who has a strong biological prior favoring the more complex model is reluctant to accept a simplified model that he/she knows has to be wrong (e.g., we would not expect that the linear model would hold up at considerably higher concentrations than those observed in the existing data). The problem with accepting the more complex model in this setting is that we can't really trust the estimates we get (when the model has convergence difficulties and COV step failures as a result of over-parameterization) because there may be an infinite set of solutions to the parameters that give the same goodness-of-fit (i.e., a very flat likelihood surface). You can do all the bootstrapping you want but it is not a panacea for the deficiencies of a poor design.

While I like to fit mechanistic models just as much as the next guy, I also like my models to be stable (not over-parameterized). In this setting, the pragmatist in me would accept the simpler model, acknowledge the limitations of the design and model, and be very cautious not to extrapolate my model too far from the range of my existing data. More importantly, I would advocate improving the situation by designing a better study so that we can get the information we need to support a more appropriate model that will put us in a better position to extrapolate to new experimental conditions. We review the COV step output (looking for high correlations such as between the estimates of EC50 and Emax) and fit simpler models not because we prefer simpler models per se, but because we want to fully understand the limitations of our design. Of course this simple example of a poor design with too narrow a concentration and/or dose range to estimate the Emax relationship can be easily uncovered in a simple plot of the data; however, for more complex models the nature of the over-parameterization and the limitations of the design can be harder to detect, which is why we need a variety of strategies and diagnostics including plots, COV step output, fitting alternative simpler models, etc. to fully understand these limitations.

Just my 2 cents. :)

Ken

-----Original Message-----
From: owner-nmus...@globomaxnm.com
[mailto:owner-nmus...@globomaxnm.com] On
Behalf Of Nick Holford
Sent: Tuesday, August 25, 2009 1:09 AM
To: nmusers
Subject: Re: [NMusers] What does convergence/covariance show?

Leonid,

I did not say NONMEM stops at random. Whether or not the stopping point is associated with convergence or a successful covariance step appears to be at random. The parameter values at the stopping point will typically be negligibly different. Thus the stopping point is not at random. You can easily observe this in your bootstrap runs. Compare the parameter distribution for runs that converge with those that don't and you will find there are negligible differences in the distributions.
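
A hedged sketch of the comparison being suggested, assuming the bootstrap results have been collected into a table with one row per run, a flag for whether minimization was successful, and one column per parameter estimate (the file and column names here are hypothetical):

import pandas as pd

boot = pd.read_csv("bootstrap_results.csv")   # hypothetical file: converged, EMAX, EC50, ...
params = ["EMAX", "EC50"]                     # illustrative parameter names

summary = boot.groupby("converged")[params].describe(percentiles=[0.05, 0.5, 0.95])
print(summary)
# If the medians and 5th-95th percentiles are essentially the same for the
# converged and non-converged runs, the convergence flag carries little
# information about the parameter estimates themselves.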

I did not say that I ignore small changes in OFV but my decisions are guided by the size of the change.

I do not waste much time modelling absorption. It rarely is of any relevance to try to fit all the small details.

I don't see anything in the plot of SLOP vs EC50 that is not revealed by R=0.93. If the covariance step ran you would see a similar number in the correlation matrix of the estimate. It is quite common to find that the estimates EC50 and Emax are highly correlated (I assume SLOP=EMAX/EC50). It would also be common to find that the random effects of EMAX and EC50 are also correlated. That is expected given the limitations of most pharmacodynamic designs. However, I would not simplify the model to a linear model just because of these correlations. I would pay much more attention to the change in OFV comparing an Emax with a linear model plus whatever was known about the studied concentration range and the EC50.
I do agree that bootstraps can be helpful for calculating CIs on secondary parameters.

Nick

Leonid Gibiansky wrote:
Nick,
Concerning "random stops at arbitrary point with arbitrary error" I was referring to your statement: "NONMEM VI will fail to converge or not complete the covariance step more or less at random"

For OFV, you did not tell the entire story. If you looked only at the OF, you would go for the absolute minimum of OF. If you ignore small changes, it means that you use some other diagnostic to (possibly) select a model with higher OFV (if the difference is not too high, within 5-10-20 units), preferring that model based on other signs (convergence? plots? number of parameters?). This is exactly what I was referring to when I mentioned that OF is just one of the criteria.

One common example where OF is not the best guide is the modeling of absorption. You can spend weeks building progressively more and more complicated models of absorption profiles (with parallel, sequential, time-dependent, M-time-modeled absorption etc.) with large drops in OF (that correspond to minor improvement for a few patients), with no gain in predictive power of your primary parameters of interest, for example, steady-state exposure.

To provide an example of the bootstrap plot, I put it here:

http://quantpharm.com/pdf_files/example.pdf

For 1000 bootstrap problems, parameter estimates were plotted against each other. You can immediately see that SLOP and EC50 are strongly correlated while all other parameters are not correlated. CIs and even the correlation coefficient value do not tell the whole story about the model. You can get similar results from the covariance-step correlation matrix of parameter estimates, but it requires simulations to visualize it as clearly as from bootstrap results. The advantage of bootstrap plots is that one can easily study correlations and variability of not only primary parameters (such as theta, omega, etc.), but also relations between derived parameters.
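
A hedged sketch of how such a plot could be produced from a table of bootstrap estimates (the file and column layout are hypothetical, not taken from the linked example):

import pandas as pd
import matplotlib.pyplot as plt

boot = pd.read_csv("bootstrap_results.csv")   # hypothetical: one row per run, one column per parameter

# Scatter-plot matrix of every parameter against every other parameter.
pd.plotting.scatter_matrix(boot, figsize=(8, 8), diagonal="hist", alpha=0.3)
plt.savefig("bootstrap_pairs.png", dpi=150)

# The same information numerically: pairwise correlations of the estimates.
print(boot.corr().round(2))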

Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:    www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:    (301) 767 5566




Nick Holford wrote:
Leonid,

I do not experience "random stops at arbitrary point with arbitrary error" so I don't understand what your problem is.

The objective function is the primary metric of goodness of fit. I agree it is possible to get drops in objective function that are associated with unreasonable parameter estimates (typically an OMEGA estimate). But I look at the parameter estimates after each run so that I can detect this kind of problem. Part of the display of the parameter estimates is the correlation of random effects if I am using OMEGA BLOCK. This is also a weaker secondary tool. By exploring different models I can get a feel for which parts of the model are informative and which are not by looking at the change in OBJ. Small (5-10) changes in OBJ are not of much interest. A change of OBJ of at least 50 is usually needed to detect anything of practical importance.
I don't understand what you find of interest in the correlation of bootstrap parameter estimates. This is really nothing more than you would get from looking at the correlation matrix of the estimate from the covariance step. High estimation correlations point to poor estimability of the parameters but I think they are not very helpful for pointing to ways to improve the model.

Nevertheless I can agree to disagree on our modelling art :-)

Nick

Leonid Gibiansky wrote:
Nick,

I think it is dangerous to rely heavily on the objective function (let alone on ONLY the objective function) in the model development process. I am very surprised that you use it as the main diagnostic. If you think that NONMEM randomly stops at an arbitrary point with an arbitrary error, how can you rely on the result of this random process as the main guide in model development? I pay attention to the OF, but only as one of a large toolbox of diagnostics (most of them graphics). I routinely see examples where over-parameterized unstable models provide better objective function values, but this is not a sufficient reason to select them. If you reject them in favor of simpler and more stable models, you would see fewer random stops and more models with convergence and successful covariance steps.

Even with the bootstrap, I see the main real output of this procedure in revealing the correlation of the parameter estimates rather than in the computation of CIs. CIs are less informative, while visualization of correlations may suggest ways to improve the model.

Anyway, it looks like there are at least as many modeling methods as modelers: fortunately for all of us, this is still art, not science; therefore, the time when everything will be done by computers is not too close.

Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:    www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:    (301) 767 5566




Nick Holford wrote:
Mats, Leonid,

Thanks for your definitions. I think I prefer that provided by Mats but he doesn't say what his test for goodness-of-fit might be.

Leonid already assumes that convergence/covariance are diagnostic so it doesn't help at all with an independent definition of overparameterization. Correlation of random effects is often a very important part of a model -- especially for future predictions -- so I don't see that as a useful test -- unless you restrict it to pathological values, e.g. |correlation|>0.9? Even with very high correlations I sometimes leave them in the model because setting the covariance to zero often causes quite a big worsening of the OBJ.
My own view is that "overparameterization" is not a black and white entity. Parameters can be estimated with decreasing degrees of confidence depending on many things such as the design and the adequacy of the model. Parameter confidence intervals (preferably by bootstrap) are the way I would evaluate how well parameters are estimated. I usually rely on OBJ changes alone during model development, with a VPC and bootstrap confidence interval when I seem to have extracted all I can from the data. The VPC and CIs may well prompt further model development and the cycle continues.
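
A hedged sketch of how bootstrap confidence intervals could be tabulated from a table of bootstrap estimates (the file name and layout are hypothetical):

import pandas as pd

boot = pd.read_csv("bootstrap_results.csv")   # hypothetical: one row per run, one column per parameter

# Nonparametric 95% percentile confidence intervals for each parameter.
ci = boot.quantile([0.025, 0.5, 0.975]).T
ci.columns = ["2.5%", "median", "97.5%"]
print(ci.round(3))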

Nick


Leonid Gibiansky wrote:
Hi Nick,

I am not sure how you build your models, but I use convergence, relative standard errors, the correlation matrix of parameter estimates (reported by the covariance step), and correlation of random effects quite extensively when I decide whether I need extra compartments, extra random effects, nonlinearity in the model, etc. For me they are very useful as diagnostics of over-parameterization. This is the direct evidence (proof?) that they are useful :)

For new modelers who are just starting to learn how to do it, or have limited experience, or have problems on the way, I would advise paying careful attention to these issues since they often help me to detect problems. You seem to disagree with me; that is fine, I am not trying to impose on you or anybody else my way of doing the analysis. This is just advice: you (and others) are free to use it or ignore it :)

Thanks
Leonid
Mats Karlsson wrote:
I would say that if you can remove parameters/model components without detriment to goodness-of-fit then the model is overparameterized.


--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holf...@auckland.ac.nz tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
