RE: [NMusers] Issues with VPC

2014-10-06 Thread Ribbing, Jakob
Dear Xinting,

I think you should try changing the name used for the dependent variable in the NONMEM control stream, from DV=LNDV (which is what I assume you have right now on $INPUT?) to just DV.
The name of the DV column in the data file can remain LNDV, so this is just a matter of changing $INPUT in the control stream. Then, in the call to PsN, you change to -dv=DV (which is the default argument for this option).
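For illustration, a sketch assuming a simple column layout (your actual $INPUT record and model file name will differ):

$INPUT ID TIME AMT DV MDV   ; the header row in the data file can still read LNDV

and the PsN call then becomes, e.g.:

vpc run1.mod -samples=1000 -dv=DV

(-dv=DV is the default, so the option can also be dropped.)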

In general PsN has problems when one variable is assigned two different names (like DV=LNDV), but I cannot say for sure that this is what happened in your case.
However, it is a quick change, so I suggest you try it.

Best regards

Jakob

PS. Technical questions that relate more to PsN than to NONMEM are better directed to the PsN-users list than to the nonmem-users list. DS.



RE: [NMusers] backward integration from t-a to t

2014-01-15 Thread Ribbing, Jakob
Hi Pavel,

I agree with you that it is not uncommon to have AUC drive efficacy or safety endpoints.
However, you seem to have the impression that this is commonly done using cumulative AUC, and I can assure you that is rarely the case.
I have only seen that for safety endpoints where it has been justified (treatment is limited to a few cycles due to accumulation of a side effect which for practical purposes can be regarded as irreversible).
Even for cases where treatment/disease is completely curative it is not a standard approach to use cumulative AUC to drive efficacy (e.g. antibiotics, where the infection may be eradicated, but the bacterial-killing effect wears off after the drug has been eliminated; so even if the disease does not come back, the actual drug effect has worn off).
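For reference, when cumulative AUC is used it is typically just carried as an extra state in $DES; a minimal sketch, assuming a one-compartment model with first-order absorption and KA, CL and V defined in $PK:

$DES
DADT(1) = -KA*A(1)                 ; depot
DADT(2) =  KA*A(1) - (CL/V)*A(2)   ; central compartment
DADT(3) =  A(2)/V                  ; cumulative AUC of the plasma concentration

A moving-window AUC(t-a, t) cannot be obtained this simply, which is part of the problem discussed below.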

At steady state during multiple dosing, AUC over a dosing interval (or Cav,ss) can sometimes be used to drive steady-state efficacy or safety.
However, it seems that in your case you have fluctuations in drug response even at steady state?
Otherwise, this AUC can be expressed as an analytical solution or added as an input variable in your dataset, in case you are concerned about run times.
But with that approach you would not see any fluctuation in drug response at steady state, so in your case it may be better to use concentrations to drive efficacy?

For a “moving average” it would sometimes be possible to calculate AUC analytically.
However, a moving-average AUC would rarely be a mechanistic description of effect delay. Leonid provided one possible solution (an effect compartment).
However, there are many alternatives and it is not possible to say which is the best in your specific case(s) without more information, e.g.:

· Are you thinking about single dose or multiple dosing, and in the latter case is it sufficient to describe your endpoint at steady state?

· And is the effect appearing with great delay over many days/weeks, or does it rather fluctuate with fluctuating concentrations? (e.g. at multiple dosing of a low dose, do you have fluctuations over a dosing interval in your efficacy endpoint that are due to fluctuations in PK, i.e. aside from any circadian variation?)

· Does a higher dose reach its efficacy steady state faster than a lower dose (i.e. the time to efficacy steady state, not the level of response, which should differ)?

· What is the mechanism for the effect delay (i.e. the delay in onset and offset of effect that is not due to accumulation of PK at the start of treatment)?

Are you aware of the standard models for effect delay that one would commonly consider, and if so, why did you dismiss these?
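One of those standard models is the effect-compartment (link) model; a minimal sketch, assuming a central compartment A(2) with volume V and hypothetical parameters KE0, EMAX and EC50:

$DES
DADT(3) = KE0*(A(2)/V - A(3))   ; effect-site concentration equilibrates with plasma

$ERROR
CE  = A(3)
EFF = EMAX*CE/(EC50 + CE)       ; effect is reversible and wears off as CE declines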

Best regards

Jakob

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Pavel Belo
Sent: 14 January 2014 18:45
To: Bauer, Robert
Cc: nmusers@globomaxnm.com
Subject: [NMusers] backward integration from t-a to t

Dear Robert,

Efficacy is frequently considered a function of AUC.  (AUC is just an integral; it is obvious how to calculate AUC in any software which can solve ODEs.)  A disadvantage of this model of efficacy is that the effect is irreversible, because the AUC of concentration can only increase; it cannot decrease.  In many cases, a more meaningful model is a model where AUC is calculated from time t-a to t (a kind of "moving average"), where t is time in the system of differential equations (variable T in NONMEM).   There are 2 obvious ways to calculate AUC(t-a, t).  The first is to do backward integration, which looks like a hard and resource-consuming way for NONMEM.  The second one is to keep in memory the AUC for all time points used during the integration and calculate AUC(t-a, t) as AUC(t) - AUC(t-a), where AUC(t-a) can be interpolated using the two closest time points below and above t-a.

Is there a way to access AUC for the past time points (

RE: [NMusers] Getting rid of correlation issues between CL and volume parameters

2013-11-26 Thread Ribbing, Jakob
Dear all,

Apologies, I just noticed that Leonid made this point long ago!
Several postings from different people conveying the same message do often occur because of the lag time before messages are distributed.
In this case, however, it was only because I had not read all previous postings to the thread before sending a reply!

Jakob

-Original Message-
From: Ribbing, Jakob
Sent: 26 November 2013 10:46
To: Mueller-Plock, Nele; Leonid Gibiansky; 'nmusers'
Cc: Ribbing, Jakob
Subject: RE: [NMusers] Getting rid of correlation issues between CL and volume 
parameters

Hi Nele,

I believe Matt's point related more to the situation where any remaining correlation between the CL and V random components cannot be accounted for by covariates, so that both an eta on F and a BLOCK(2) on CL and V are used?

If the eta on F and the covariates take care of the correlation between CL and V: I would say that you may get even more informative diagnostics with this implementation.
For example, if you have not yet taken dose/formulation into account and this affects only F, it would come out as a clearer trend in eta1 (relative F). This would help interpretation (but I would highlight Nick's earlier point that the eta on F may capture other nonlinearities that are shared between CL and V, like the degree of protein binding for a low-extraction drug).
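For concreteness, the two codings being contrasted could look as follows (THETA/ETA numbering and initial estimates are hypothetical):

FF1 = EXP(ETA(1))                  ; option 1: eta on relative bioavailability
CL  = THETA(1)*EXP(ETA(2))/FF1
V   = THETA(2)*EXP(ETA(3))/FF1
$OMEGA 0.1 0.1 0.1                 ; diagonal OMEGA

CL  = THETA(1)*EXP(ETA(1))         ; option 2: no eta on F
V   = THETA(2)*EXP(ETA(2))
$OMEGA BLOCK(2) 0.1 0.05 0.1       ; CL-V covariance estimated directly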

Best

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Mueller-Plock, Nele
Sent: 26 November 2013 08:21
To: Leonid Gibiansky; 'nmusers'
Subject: RE: [NMusers] Getting rid of correlation issues between CL and volume 
parameters

Dear all,

Thanks for picking up this discussion and bringing in so many points of view. When I started the discussion I had in mind the physiological viewpoint, from which we know that if there is between-subject variability in F1, this must result in a correlation between the volume and CL parameters. From the discussions I would conclude that the group would favor accounting for this correlation via inclusion of an ETA on F1, with a coding of
FF1=EXP(ETA(1))
CL=THETA()*EXP(ETA())/FF1
V=THETA()*EXP(ETA())/FF1

although this does not mean that there is no additional correlation between the parameters that needs to be accounted for in an off-diagonal OMEGA BLOCK structure? Also, I am afraid I was not able to completely follow Matt's argumentation, but I would also be interested to hear whether implementing the code above might lead to misleading plots.

Thanks and best
Nele
__
 
Dr. Nele Mueller-Plock, CAPM
Principal Scientist Modeling and Simulation
Global Pharmacometrics
Therapeutic Area Group
 
Takeda Pharmaceuticals International GmbH
Thurgauerstrasse 130
8152 Glattpark-Opfikon (Zürich)
Switzerland

Visitor address:
Alpenstrasse 3
8152 Glattpark-Opfikon (Zürich)
Switzerland

Phone: (+41) 44 / 55 51 404 
Mobile: (+41) 79 / 654 33 99
 
mailto: nele.mueller-pl...@takeda.com
http://www.takeda.com

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Leonid Gibiansky
Sent: Dienstag, 26. November 2013 00:51
To: 'nmusers'
Subject: Re: [NMusers] Getting rid of correlation issues between CL and volume 
parameters

Another argument in favor of using F1 ~ EXP(ETA(1)) instead of a block OMEGA matrix is covariate modeling. In cases where the variability in apparent CL and V is due to F1 variability, this formulation allows a more mechanistic interpretation of the covariate effects and of ETA dependencies on covariates. For example, one can easily explain why ETA_F1 may depend on food, while it is less straightforward to interpret an ETA_V dependence on food. So while these models (with F1=1 and an OMEGA block, versus F1=EXP(ETA(1)) and a diagonal OMEGA) may be numerically similar if not equivalent, it could be better to use the more mechanistically relevant model and put the variability where it would be expected from a mechanistic point of view.
Regards,
Leonid


--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566



On 11/25/2013 1:43 PM, Nick Holford wrote:
> Bob,
>
> You use an estimation method justification for choosing between 
> estimating the covariance of CL and V and estimating the variance of F.
>
> An alternative view is to apply a fixed effect assumption based on 
> pharmacokinetic theory. The fixed effect assumption is that some of 
> the variation in CL and V is due to differences in bioavailability and 
> other factors such as linear plasma protein binding and differences in 
> the actual amount of drug in the oral formulation. This fixed effect 
> assumption is described in the model by the variance of F.
>
> It is quite plausible to imagine that there

RE: [NMusers] Getting rid of correlation issues between CL and volume parameters

2013-11-26 Thread Ribbing, Jakob
Hi Nele,

I believe Matt's point related more to the situation where any remaining correlation between the CL and V random components cannot be accounted for by covariates, so that both an eta on F and a BLOCK(2) on CL and V are used?

If the eta on F and the covariates take care of the correlation between CL and V: I would say that you may get even more informative diagnostics with this implementation.
For example, if you have not yet taken dose/formulation into account and this affects only F, it would come out as a clearer trend in eta1 (relative F). This would help interpretation (but I would highlight Nick's earlier point that the eta on F may capture other nonlinearities that are shared between CL and V, like the degree of protein binding for a low-extraction drug).

Best

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Mueller-Plock, Nele
Sent: 26 November 2013 08:21
To: Leonid Gibiansky; 'nmusers'
Subject: RE: [NMusers] Getting rid of correlation issues between CL and volume 
parameters

Dear all,

Thanks for picking up this discussion and bringing in so many points of view. When I started the discussion I had in mind the physiological viewpoint, from which we know that if there is between-subject variability in F1, this must result in a correlation between the volume and CL parameters. From the discussions I would conclude that the group would favor accounting for this correlation via inclusion of an ETA on F1, with a coding of
FF1=EXP(ETA(1))
CL=THETA()*EXP(ETA())/FF1
V=THETA()*EXP(ETA())/FF1

although this does not mean that there is no additional correlation between the parameters that needs to be accounted for in an off-diagonal OMEGA BLOCK structure? Also, I am afraid I was not able to completely follow Matt's argumentation, but I would also be interested to hear whether implementing the code above might lead to misleading plots.

Thanks and best
Nele
__
 
Dr. Nele Mueller-Plock, CAPM
Principal Scientist Modeling and Simulation
Global Pharmacometrics
Therapeutic Area Group
 
Takeda Pharmaceuticals International GmbH
Thurgauerstrasse 130
8152 Glattpark-Opfikon (Zürich)
Switzerland

Visitor address:
Alpenstrasse 3
8152 Glattpark-Opfikon (Zürich)
Switzerland

Phone: (+41) 44 / 55 51 404 
Mobile: (+41) 79 / 654 33 99
 
mailto: nele.mueller-pl...@takeda.com
http://www.takeda.com

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Leonid Gibiansky
Sent: Dienstag, 26. November 2013 00:51
To: 'nmusers'
Subject: Re: [NMusers] Getting rid of correlation issues between CL and volume 
parameters

Another argument in favor of using F1 ~ EXP(ETA(1)) instead of a block OMEGA matrix is covariate modeling. In cases where the variability in apparent CL and V is due to F1 variability, this formulation allows a more mechanistic interpretation of the covariate effects and of ETA dependencies on covariates. For example, one can easily explain why ETA_F1 may depend on food, while it is less straightforward to interpret an ETA_V dependence on food. So while these models (with F1=1 and an OMEGA block, versus F1=EXP(ETA(1)) and a diagonal OMEGA) may be numerically similar if not equivalent, it could be better to use the more mechanistically relevant model and put the variability where it would be expected from a mechanistic point of view.
Regards,
Leonid


--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566



On 11/25/2013 1:43 PM, Nick Holford wrote:
> Bob,
>
> You use an estimation method justification for choosing between 
> estimating the covariance of CL and V and estimating the variance of F.
>
> An alternative view is to apply a fixed effect assumption based on 
> pharmacokinetic theory. The fixed effect assumption is that some of 
> the variation in CL and V is due to differences in bioavailability and 
> other factors such as linear plasma protein binding and differences in 
> the actual amount of drug in the oral formulation. This fixed effect 
> assumption is described in the model by the variance of F.
>
> It is quite plausible to imagine that there is still some covariance 
> between CL and V that is not related to the differences in F. For 
> example, if you did not know the subject's weights and therefore could 
> not account for the correlated effects of weight on CL and V. The 
> estimation of the variance of F would only partly account for this 
> because of the non-linear correlation of weight with CL and V. Another 
> non-linear correlation would occur if plasma protein binding was 
> non-linear in the range of measured total concentrations.
>
> In such a case one might propose trying to estimate the covariance of CL 
> and V as well as including F as a fixed effect and estimating the 
> variance of

RE: [NMusers] 2 cmax after a single SC dose

2013-11-04 Thread Ribbing, Jakob
Hi Pavel,

Since you say that IPRED shows two Cmax, this could not be residual error.
Like Leonid, I believe this is an error in the setup (control stream or dataset) rather than an error in NONMEM.
However, since you are using an ADVAN13 model: if the dose is split for absorption into two compartments, e.g. with zero-order absorption into a deep compartment, you may see this kind of behaviour. Since you only see a duplicate Cmax in some subjects, I would suggest you look at these subjects to find out how they are different. Is it the dose/formulation, the observation time points, or the eta values that are different from the other subjects?
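For illustration, a split-absorption structure of the kind that can produce an early bump before the main Cmax; parameter names, THETA numbering and the lag are all hypothetical, and the dataset would need dose records into both compartments 1 and 2 (KA1, KA2, CL and V assumed defined elsewhere in $PK):

$PK
FR    = THETA(4)                   ; fraction absorbed via the fast route
F1    = FR
F2    = 1 - FR
ALAG2 = THETA(5)                   ; delayed absorption from the deep depot

$DES
DADT(1) = -KA1*A(1)                ; fast depot
DADT(2) = -KA2*A(2)                ; slow/deep depot
DADT(3) =  KA1*A(1) + KA2*A(2) - (CL/V)*A(3)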

Best

Jakob


-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Leonid Gibiansky
Sent: 03 November 2013 02:49
Cc: nmusers
Subject: Re: [NMusers] 2 cmax after a single SC dose

Is this a simulation run? Maybe residual error? I have never seen an integration problem result in incorrect profiles. Also check the data file; maybe an extra dose was given?
Leonid

> On Nov 2, 2013, at 8:01 PM, non...@optonline.net wrote:
> 
> Hello NONMEM Users,
>  
>  NONMEM outputs for a few subjects look strange.  Some subjects have 2 Cmax 
> (IPRED) after a single SC dose.  One Cmax looks normal.  The other one is a 
> smaller bump before the normal Cmax.  Otherwise, the problem runs OK.  Can it 
> be an integration error?  I use ADVAN13. 
>  
> Thank you,
> Pavel


RE: [NMusers] OMEGA - CORR MATRIX FOR RANDOM EFFECTS - ETAS

2013-09-17 Thread Ribbing, Jakob
Dear Jules,
You are correct in pointing out where the problem lies.
However, the covariance matrix is not fine, since it is at the boundary just as much as its translation into a correlation.
The correlation is calculated by this equation:
Cor(eta1,eta2) = OM(1,2) / (sqrt(OM(1,1))*sqrt(OM(2,2)))
where:
OM(1,1) and OM(2,2) represent the eta variances (as reported in the NONMEM output in the OMEGA cov matrix, i.e. on the variance scale and not the sd scale that is reported on the diagonal of the OMEGA corr matrix), and
OM(1,2) represents Cov(eta1,eta2), i.e. as reported for the off-diagonal of the OMEGA cov matrix.
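As a purely hypothetical numeric illustration (values made up, not taken from your output):

OM(1,1) = 0.09,  OM(2,2) = 0.16,  OM(1,2) = 0.12
Cor(eta1,eta2) = 0.12 / (sqrt(0.09)*sqrt(0.16)) = 0.12 / (0.3*0.4) = 1.0, i.e. at the boundary.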

I did not check your control stream thoroughly, but apparently, with the current data and model, baseline (E0) and Emax are completely correlated on the individual level, maybe because of lack of information, maybe because of physiology. You currently have an additive drug effect, but maybe Emax brings the PD endpoint all the way down to zero, or reduces the endpoint by a certain fraction (i.e. a multiplicative model); or maybe healthy values cannot be reduced at all, whereas elevated values can be reduced (almost) down to healthy values? If you get that “structural” part of your drug-effect model right, you may then be able to estimate the correlation (or conclude that with the new structure these two etas are no longer correlated).
Best regards

Jakob
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Jules Heuberger
Sent: 17 September 2013 11:42
To: nmusers@globomaxnm.com
Subject: [NMusers] OMEGA - CORR MATRIX FOR RANDOM EFFECTS - ETAS

Dear NM Users,

I ran into a problem with a NM 7.2 run with an OMEGA BLOCK(2), receiving an error message saying the parameter estimate is near its boundary (see also the discussion in [NMusers] How to solve it? ERROR: PARAMETER ESTIMATE IS NEAR ITS BOUNDARY). After digging through the documentation I am still unsure where the problem originates. What I figured out is that the problem lies in the OMEGA - CORR MATRIX FOR RANDOM EFFECTS - ETAS, a matrix I believe was only introduced in NM 7.2. The estimate in this matrix that is giving problems is the one for the OMEGA BLOCK, which is estimated to be 1 (a boundary).

Now, as far as I know, the OMEGA - CORR MATRIX is related to the OMEGA - COV 
MATRIX as the sqrt of its estimate. However, it is unclear to me how the OMEGA 
BLOCK estimate (COV MATRIX), which gives the correlation between the two ETAs, 
is related to the OMEGA - CORR MATRIX value. What does this value actually 
mean, and what does it mean for my error message, as the correlation estimate 
itself seems to be fine? In the print below you can see the model and resulting 
estimates, the important ones being 5.26E-02 (OMEGA - COV MATRIX of ETA1-ETA2) 
and 1.00E+00 (OMEGA - CORR MATRIX of ETA1-ETA2). The model has two ETAs and a proportional error model.

Thanks in advance for any insights and help,

Best,

Jules Heuberger




RE: [NMusers] Unable to post to nmusers

2013-08-16 Thread Ribbing, Jakob
Deleting the original question from my reply made the fourth attempt successful, so it must have been the sheer length that prevented previous attempts from reaching nmusers.

Thanks!

Jakob

From: Martin Bergstrand [mailto:martin.bergstr...@farmbio.uu.se]
Sent: 16 August 2013 14:21
To: Ribbing, Jakob; nmusers@globomaxnm.com; robert.ba...@iconplc.com
Subject: RE: [NMusers] Unable to post to nmusers

Have a look at this old post...

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Ribbing, Jakob
Sent: den 16 augusti 2013 14:37
To: nmusers@globomaxnm.com; robert.ba...@iconplc.com
Subject: [NMusers] Unable to post to nmusers

Dear all,

Since Monday I have made three attempts to post a reply on the thread "question 
in Box-Cox Transformations in K-PD model", initiated by Kehua.
None of these attempts have been successful; at the same time I have noticed that other topics have successfully gone out via the distribution list.
Do you have any ideas why some e-mails are not distributed on the list?

I did not include any attachments to the e-mail, and did not use any kind of 
wording that could be seen as offensive...

Best regards

Jakob





RE: [NMusers] question in Box-Cox Transformations in K-PD model

2013-08-16 Thread Ribbing, Jakob
From: Ribbing, Jakob
Sent: 12 August 2013 23:45
To: 'kehua wu'; nmusers
Cc: Ribbing, Jakob
Subject: RE: [NMusers] Fwd: question in Box-Cox Transformations in K-PD model

Hi Kehua,

You say that you did not get the estimate of TH2 in the output file, but instead got the initial estimate. Do you mean that the model failed to terminate, or that it minimised but TH2 did not move from its initial estimate? I think we need more information from the control stream.
Also, the part of the control stream that you shared did not include initial estimates. Did you start with a negative initial estimate for TH2? I would add an upper boundary at zero as well.

For alternative statistical models with FOCE (or FOCEI, where appropriate) I have seen a couple of cases where likelihood profiling indicates that there is information on the parameter, but where the estimate did not move from its initial estimate (to describe the shape of the individual-parameter distribution or the residual-error distribution). These models were often complex, or at least over-parameterised in some regards. In your case: do you have enough information to estimate etas on KIN, KDE, EKD50 and EMAX, or are some of these omegas fixed? In addition, estimating EKD50 (theta and omega) is often very helpful to avoid correlation between the estimates (which is why this parameterisation was suggested in the first place). However, there are also cases where this parameterisation induces a correlation between the estimates, and in that case estimating EA50 may be more useful.

For the limited number of cases where I have tested different "semi-parametric" distributions for individual parameters, I have found the Box-Cox transformation to be one of the more stable alternatives.
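A common NM-TRAN implementation of a Box-Cox transformed eta, sketched here on CL with hypothetical THETA numbering (the shape parameter must not be exactly zero in this form):

BXPAR = THETA(2)                          ; Box-Cox shape parameter (TH2)
ETATR = (EXP(ETA(1))**BXPAR - 1)/BXPAR    ; approaches ETA(1) as BXPAR goes to 0
CL    = THETA(1)*EXP(ETATR)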

Best regards

Jakob



[NMusers] Unable to post to nmusers

2013-08-16 Thread Ribbing, Jakob
Dear all,

Since Monday I have made three attempts to post a reply on the thread "question 
in Box-Cox Transformations in K-PD model", initiated by Kehua.
None of these attempts have been successful, at the same time I have noticed 
that other topics have successfully reached out via the distribution list.
Do you have any ideas why some e-mails are not distributed on the list?

I did not include any attachments to the e-mail, and did not use any kind of 
wording that could be seen as offensive...

Best regards

Jakob





RE: [NMusers] Simulation with uncertainty

2013-08-02 Thread Ribbing, Jakob
Hi Dinko,

I focused the answer on uncertainty in population parameters, but obviously there are other uncertainties, like uncertainty in covariate values (in the same population, or in a new and often wider/more severe population of a prospective study), uncertainty in model space, and how the model(s) will work for extrapolation to a different population, duration, or other study setting. Likewise, there are many ways of deriving a var-covar matrix (most commonly from the NONMEM covstep, but there are other methods, like sse (in PsN), multivariate llp[1], SIR[2], etc. Simulation using $PRIOR NWPRI in NONMEM is generally not recommended before nm8, but TNPRI is OK and is the most automatic way of using the cov matrix). I try to keep the answer within the scope of the question, though.

With regards to uncertainty in parameter space, my opinion is that most often both the var-covar matrix (e.g. from the NONMEM covstep) and the (non-parametric) bootstrap work fine.

· The bootstrap is more computer intensive, but often requires less 
work for the analyst.

· The bootstrap requires sufficient subjects speaking to each 
parameter. There were some preliminary results on this for population models, 
presented at this year's PAGE[3]

· Among the bootstrap samples there are sometimes a few that are WAY OUT, and in that case you may need to deal with it (if the parameter in question would highly affect the outcome you are after with your simulations). In many cases it would be necessary to scrutinize the parameter values you are getting from the var-covar matrix in much the same way as for the (non-parametric) bootstrap, but for simpler cases it may be sufficient to look at the numerics of the point estimates and the var-covar matrix to get the picture.

For a more elaborate model, or where uncertainty is high (maybe for several parameters of interest), using the var-covar matrix becomes more cumbersome for the analyst. If still pursuing this approach I would generally run the bootstrap to understand how the model must be re-parameterised for the var-covar matrix to be useful, e.g.:

· A parameter with a lower boundary of zero and high uncertainty should generally be estimated on the log scale.

o   However, a caution: if you log-transform you assume that Emax is greater than zero, and then obviously any dose will produce an effect that is statistically different from zero, because of this assumption.

o   Notice that we estimate on the transformed scale to handle uncertainty in 
population parameters appropriately, and that the model fit should otherwise be 
identical (i.e. identical OFV for point estimates, but changes in the nonmem 
covmatrix).

· I have seen a few examples where the drug-effect model has been a rather simplistic Emax model, but where ED50 (or EC50) was highly uncertain, with high correlation between the estimates of ED50 and Emax. In these situations, for the var-covar matrix to be useful it may not be enough only to log-transform, and one may have to re-parameterise, e.g. so that the primary parameters are ED50 and the typical efficacy at the reference dose (instead of TVEmax as a primary parameter; a primary parameter is what is actually estimated, e.g. represented as a theta). A sketch of this re-parameterisation follows after this list. With this parameterisation, the median (across the draws from the var-covar matrix) of the mean effect at the reference dose has agreed with the mean effect based on the point estimates (of course, simulations based on point estimates still include IIV, residual errors, etc.). I am not saying these two would always have to agree, but for these cases the agreement has been there for the non-parametric bootstrap (both before and after the re-parameterisation). In such a situation I say that the initial results from the var-covar matrix were not reliable.

o   This is the major benefit with the bootstrap; that it avoids the assumption 
that comes with the multi-variate normal and therefore does not require these 
types of re-parameterisations for simulations with uncertainty (in population 
parameters)

o   Notice that even for these examples of re-parameterisation, I am not 
suggesting that you change the actual model. If you previously had IIV on Emax, 
then keep it like that, even though TVEmax now is a secondary parameter (i.e. a 
parameter that is not estimated directly, since the theta now represents the efficacy at the reference dose)

§  OFV for the point estimates will not change with these types of re-parameterisations, since the model is the same, much like estimating CL and V instead of K and V; it is the same model, just re-parameterised (if, on the other hand, you move IIV from acting on K and V to acting on CL and V, you would get different parameter values and a different OFV)

§  The distribution of e.g. Emax and ED50 based on the (non-parametric) bootstrap will not change with these re-parameterisations, since the bootstrap samples are drawn without the assumption of multi-variate normality.
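A sketch of the log-scale and reference-dose re-parameterisation mentioned above (THETA numbering, the Emax form and the 100 mg reference dose are all hypothetical):

ED50 = EXP(THETA(1))               ; log scale keeps the uncertainty distribution positive
DREF = 100                         ; reference dose
EREF = THETA(2)                    ; primary parameter: typical effect at the reference dose
EMAX = EREF*(ED50 + DREF)/DREF     ; Emax becomes a secondary parameter
E    = EMAX*DOSE/(ED50 + DOSE)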

RE: [NMusers] simulation with uncertainty of THETA

2013-07-08 Thread Ribbing, Jakob
Hi Ying,

You mention that you have internal (individual level?) data but also literature 
data to complement this information.
One way to combine the two in nonmem is via the $PRIOR functionality.
If your literature source includes uncertainty of the estimated parameters it 
would be far less arbitrary to include this as a prior and that way you may not 
have to fix parameters from literature.
The nonmem covariance matrix would then contain uncertainty and correlation 
between the estimates for all population parameters.

Are you saying that your internal data does not contain enough information to 
estimate IIV on any PK or PD parameter?

With regards to PsN: Simulations with uncertainty in population parameters can 
be incorporated according to three different approaches.
These are available for the PsN programs vpc/npc and sse (see PsN 
documentation).
If you decide to use fixed parameters and to arbitrarily add some uncertainty: you will need to create a parameter table similar to the bootstrap raw_results file and plug that into the subsequent simulation. In the normal case, bootstrap followed by vpc or sse is dead easy to perform in PsN (and the bootstrap would create that table for you to plug into the simulations). In your case you may find that the approach with $PRIOR is more efficient (and somewhat less arbitrary).
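On the command line that workflow is roughly as follows (tool and option names as in recent PsN versions; the raw_results file name depends on your model name, so check the PsN documentation for your installation):

bootstrap run1.mod -samples=1000 -dir=boot1
vpc run1.mod -samples=1000 -rawres_input=boot1/raw_results_run1.csv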

Finally, just a caution about seeing the THETA as representing the mean. Often random inter-individual variability (IIV) is added as a log-normal distribution around the typical value (defined by a single theta, assuming no covariates in the model). The typical value in this case is the median, but not the mean. For a single parameter, an IIV of approximately 40% CV translates into a mean only 8% above the median, whereas an IIV of approximately 80% CV translates into a mean 38% higher than the median (i.e. the mean parameter value is 38% higher than the theta). Sometimes these biases stack up, so treating the mean curve from the literature as the PRED curve in the individual model may heavily bias your results. Likewise, if IIV is high, literature values based on a naive pooled approach (representing a mean curve) would be far away from the typical values in the population model. Depending on what information you are combining (from aggregate data and individual "patient" data) and how thorough you want to be, there is a range of different approaches for dealing with this.
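The numbers quoted above follow directly from the log-normal distribution:

For P = TV*EXP(ETA) with ETA ~ N(0, OM):  mean/median = EXP(OM/2)
OM = 0.16 (approx. 40% CV):  EXP(0.08) is approx. 1.08  (mean about 8% above the median)
OM = 0.64 (approx. 80% CV):  EXP(0.32) is approx. 1.38  (mean about 38% above the median)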

Best wishes

Jakob

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Ying Zhang
Sent: 08 July 2013 21:18
To: nmusers@globomaxnm.com
Subject: [NMusers] simulation with uncertainty of THETA

Dear all,

When doing the simulation: I have all of the parameter information from the literature and internal data, so I will fix the PKPD parameters and add about 30% IIV for PK and 40% for PD. But my question is whether we need to think about the CV% of THETA; since we fixed it, we could not add it in one run. If it is necessary, for instance to add 20% CV to one mean THETA, how do we do it, or can PsN implement this?

Best
Ying


RE: [NMusers] Right skewness in bootstrap distribution

2013-04-23 Thread Ribbing, Jakob
Dear Felipe,

The distribution obtained from the (nonparametric) bootstrap represents uncertainty in the population parameters, and the histogram for V1 should not be interpreted as a distribution of individual parameter values. There are issues with relying on the nonparametric distribution based on only eight subjects. The tail to the right may be due to just one or two subjects with a larger central volume.
Otherwise (disregarding the small number of subjects in this specific example), there is nothing wrong with a right-tailed uncertainty distribution. In fact, it may even be expected when uncertainty is high and the parameter is restricted to positive values. You would obtain a similar uncertainty distribution from the NONMEM covmatrix by estimating the (typical) central volume on the log scale. This should not change the OFV, but will alter the covmatrix.

It is difficult to comment on whether the Vc estimate is unreasonable or not. If the early observations are well predicted by the model, then what amount is located in the central compartment, and what amount is in the two peripheral compartments at these early time points? If you do not understand how the model may describe the observed data you could output these amounts in a table and investigate the disposition at these early time points. NCA extrapolations to time zero may not agree, but that to me is mostly a theoretical issue; it would be pointless to measure concentrations at the same time as a (bolus) dose.
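A sketch of such a table, assuming compartments 1-3 hold the central and the two peripheral amounts (variable and file names hypothetical, added to the existing $ERROR block):

$ERROR
AC  = A(1)     ; amount in the central compartment
AP1 = A(2)     ; amount in the first peripheral compartment
AP2 = A(3)     ; amount in the second peripheral compartment

$TABLE ID TIME AC AP1 AP2 NOPRINT ONEHEADER FILE=amounts.tab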

Best regards

Jakob

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Felipe Hurtado
Sent: 23 April 2013 19:57
To: nmusers@globomaxnm.com
Subject: [NMusers] Right skewness in bootstrap distribution

Dear NONMEM users,

I am modeling some PK data using a linear 3-compartment model, in which drug concentrations were measured in two of these compartments simultaneously after an i.v. dose. The model fits the data reasonably well, and all parameters seem reasonable except for V1 (the volume of the central compartment, which happens to be the dosing compartment). The estimate for V1 is very small, which does not make sense considering the average dose given and the mean Cp0 calculated by NCA. This result suggests drug distribution is restricted to plasma; however, extensive distribution to tissues was observed. IIV for V1 is relatively small (19.6%, n=8 subjects). The histogram for V1 (nonparametric bootstrap with 100 replicates) shows a right-skewed distribution with the presence of a subpopulation and a broad confidence interval (the 5th percentile tends to zero).
I tried to solve this by fixing V1 to a reasonable value, running the model to calculate all other parameters, and then changing the initial estimates to these parameters in order to recalculate V1, but it turns out to give the same small estimate.

Any suggestions will be appreciated! Thanks in advance.

Felipe


[NMusers] RE: Simulation setting in the presence of Shrinkage in PK when doing PK-PD analysis

2013-02-18 Thread Ribbing, Jakob
Resending, since my posting from this morning (below) has not yet appeared on 
nmusers.
Apologies for any duplicate postings!

From: Ribbing, Jakob
Sent: 18 February 2013 09:59
To: "Kågedal, Matts"; nmusers@globomaxnm.com
Cc: Ribbing, Jakob
Subject: RE: Simulation setting in the presence of Shrinkage in PK when doing PK-PD analysis

Hi Matts,

I think you are correct; the problem you describe has not had much (public) discussion.
It is also correct, as you say, that this is mostly a problem when all of the below apply:

· sequential PK-PD analysis is applied (IPP approach, Zhang et al.)

· non-ignorable degree of shrinkage in PK parameters of relevance

o   Of relevance: the PK parameters effectively driving PD for the mechanism, 
e.g. CL/F if AUC is driving. In addition, if PD response develops over several 
weeks/months then shrinkage in  IOV may be ignored even for relevant PK 
parameters

· I would also like to add that for this to be an issue individual PK 
parameters must explain a fair degree of the variability in PD, which is not 
always the case

o   If driving PD with typical PK parameters (along with dose and other PD 
covariates) does not increase PD omegas, compared to IPP, then either PK 
shrinkage is already massive, or else it is not an issue for the IPP-PD model

If only the IPP approach is possible/practical a simplistic approach to 
simulate PD data is as follows:

· sample (with replacement) the individual PK parameters along with any potential covariates (maintaining the correlation between IPP and covariates, i.e. whole subject vectors for these entities, but generally not for dose, since dose should generally have only a random association with the IPP or the PK/PD covariates)

· then use the re-sampled datasets for simulating PD according to the PD model (driven by IPP, covariates, dose, etc.). The degree of shrinkage is then the same for PD estimation and simulation.
This approach may, for example, allow one to simulate a realistic PD response at multiple dosing based on single-dose PD only. When the MD data become available one may find that variabilities shift between PK and PD due to different PK shrinkage, but I would argue the simulated PD responses were still realistic. This approach is useful for predictions into the same population (especially if a sufficient number of subjects is available for re-sampling), but may not allow extrapolation into other populations where PK is projected to be different.

When possible the obvious solution is to apply one of the alternative 
approaches to simultaneous PK-PD fit; after you have arrived at a final-IPP 
model.
If a simultaneous fit is obtainable/practical this is the best option, but notice that, e.g. if you have rich PK data in healthy subjects and no PK data in patients (plus PD data in both populations): you can estimate separate omegas for the PD parameters in healthy subjects vs. patients, but it may be difficult to tell whether the patients' higher PD variability is due to PK shrinkage, or due to the actual PD variability being higher in this population (or both). PD variability may be confounded by a number of other factors that are actually variability in PK (fu, active metabolites and biophase distribution, just to mention a few where information may be absent on the individual level). Depending on the purpose of the modelling this is often not an issue, however.

As you suggest, there may be rare situations with IPP where a more complicated approach is needed, with a) simulation and re-estimation of the PK model, to obtain Empirical Bayes Estimates based on the simulated data, and then feeding these into the subsequent PD model. I would see this as a last resort. There are pitfalls in that, if the PD parameters have been estimated under one degree of PK shrinkage, then applying these estimates to a simulated example with a different PK shrinkage requires adjustment of the PD variability. I am not sure anyone has had to go down that route before, and if not I hope you do not have to either. Maybe others can advise on this?

Best regards

Jakob


Two methodological references:

Simultaneous vs. sequential analysis for population PK/PD data II: robustness 
of methods.
Zhang L, Beal SL, Sheiner LB.
J Pharmacokinet Pharmacodyn. 2003 Dec;30(6):405-16.

Simultaneous vs. sequential analysis for population PK/PD data I: best-case 
performance.
Zhang L, Beal SL, Sheiner LB.
J Pharmacokinet Pharmacodyn. 2003 Dec;30(6):387-404.




RE: [NMusers] question about incorporting genotyping data in disease progression model

2012-08-29 Thread Ribbing, Jakob
Hi Kehua,

If I understand you correctly, you screened thousands of genotypes to find those that appeared to be the (60 most) promising predictors?
Were the asthma patients in your NONMEM analysis part of the material you used for the GWAS screen, or is the NONMEM analysis based on external data from other patients that were not part of the initial screen?
I was also not quite clear on whether the subsequent NONMEM analysis was based on genotype or gene expression?
Either way, if an external set of patients was not used in the NONMEM analysis, that would be the reason you find so many significant covariates.

Apologies if this was a trivial answer that was not relevant for your work, but 
there are many examples of this in the field of data mining, where the multiple 
testing has not been taken into account when declaring significance or claiming 
that a highly predictive model has been established.

Best regards

Jakob


From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of kehua wu
Sent: 29 August 2012 17:21
To: nmusers@globomaxnm.com
Subject: [NMusers] question about incorporting genotyping data in disease 
progression model

Dear NONMEM users,
I am working on a model in asthma patients and trying to build a model of FEV1, which is an evaluation of lung function.

I have 500,000 genotypes. First, I screened the genotyping data by running a GWAS to find the potential genotypes, which gave me about 60 genotypes. Then, I tried to add these 60 genotypes to the model to find out if the progression of FEV1 is related to gene expression.

But the problem is that too many genotypes were associated with a significant change in OFV, which does not sound reasonable to me. I was hoping to find that a few (2-3) genotypes are associated with the progression in lung function.


I have tried to include the genotyping data as a discrete covariate (if genotype=1 then parameter=theta(1); if genotype=2 then parameter=theta(2); if genotype=3 then parameter=theta(3)), and as a power function (genotype**theta).
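For reference, such a discrete coding is commonly written with a reference category, along the lines of this sketch (variable names and THETA numbering hypothetical):

IF(GENO.EQ.1) GENEF = 1              ; reference genotype
IF(GENO.EQ.2) GENEF = THETA(4)
IF(GENO.EQ.3) GENEF = THETA(5)
SLP = THETA(1)*GENEF*EXP(ETA(1))     ; genotype effect on the disease-progression slope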

Did I do something wrong when including the genotyping data in the model as a covariate?

Thanks a lot in advance!

Kehua



Re: [NMusers] Priors and covariate model building

2012-06-23 Thread Ribbing, Jakob
Dear Palang and Martin,

For the published analysis, do you have any information on the covariates that you would like to investigate (mean and sd, or range)? Another factor weighing in on the approach you take may be what functional form(s) you consider for continuous covariates (e.g. linear vs. power).

If you have the means from the previous analysis, then one simple solution may be to centre any investigated covariates around these (prior) covariate means. If you find any highly important covariates, you may additionally consider a lower omega on that parameter, since the prior did not take this covariate into account (with a linear covariate model, and in the simplest case, based on the covariate sd in the previous study and the estimated covariate coefficient; this correction could be implemented on the fly, but is only important if your study population has any very important covariate effects beyond the allometry correction).
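A sketch of the centring, assuming a linear covariate model on CL and a prior-study mean weight of 70 kg (all names and numbers hypothetical):

TVCL = THETA(1)*(1 + THETA(2)*(WT - 70))   ; THETA(1) keeps its prior meaning at WT = 70 kg
CL   = TVCL*EXP(ETA(1))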

Best regards

Jakob

Sent from my iPhone

On 22 Jun 2012, at 19:39, "Palang Chotsiri" wrote:

> Dear NMusers,
> 
> I am trying to model a sparse dataset by using the benefit of previously 
> published parameter estimates (based on rich data sampling). When applying 
> the $PRIOR subroutine, the THETAs and ETAs estimates of the new dataset are 
> reasonable and the model fit satisfactory.
> 
> My question now relates to covariate modeling when a prior is applied. No 
> significant covariate relationships are included in my prior model (apart 
> from allometric scaling). The prior was derived based on rich PK sampling but 
> a fairly small sample size. The later sparse sampling study is conducted in a 
> larger group compared to the previous study. This might give us greater 
> power to detect covariate relationships based on this dataset.
> 
> Our problem lies in that we do not know how we can correctly conduct a 
> covariate model search with this model? The parameter estimates of the prior 
> are conditioned on the covariate distribution in the dataset on which it was 
> derived and are not necessarily relevant when a covariate relationship is 
> included.
> 
> Perhaps there is no ideal solution but we would be grateful for any ideas on 
> how to best conduct covariate model building when a prior is used.
> 
> Best regards,
> Palang Chotsiri & Martin Bergstrand
> 
> Mahidol-Oxford Tropical Medicine Research Unit,
> Bangkok 10400, THAILAND
> 
> 
> Ps. Ideal is of course to model both datasets together but that might not 
> always be possible for practical reasons.


RE: [NMusers] VPC results using PsN and Xpose

2012-05-10 Thread Ribbing, Jakob
All,

I think Leonid and Neil have pointed out two plausible explanations, I just 
wanted to highlight that these are two separate issues:

* If you have an additive component in your error model, a graph on the log scale will appear to widen at the end. This is fine. In this particular case, since the observations also widen at the end, this may be a likely explanation, given the limited information nmusers have.

* If you have implemented a translation of proportional + additive error on the log scale, this is only an approximation, and in particular for simulations it may fall over. This occurs when IPRED is VERY close to zero. Typically this happens around the time when drug is first absorbed (e.g. towards the end of a lag time), but if you have rapid elimination I guess it can happen at the end of the time interval. This error model is not suitable when IPRED is very small, and the issue only appears during simulation. As a result, one may simulate odd observations with concentrations towards the infinite around the time of the lag. If this is affecting your VPC you certainly need to deal with it, and one solution would be to change your error model, as Neil suggests. As an alternative, putting a very low cutoff on the IPRED used in weighting the error would help, as sketched below. Since the cutoff is very low it will not affect estimation, but it will remove unreasonable values during simulation.
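A sketch of the log-transformed-data error model with such a floor (the 0.001 floor and the THETA numbering are hypothetical; the floor should sit well below any observable concentration):

$ERROR
IPRE = F
IF(IPRE.LT.0.001) IPRE = 0.001                   ; floor only matters when simulating near the lag time
IPRED = LOG(IPRE)
W     = SQRT(THETA(8)**2 + (THETA(9)/IPRE)**2)   ; proportional + additive, approximated on the log scale
Y     = IPRED + W*EPS(1)

$SIGMA 1 FIX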

Best regards

Jakob

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Indranil Bhattacharya
Sent: 10 May 2012 19:39
To: Toufigh Gordi
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] VPC results using PsN and Xpose

Hi Toufigh, I saw exactly the same scenario when using the proportional + 
additive model in the log domain. It probably has to do with the residual error 
as Leonid suggested. Converting the residual model to just additive and 
estimating the error as a THETA (by fixing SIGMA =1, W=THETA) removed the 
widening in the terminal part of the VPCs.

Neil
On Thu, May 10, 2012 at 12:49 PM, Toufigh Gordi 
mailto:tgo...@rosaandco.com>> wrote:
Dear all,

We have performed a VPC using PsN and the results were plotted using Xpose 4. 
An interesting feature of the graph is that the outer limits of the interval do 
not follow the typical curve smoothly but change with the observed data. As an 
example, toward the end of the time interval, where we have a larger 
variability in the observations, the lines widen and capture most of the data. 
I have difficulties understanding why the prediction lines behave this way. Any 
comments?

Toufigh



--
Indranil Bhattacharya


Re: [NMusers] Sensitivity analysis

2012-02-19 Thread Ribbing, Jakob
Hi Norman,

If you have the most recent PsN version, then there is functionality to 
simulate with uncertainty in population parameters. PsN can use either the 
nonmem cov matrix or (non-parametric) bootstrap for that purpose. Each 
replicate dataset that is simulated would be simulated with a different set of 
population parameters.

Unless you want to do a sensitivity analysis on parameters that you have fixed 
in your analysis, I think this is exactly what you need.

Best regards

Jakob

Sent from my iPhone

On 19 Feb 2012, at 18:42, "Norman Z"  wrote:

> Hi Bill and Joe,
>  
> Thank you very much for your suggestions.
>  
> What I am asking is "How sensitive are the simulated PK profiles (summarized 
> by AUC or Cmax) to changes in parameter y?".
>  
> So instead of looking at the OFV change with different parameters, I want to 
> summarize the result of the sensitivity analysis. My goal is to obtain the AUC 
> and Cmax for the predicted PK profiles (the model has multiple compartments, 
> and I need to extract the AUC and Cmax for several compartments) with 
> different parameter values.
> Does the bootstrap or LLP in PsN output all the intermediate simulation results?
>  
> Kind regards,
>  
> Norman
>  
>  


RE: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-11 Thread Ribbing, Jakob
Hi Matt,

OK, I can certainly see that transformations will be helpful in
bootstrapping for those who throw away samples with
unsuccessful termination or cov step. They would otherwise discard all
bootstrap estimates that indicate Emax is close to zero. Since I most
often use all bootstrap samples that terminate at a minimum, I guess in
practice I would have virtually the same distribution of Emax,
with or without the transformation?

I fully agree transformations are useful for getting convergence and a
successful covstep on the original dataset (and I tend to keep the same
transformation when bootstrapping, but only for simplicity). However, I
sometimes use the bootstrap results to decide which parameters should be
transformed in the first place. From what I have seen, bootstrapping the
transformed model again has never changed the (non-parametric bootstrap)
distribution when the boundaries were the same (e.g. both models bound to
positive values of Emax).

Cheers

Jakob

-Original Message-
From: Matt Hutmacher [mailto:matt.hutmac...@a2pg.com] 
Sent: 11 July 2011 17:39
To: Ribbing, Jakob; 'nmusers'
Subject: RE: [NMusers] Confidence intervals of PsN bootstrap output

Hi Jakob,

"The 15% bootstrap samples where data suggest a negative drug effect
would
in one case terminate at the zero boundary, in the other case it would
terminate (often unsuccessfully) at highly negative values for log
Emax"...

I have seen that transformation can make the likelihood surface more
stable.
In my experience, when runs terminate using ordinary Emax
parameterization
with 0 lower bounds (note that NONMEM is using a transformation behind
the
scenes to avoid constrained optimization), you can avoid termination and
even get the $COV to run with different parameterizations.  The estimate
might be quite negative as you suggest, but I have seen it recovered.
Also,
I have seen termination avoided and COV achieved with Emax=EXP(THETA(X))
and
EC50=EXP(THETA(Y)) when EC50 and EMAX become large. I have seen
variance
components that can be estimated in this way but not with traditional
$OMEGA
implementation.

Best,
matt


RE: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-11 Thread Ribbing, Jakob
Matt,

Thank you for very good comments. One thing though: your example where
15% of bootstrap samples have negative values of Emax. I certainly agree
that reparameterising to estimate the log of Emax is helpful for obtaining a
useful covmatrix (as Emax is highly uncertain and in this example known
not to produce a negative effect).

However, for the non-parametric bootstrap the parameter distribution
would be more or less unchanged, compared to Emax on original scale with
a lower boundary at zero. The 15% bootstrap samples where data suggest a
negative drug effect would in one case terminate at the zero boundary,
in the other case it would terminate (often unsuccessfully) at highly
negative values for log Emax. However, the latter parameterisation would
be useful as it may allow nonmem covmatrix to agree better with the
non-parametric bootstrap.

I certainly agree the LRT could be helpful to determine whether negative
values should be allowed or not. Again, the bootstrap may be helpful to
determine whether a significant LRT is driven by a single outlying
individual (or a few). There are of course many other procedures to
determine this.


Best regards

Jakob


-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Matt Hutmacher
Sent: 11 July 2011 15:48
To: 'nmusers'
Subject: RE: [NMusers] Confidence intervals of PsN bootstrap output

Hello all,

Sorry to enter the conversation late. (I deleted prior posts to keep
from
exceeding the length limit). 

I certainly agree that nonparametric bootstrap procedures need
consideration and interpretation of the output.  I feel that such procedures
lead to difficulty (as described by many of the previous emails) when the
design is unbalanced (especially when severely so) and only a few
individuals supply data which supports estimation of a covariate or
structural parameter.  For example, it might be in a sparse PK setting that
only a few subjects had samples in the absorption phase.  Sampling with
replacement might lead to some datasets with fewer subjects with absorption
data than in the original dataset.  This might lead to erratic behavior (for
example Ka going to a large, unlikely value) during estimation and hence a
multimodal distribution of the estimates. An example of this for parametric
simulation is in Kimko and Duffull (eds), Simulation for Clinical Design
(2003), Evaluation of Random Sparse Designs for a Population Pharmacokinetic
Study: Assessment of Power and Bias Using Simulation, Hutmacher and
Kowalski.  There, some random sample designs led to large estimates of Ka;
this did not affect CL or V, however; pairwise scatterplots were used to
demonstrate this (as Mark Gastonguay suggested doing in his thread).  In
such cases, it might be that the confidence intervals for the nonparametric
bootstrap are too wide - valid at the nominal level, but inaccurate.

With respect to dealing with boundary constraints and the non-parametric
bootstrap, upfront thought I think can lead to less arbitrariness.  Do the
CIs reflect similar findings based on likelihood profiling (LP) or
likelihood ratio tests (LRT)?  For example, it might require more thought to
reconcile a bootstrap procedure that yielded 15% of your Emax estimates at 0
if your LRT for Emax was > 10 points, or if the 95% CI based on LP did not
include 0.  By allowing the 0 in the constraints, an explicit assumption is
made that one is unclear whether Emax is greater than 0, and thus the
modeler is allowing a point mass at 0 to exist, which is a statistically
difficult distribution to deal with.  One must contemplate whether this
makes sense in the overall clinical interpretation.  If it does not, then
perhaps EMAX = exp(theta(X)) should be used to ensure that EMAX is never
equal to 0. Reparameterization can be done for just about any parameter to
ensure a 'valid' estimate, and I would suggest doing this (a sort of
likelihood-based manifestation of prior knowledge) rather than arbitrarily
picking which estimates from the bootstrap to use. Even OMEGA matrices can
be parameterized to ensure positive semi-definite matrices, which might help
in certain situations. I would also be careful if the nonparametric
bootstrap CIs are different from the COV step CIs, as this indicates that
something is unknown with respect to estimation or inference.  In the case
of a small sample size and non-realistic clinical inference, I would suggest
a more formal Bayesian analysis which pre-specifies the analyst's
assumptions regarding the probability or viability of certain estimates
(these can be influenced by the prior).

Best regards,
Matt  



RE: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-11 Thread Ribbing, Jakob
Resending and apologizing for any duplicate messages!

-Original Message-
From: Ribbing, Jakob 
Sent: 11 July 2011 10:13
To: nmusers
Subject: RE: [NMusers] Confidence intervals of PsN bootstrap output

All,

This first part is more to clarify and I do not believe this is in
disagreement with what has been said before. The last paragraph is a
question.

The two examples I mentioned regarding boundary conditions concern
variance parameters. The second of these, however, relates to a
boundary at an eta-correlation of one, which is a must rather than just an
irritating NONMEM feature.

I used these examples because they were less controversial and it is
difficult to come up with general statements that apply to all cases.
However, as a third example for a fixed-effects parameter: Imagine a
covariate acting in a linear fashion on a structural parameter that is
bound to be non-negative (e.g. a rate constant, volume, clearance, ED50,
etc). Imagine boundaries on the covariate parameter have been set to
avoid negative values for the structural-model parameter (on the
individual level). For this scenario if a substantial fraction of the
bootstrapped covariate-parameter values end up at one of the boundaries,
one may have to consider two options:
a) Decide that a linear covariate model is inappropriate (at least for
the goal of extrapolating to the whole population with more extreme
covariate values) and change the model into using a different functional
form
b) Dismiss this as random chance, due to small sample/limited
information and a (covariate) slope which "truly" is not far from one of
the boundaries. If this is the case, deleting the bootstrap estimates at the
boundary would bias the distribution in an undesirable manner. In that case
the boundary condition is not due to a local minimum and we would not
want to discard bootstrap samples at the boundary. (Nick's example is of a
different kind, where it is either a local minimum or else not reaching
a minimum at all.)

A related question - I am thinking more in terms of simulations with
parameter uncertainty; not just obtaining CI, which was originally what
this thread was about:
There are sometimes situations where a limited set of (clinical-) trial
data gives reasonable point estimates but with huge parameter
uncertainty (regardless nonmem covmaxtrix or bootstrap with appropriate
stratification). The distribution and CI on these parameters may include
unreasonable values, even though there is no obvious physiological
boundary (unreasonable based on prior knowledge that has not been
incorporated into the analysis, e.g. for a certain mechanism and patient
population Typical-Emax beyond 400% or 10 units - depending on if Emax
is parameterised as relative or absolute change). In these situations, a
simplistic option could be to trim one or both ends with regards to the
Emax distribution and discard these bootstrap samples, especially if
only a few values are unreasonable. Alternatively, before running the
bootstrap, one may set the boundary in the control stream (a boundary
that everyone can agree is unreasonable). One would then keep bootstrap
samples that ends up at this boundary for bootstrap distribution, which
is in a way truncated, but so that bootstrap samples indicating linear
concentration/dose-response maintains almost reasonable Emax and
ED50/EC50 values (but as a spike in the distribution at upper Emax).
Notice that re-parameterising the Emax model would not solve the
underlying issue with unreasonable estimates and reducing to a linear
model may be unsuitable, both based on the original dataset and also for
mechanistic reasons). Could you suggest alternative ways of dealing with
this, for these rather general examples (other than the obvious of
applying an informative prior on Emax)? I would be interested in your
solutions both in terms of the non-parametric bootstrap as well as the
parametric bootstrap (based on the nonmem covmatrix).

Much appreciated

Jakob


-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Nick Holford
Sent: 11 July 2011 06:37
To: nmusers
Subject: Re: [NMusers] Confidence intervals of PsN bootstrap output



RE: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-11 Thread Ribbing, Jakob
All,

This first part is more to clarify and I do not believe this is in
disagreement with what has been said before. The last paragraph is a
question.

The two examples I mentioned regarding boundary conditions both concern
variance parameters. The second of these, however, relates to a boundary at
an eta-correlation of one, which is a must rather than just an
irritating NONMEM feature.

I used these examples because they were less controversial and it is
difficult to come up with general statements that apply to all cases.
However, as a third example for a fixed-effects parameter: Imagine a
covariate acting in a linear fashion on a structural parameter that is
bound to be non-negative (e.g. a rate constant, volume, clearance, ED50,
etc). Imagine boundaries on the covariate parameter have been set to
avoid negative values for the structural-model parameter (on the
individual level). For this scenario if a substantial fraction of the
bootstrapped covariate-parameter values end up at one of the boundaries,
one may have to consider two options:
a) Decide that a linear covariate model is inappropriate (at least for
the goal of extrapolating to the whole population with more extreme
covariate values) and change the model to use a different functional
form
b) Dismiss this as random chance, due to a small sample/limited
information and a (covariate) slope which "truly" is not far from one of
the boundaries. If this is the case, deleting the bootstrap estimates at
the boundary would bias the distribution in an undesirable manner. In that
case the boundary condition is not due to a local minimum and we would not
want to discard bootstrap samples at the boundary. (Nick's example is of a
different kind, where it is either a local minimum or else not reaching
a minimum at all.)

A related question - I am thinking more in terms of simulations with
parameter uncertainty; not just obtaining CI, which was originally what
this thread was about:
There are sometimes situations where a limited set of (clinical-) trial
data gives reasonable point estimates but with huge parameter
uncertainty (regardless of whether this is based on the nonmem covmatrix or
a bootstrap with appropriate stratification). The distribution and CI for
these parameters may include unreasonable values, even though there is no
obvious physiological boundary (unreasonable based on prior knowledge that
has not been incorporated into the analysis, e.g. for a certain mechanism
and patient population a typical Emax beyond 400% or 10 units - depending on
whether Emax is parameterised as relative or absolute change). In these
situations, a simplistic option could be to trim one or both ends of the
Emax distribution and discard these bootstrap samples, especially if
only a few values are unreasonable. Alternatively, before running the
bootstrap, one may set a boundary in the control stream (a boundary
that everyone can agree is unreasonable). One would then keep bootstrap
samples that end up at this boundary in the bootstrap distribution, which
is in a way truncated, but so that bootstrap samples indicating a linear
concentration/dose-response maintain almost reasonable Emax and
ED50/EC50 values (but as a spike in the distribution at the upper Emax).
Notice that re-parameterising the Emax model would not solve the
underlying issue with unreasonable estimates, and reducing to a linear
model may be unsuitable, both based on the original dataset and for
mechanistic reasons. Could you suggest alternative ways of dealing with
this, for these rather general examples (other than the obvious one of
applying an informative prior on Emax)? I would be interested in your
solutions both in terms of the non-parametric bootstrap and the
parametric bootstrap (based on the nonmem covmatrix).
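
As one crude illustration of the trimming/truncation options described above, in R and assuming a PsN raw_results file with a column named EMAX (both the file and column names are assumptions and depend on the run):

raw   <- read.csv("raw_results1.csv")
cap   <- 4                                        # an Emax (relative change) agreed to be unreasonable
keep  <- subset(raw, EMAX <= cap)                 # trim: discard bootstrap samples above the cap
trunc <- transform(raw, EMAX = pmin(EMAX, cap))   # truncate: keep them as a spike at the cap
nrow(raw) - nrow(keep)                            # number of samples affected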

Much appreciated

Jakob


-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Nick Holford
Sent: 11 July 2011 06:37
To: nmusers
Subject: Re: [NMusers] Confidence intervals of PsN bootstrap output

Leonid,

With regard to discarding runs at the boundary, what I had in mind was 
runs which had reached the maximum number of iterations, but I realized 
later that Jakob was referring to NONMEM's often irritating messages 
that usually just mean the initial estimate changed a lot or a variance 
was getting close to zero.

There are of course some cases where the estimate is truly at a user 
defined constraint. Assuming that the user has thought carefully about 
these constraints, then I would interpret a run that finished at this 
constraint boundary as showing NONMEM was stuck in a local minimum 
(probably because of the constraint boundary), and if the constraint were 
relaxed then perhaps a more useful estimate would be obtained.

In those cases I think one can make an argument for discarding runs 
with parameters that are at this kind of boundary, as well as those which 
reached an iteration limit.

In general I agree with your remarks (echoing those from Marc

RE: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-08 Thread Ribbing, Jakob
Dear all,

 

Previous attempts to send this e-mail appear to have been unsuccessful.

I am resending with further reduction in text and apologize in case of
any duplicate or triplicate postings.

 

Jakob

 



From: Ribbing, Jakob 
Sent: 08 July 2011 16:31
To: nmusers@globomaxnm.com
Cc: 'Justin Wilkins'; Norman Z; Ribbing, Jakob
Subject: RE: [NMusers] Confidence intervals of PsN bootstrap output

 

Dear all,

 

I would generally agree with Justin's comment that one can take any PsN
output as is, for internal or external reports.

However, specifically for the R script used in the PsN bootstrap, you cannot
rely on it as is.

There are several issues with this code, some of which are described in
the e-mail thread from the PsN users list. [Jakob]  (I will try to send this
as a separate e-mail.)

You would either have to correct the PsN R code or else write your own
script to get the output that you need for regulatory interaction.

 

Regarding what subset of bootstrap samples to use, I do NOT want to open
up a discussion regarding whether there are any differences between
bootstrap samples that terminate successfully with or without the cov
step, and those that terminate with rounding errors. This has been
discussed previously on nmusers, several times and at length. (There is
still a difference in opinion and, as Justin said, anyone is free to
follow their own preference.)

 

However, regarding excluding bootstrap samples with terminations at
boundary I would strongly discourage doing this by default and without
any thought.

Just as an example, if a portion of your bootstrap samples for an omega
element end up at a boundary this is what you would miss out on:

*   If it is a diagonal omega with some frequency of termination at
lower boundary, excluding these would provide a confidence interval well
above zero. By excluding the bootstrap samples that do not fit with the
statistical model that you have selected, you automatically confirm your
selection (i.e. that data supports the estimation of this IIV or IOV, or
whatever the eta represents), but in my mind the CI based on this subset
is misleading.
*   If it is an off-diagonal omega element (representing covariance
between two etas, i.e. on the individual level) with frequent
termination at the upper boundary (correlation of 1) excluding these
bootstrap samples would provide a confidence interval of the eta
correlation that does not include 1. (Correlation is a secondary
parameter calculated based on covariance and IIV (variance) for the two
etas). Again, I would think the CI based on this subset is misleading,
as one automatically confirms the selection of a BLOCK(3) omega
structure, without taking into consideration a reduction to two
parameters that was preferred by a portion of bootstrap samples. I have
included an illustration of this case in the figure below (I do not know
if postings to nmusers allow including figures, but thought it was worth
a try).
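
For illustration, the eta correlation can be computed as a secondary parameter from the bootstrapped OMEGA elements, e.g. in R (the file and column names below are assumptions; they depend on the PsN version and the model):

raw  <- read.csv("raw_results1.csv")
corr <- raw$OMEGA.2.1. / sqrt(raw$OMEGA.1.1. * raw$OMEGA.2.2.)  # covariance / sqrt(var1*var2)
hist(corr, breaks = 50)                  # shows the spike at 1 when boundary samples are kept
mean(corr > 0.999)                       # fraction of samples at the implicit boundary
quantile(corr, c(0.025, 0.975))          # CI of the correlation, boundary samples included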

Obviously, if only using the subset with successful covariance step the
exclusion includes bootstrap samples with termination at boundary (if
there are any).

 

I hope this discussion does not discourage any new users from trying the
(non-parametric) bootstrap.

In my opinion this is a very powerful method that can provide a huge
amount of useful information, beyond the nonmem covariance matrix.

Next time around the nmusers discussion may be about whether the
nonmem covariance matrix can be trusted and when a summary of this form
is useful, or whether to use the sandwich or R matrix; there are many
areas where there is no safe ground to tread and no full consensus among
users, just as it is sometimes difficult to come up with general advice
on what the most appropriate procedure is.

 

 

Best regards

 

Jakob

 

 

 

An illustration of the uncertainty distribution for the correlation
between two etas (notice that full correlation is only available in the
subset with boundary problems, as a correlation of one is an implicit
boundary condition; full correlation is also the only reason for the
boundary problem among these bootstrap samples):

[Jakob] Removed

 

The original parameterisation is based on covariance between the two
etas, rather than correlation, and here the reason for the boundary issue
is not at all obvious:

[Jakob] Removed

 

 

To subscribe to the PsN mailing list:

http://psn.sourceforge.net/list.php

 





RE: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-08 Thread Ribbing, Jakob
Dear all,

 

Below you find the thread from the PsN mailing list that could not be
included with the e-mail I sent just before this one.

 

Best

 

Jakob

 

Preferably keep any discussion about the specific implementation in PsN
on the PsN mailing list, as it is of little interest to nmusers who are not
using PsN.

The previous discussion on the PsN list, regarding the R-script used in
the PsN bootstrap is found below:

 

-Original Message-
From: fengdubianbian [mailto:fengdubianb...@hotmail.com] 
Sent: 15 June 2011 08:30
To: psn-gene...@lists.sourceforge.net
Subject: [Psn-general] bootstrap problem

 

hey all,

 

There is an .r file auto-generated by PsN 3.2.4 during bootstrapping.

Some vertical lines are plotted on the distribution of the parameters.

Actually the median is labelled as the mean, and the mean as the median.

 

the R code is:

if (showmean) {
  legend=paste(legend, "; Mean = ", sp[3], sep="")
}
if (showmedian) {
  legend=paste(legend, "; Median = ", sp[4], sep="")
}

>sp
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
0.0001998 0.0002994 0.0002994 0.0002967 0.0002994 0.0004768 
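
The mix-up comes from the indexing: for sp <- summary(x) the elements are Min., 1st Qu., Median, Mean, 3rd Qu. and Max., so sp[3] is the median and sp[4] the mean, as the printout above shows. A corrected version of the snippet would therefore be:

if (showmean) {
  legend <- paste(legend, "; Mean = ", sp[4], sep="")
}
if (showmedian) {
  legend <- paste(legend, "; Median = ", sp[3], sep="")
}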

 

 

 

 

Kun Wang Ph.D

 

 

 

-Original Message-
From: Kajsa Harling [mailto:kajsa.harl...@farmbio.uu.se] 
Sent: 23 June 2011 11:52
To: General Discussion about PsN.
Subject: Re: [Psn-general] bootstrap problem

 

 

Thank you for the error report. This will be fixed in the next release.

 

Best regards,

Kajsa

 

 

-Original Message-
From: Ribbing, Jakob 
Sent: 24 June 2011 09:25
To: General Discussion about PsN.
Cc: 'jakob.ribb...@pfizer.com'
Subject: RE: [Psn-general] bootstrap problem

 

Kajsa,

 

While you are looking at that R script in PsN: as I recall there are
additional bugs. For example, which bootstrap samples to use is hard
coded in the script, so no matter what you set in psn.conf or on the
bootstrap command line, for the histograms the R script will only use the
samples with successful terminations. I almost always want to use all bs
samples.

 

When you are ready to move bootstrap post-processing into Xpose I can
send you an R script that we use at Pfizer for the PsN bootstrap. This
provides a full summary of what you may get out of a bootstrap, with
nicer graphics, tables summarizing both the nonmem cov step and the
non-parametric bootstrap, and including optional parameter
transformations and bs statistics for secondary parameters. Our script
would have to be in Xpose, though, because there would be too many options
for PsN.

And I would have to find time to tweak it a bit; I have written the code
only for our ePharm environment in LINUX. Unfortunately I will not find
the time to do this in 2011, but it is worth waiting for :>)

 

Happy summer solstice!

 

Jakob






Re: [NMusers] Confidence intervals of PsN bootstrap output

2011-07-05 Thread Ribbing, Jakob
Hi Norman,

I would suggest you rely on your own calculation, rather than the output from 
the R script that is used by PsN (but only trust Excel as far as the back of an 
envelope). I would do just like you and include all bs samples when calculating 
percentiles. Others prefer to use only a subset, based on termination status.
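
As a minimal sketch of such an own calculation in R (the file name is taken from Norman's e-mail below; the column picked out for the parameter of interest is an assumption):

raw <- read.csv("raw_results1.csv")
est <- raw$CL                                   # hypothetical parameter column
est <- est[!is.na(est)]                         # keep all bootstrap samples
c(median = median(est), quantile(est, probs = c(0.05, 0.95)))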

PsN has its own mailing list (on SourceForge) where you can find out more about 
bugs and features of the PsN programs.

Best

Jakob

Sent from my iPhone

On 5 jul 2011, at 22:16, "Norman Z"  wrote:

> Hello everyone,
>  
> I am using PsN to do some bootstrap and have some questions regarding the PsN 
> output. 
> 
> 1. There are two confidence intervals (CI) reported in the output file 
> "bootstrap_results.csv":
> standard.error.confidence.intervals
> percentile.confidence.intervals
> I wonder which one should be used in the publication or report, and what is 
> the difference between them. 
> 
> 2. When I use excel function
> "=PERCENTILE(T5:T505,5%)" and "=PERCENTILE(T5:T505,95%)" to calculate the 5% 
> and 95% percentile of a parameter from the data  "raw_results1.csv" the 
> result is different from both "standard.error.confidence.intervals" and 
> "percentile.confidence.intervals".
> The same happens to the excel function "=MEDIAN(T5:T505)" result and the 
> "medians" in the "bootstrap_results.csv". 
> Does anyone know why it is the case, and which value I should use?
> 
> bootstrap_results.csv
> medians                                423.5635
> standard.error.confidence.intervals
>   5%                                   419.73239
>   95%                                  428.26761
> percentile.confidence.intervals
>   5%                                   419.56165
>   95%                                  427.9239
> 
> Excel calculation from raw_results1.csv
>   Median                               423.578
>   5% percentile                        419.593
>   95% percentile                       427.922
> 
> Thanks,
> 
> Norman
> 


RE: [NMusers] About DoLoop in NONMEM

2011-03-24 Thread Ribbing, Jakob
Dear Liu,

 

In nonmem you can not define differential equations through a loop; each
has to be explicitly written out.

However, I notice that you have the same rate of transit among all of
your (transit) compartments.

Therefore, an analytical solution would be the most efficient way of
solving your problem.

 

Please see the reference below.

 

Kind regards

 

Jakob

 

 

Analytical solution:

Implementation of a transit compartment model for describing drug
absorption in pharmacokinetic studies.

Savic RM, Jonker DM, Kerbusch T, Karlsson MO.

J Pharmacokinet Pharmacodyn. 2007 Oct;34(5):711-26. Epub 2007 Jul 26.

 

You may be familiar with this already?

Transit compartments versus gamma distribution function to model signal
transduction processes in pharmacodynamics.

Sun YN, Jusko WJ.

J Pharm Sci. 1998 Jun;87(6):732-7.
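
For reference, a small R sketch of the analytical transit-compartment input described in the Savic et al. paper above (the parameter values here are made up): with n transit compartments and ktr = (n + 1)/MTT, the rate of drug reaching the absorption compartment at time t after a dose is dose*F*ktr*(ktr*t)^n*exp(-ktr*t)/n!, so no chain of ODEs is needed:

transit_input <- function(t, dose, n, mtt, f = 1) {
  ktr <- (n + 1) / mtt
  # n! is handled on the log scale via lgamma(n + 1) for numerical stability
  dose * f * exp(log(ktr) + n * log(ktr * t) - ktr * t - lgamma(n + 1))
}
curve(transit_input(x, dose = 100, n = 5, mtt = 2), from = 0.01, to = 10,
      xlab = "time", ylab = "input rate to absorption compartment")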



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Liu Dongyang

Sent: 25 March 2011 04:17

To: nmusers

Subject: [NMusers] About DoLoop in NONMEM

 

Dear NonMEM users,

 

 I am constructing a transduction model to capture a disease-onset time delay 
using NONMEM. Because I need to use many compartments (meaning many ODEs) to 
handle this delay, I want to use a DO loop to save a little room. But I cannot 
find a way to use it in the $DES block. The block and error message are listed 
below:

 

$DES
I=4
   DOWHILE(I.LT.22)  ;(I also tried DO I=5,22, same as DOWHILE)
   im1=I-1
 DADT(I) = KT*(A(im1)-A(I))
   I=I+1
   ENDDO

Error message:

 DOWHILE(I.LT.22)
 X
 THE CHARACTERS IN ERROR ARE: DO
 438  THIS WORD IS NOT APPROPRIATE FOR THIS BLOCK OF ABBREVIATED CODE.

 

 

I would be very grateful if anyone can tell me how to incorporate a DO loop
into the $DES block.

Thanks a lot!

 

 

Best regards,

 

Liu, Dongyang, PhD, Postdoc Fellow

Department of Pharmaceutical sciences,

State University of New York at Buffalo.

Tel(o):01-716-645-4840,

Cell:  01-716-908-6644,

 



RE: [NMusers] How to generate a random number with $EST

2011-03-14 Thread Ribbing, Jakob
Dear all,

A single control stream with $SIM followed by $EST could possibly do the trick 
(similar to what Luann suggests below, using two control streams). However, 
before we air additional suggestions on how we may or may not achieve 
random-number generation during estimation, maybe it is better to hold off 
until we hear back from Nieves? Several persons have already indicated that 
they cannot see much use for this feature, and we do not know specifically how 
Nieves meant to use it.

Best regards

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Luann Phillips
Sent: 14 March 2011 15:00
To: Nieves Vélez de Mendizabal
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] How to generate a random number with $EST

Nieves,

Two different thoughts using typical code. I'm not sure how you would 
provide a seed for the CALL RANDOM using verbatim code.

(1) Generate the random numbers (one for all obs. or  one for each time 
value) in another software and import them as part of the data (as Nick 
suggested).

(2) If it is important that NONMEM generate the random numbers then you 
could try:

a) Run your control stream once using SIMONLY to obtain the random 
numbers. Also have a line that saves the input DV to a new variable name 
(i.e. CP =DV). In the table file use the NOAPPEND option to prevent 
adding (DV,PRED,RES,WRES), output all the variables in the input 
datasets + the random number variable + CP.

---
$PK

CP=DV

IF(ICALL.EQ.4)THEN
   CALL RANDOM (2,R)
   RN=1-R
ENDIF



$SIMULATION (1234123) (12343 UNIFORM) ONLYSIM

$TABLE ID TIME DAY TSLD AMT RATE DUR DOSE N CP CVLQ CMT MDV EVID WTKG 
SAMP NUM RN FORMAT=sF8.4 NOAPPEND NOPRINT NOHEADER FILE=xxx.tbl
---

b) Use the table file from (a) as the input dataset. However, you will 
need to use CP as DV (since the original DV values were replaced in the 
simulation run).

---
$DATA trash-1.tbl

$INPUT ID TIME DAY=DROP TSLD AMT RATE DUR DOSE N DV CVLQ
CMT MDV EVID WTKG SAMP NUM RN

$PK

Normal model statements
Use RN as your random number

Note: TIME is now the time that NONMEM computes (elapsed time from first 
observation), so I dropped DAY. DV is really CP (the original DV). Also, 
note that you may want to specify the format (option in NM7) on the 
output table to maintain the precision of the numbers in your original 
input file.
-


Best regards,

Luann Phillips
Director, PK/PD
Cognigen Corporation


Nieves Vélez de Mendizabal wrote:
> Dear NONMEM Users,
> 
> 
> I'm developing a model with NONMEM6|7. In this model, I need to generate 
> a random number at every time step. The problem is that the use of the 
> function "CALL RANDOM (2,R)" is supposed to be only for simulation, 
> isn't it?
> 
> Thus, is it possible to generate a random number at every time step with 
> NONMEM? Does anybody know how?
> 
> This is part of the code that does not work (it's not working because 
> the condition ICALL.EQ.4 never happens and for that it's not getting 
> into the "if", but on the other hand, in order to use the function 
> RANDOM(2,R), such function requires ICALL.EQ.4):
> 
> ...
> 
> $SUBS ADVAN6 TOL=5
> 
> $MODEL
> 
> ...
> 
> $PK
> 
> ...
> 
> IF (ICALL.EQ.4) THEN
> 
> CALL RANDOM (2,R) ;Rand number in[0,1[
> 
> T=1-R
> 
> ...
> 
> ENDIF
> 
> $DES
> 
> ...
> 
> $ERROR
> 
> ...
> 
> $ESTIMATION MAXEVAL=0 NUMERICAL METHOD=COND LAPLACE LIKE CENTERING 
> PRINT=2 MSFO=msfo3
> 
>  
> 
> Thank you!
> 
> Nieves
> 
> -- 
> 
> Nieves Velez de Mendizabal, Ph.D
> Departamento de Farmacia y Tecnología Farmacéutica
> Facultad de Farmacia
> Universidad de Navarra
> Phone: (+34) 658 732 851
> Phone: (+34) 948 255 400 ext. 5827
> nve...@unav.es
>  
> 


RE: [NMusers] distribution assumption of Eta in NONMEM

2010-06-01 Thread Ribbing, Jakob
Dear Ethan,

 

I am not 100% sure exactly who said what in this thread and do not want to
put words in anyone else's mouth, but I think we can all agree on
this:

*   In simulation mode NONMEM assumes a normal distribution of etas
*   In estimation mode omega can be seen as an estimate of the
variance of eta's. Depending on the parameterisation of random effects
(e.g. additive/proportional vs. log normal) and estimation method;
estimates of population parameters may become biased or imprecise if the
true eta distribution is not close to normal. One clear indication of
this problem is if we have very little shrinkage and the EBE eta
distribution from a large study is skewed.
*   We may often come closer to a normal distribution of etas by
applying a so-called semi-parametric distribution of individual
parameters. If this transformation provides a substantially lower OFV
we can expect that it also improves the simulation properties of the model
(and possibly also the accuracy and precision of the population
parameters). In choosing which transformation to use, we may rely on the OFV
(testing various transformations), the nonmem nonparametric estimation or, in
the case of rich data, the shape of the EBE eta distribution (which may give
some hint even with shrinkage); one such transformation is sketched below.
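
One commonly used semi-parametric form is a Box-Cox-type transformation of a normally distributed eta; a minimal R sketch (the OMEGA and shape values are made up, and in NONMEM the shape would be estimated as a THETA):

set.seed(1)
eta   <- rnorm(1e4, sd = sqrt(0.1))         # normal eta with OMEGA = 0.1
shape <- 0.8                                # shape parameter; the transform tends to eta as shape -> 0
eta_t <- (exp(shape * eta) - 1) / shape     # Box-Cox-type transformed eta (skewed)
cl    <- 10 * exp(eta_t)                    # skewed individual-parameter distribution
hist(cl, breaks = 60)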

 

Consequently, depending on the context, estimation in nonmem may or may
not assume a normal distribution of etas. However, even when normality
is not an assumption of the estimation method it may be desirable to
approach a normal distribution (unless using non-parametric estimation), to
improve the simulation properties. In my opinion this may be worthwhile even
if I do not see clear benefits in VPCs etc. I do not want even 0.5% of
patients to have a negative effect of drug treatment
(additive/proportional eta) if I believe that is an impossible outcome.
I do not want 1% of all subjects to have more than 100% enzyme
inhibition, etc.

 

Best regards

 

Jakob

 



RE: [NMusers] distribution assumption of Eta in NONMEM

2010-06-01 Thread Ribbing, Jakob
Dear all,

 

Dropping in a little late in the game all I can say is this:

Shame on all you great minds for reinventing your own wisdom :>)

 

Most of the content in the current thread has already been discussed in
an earlier thread:

http://www.mail-archive.com/nmusers@globomaxnm.com/msg01271.html

 

However, this old thread does contain a lot of postings and quite a few
which are VERY confusing, so you may want to skip ahead to Matt's
posting here:

http://www.mail-archive.com/nmusers@globomaxnm.com/msg01302.html

(There are also many other postings which are very useful, but the one
above captures the essence with regards to the original question in the
current thread)

 

That said I think there are always new learnings in each thread, as
people tend to express themselves differently and the original question
branch into several new discussion points.

So I guess there are never two threads that are exactly alike, even when
the usual suspects participate in both. 

 

Cheers

 

Jakob

 

 



RE: [NMusers] How to think about the different determination methods?

2010-02-09 Thread Ribbing, Jakob
Dear Ye hong bo,

 

If I understand you correctly, no single sample has been assayed with multiple 
assay methods? It may be that the assay method only makes a small contribution 
to the overall residual, but if you have enough information on the three SIGMAs 
you may keep them as three separate error magnitudes (however, the relative 
precision of the assay methods will be confounded by the fact that one centre 
may handle its sample collection etc. more accurately than another).

 

As I see it there are two ways to go:

 

Either start out with a simpler model by fixing OMEGAS to zero where you do not 
have enough information to describe IIV. It is rare that there is enough 
information to estimate separate etas for inter-compartmental clearance 
parameters (Q:s), so you may consider using the same eta or fixing one OMEGA to 
zero there.

Also, unless you have good information on the three individual volume 
parameters you may start out by only having an eta on the total volume (VSS 
below) and estimate the total volume and the fractions of that volume that 
represents the central and one of the peripheral volumes (FVC and FVP1 below). 
You can then proceed by allowing etas on one or both of these fractions 
according to the code below (estimating OMEGA4 and OMEGA6). An OMEGA BLOCK to 
estimate the covariance across (etas on) CL and volume parameters may further 
stabilize the model, if that correlation is important.

 

 TVFVC  = THETA(4)
 PHI    = LOG(TVFVC/(1-TVFVC))
 DENOM  = 1 + EXP(PHI + ETA(4))
 FVC    = EXP(PHI + ETA(4)) / DENOM

 TFVP1  = THETA(6)
 PHI2   = LOG(TFVP1/(1-TFVP1))
 DENOM2 = 1 + EXP(PHI2 + ETA(6))
 FVP1   = EXP(PHI2 + ETA(6)) / DENOM2

 FVP    = 1 - FVC
 V2     = FVC*VSS
 VP     = FVP*VSS
 FVP2   = 1 - FVP1
 V3     = FVP1 * VP
 V4     = FVP2 * VP

 

For the above code, FVC and FVP1 are estimated with a logit-transformation 
which is necessary only when adding etas on these parameters. Also, the logit 
code used above is a little more complex than needed, with the benefit that 
THETA(4) and THETA(6) above represent the typical fraction, rather than some 
value on the logit scale. For alternative 2 below this parameterisation is not 
suitable as it does not allow MU modelling (I think). The standard way of 
implementing the logit transformation gives exactly the same fit and allows for 
MU modelling.

 

Else (alternative 2), estimate your model using the new Monte Carlo methods in 
NONMEM 7. You can investigate large OMEGA BLOCKs to find out where you have 
important eta correlations, but for parameters where you have little or no 
information on the individual level you may have to fix OMEGA to a small value 
(e.g. 10 or 15% CV, which is biologically more plausible than no variability at 
all, and still efficient using Monte Carlo methods). However, it is not 
straightforward to use these estimation methods in nonmem, so allow ample time 
for getting yourself acquainted with them (settings for the various estimation 
methods that are appropriate for your data and model, plus implementing MU 
modelling in your control stream).

 

I hope this helps and wish you a happy New Year!

 

Jakob



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of yhb5442387
Sent: 09 February 2010 14:03
To: nmusers
Subject: [NMusers] How to think about the different determination methods?

 

Dear NMusers:

   I am dealing with ppf (propofol) data collected from 3 different 
centers, in which the drug concentrations happen to have been analysed with 3 
different assays: GC, HPLC-UV and HPLC-fluorescence, respectively. The assay 
method is included as a data item, labeled 1, 2, 3, in that order.

Following an example from the Manual, the assay method is handled as part of 
the intraindividual variability. The syntax is as follows:

IF (ASSY.EQ.1)   Y=F*(1+EPS(1))
IF (ASSY.EQ.2)   Y=F*(1+EPS(2))
IF (ASSY.EQ.3)   Y=F*(1+EPS(3))

By the way, the pharmacokinetics of ppf were described by a three-compartment 
model, so the subroutine ADVAN11 TRANS4 was applied.

Of course, a combined additive and CCV error model was considered at the 
beginning, but it seems to me that the additive error was so small (0.1) 
that it could even be ignored, so the CCV model was applied in the end, as 
mentioned above.

So there are 6 thetas (CL, V1, Q2, V2, Q3, V3), 6 etas (exponential ISV model) 
and 3 eps in the base model. Then the problem happened.

No matter what initial estimates I tried, the results of the $EST and $COV 
steps always indicate that the model is overparameterized.

The hint that the R matrix is either singular or non-positive semidefinite 
appeared in the output files, and from the PDx plotter the plot of objective 
function vs iteration was fairly flat, so I am convinced that the model is 
overparameterized. In addition, I have checked the R matrix, in which some 
values in the lines for SG22 and SG33 are 0.

Here are my questions:

Should I take the assay error as intraindividual variability?

RE: [NMusers] lab values

2010-01-13 Thread Ribbing, Jakob
Dirk,

 

I think the approach is influenced by what this lab value represents. If it is 
a biomarker/endpoint that is influenced by drug treatment then the best 
approach is to include this in your PK-PD model as a dependent variable. If you 
treat this as a traditional covariate it should not be influenced by treatment. 
Assuming your drug improves disease symptom or progression (as measured by this 
biomarker) it would not be ideal to use either LOCF or LOCB. The baseline for 
this biomarker (DAY -1 in your case) can be used as a covariate in your PK 
model, as it is not influenced by drug treatment.

 

If you cannot spend the time to build a proper PK-PD model but still believe 
this covariate is important for your PK model, then maybe you can do something 
simple, like assuming a linear slope in this biomarker between the two 
measurements and using the two observed values for interpolation?
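
A minimal sketch of that interpolation in R (the times and values are made up; in practice this would usually be done when assembling the dataset):

lab <- data.frame(day = c(-1, 180), value = c(55, 70))   # e.g. baseline and final examination
approx(lab$day, lab$value, xout = c(28, 84, 140))$y      # interpolated covariate at intermediate visits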

 

Best regards

 

Jakob

 



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Garmann, Dirk
Sent: 13 January 2010 12:41
To: nmusers@globomaxnm.com
Subject: [NMusers] lab values

 

Dear NMUSERS,

 

I would like to ask for some opinions regarding the handling of missing lab 
values in a NONMEM Dataset;

 

Our normal procedure: 

Parameter values will be carried backward to the first visit if the first-visit 
value is missing, carried forward to the last visit if no value is 
available at the last visit, and set to the median value of the two adjacent 
visits in other cases.  

 

Now we have a phase III study (multiple doses), with one safety lab at day -1 
and one safety lab at the final examination only, and no lab in between (>6 
months).

 

Two main strategies are possible

 

1.)Different from our standard procedure:

Carry the lab value at final examination backward to day -1. 

 

2.)According to our standard: Use the median (or perhaps a regression 
between the first and final examination)

:

My assumptions: 

The first strategy might be useful to reflect the influence of the drug on lab 
values and will reflect the steady state situation.

 

The second strategy might be better to characterize the influence of the lab 
values on the PK of the drug, e.g if a disease worsens during the study.

 

As our main focus will be the last one, I would use the standard approach.

 

I know that this is quite basic, however as this was discussed during a meeting 
I would appreciate to have your opinion.

 

Many thanks in advance

 

Dirk

 

Dirk Garmann, PhD

Clinical Scientific Expert /Pharmacokineticist

Merz Pharmaceuticals

Eckenheimer Landstrasse 100

60318 Frankfurt

Phone +49 (69) 1503 720

 




 



RE: [NMusers] BSV and BOV interaction

2009-12-21 Thread Ribbing, Jakob
Andreas,

The code snippet you picked out is not overparameterized, since the
assumption is made that the variance of eta 5 and 6 are the same:

  $OMEGA BLOCK(1) 0.05
  $OMEGA BLOCK(1) SAME

This first equation that you suggest is this:
  IOV2=0
  IF (DESC.EQ.2) IOV2=1
  ETCL = ETA(1)+IOV2*ETA(5)
As you note the equation you suggest implies that the between-subject
variability in CL will be larger for the first occasion than the second.
Unless inclusion criteria resulted in weird data that forced me to make
that assumption I would not feel comfortable using this
parameterisation. Also I do not fully understand this "Watch out that
this implies that the random effect variation is larger for DESC.EQ.2
than for DESC.EQ.1 since ETA(5) is (hopefully) not negative." Both eta 1
and eta 5 may be negative and positive, so if you are hoping for only
positive eta5 values it seems something is wrong with the structural
model. Or did you mean that you hope the variance of eta5 is positive
(ie. OMEGA(5,5))?

Finally, I also have my doubts about your last suggestion regarding how
to combine eta 1 and 5: "You could multiply the two to allow for the
variation being smaller or larger in the latter case but multiplication
makes the estimation more unstable." How would you interpret that model?
Subjects that have abnormally high CL at occasion 1 are likely to have
either abnormally high, or abnormally low CL at occasion 2. I think
simulations would give you patterns you do not see in real life with
such assumptions. Also, if data supports such a model, it may be more a
reflection of the error model. If some subjects have more error in their
observations a simple eta on epsilon may be more appropriate.

I hope everyone will have a nice break, both from nmusers and from work!
Best regards

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of andreas.kra...@actelion.com
Sent: 21 December 2009 08:18
To: Jia Ji
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] BSV and BOV interaction

Jia,

you are overparameterized. Take this snippet from your code:

  IOV2=0
  IF (DESC.EQ.1) IOV2=ETA(5)
  IF (DESC.EQ.2) IOV2=ETA(6)

  ETCL = ETA(1)+IOV1 

Now consider the two possibilities:
a) DESC.EQ.1: ETCL = ETA(1) + ETA(5)
b) DESC.EQ.2: ETCL = ETA(1) + ETA(6)

In other words, you have two equations to identify 3 parameters.
Usually you associate the "base" random effect with one case and add a 
deviation parameter to the other case.
An example would be

  IOV2=0
  IF (DESC.EQ.2) IOV2=1
  ETCL = ETA(1)+IOV2*ETA(5)

Thus, ETA(1) estimates your random effect variation for the case
DESC.EQ.1 
and ETA(1) + ETA(5) is the random effect variation for the case
DESC.EQ.2.
ETA(5) is thus the additional random effect variation for the second
case 
compared to the first.
Watch out that this implies that the random effect variation is larger
for 
DESC.EQ.2 than for DESC.EQ.1 since ETA(5) is (hopefully) not negative.
You could multiply the two to allow for the variation being smaller or 
larger in the latter case but multiplication makes the estimation more 
unstable.

Why do you see the need to link the two? Why don't you define
IF(DESC.EQ.1) ETCL=ETA(5)
IF(DESC.EQ.2) ETCL=ETA(6)
CL=THETA(1)*EXP(ETCL)

and get rid of ETA(1)? That decouples the two estimates entirely.

Andreas







Jia Ji  
Sent by: owner-nmus...@globomaxnm.com
12/19/2009 12:32 AM

To
nmusers@globomaxnm.com
cc

Subject
[NMusers] BSV and BOV interaction






Dear All,
 
I am trying to model our data with a two-compartment model now. In our 
trial, some patients received escalated dose at the second cycle so they

have one more set of kinetics data. So there were BSV and BOV on PK 
parameters in the model. Objective function value is 
significantly improved (compared with the model not having BOV) and SE
of 
ETAs are around 40% or less. The code is as below:
 
$PK
  DESC=1
  IF (TIME.GE.100) DESC=2
  IOV1=0
  IF (DESC.EQ.1) IOV1=ETA(2)
  IF (DESC.EQ.2) IOV1=ETA(3)
  
  IOV2=0
  IF (DESC.EQ.1) IOV2=ETA(5)
  IF (DESC.EQ.2) IOV2=ETA(6)

  ETCL = ETA(1)+IOV1 
  ETQ = ETA(4)+IOV2 
  ETV2 = ETA(7)

  CL=THETA(1)*EXP(ETCL)
  V1=THETA(2)
  Q=THETA(3)*EXP(ETQ)
  V2=THETA(4)*EXP(ETV2)
 
;OMEGA initial estimates
  $OMEGA 0.0529
  $OMEGA BLOCK(1) 0.05
  $OMEGA BLOCK(1) SAME
  $OMEGA 0.318 
  $OMEGA BLOCK(1) 0.05
  $OMEGA BLOCK(1) SAME
  $OMEGA 0.711
  
When I looked at scatterplot of ETA, I found that there is strong 
correlation between ETA(1) and ETA(2), which is BSV and BOV of CL. And
the 
same thing happened to BSV and BOV of Q. Worrying about 
over-parameterization (I am not NONMEM 7 user), I tried to define a
THETA 
for this correlation as the code below (just test on CL only first):
 
$PK
  DESC=1
  IF (TIME.GE.100) DESC=2
  IOV1=0
  IF (DESC.EQ.1) IOV1=THETA(1)*ETA(1)
  IF (DESC.EQ.2) IOV1=THETA(1)*ETA(1)
 
  ETCL = ETA(1)+IOV1  
  ETQ = ETA(2)
  ETV2 = ETA(3)

  CL=THETA(2)*EXP(ETCL)
  V1=THETA(3)
  Q=THETA(4)*EXP(ET

RE: [NMusers] Calculating shrinkage when some etas are zero

2009-08-21 Thread Ribbing, Jakob
Hi Douglas,

 

This has been a concern for me as well, although I do not know if this ever 
happens(?). For the automatic (generic-script) exclusion of etas that I use 
for eta diagnostics, I tend to exclude a group (e.g. each dose or dose-study 
combination) if all subjects have eta=0 in that group. This would for example 
exclude IOV-eta3 from a study that only had two occasions, or the placebo 
group(s) for etas on drug effect. I feel safe with that exclusion for my 
diagnostics. If I had to make the choice between excluding all etas that are 
exactly equal to zero or none at all, I would trust the diagnostics more after 
exclusion.

 

Jakob

 



From: Eleveld, DJ [mailto:d.j.elev...@anest.umcg.nl] 
Sent: 21 August 2009 13:57
To: Ribbing, Jakob; Pyry Välitalo; nmusers@globomaxnm.com
Subject: RE: [NMusers] Calculating shrinkage when some etas are zero

 

Hi Pyry and Jacob,

 

If you exclude zero etas, then what happens to informative individuals who just 
happen to have the population typical values?  

This approach would exclude these individuals when trying to indicate how 
informative an estimation is about a parameter.

I know this is unlikely, but it is possible. 

 

The etas just tell what value is estimated; it's not the whole story about how 
informative an estimation is.  I don't think you can do
this without considering how 'certain' you are of each of those eta values.

 

Douglas Eleveld

 




RE: [NMusers] Calculating shrinkage when some etas are zero

2009-08-21 Thread Ribbing, Jakob
Hi Pyry,

 

Yes, when calculating shrinkage or looking at eta-diagnostic plots it is often 
better to exclude etas from subjects that have no information on that parameter 
at all. For a PK model we would not include subjects that were only 
administered placebo (if the PK is of an exogenous compound). In the same 
manner, placebo subjects are not informative on the drug-effect parameters of a 
(PK-)PD model. These subjects have informative etas for the placebo part of the 
PD model, but not on the drug effects (etas on Emax, ED50, etc.). For any eta 
diagnostics you can remove these etas based on design (placebo subject, IV 
dosing, etc.) or on the empirical-Bayes estimate of eta being exactly zero.
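
As a minimal sketch of what this means for the shrinkage calculation, using the numbers from Pyry's example quoted below (sd of 0.3 among the 20 informative etas, 20 zero etas, and an omega of 0.4 on the SD scale):

omega_sd <- 0.4                      # sqrt(OMEGA) for KA
sd_oral  <- 0.3                      # sd of the 20 informative (oral) etas
sd_all   <- sd_oral / sqrt(2)        # approximate sd when the 20 zero (IV) etas are included
1 - sd_all  / omega_sd               # ~0.47: shrinkage with the zero etas included
1 - sd_oral / omega_sd               # 0.25: shrinkage with the zero etas excluded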

 

Cheers

 

Jakob

 



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Pyry Välitalo
Sent: 21 August 2009 10:45
To: nmusers@globomaxnm.com
Subject: [NMusers] Calculating shrinkage when some etas are zero

 

Hi all,

I saw this snippet of information on PsN-general mailing list.

Kajsa Harling wrote in PsN-general:
"I talked to the experts here about shrinkage. Apparently, sometimes an
individual's eta may be exactly 0 (no effect, placebo, you probably
understand this better than I do). These zeros should not be included in
the shrinkage calculation, but now they are (erroneously) in PsN."

This led me to wonder about the calculation of shrinkage. I decided to post 
here on nmusers, because my question mainly relates to NONMEM. I could not find 
previous discussions about this topic exactly.

As I understand, if a parameter with BSV is not used by some individuals, the 
etas for these individuals will be set to zero. An example would be a dataset 
with IV and oral dosing data. If oral absorption rate constant KA with BSV is 
estimated for this data, then all eta(KA) values for IV dosing group will be 
zero.

The shrinkage of etas is calculated as 
1-sd(etas)/omega 
If the etas that equal exactly zero would have to be removed from this equation 
then it would mean that NONMEM estimates the omega based on only those 
individuals who need it for the parameter in question, e.g. the omega(KA) would 
be estimated only based on the oral dosing group. Is this a correct 
interpretation for the rationale to leave out zero etas? 

I guess the inclusion of zero etas into shrinkage calculations significantly 
increases the estimate of shrinkage because the zero etas always reduce the 
sd(etas). As a practical example, suppose a dataset of 20 patients with oral 
and 20 patients with IV administration. Suppose NONMEM estimates an omega of 
0.4 for BSV of KA. Suppose the sd(etas) for oral group is 0.3 and thus sd(etas) 
for all patients is 0.3/sqrt(2) since the etas in IV group for KA are zero. 
Thus, as far as I know, PsN would currently calculate a shrinkage of 
1-(0.3/sqrt(2))/0.4=0.47.
Would it be more appropriate to manually calculate a shrinkage of 
1-0.3/0.4=0.25 instead?

All comments much appreciated.

Kind regards,
Pyry



Kajsa Harling wrote:

Dear Ethan,

I have also been away for a while, thank you for your patience.

I talked to the experts here about shrinkage. Apparently, sometimes an
individual's eta may be exactly 0 (no effect, placebo, you probably
understand this better than I do). These zeros should not be included in
the shrinkage calculation, but now they are (erroneously) in PsN.

Does this explain the discrepancy?

Then, the heading shrinkage_wres is incorrect, it should say
shrinkage_iwres (or eps) they say.

Comments are fine as long as they do not have commas in them. But this
is fixed in the latest release.

Best regards,
Kajsa





RE: [NMusers] Omega ratio

2009-07-15 Thread Ribbing, Jakob
Dear Khaled,

 

You could for example report this as "Including covariate X in the
model, the estimate of random (unexplained) between-subject variability
in parameter Y reduced from 41.8 %CV to 40.5 %CV".

 

Reporting % explained variability may lead to confusion as to whether this is
in percent or percentage units, and whether it is on the CV scale or the
variance (OMEGA) scale (3 different values that you can present), so I think
the above is easier to understand. If you prefer you can report it as r or
R^2, but the values are generally not very impressive.

 

Best

 

Jakob

 



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Khaled Nm
Sent: 15 July 2009 12:07
To: nmusers@globomaxnm.com
Subject: [NMusers] Omega ratio

 

Dear all,

 

I am still confused about how to determine the % of explained variability, by
including covariates, from Omega. The Omega estimates were 0.175 (before) and
0.164 (after). The objective function value decreased by more than 150 units
and the goodness-of-fit plots confirmed the positive impact of this
covariate.

How these values should be treated?

 

Any feedback? thanks in advance

Khaled

 

 



RE: [NMusers] estimating Ka from dataset combining rich sample study and sparse sampling study

2009-06-17 Thread Ribbing, Jakob
Hi Ethan,

 

IOV on KA is often more pronounced than on CL or V, so I would start
there.

 

To account for a higher IIV in the MD study, just estimate a theta for
the ratio of %CV MD over %CV SD:

 

CL=TVCL*EXP(ETA(X)*(1+THETA(Y)*MD))

 

Where MD is a 0 or 1 indicator of study.

 

Since the two studies are from the same population considering a more
complex structural model, like Mats suggests, would also make a lot of
sense. It all depends on how long you can follow the SD profiles.

 

I hope this helps!

 

Jakob

 



From: Ethan Wu [mailto:ethan.w...@yahoo.com] 
Sent: 17 June 2009 22:05
To: Ribbing, Jakob; Jurgen Bulitta; nmusers@globomaxnm.com
Cc: Roger Jelliffe; Neely, Michael
Subject: Re: [NMusers] estimating Ka from dataset combining rich sample
study and sparse sampling study

 

Hi Jakob,

   The sparse data came from the MD study, and IIV on CL increased from 0.14 to
0.25 and on V from 0.185 to 0.196 after inclusion of the sparse data.

   Both studies are in the same population. 

 

  I think what you suggest making sense to me. I would keep Eta on Ka
first, start exploring IOV on CL and V, then explore covariates on CL
and V, to see if decreasing IIV on CL and V would leads to more
reasonable estimate of IIV on Ka.

  

  But, overall, I think that it is the stress of shrinkage on Ka that leads
to "dumping" IIV into CL and V, not something wrong with the model itself.

 

  

 

____

From: "Ribbing, Jakob" 
To: Ethan Wu ; Jurgen Bulitta
; nmusers@globomaxnm.com
Cc: Roger Jelliffe ; "Neely, Michael" 
Sent: Wednesday, June 17, 2009 4:43:28 PM
Subject: RE: [NMusers] estimating Ka from dataset combining rich sample
study and sparse sampling study

Hi Ethan,

 

If OMEGA(?) for KA is drastically reduced when including the sparse
data, then something is wrong with your model and in this case it is not
the estimation method or assumption on distribution of individual
parameter). Eta-shrinkage would not drastically reduce the estimate of
OMEGA, since this estimate is driven by the subjects/studies which
contain information on the parameter.

 

If the sparse data is multiple dosing it may be that KA is variable
between occasions, rather than between subjects (assuming the sparse
data contain some information on KA). Or if the sparse data is from a
less well-controlled study or a different population, it may be that
increased IIV in other parts of the model (e.g. OMEGA on V) is making
IIV in KA appear low for the rich study, when fitting the two studies
together. If you get the covariate model in place this problem will be
solved. For the simple model you have it should be quick to start out
assuming that most parameters (THETAs and OMEGAs) are different between
the two studies and then reduce down to a model which is stable and
parsimonious. Obviously, if you eventually can explain the differences
using more mechanistic covariates than study number that is of more use.

 

Cheers

 

Jakob

 

 

 





RE: [NMusers] 20 variable limit in $INPUT

2009-03-16 Thread Ribbing, Jakob
Andreas,

PsN has functionality for automatically applying the CONT-data-item
approach mentioned by Mark and Nick.

To run your model in nonmem with more than 20 input variables, you would
simply type:
execute --wrap_data run1.mod

In a subdirectory PsN will create a new dataset and model file with CONT
and other necessary components and run this in NONMEM. The NONMEM output
file, table files etc. will be returned to the directory where the user
runs execute.

There are situations where the PsN wrap_data functionality fails, but
for these (rare) situations it also seems impossible to make the
CONT functionality work "manually" (outside of PsN). It is not clear
to me whether these problems arise from errors in the NONMEM documentation
or in the implementation of the CONT data item, but again: it most often works!

Cheers

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Nick Holford
Sent: 16 March 2009 14:48
To: nmusers
Subject: Re: [NMusers] 20 variable limit in $INPUT

Look in your NONMEM html\cont.htm.
The CONT data item lets a single input record be continued across
additional data records, allowing more than 20 data items.

Here is a description in more detail.
http://www.cognigencorp.com/nonmem/nm/99aug272005.html

I found this by searching with Google for "NONMEM cont data item".

Much better than trying to use the NONMEM archive search which responded

with:

No matches were found for 'cont and data and (item or items)'


andreas.kra...@actelion.com wrote:
>
> I am looking for a solution to get around the limit of 20 input 
> variables in nonmem.
> The message you get with more than 20 variables in $INPUT is this:
>
>   16  $INPUT: NO. OF DATA ITEMS EXCEEDS 20.
> STOP 4 statement executed
>
> I did not find anything in the archives or on the Web. I recall having

> successfully done that in nonmem V, and I think it was about changing 
> the value of 20 to another value in a few files.
> My original naive idea was that this was just a single change to the 
> SIZES file in nm VI but that seems to not be the case.
>
> To keep the discussion focused, I am not looking for workarounds like 
> solutions with concatenated values and an indicator variable.
> For a change I am trying to get the software to adapt to the user's 
> needs instead of the usual opposite situation.
>
> Thanks for any pointers.
>
>   Andreas
>
> -
>
> Andreas Krause, PhD
> Lead Scientist Modeling and Simulation
>
> Actelion Pharmaceuticals Ltd
> Gewerbestrasse 16
> CH-4123 Allschwil
> Switzerland

-- 
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holf...@auckland.ac.nz tel:+64(9)923-6730 fax:+64(9)373-7090
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford



[NMusers] Lag-time on nmusers distribution

2009-02-06 Thread Ribbing, Jakob
Dear all,

 

My apologies for sending redundant messages. The reason this often
happens is a distribution lag of about half an hour* (at least for me).
By the time the message reaches other nmusers it is already obsolete
because someone else answered before. Is there any way that Globomax can
increase the speed of distribution? Or maybe put the nmusers who have
previously participated in discussions at the top of the distribution
list?

 

Before I was aware of this distribution lag it often was not clear to me
why someone would send a message, which basically repeated what someone
else already said and I sometimes wondered if there was a disagreement
between the two postings that was too subtle for me to notice. I find
this distribution list a great source of learning and inspiration but a
quicker distribution to the growing number of nmusers would save both
time and confusion. (On the other hand, for this specific case Martin
also makes the point that it is good to see several postings
recommending PsN)

 

Thanks

 

Jakob

 

*Elodie's, Marc's and my postings all had a lag of 30 minutes before
reaching me. Martin's and Rob's postings had a lag of 40 minutes. This lag
is not because of the e-mail server on my end. 

 

 



RE: [NMusers] simulation question

2009-02-06 Thread Ribbing, Jakob
Ethan,

 

Sebastian is right that a non-parametric bootstrap may be suitable for
determining the uncertainty in the population parameters. However, I got
the impression that you wanted to investigate how informative possible
study designs would be for a future model-based analysis? If you would
like to do simulation based on your current best guess of the model
parameters (i.e. the point estimates) and you would like to do a
model-based analysis of the future study in isolation, then PsN has
another program which is highly efficient and would automatically provide
you with summary statistics, without any programming required. This
program is called sse, for stochastic simulation and estimation:

http://psn.sourceforge.net/PDF_docs/sse_userguide.pdf

 

sse would also allow you to investigate the performance of alternative
models, e.g. simulation with a two-compartment model and estimation with
a one-compartment model, to see if CL can be estimated with good
precision and low bias, even from the sparse data of a future study.
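
Outside of PsN, the idea behind such a simulation-estimation exercise can
be illustrated with a toy loop in Python (made-up PK parameters, a naive
least-squares fit and a proportional error; this is only a caricature of
what sse/NONMEM would do): simulate concentrations under a two-compartment
model, fit a one-compartment model, and summarise bias and precision of
the CL estimate.

  import numpy as np
  from scipy.optimize import curve_fit

  rng = np.random.default_rng(1)
  dose = 100.0
  t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24])       # hypothetical sampling times

  cl, v1, q, v2 = 5.0, 20.0, 3.0, 30.0                # "true" two-compartment parameters

  def two_cmt(t, cl, v1, q, v2):
      # macro-constant solution for a two-compartment IV bolus
      k10, k12, k21 = cl / v1, q / v1, q / v2
      s = k10 + k12 + k21
      alpha = (s + np.sqrt(s**2 - 4 * k10 * k21)) / 2
      beta = (s - np.sqrt(s**2 - 4 * k10 * k21)) / 2
      a = dose / v1 * (alpha - k21) / (alpha - beta)
      b = dose / v1 * (k21 - beta) / (alpha - beta)
      return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

  def one_cmt(t, cl, v):
      return dose / v * np.exp(-cl / v * t)

  cl_hat = []
  for _ in range(200):                                # 200 simulated "studies"
      y = two_cmt(t, cl, v1, q, v2) * np.exp(rng.normal(0, 0.1, t.size))
      est, _ = curve_fit(one_cmt, t, y, p0=[4.0, 25.0])
      cl_hat.append(est[0])

  cl_hat = np.array(cl_hat)
  print("bias of CL (%):", 100 * (cl_hat.mean() - cl) / cl)
  print("spread of CL estimate (% of true CL):", 100 * cl_hat.std() / cl)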

 

You need to generate the data sets with the study designs yourself, but
the rest is really slick. If you are planning to analyse the new study
in conjunction with the currently-available data (i.e. a pooled
analysis) there may be some clever way of tweaking sse into evaluating
this, but I could not say exactly how to best achieve that. (maybe
someone in Uppsala has a suggestion in that case)

 

Best regards

 

Jakob



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Sebastian Ueckert
Sent: 06 February 2009 09:51
To: Ethan Wu
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] simulation question

 

Dear Ethan,
The simplest solution to your problem would be to use the
bootstrap command of PSN (http://psn.sourceforge.net/). With PSN
installed you would simply do:

bootstrap final_model.mod -samples=200

PSN would take care of unsuccessful runs and provide a nice summary of
the individual estimates.

Best regards
Sebastian

On Thu, Feb 5, 2009 at 11:18 PM, Ethan Wu  wrote:

Dear users,  

  I am trying to compare several specific PK/PD study designs by: 

 -- run 200 simulations with the final model (developed from the original
dataset)

 -- fit the final model to the 200 simulated datasets

To achieve the above, I used the $SIM SUBPROB=200 option.

 however, NONMEM would completely stop after running into an estimation
problem at one specific simulation/estimation cycle; for some designs
it stops even before the 10th iteration.

Is there any way NONMEM could continue? 

Or does someone know an alternative way to achieve the goal?

thanks

 

 

 



RE: [NMusers] CrcL or Cr in pediatric model

2009-01-14 Thread Ribbing, Jakob
Leonid,

As I understand the linear model you suggested, it can be simplified* to
this structure:
THETA(1)*((WT/70)^(3/4)+THETA(2)*CRCL)

I call this additive, because the two covariates affect TVCL in an
absolute sense, without interaction. My main message was that I find
this model appealing, because it has the properties:
a) There is a linear increase of CL with CRCL
b) An increase in CRCL increases CL by an absolute amount which is the
same for two subjects with different WT

The same cannot be said about this model:
TVCL=THETA(1)*(WT/70)^(3/4) * RF^GAMMA
The latter model carries a built-in interaction which may provide a
better description of the data in situations where e.g. non-renal
elimination decreases with CRCL or where the secretory component of
renal elimination is more important for creatinine than for the drug.
However, in the opposite situations the interaction would be working in
the wrong direction (assuming GAMMA<1). Maybe we can leave which
basic-model assumption to use as a matter of personal or drug-specific
preference?

Best

Jakob

PS
The nmusers list is like an octopus: just when you think you are free, one
of its threads pulls you back in again :>)
Much of this discussion is around additivity. If I have understood the
definition of additivity wrongly, then I apologise in advance, so that
this can still be my final "contribution" to this thread. Likewise if I
misunderstood what model Leonid was actually suggesting...
DS

*This is how I have simplified the suggested linear model, with
RF = CRCL/(WT/70)^(3/4), i.e. CRCL normalised for body size:
TVCL = THETA(1)*(WT/70)^(3/4) * (1+THETA(2)*RF)
     = THETA(1)*(WT/70)^(3/4) * (1+THETA(2)*CRCL/(WT/70)^(3/4))
     = THETA(1)*(WT/70)^(3/4) + THETA(1)*(WT/70)^(3/4)*THETA(2)*CRCL/(WT/70)^(3/4)
     = THETA(1)*((WT/70)^(3/4)+THETA(2)*CRCL)

Or   = THETA(1)*(WT/70)^(3/4) + THETA(1)*THETA(2)*CRCL
Similar: THETA(1)*(WT/70)^(3/4) + THETA(2)*CRCL (where the interpretation
of THETA(2) has changed from the line before)
-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Leonid Gibiansky
Sent: 13 January 2009 22:50
To: nmusers@globomaxnm.com
Subject: Re: [NMusers] CrcL or Cr in pediatric model

Jakob,

The model that I mentioned is not additive; it is multiplicative:

Parameter= MeanValue*Effect1(WT)*Effect2(RF)

but the effect of RF is expressed as a linear function of RF
Effect2(RF) = 1 + THETA()*RF

Leonid




 


RE: [NMusers] CrcL or Cr in pediatric model

2009-01-13 Thread Ribbing, Jakob
ing in this area to use 
> mechanism based models to understand how renal function influences 
> pharmacokinetics and at the very least compare the predictions of an 
> empirical model (e.g. Model 2) with a mechanism based model (e.g.
Model 
> 3) so that you can understand what you are missing.
> 
> Nick
> 
> Anderson, B. J., K. Allegaert, et al. (2007). "Vancomycin 
> pharmacokinetics in preterm neonates and the prediction of adult 
> clearance." Br J Clin Pharmacol 63(1): 75-84.
> 
> Anderson, B. J. and N. H. Holford (2008). "Mechanism-based concepts of

> size and maturity in pharmacokinetics." Annu Rev Pharmacol Toxicol 48:

> 303-32.
> 
> Boyd, E. (1935). The growth of the surface area of the human body. 
> Minneapolis, University of Minnesota Press.
> 
> Cole, M., L. Price, et al. (2004). "Estimation of glomerular
filtration 
> rate in paediatric cancer patients using 51CR-EDTA population 
> pharmacokinetics." Br J Cancer 90(1): 60-4.
> 
> DuBois, D. and E. F. DuBois (1916). "A formula to estimate the 
> approximate surface area if height and weight be known." Archives of 
> Internal Medicine 17: 863-871.
> 
> Hellerstein, S., U. Alon, et al. (1992). "Creatinine for estimation of

> glomerular filtration rate." Pediatric Nephrology 6: 507-511.
> 
> Leger, F., F. Bouissou, et al. (2002). "Estimation of glomerular 
> filtration rate in children." Pediatr Nephrol 17(11): 903-7.
> 
> Matthews, I., C. Kirkpatrick, et al. (2004). "Quantitative justification 
> for target concentration intervention - Parameter variability and 
> predictive performance using population pharmacokinetic models for 
> aminoglycosides." British Journal of Clinical Pharmacology 58(1): 8-19.
> 
> Mould, D. R., N. H. Holford, et al. (2002). "Population pharmacokinetic 
> and adverse event analysis of topotecan in patients with solid tumors." 
> Clinical Pharmacology & Therapeutics. 71(5): 334-48.
> 
> Rhodin, M. M., B. J. Anderson, et al. (2008). "Human renal function 
> maturation: a quantitative description using weight and postmenstrual 
> age." Pediatr Nephrol. Epub. (please contact me if you want a pdf
copy)
> 
> 
> 
> 
> Leonid Gibiansky wrote:
>> Jakob,
>> Restrictions on the parameter values is not the only (and not the 
>> major) problem with additive parametrization. In this specific case, 
>> CRCL (as clearance) increases proportionally to WT^(3/4) (or similar 
>> power, if you accept that allometric scaling has biological meaning
or 
>> that the filtration rate is proportional to the kidney size). Then
you 
>> have
>>
>> TVCL=THETA(1)*WT^(3/4)+THETA(2)*WT^(3/4)
>> (where the second term approximates CRCL dependence on WT).
>> Clearly, the model is unstable.
>>
>> Answering the question:
>> > why would two persons, with WT 50 and 70 kg
>> > but otherwise identical (including CRCL and any other covariates,
>> > except WT), be expected to differ by 36% in CL?
>>
>> we are back to the problem of correlation. If two persons of
different 
>> WT have the same CRCL, they should differ by the "health" of their 
>> renal function. I would rather have the model
>> CL=THETA(1)*(WT/70)^(3/4)*(CRCL/BSA)^GAMMA
>> Then, if two subjects (50 and 70 kg) have the same CRCL, their CL
will 
>> be influenced by WT, and by renal function (in this particular 
>> realization, CRCL per body surface area). While the result could be 
>> the same as in
>> CL ~ CRCL,
>> we described two separate and important dependencies:
>> CL ~ WT; and CL ~ renal function
>> For the patient that you mentioned, they act in the opposite 
>> directions and cancel each other, but it is important to describe
both 
>> dependencies.
>>
>> > Regarding 3 below, is the suggestion to estimate
>> > independent allometric
>> > models on CL for each level of renal function?
>>
>> The suggestion was to define the renal disease as categorical 
>> variable, and then correct CL, for example:
>> TCL ~ THETA(1) (for healthy)
>> TCL ~ THETA(2) (for patients with severe renal impairment)
>>
>> Thanks
>> Leonid
>>
>> --
>> Leonid Gibiansky, Ph.D.
>> President, QuantPharm LLC
>> web:www.quantpharm.com
>> e-mail: LGibiansky at quantpharm.com
>> tel:(301) 767 5566
>>
>>
>>
>>
>> Ribbing, Jakob wrote:
>>> Leonid,
>>>
>>> I usually prefer multiplicative parameterisation as well, since it
is
>>> easier to set boundar

RE: [NMusers] CrcL or Cr in pediatric model

2009-01-12 Thread Ribbing, Jakob
Thank you for this, Nick.

Regarding estimating separate etas for the two CL components I completely
agree with you. When I talked about a possible correlation component
between renal and non-renal CL that could not be attributed to size, my
intention was not to estimate separate random components for the two
processes. What would be possible, however, is to estimate a
(fixed-effect) interaction component between WT and CRCL (with the hope
of concluding it is not needed). This test can thus provide some further
support that other important covariates have been integrated correctly,
or point to a potential problem.

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Nick Holford
Sent: 13 January 2009 01:44
To: nmusers
Subject: Re: [NMusers] CrcL or Cr in pediatric model

Peter, Jakob, Leonid,

A practical example of how to deal with collinearity of age and weight 
over a wide range (premature neonates to young adults) using GFR has 
been recently reported (Rhodin et al 2008).

One way to overcome the somewhat imagined concern about using weight for 
Clcr and weight for overall clearance is to predict Clcr for a standard 
weight person and compute renal function relative to a normal standard 
weight person. Then you can apply weight to clearance and not worry 
about using weight 'twice' (Mould et al. 2002; Matthews et al. 2004).

Jakob's concern about using the same random effect for both portions of 
clearance with an additive renal plus non-renal clearance model is 
quite reasonable. However, I think it might be quite difficult to 
estimate separate ETAs for each component of clearance unless one has 
more than one estimate of total clearance with a different renal 
function in order to estimate the individual components of clearance.

As I am sure you know I don't think it is a good idea to try to estimate 
allometric exponents unless you have lots of subjects with a very wide 
weight range AND you can be pretty confident (or don't care) that you 
have accounted for all other factors affecting clearance that are 
correlated with weight (see Anderson & Holford 2008 for an example of 
how hard it is to get precise estimates).

Nick


Rhodin, M. M., B. J. Anderson, et al. (2008). "Human renal function 
maturation: a quantitative description using weight and postmenstrual 
age." Pediatr Nephrol. Epub
Mould, D. R., N. H. Holford, et al. (2002). "Population pharmacokinetic 
and adverse event analysis of topotecan in patients with solid tumors." 
Clinical Pharmacology & Therapeutics. 71(5): 334-48.
Matthews, I., C. Kirkpatrick, et al. (2004). "Quantitative justification 
for target concentration intervention - Parameter variability and 
predictive performance using population pharmacokinetic models for 
aminoglycosides." British Journal  of Clinical Pharmacology 58(1): 8-19.
Anderson, B. J. and N. H. Holford (2008). "Mechanism-based concepts of 
size and maturity in pharmacokinetics." Annu Rev Pharmacol Toxicol 48: 
303-32.








Ribbing, Jakob wrote:
> Correction, I meant WT 50 and 75 in the example below:
> 75^0.75/(50^0.75)=1.36
>
> -Original Message-
> From: Ribbing, Jakob 
> Sent: 13 January 2009 00:50
> To: nmusers@globomaxnm.com; 'Leonid Gibiansky'; Bonate, Peter
> Subject: RE: [NMusers] CrcL or Cr in pediatric model
>
> Leonid,
>
> I usually prefer multiplicative parameterisation as well, since it is
> easier to set boundaries (which is not necessary for power models, but
> for multiplicative-linear models). However, boundaries on the additive
> covariate models can still be set indirectly, using EXIT statements
(not
> as neat as boundaries directly on the THETAS, I admit).
>
> In this case it may possibly be more mechanistic using the additive
> parameterisation: For example if the non-renal CL is mainly liver, the
> two blood flows run in parallel and the two elimination processes are
> independent (except there may be a correlation between liver function
> and renal function related to something other than size). A
> multiplicative parameterisation contains an assumed interaction which
is
> fixed and in this case may not be appropriate. If the drug is mainly
> eliminated via filtration, why would two persons, with WT 50 and 70 kg
> but otherwise identical (including CRCL and any other covariates,
except
> WT), be expected to differ by 36% in CL? This is what you get using a
> multiplicative parameterisation. The fixed interaction may also drive
> the selection of the functional form (e.g. a power model vs a linear
> model for CRCL on CL). I do not know anything about Peter's specific
> example so this is just theoretical.
>
> Regarding 3 below, is the suggestion to estimate independent
allometric
> models on CL for each level of renal

RE: [NMusers] CrcL or Cr in pediatric model

2009-01-12 Thread Ribbing, Jakob
Correction, I meant WT 50 and 75 in the example below:
75^0.75/(50^0.75)=1.36

-Original Message-
From: Ribbing, Jakob 
Sent: 13 January 2009 00:50
To: nmusers@globomaxnm.com; 'Leonid Gibiansky'; Bonate, Peter
Subject: RE: [NMusers] CrcL or Cr in pediatric model

Leonid,

I usually prefer multiplicative parameterisation as well, since it is
easier to set boundaries (which is not necessary for power models, but
for multiplicative-linear models). However, boundaries on the additive
covariate models can still be set indirectly, using EXIT statements (not
as neat as boundaries directly on the THETAS, I admit).

In this case it may possibly be more mechanistic using the additive
parameterisation: For example if the non-renal CL is mainly liver, the
two blood flows run in parallel and the two elimination processes are
independent (except there may be a correlation between liver function
and renal function related to something other than size). A
multiplicative parameterisation contains an assumed interaction which is
fixed and in this case may not be appropriate. If the drug is mainly
eliminated via filtration, why would two persons, with WT 50 and 70 kg
but otherwise identical (including CRCL and any other covariates, except
WT), be expected to differ by 36% in CL? This is what you get using a
multiplicative parameterisation. The fixed interaction may also drive
the selection of the functional form (e.g. a power model vs a linear
model for CRCL on CL). I do not know anything about Peter's specific
example so this is just theoretical.

Regarding 3 below, is the suggestion to estimate independent allometric
models on CL for each level of renal function?

Thanks

Jakob

-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Leonid Gibiansky
Sent: 12 January 2009 23:30
To: Bonate, Peter
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] CrcL or Cr in pediatric model

Hi Peter,

If allometric exponent is fixed, collinearity is not an issue from the 
mathematical point of view (convergence, CI on parameter estimates, 
etc.). However, in this case CRCL can end up being significant due to 
additional WT dependence (that could differ from allometric) rather than

  due to renal function influence (that is not good if you need to 
interpret it as the renal impairment influence on PK).

Few points to consider:
   1. I usually normalize CRCL by WT^(3/4) or by (1.73 m^2 BSA) to get 
rid of WT - CRCL dependence. If you need to use it in pediatric 
population, normalization could be different but the idea to normalize 
CRCL by something that is "normal CRCL for a given WT" should be valid.
   2. In the pediatric population used for the analysis, are there any 
reasons to suspect that kids have impaired renal function ? If not, I 
would hesitate to use CRCL as a covariate.
   3. Often, categorical description of renal impairment allows to 
decrease or remove the WT-CRCL correlation
   4. Expressions to compute CRCL in pediatric population (note that 
most of those are normalized by BSA, as suggested in (1)) can be found
here:
  http://www.globalrph.com/specialpop.htm
  http://www.thedrugmonitor.com/clcreqs.html
   5. Couple of recent papers:
  http://www.clinchem.org/cgi/content/full/49/6/1011
  http://www.ajhp.org/cgi/content/abstract/37/11/1514

Thanks
Leonid

P.S. I do not think that this is a good idea to use additive dependence:

TVCL=THETA(X)*(WT/70)**0.75+THETA(Y)*CRCL
--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566




Bonate, Peter wrote:
> I have an interesting question I'd like to get the group's collective 
> opinion on.  I am fitting a pediatric and adult pk dataset.  I have 
> fixed weight a priori to its allometric exponents in the model.  When
I 
> test serum creatinine and estimated creatinine clearance equation as 
> covariates in the model (power function), both are statistically 
> significant.  CrCL appears to be a better predictor than serum Cr (LRT
= 
> 22.7 vs 16.7).  I have an issue with using CrCL as a predictor in the 
> model since its estimate is based on weight and weight is already in 
> the model.  Also, there might be collinearity issues with CrCL and 
> weight in the same model, even though they are both significant.  Does

> anyone have a good argument for using CrCL in the model instead of
serum Cr?
> 
> Thanks
> 
> Pete bonate
> 
> 
> 
> Peter L. Bonate, PhD, FCP
> Genzyme Corporation
> Senior Director
> Clinical Pharmacology and Pharmacokinetics
> 4545 Horizon Hill Blvd
> San Antonio, TX  78229   USA
> _peter.bon...@genzyme.com_ <mailto:peter.bon...@genzyme.com>
> phone: 210-949-8662
> fax: 210-949-8219
> crackberry: 210-315-2713
>  
> alea jacta est - The die is cast.
> 
> Julius Caesar
> 
> 



RE: [NMusers] CrcL or Cr in pediatric model

2009-01-12 Thread Ribbing, Jakob
Pete,

 

Is the drug cleared almost completely through renal elimination?

 

Otherwise, maybe a slope-intercept model for CL as a function of CRCL?

 

TVCL=THETA(X)*(WT/70)**0.75+THETA(Y)*CRCL

 

The intercept term is the non-renal CL, scaled according to the allometric
model, and the slope term is the renal CL, proportional to CRCL. This
model may be inappropriate if renally impaired subjects are included in
the dataset, or if there are other reasons why the linear model for CRCL
may be inappropriate. With this model the collinearity is a smaller
problem, since the exponent in the allometric model is not estimated.

 

Best regards

 

Jakob

 



From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Bonate, Peter
Sent: 12 January 2009 21:52
To: nmusers@globomaxnm.com
Subject: [NMusers] CrcL or Cr in pediatric model

 

I have an interesting question I'd like to get the group's collective
opinion on.  I am fitting a pediatric and adult pk dataset.  I have
fixed weight a priori to its allometric exponents in the model.  When I
test serum creatinine and estimated creatinine clearance equation as
covariates in the model (power function), both are statistically
significant.  CrCL appears to be a better predictor than serum Cr (LRT =
22.7 vs 16.7).  I have an issue with using CrCL as a predictor in the
model since its estimate is based on weight and weight is already in
the model.  Also, there might be collinearity issues with CrCL and
weight in the same model, even though they are both significant.  Does
anyone have a good argument for using CrCL in the model instead of serum
Cr?

Thanks 

Pete bonate 

 

Peter L. Bonate, PhD, FCP 
Genzyme Corporation 
Senior Director 
Clinical Pharmacology and Pharmacokinetics 
4545 Horizon Hill Blvd 
San Antonio, TX  78229   USA 
peter.bon...@genzyme.com   
phone: 210-949-8662 
fax: 210-949-8219 
crackberry: 210-315-2713 
  
alea jacta est - The die is cast. 

Julius Caesar 

 



RE: [NMusers] Very small P-Value for ETABAR

2008-11-16 Thread Ribbing, Jakob
Xia,

I must admit, I am still confused. In my mind, you cannot estimate
THETA(2) in your code, since it is completely confounded with THETA(1).
Moreover, if you fix THETA(2) to a non-zero value, THETA(1) will no
longer be the typical value of CL (or the population typical value of
CL), meaning that the interpretability of THETA(1) is lost.
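
The confounding can be seen with a short numeric check (Python, arbitrary
values): a model with ETA(1) having mean c and typical value THETA(1) is
indistinguishable from one with ETA(1) having mean 0 and typical value
THETA(1)*exp(c), so the two quantities cannot both be estimated.

  import numpy as np

  rng = np.random.default_rng(0)
  eta = rng.normal(0.0, 0.3, 5)                 # arbitrary etas
  theta1, c = 10.0, 0.2                         # arbitrary typical value and mean shift

  cl_a = theta1 * np.exp(eta + c)               # ETA with mean c, typical value THETA(1)
  cl_b = (theta1 * np.exp(c)) * np.exp(eta)     # ETA with mean 0, typical value THETA(1)*exp(c)
  print(np.allclose(cl_a, cl_b))                # True: the two are indistinguishable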

Regarding your definition of omega^2, I think this is an attempt to allow
a semi-parametric model. Can you please explain how equation 2 affects
equation 1, using code acceptable in the NONMEM program?
Currently, I am not clear on how many random effects you are estimating
for CL.

Thanks

Jakob

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of XIA LI
Sent: 17 November 2008 05:28
To: Leonid Gibiansky
Cc: 'Nick Holford'; 'nmusers'
Subject: Re: [NMusers] Very small P-Value for ETABAR

Leonid,
Sorry, I did not make myself clear. 

CL = THETA(1)*EXP(ETA(1))                    (1)
where ETA(1) is Normal(0, omega^2) or
log-Normal(Eta_bar, omega^2)

Adding one more stage means giving some functions for the MEAN and
VARIANCE of ETA(1), say:

Eta_bar = THETA(2)
omega^2 = THETA(3)*EXP(ETA(2))               (2)

Sorry for any confusion!
Best,
Xia


 Original message 
>Date: Fri, 14 Nov 2008 18:37:22 -0500
>From: Leonid Gibiansky <[EMAIL PROTECTED]>  
>Subject: Re: [NMusers] Very small P-Value for ETABAR  
>To: Xia Li <[EMAIL PROTECTED]>
>Cc: "'Nick Holford'" <[EMAIL PROTECTED]>, "'nmusers'"

>
>Xia,
>I could be missing something but this
>ETA(1)= THETA(2)*exp(ETA(2))   (Eq. 1)
>does not make sense to me. In the original definition, ETA(1) is the 
>random variable with normal distribution. Even if posthoc ETAs are not 
>normal, they are still random. For example, it can be either positive
or 
>negative (unlike ETA1 given by (1)). If I understood the intentions 
>correctly, this is an attempt to describe a transformation of the
random 
>effects to make it normal:
>
>CL = THETA(1) exp(ETA(1)) is replaced by
>CL = THETA(1) exp(THETA(2)*exp(ETA(1)))(2)
>
>But not every transformation is reasonable. I hardly can imagine the 
>case when you may want to use (2). Could you give some more realistic 
>examples, please, and situation when they were useful?
>
>On the separate note, mean of THETA(2)*exp(ETA(2)) is not equal to 
>THETA(2): geometric mean of THETA(2)*exp(ETA(2)) is equal to THETA(2)
>
>Thanks
>Leonid
>
>--
>Leonid Gibiansky, Ph.D.
>President, QuantPharm LLC
>web:www.quantpharm.com
>e-mail: LGibiansky at quantpharm.com
>tel:(301) 767 5566
>
>
>
>
>Xia Li wrote:
>> Hi Nick,
>> My pleasure!
>> 
>> This is a topic from Bayesian Hierarchical Model(BHM). If we look at
the
>> simplest PK statement: CL=THETA(1)*EXP(ETA(1)), where ETA(1) is the
between
>> subject random effect. We assume the "similarity" among the subjects
may be
>> modeled by THETA(1) and ETA(1).
>> 
>> Now here, if we observe that there is an underlying pattern between
>> ETA(1)'s, i.e. deviation from zero or no longer normal and we assume
that
>> there is a similarity among those patterns. 
>> 
>> Since ETA(1)'s are assumed similar, it is reasonable to model the
>> "similarity" among the ETA(1)'s by THETA(2) and ETA(2): ETA(1)=
>> THETA(2)*exp(ETA(2)). Hence we have one more stage, ETA(1) now is
>> lognormal(nonsymmetrical) with mean THETA(2) (doesnt have to be
zero).  
>> 
>> We will not say the variance of ETA(1) is confounded with the
variance of
>> ETA(2), we say it is a function of the variance of ETA(2). In statistics,
>> confounding means hard to distinguish from each other. Here, it is a
direct
>> causation.
>> 
>> Sorry I don't have a NM-TRAN code for this now. I usually use SAS and
Win
>> bugs to do modeling and haven't tried this BHM in NONMEM. I will
figure out
>> whether I can do it in NONMEM later.
>> 
>> Best,
>> Xia
>> 
>> -Original Message-
>> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On
>> Behalf Of Nick Holford
>> Sent: Friday, November 14, 2008 3:34 PM
>> To: nmusers
>> Subject: Re: [NMusers] Very small P-Value for ETABAR
>> 
>> Jakob, Mats,
>> 
>> Thanks very much for your careful explanations of how asymmetric EBE 
>> distributions can arise. That is very helpful for my understanding.
>> 
>> Xia,
>> 
>> I am intrigued by your suggestion for how to estimate and account for

>> the bias in the mean of the EBE distribution.
>> 
>> In the usual ETA on EPS model I might write:
>> 
>> ; SD of residual error for mixed proportional and additive random
effects
>> PROP=THETA(1)*F
>> ADD=THETA(2)
>> SD=SQRT(PROP*PROP + ADD*ADD)
>> Y=F + EPS(1)*SD*EXP(ETA(1))
>> 
>> where EPS(1) is distributed mean zero, variance 1 FIXED
>> and ETA(1) is the between subject random effect for residual error
>> 
>> You seem to be suggesting:
>> ETABAR=THETA(3)
>> Y=F + EPS(1)*SD*EXP(ETA(1)) * ETABAR*EXP(ETA(2))
>> 
>> It seems to me that the variance of ETA(1) will be confounded with
the 
>> variance of ETA(2). Would you please explain more clearly (w

RE: [NMusers] Very small P-Value for ETABAR

2008-11-14 Thread Ribbing, Jakob
Nick,

The only way I can see ETABAR being biased when fitting the correct
model is due to asymmetric shrinkage, i.e. that the distribution of EBE
etas is shrunk more in one tail than the other, so that the EBE-eta
distribution becomes non-symmetric.

A situation where I would expect this to happen is when putting an "eta
on epsilon" (see ref below). This is a great and simple way of handling
the fact that subjects have different intra-individual error magnitudes
(instead of just assuming the same SIGMA for all). In practice, you
multiply whatever the model weight (W) is by e.g. exp(eta) to incorporate
eta on epsilon. This is a simple way of accounting for e.g. that some
subjects are more compliant than others (compliant with therapy, fasting
and other prohibited/compulsory activities during the study).

Assuming that data is not extremely sparse: For subjects where the eta
is highly positive, there will be evidence of them having a higher
variability in the intra-individual error, since their observations
otherwise will become highly unlikely (epsilons which are extremely
positive and negative, in comparison to the value of SIGMA). The eta for
these subjects will only be shrunk to a small degree. For the compliant
subject, eps is small (close to zero) for all observations and
consequently, these observations are likely regardless of whether the
intra-individual error magnitude is typical or smaller. The eta on these
subjects will shrink from the true (highly negative) eta towards zero.
In consequence, ETABAR can be expected to be positive. This asymmetric
shrinkage does not invalidate the model, and it may work great both for
fitting your data and for simulating from the model.
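
The mechanism can be illustrated with a caricature of empirical Bayes
(MAP) estimation of a single eta on epsilon (Python; sigma, omega, the
number of observations and the two etas are all made up, and this is of
course not a NONMEM fit):

  import numpy as np
  from scipy.optimize import minimize_scalar

  sigma, omega, n = 0.2, 0.5, 6    # typical residual SD, SD of eta on epsilon, obs per subject

  def map_eta(res):
      # MAP estimate of eta given a subject's residuals, residual SD = sigma*exp(eta)
      def neg_post(eta):
          sd = sigma * np.exp(eta)
          loglik = -0.5 * np.sum((res / sd) ** 2) - res.size * np.log(sd)
          return -(loglik - 0.5 * (eta / omega) ** 2)
      return minimize_scalar(neg_post, bounds=(-5, 5), method="bounded").x

  for true_eta in (1.0, -1.0):
      res = np.full(n, sigma * np.exp(true_eta))   # residuals of "typical" size for this subject
      print(true_eta, round(map_eta(res), 2))
  # Prints roughly +1 -> 0.79 and -1 -> -0.69: the subject with the small
  # residuals (negative eta) is shrunk further towards zero than the noisy
  # subject, which is the asymmetry described above.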

Another example of asymmetric shrinkage may be if there is a continuum of
EC50 values but many subjects were not administered doses high enough
to see a profound effect (all subjects received a low dose, so that
drug effects below Emax have been observed in all): For subjects with
high EC50 who did not receive a high dose, there is no clear effect at
all, and the very high eta on EC50 will be shrunk a bit towards zero. For
subjects with low or normal EC50 there will be information in the data
to determine the correct EC50 without shrinkage. The EBE eta
distribution will be skewed to the left, e.g. ranging from -4 to 2, but
still with the median around 0. The model may still be fine, if
alternative parameterisations do not fit the data better.

Best Regards

Jakob


J Pharmacokinet Biopharm. 1995 Dec;23(6):651-72.
Three new residual error models for population PK/PD analyses. Karlsson
MO, Beal SL, Sheiner LB.
Department of Pharmacy, School of Pharmacy, University of California,
San Francisco 94143-0626, USA.

Residual error models, traditionally used in population pharmacokinetic
analyses, have been developed as if all sources of error have properties
similar to those of assay error. Since assay error often is only a minor
part of the difference between predicted and observed concentrations,
other sources, with potentially other properties, should be considered.
We have simulated three complex error structures. The first model
acknowledges two separate sources of residual error, replication error
plus pure residual (assay) error. Simulation results for this case
suggest that ignoring these separate sources of error does not adversely
affect parameter estimates. The second model allows serially correlated
errors, as may occur with structural model misspecification. Ignoring
this error structure leads to biased random-effect parameter estimates.
A simple autocorrelation model, where the correlation between two errors
is assumed to decrease exponentially with the time between them,
provides more accurate estimates of the variability parameters in this
case. The third model allows time-dependent error magnitude. This may be
caused, for example, by inaccurate sample timing. A time-constant error
model fit to time-varying error data can lead to bias in all population
parameter estimates. A simple two-step time-dependent error model is
sufficient to improve parameter estimates, even when the true time
dependence is more complex. Using a real data set, we also illustrate
the use of the different error models to facilitate the model building
process, to provide information about error sources, and to provide more
accurate parameter estimates.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Nick Holford
Sent: 14 November 2008 00:11
To: nmusers
Subject: Re: [NMusers] Very small P-Value for ETABAR

Jakob,

Thanks for some more info on this issue. I have seen work from Mats and 
Rada that says ETABAR can be biased when there is a lot of shrinkage 
even when the data is simulated and fitted with the correct model. Can 
you confirm this and can you explain how it arises? In the worst case of

shrinkage then bias is impossible because all ETAs must be zero. So why 
does it occur with non-zero shrinkage?

Nick

Ribb

RE: [NMusers] Very small P-Value for ETABAR

2008-11-13 Thread Ribbing, Jakob
Dear all,

First of all, I am not sure that there is any assumption of etas having
a normal distribution when estimating a parametric model in NONMEM. The
variance of eta (OMEGA) does not carry an assumption of normality. I
believe that Stuart used to say the assumption of normality is only when
simulating. I guess the assumption also affects EBEs, unless the
individual information is completely dominating? If the assumption of
normality is wrong, the weighting of information may not be optimal, but
as long as the true distribution is symmetric the estimated parameters
are in principle correct (but again, the model may not be suitable for
simulation if the distributional assumption was wrong). I will be
offline for a few days, but I am sure somebody will correct me if I am
wrong about this.

If etas are shrunk, you cannot expect a normal distribution of that
(EBE) eta. That does not invalidate parameterization/distributional
assumptions. Trying other semi-parametric distributions or a
non-parametric distribution (or a mixture model) may give more
confidence in sticking with the original parameterization or else reject
it as inadequate. In the end, you may feel confident about the model
even if the EBE eta distribution is asymmetric and biased (I mentioned
two examples in my earlier posting).

Connecting to how PsN may help in this case: http://psn.sourceforge.net/
In practice to evaluate shrinkage, you would simply give the command
(assuming the model file is called run1.mod):
execute --shrinkage run1.mod
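
For reference, one common definition of eta shrinkage, which can also be
computed by hand from a table of EBEs, is 1 - SD(EBE etas)/SQRT(OMEGA)
(a Python sketch with made-up numbers):

  import numpy as np

  rng = np.random.default_rng(5)
  omega = 0.09                          # OMEGA (variance) from the model, made up
  ebe_eta = rng.normal(0, 0.15, 60)     # EBE etas for 60 subjects (partly shrunk)

  shrinkage = 1 - ebe_eta.std(ddof=1) / np.sqrt(omega)
  print(100 * shrinkage)                # eta shrinkage in percent, here roughly 50%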

Another quick evaluation that can be made with this program is to
produce mirror plots (PsN links in nicely with Xpose for producing the
diagnostic plots):

execute --mirror=3 run1.mod

This will give you three simulation table files that have been derived
by simulating under the model and then fitting the simulated data using
the same model (using the design of the original data). If you see a
similar pattern in the mirror plots as in the original diagnostic plots,
this gives you more confidence in the model. That brings us back to
Leonids point about it being more useful to look at diagnostic plots
than eta bar.

Wishing you a great weekend!

Jakob

-Original Message-
From: BAE, KYUN-SEOP 
Sent: 13 November 2008 22:05
To: Ribbing, Jakob; XIA LI; nmusers@globomaxnm.com
Subject: RE: [NMusers] Very small P-Value for ETABAR

Dear All,

Realized etas (EBEs, MAPs) are estimated under the assumption of a normal
distribution.
However, the resulting distribution of EBEs may not be normal, or their
mean may not be 0.
To pass the t-test, one may use the "CENTERING" option on $ESTIMATION.
But this practice is discouraged by some (and I agree).

The normality assumption cannot coerce the distribution of EBEs to be
normal, and furthermore a non-normal (and/or non-zero-mean) distribution
of EBEs can be nature's nature.
One simple example is a mixture population with polymorphism.

If I could not get normal(?) EBEs even after careful examination of
covariate relationships, as others suggested, 
I would bear it and assume a nonparametric distribution.

Regards,

Kyun-Seop
=
Kyun-Seop Bae MD PhD
Email: [EMAIL PROTECTED]

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Ribbing, Jakob
Sent: Thursday, November 13, 2008 13:19
To: XIA LI; nmusers@globomaxnm.com
Subject: RE: [NMusers] Very small P-Value for ETABAR

Hi Xia,

Just to clarify one thing (I agree with almost everything you said):

The p-value indeed is related to the test of ETABAR=0. However, this is
not a test of normality, only a test that may reject the mean of the
etas being zero (H0). Therefore, shrinkage per se does not lead to
rejection of H0, as long as both tails of the eta distribution are
shrunk to a similar degree.

I agree with the assumption of normality. This comes into play when you
simulate from the model and if you got the distribution of individual
parameters wrong, simulations may not reflect even the data used to fit
the model.

Best Regards

Jakob

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of XIA LI
Sent: 13 November 2008 20:31
To: nmusers@globomaxnm.com
Subject: Re: [NMusers] Very small P-Value for ETABAR

Dear All,

Just some quick statistical points...

P value is usually associated with hypothesis test. As far as I know,
NONMEM assume normal distribution for ETA, ETA~N(0,omega), which means
the null hypothesis to test is H0: ETABAR=0. A small P value indicates a
significant test. You reject the null hypothesis. 
 
More...
As we all know, ETA is used to capture the variation among individual
parameters and model's unexplained error. We usually use the function
(or model) parameter=typical value*exp (ETA), which leads to a lognormal
distribution assumption for all fixed effect parameters (i.e., CL, V,
Ka, Ke...).

By some statistical theory, the variation of individual parameter equals
a function of the typical value and the va

RE: [NMusers] Very small P-Value for ETABAR

2008-11-13 Thread Ribbing, Jakob
Hi Xia,

Just to clarify one thing (I agree with almost everything you said):

The p-value indeed is related to the test of ETABAR=0. However, this is
not a test of normality, only a test that may reject the mean of the
etas being zero (H0). Therefore, shrinkage per se does not lead to
rejection of H0, as long as both tails of the eta distribution are
shrunk to a similar degree.
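
As I understand it, the reported p-value is essentially that of a
one-sample t-test of the EBE etas against zero; something along these
lines (Python, with simulated etas standing in for a table of EBEs):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(3)
  ebe_eta = rng.normal(0.05, 0.3, 80)       # made-up EBE etas for 80 subjects

  t, p = stats.ttest_1samp(ebe_eta, 0.0)    # H0: the mean of the etas is zero
  print(ebe_eta.mean(), p)                  # a small p rejects ETABAR=0; it says nothing about normality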

I agree with the assumption of normality. This comes into play when you
simulate from the model and if you got the distribution of individual
parameters wrong, simulations may not reflect even the data used to fit
the model.

Best Regards

Jakob

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of XIA LI
Sent: 13 November 2008 20:31
To: nmusers@globomaxnm.com
Subject: Re: [NMusers] Very small P-Value for ETABAR

Dear All,

Just some quick statistical points...

P value is usually associated with hypothesis test. As far as I know,
NONMEM assume normal distribution for ETA, ETA~N(0,omega), which means
the null hypothesis to test is H0: ETABAR=0. A small P value indicates a
significant test. You reject the null hypothesis. 
 
More...
As we all know, ETA is used to capture the variation among individual
parameters and model's unexplained error. We usually use the function
(or model) parameter=typical value*exp (ETA), which leads to a lognormal
distribution assumption for all fixed effect parameters (i.e., CL, V,
Ka, Ke...).

By some statistical theory, the variation of individual parameter equals
a function of the typical value and the variance of ETA. 

VAR (CL) = typical value*exp (omega/2). NO MATH PLS!!

If your typical value captures all overall patterns among patients
clearance, then ETA will have a nice symmetric normal distribution with
small variance. Otherwise, you leave too many patterns to ETA and will
see some deviation or shrinkage (whatever you call).

Why is adding covariates a good way to deal with this situation? Your
model becomes CL=typical value*exp (covariate)*exp (ETA). The variation
of the individual parameter will be changed to: 

VAR (CL) = (typical value + covariate)*exp (omega/2)). 

You have one more item to capture the overall patterns, less leave to
ETA. So a 'good' covariate will reduce both the magnitude of omega and
ETA's deviation from normal.

Understanding this is also useful when you are modeling BOV studies,
when you see the variation of PK parameters decrease with time (or
occasions). Adding a covariate that makes physiological sense and also
decreases with time may help your modeling.

Best,
Xia
==
Xia Li
Mathematical Science Department
University of Cincinnati


RE: [NMusers] Very small P-Value for ETABAR

2008-11-13 Thread Ribbing, Jakob
Hi Jian,

As Bill says, including a covariate may fix your problem. However, two
other underlying problems may also be causing this:
1.  Asymmetric shrinkage of the eta. Two examples of this that I
have seen are if you have an eta on epsilon (different residual-error
magnitudes in different subjects) or if the doses yield a clear effect in
some subjects but not in others (the eta on EC50 may become more shrunk
in the right tail, since any drug effect in the less sensitive subjects
is difficult to separate from the background noise or circadian
variation). An important covariate may reduce the degree of shrinkage
and the asymmetry in the shrinkage. Other than that, shrinkage is not an
issue unless you use the empirical Bayes estimates for diagnostics, i.e.
use the individual parameters in graphs, calculations, PK predictions as
input to the PD model (IPK approach), etc.
2.  Incorrect distributional assumptions: The parametric model
assumes e.g. a log-normal distribution of the parameter, around its
typical value. If this is not correct, ETABAR may become biased. You may
try other transformations in nonmem, e.g. proportional or other,
so-called semi-parametric distributions.

For references on semi-parametric distributions, search abstracts from
Petterson, Hanze, Savic and Karlsson. For reference on shrinkage, see
the publication below.

Cheers

Jakob


Clin Pharmacol Ther. 2007 Jul;82(1):17-20.
Diagnosing model diagnostics. Karlsson MO, Savic RM.
Department of Pharmaceutical Biosciences, Uppsala University, Uppsala,
Sweden.

Conclusions from clinical trial results that are derived from
model-based analyses rely on the model adequately describing the
underlying system. The traditionally used diagnostics intended to
provide information about model adequacy have seldom discussed
shortcomings. Without an understanding of the properties of these
diagnostics, development and use of new diagnostics, and additional
information pertaining to the diagnostics, there is risk that adequate
models will be rejected and inadequate models accepted. Thus, a
diagnosis of available diagnostics is desirable.




From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Denney, William S.
Sent: 13 November 2008 15:41
To: Jian Xu; nmusers@globomaxnm.com
Subject: RE: [NMusers] Very small P-Value for ETABAR

Hi Jian,
 
I would look for a covariate effect on that parameter.
 
Thanks,
 
Bill


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Jian Xu
Sent: Thursday, November 13, 2008 10:16 AM
To: nmusers@globomaxnm.com
Subject: [NMusers] Very small P-Value for ETABAR
Dear NMUSERS,
 
A few years back, there was a discussion on the P-value for ETABAR.
However, I am not sure how to appropriately handle a very small P-value
for ETABAR during the development of a model. 
 
I need some clarification on a few questions:
1: Should we just ignore this small P-value warning? 
2: Can we change the IIV model to avoid a small P-value for ETABAR? Or any
other suggestions? 
3: Does NONMEM make any assumptions on the ETA distribution?
 
This P-value for ETABAR really bugs me a lot. I look forward to seeing
some input.
 
Thank you, and I appreciate your time and help.
 
Jian



RE: [NMusers] Subpopulation

2008-08-12 Thread Ribbing, Jakob
Dear Huali,

 

The best is to derive one model for all data. If you are pressed for
time, it may be sufficient to describe the data in the dose range which
is clinically relevant (if known). Possibly, in this dose range there is
no nonlinearity. However, splitting the subjects based on the outcome
is not a good idea. The two models you end up with will both be biased
in the parameter estimates since subjects with e.g. high Km (or slow
absorption/high Vmax) will be more abundant in the linear-kinetics
dataset and vice versa for the nonlinear-kinetics dataset*.
Additionally, without a model it is difficult to distinguish an initial
nonlinearity from the absorption process, so that borderline cases may
end up in the wrong dataset.

 

The nonlinearity may only be relevant for a subpopulation of your study
subjects. This can be investigated in a mixture model, in case a single
distribution of parameter values cannot describe your data. Before
making such an attempt, try to understand the possible sources of
nonlinearity in your specific case, so that the model captures this.

 

I hope this helps!

 

Jakob

 

*I do not know the source of nonlinearity in the specific case, so this
is just to exemplify with nonlinear CL.
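
A small Python sketch of that footnote (Vmax, Km and V are made up and not
related to Huali's drug): with Michaelis-Menten elimination the same model
looks linear at the low doses and clearly nonlinear at the high ones, and
a subject with a high Km looks almost linear over the whole dose range.

  import numpy as np
  from scipy.integrate import solve_ivp

  v = 20.0                                       # L, made-up volume
  t_eval = np.linspace(0, 168, 2000)             # up to 7 days, as in the study

  def conc_profile(dose, vmax, km):
      # IV bolus with Michaelis-Menten elimination: dA/dt = -Vmax*C/(Km + C)
      def rhs(t, a):
          c = a[0] / v
          return [-vmax * c / (km + c)]
      sol = solve_ivp(rhs, (0, 168), [dose], t_eval=t_eval, rtol=1e-8, atol=1e-10)
      return sol.y[0] / v

  for km in (1.0, 100.0):                        # a "nonlinear" and a "linear-looking" subject
      for dose in (50.0, 500.0):                 # roughly the 10-fold dose range in the question
          auc = np.trapz(conc_profile(dose, vmax=50.0, km=km), t_eval)
          print(km, dose, auc / dose)            # constant dose-normalised AUC = linear kinetics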



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Huali Wu
Sent: 12 August 2008 17:14
To: nmusers@globomaxnm.com
Subject: [NMusers] Subpopulation

 

Dear NMusers:

 

I am trying to fit a dataset with 13 dose levels. The highest dose is
about 10 times the lowest dose. Each patient received one dose and
was sampled intensively for up to 7 days. The results of individual PK
analysis showed linear kinetics for some of the patients and nonlinear
kinetics for the other patients. I have tried to fit all of them
together. But my advisor wants me to fit linear patients and nonlinear
patients separately to get a better look at the fit. 

 

Additionally, all the nonlinear patients are from higher dose levels.
But not all of the patients in the higher dose levels showed nonlinear
kinetics.
So my question is which way is more appropriate in this case? Should I
fit them all together or separately? Could these two types of patients
be considered as subpopulations?

 

Any comment or suggestion will be highly appreciated.

 

Best regards,

 

Huali

 



RE: [NMusers] adding error term to covariate

2008-07-29 Thread Ribbing, Jakob
Dear Li,

You are right in thinking that your baseline had better not be treated as an
ordinary covariate (where we pretend that the covariate values are
measured without error). Unless you want to be very restrictive in how
the model can be used (e.g. change from baseline in a study with similar
design and patients with the same baseline distribution), it is best to
estimate the baseline. This also allows you to investigate relations (on
the individual level) between baseline and drug effect/disease
progression, etc.

I am sure you will find many old threads on this topic if you search the
nmusers archive. Additionally, here is a recent article:

Dansirikul C, Silber HE, Karlsson MO. Approaches to handling
pharmacodynamic baseline responses. J Pharmacokinet Pharmacodyn. 2008
Jun;35(3):269-83. Epub 2008 Apr 30.

Cheers

Jakob


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of LI,HONG
Sent: 29 July 2008 16:06
To: nmusers@globomaxnm.com
Subject: [NMusers] adding error term to covariate

Dear group,

I have a baseline as a covariate in the model, and it is the same
measurement as the modeled variable. To me, it is reasonable to
believe that there is measurement error for this covariate. Is
there a way to incorporate some kind of error term into this
baseline covariate? Thanks a lot.

LI,HONG
Graduate student
University of Florida
School of Pharmacy
Department of Pharmaceutics
Office: P4-10
Phone: (352)273-7865



RE: [NMusers] NMVI and SEs assessment

2008-04-23 Thread Ribbing, Jakob
Hi Bernard,

 

Regarding differences between NMV and NMVI, you can try to run both models
with different initial estimates, to see if the two minima are stable
within a NONMEM version. Which minimum yields the lowest OFV? Do you have
enough information in your data to support a two-compartment model? What is
the correlation between the estimates of THETA3 and THETA4? You can
assess this from your 1000 sets of bootstrap parameters.

 

To calculate a confidence interval for a parameter, you use the bootstrap
parameter estimates directly and calculate the percentiles. If you instead use
the bootstrap parameters to calculate an SE and then use that SE to construct
the confidence interval, you are assuming that the distribution is normal
(which is wrong in this case).
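
A minimal R sketch of the two approaches, assuming the 1000 bootstrap estimates
of e.g. THETA3 are in a vector boot.theta3 (a hypothetical name, here filled
with placeholder numbers):

boot.theta3 <- rlnorm(1000, meanlog = log(0.0013), sdlog = 0.5)  # placeholder for the real bootstrap estimates

## percentile interval: uses the bootstrap distribution directly
quantile(boot.theta3, probs = c(0.025, 0.975))

## SE-based interval: assumes normality, which a skewed bootstrap distribution violates
mean(boot.theta3) + c(-1.96, 1.96) * sd(boot.theta3)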

 

Finally, regarding the covariate model: if you have few subjects in your
dataset, you may get a reduction in OMEGA which may seem relevant in
your sample, but which actually is not. To evaluate the uncertainty in
clinical relevance you can set up a criterion and evaluate it for each
bootstrap sample. For example, you can plot the distribution of
(CLgenotype1/CLgenotype2-1)/SQRT(omegaCL) for your 1000 bootstrap
samples.
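
A minimal R sketch of such an evaluation, assuming a data frame boot with one
row per bootstrap sample and hypothetical column names CL.g1 (typical CL,
genotype 1), CL.g2 (typical CL, genotype 2) and OM.CL (variance of IIV in CL);
placeholder numbers are used here:

## replace this placeholder with the parameter estimates from your 1000 bootstrap samples
boot <- data.frame(CL.g1 = rlnorm(1000, log(10),   0.1),
                   CL.g2 = rlnorm(1000, log(8),    0.1),
                   OM.CL = rlnorm(1000, log(0.09), 0.2))

rel <- (boot$CL.g1 / boot$CL.g2 - 1) / sqrt(boot$OM.CL)  # genotype effect relative to the IIV in CL
hist(rel, xlab = "(CL1/CL2 - 1)/sqrt(omegaCL)", main = "")
quantile(rel, probs = c(0.025, 0.5, 0.975))              # uncertainty in the clinical-relevance criterion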

 

 

Good luck!

 

Jakob

 



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Bernard ROYER
Sent: 23 April 2008 10:01
To: nmusers@globomaxnm.com
Subject: [NMusers] NMVI and SEs assessment

 

Dear NMusers,


I have questions about bootstrapping and the NONMEM VI assessment of SEs.

I have developed a 2-compartment PK model using NMVI that converged with
estimates of the THETAs and OMEGAs. The predictive check simulations
indicated that the model satisfactorily described the data with the
parameters estimated by NMVI. Then I started the assessment of SEs by
bootstrapping the data using WFN (1000 resamplings). I found similar
results for the THETAs, OMEGAs and SIGMAs (a mixed error model was used).
Regarding the SEs, I also found similar results for the SEs of THETA1
(Volume), THETA2 (Clearance), the OMEGAs and SIGMA1 (proportional part), but
not for the SEs of THETA3 and THETA4 (K12 and K21) and SIGMA2
(residual error). I think that the actual values are closer to those
obtained with the bootstrap than to those obtained with NMVI.

The SEs obtained are shown in the table below. I also
performed a run with the same input and dataset using NMV, and those
results are also shown in the table (NMV gives the same estimates for the
thetas, omegas and sigmas, and the same OFV).


SE of     NMVI values   Bootstrap values   NMV values
THETA1    0.148         0.154              0.154
THETA2    0.00218       0.00242            0.0227
THETA3    0.00063       0.0013             0.0013
THETA4    0.00036       0.0022             0.0019
OMEGA1    0.0094        0.0095             0.0094
OMEGA2    0.0050        0.0054             0.0051
SIGMA1    0.0063        0.0064             0.0064
SIGMA2    0.675         0.946              0.832


My questions are:

- Why does NMVI give less reliable SE estimates than NMV, and why only
for THETA3, THETA4 and SIGMA2?

- For covariate determination with NMVI, do I need to perform a bootstrap
for each covariate, or is taking into account the decrease in omega
sufficient?

- The value of the residual error obtained with NMVI is 1.69 (SE = 0.675).
The value obtained with the bootstrap is 1.58 (SE = 0.95), thus zero is
included in the 95% CI. How should I interpret a residual error whose 95% CI
includes zero with the bootstrap but not with NMVI? Removing SIGMA2 leads to
failure of the run. Should I fix this value or leave it with its SE?


Bernard Royer
Pharmacology Dpt
University Hospital
Besancon, France



RE: [NMusers] Using Prior subroutine

2008-02-20 Thread Ribbing, Jakob
Hi Byung-Jin,

As an alternative to Nick's suggestions, PsN can move additional files
into the actual run directory. You need to add the argument
"extra_files" and specify the file name. For example, using the tool
"execute":

execute run1.mod -extra_files=prior.for

where run1.mod is the name of the control stream and prior.for is the
name of the text file holding the NWPRI subroutine.

Good luck!

Jakob


RE: [NMusers] IIV on %CV-Scale - Standard Error (SE)

2008-02-15 Thread Ribbing, Jakob
Thank you all for the clarifications on this,

James previously sent a good explanation outside of nm-users which I am
adding at the end, in case anyone still wants to understand where the
number 2 comes from.

Leonid, you are right there was a typo in my formula and your correction
is correct. What I meant to say was that if one calculates the relative
SE on the variance scale, one has to divide this number by 2 in order to
get the RSE on the appropriate scale. To make this clear to everyone
(stop reading here if you already understand the issue):

If your model is parameterised as:
CL=TVCL*EXP(ETA(1))
And OMEGA11 is estimated to 0.09, this means that the standard deviation
of ETA1 is sqrt(0.09)=0.3. The IIV in CL is then approximately 30%
around the typical CL (TVCL). If the SE of OMEGA11 is estimated to 0.009
the relative SE of OMEGA11 is 0.009/0.09=10% (on the variance scale). We
then report IIV in CL as 30% with RSE=5%.
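
A quick R check of these numbers (omega and its SE taken from the example
above):

omega    <- 0.09               # estimated variance of ETA(1)
se.omega <- 0.009              # standard error of that estimate

sqrt(omega)                    # 0.3, i.e. approximately 30% IIV in CL
se.omega / omega               # 0.10 = 10% RSE on the variance scale
se.omega / (2 * omega)         # 0.05 = 5% RSE for the IIV reported as %CV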

Best

jakob

-Original Message-
From: James G Wright [mailto:[EMAIL PROTECTED] 
Sent: 15 February 2008 13:57
To: Ribbing, Jakob
Subject: RE: [NMusers] IIV on %CV-Scale - Standard Error (SE)

Hi Jakob,

The trick is just knowing that you need to apply Wald's formula because
the SE is a length (that depends on scale) and not a point.  Thus, to
get the right length, you need to apply a correction factor that depends
on the rescale.  For transformations like x^n, the rescale depends on x,
hence the need to include the derivative.  When I last taught calculus,
I explained this as follows (bear with me, and apologies if I have
missed the point of misunderstanding and inadvertently patronize).

Imagine you have a square with sides of length 10 (x); it has an area of
100 (x^2).  Now imagine you increase the length of a side by 1.  The
area is now 121 = (x+1)^2 = x^2+2x+1.  Visually, you have added an extra
length on to each side of the square and an extra +1 in the corner.  The 2x
is the first derivative of x^2, that is, how much x^2 changes if you add
1 to x.

What does this have to do with a confidence interval?  To calculate the
95% confidence interval you add/subtract 1.96 times the SE to x.   You
can estimate the impact this has on x^2, by using the first derivative.
Of course, you have to do all this backwards for the square root case.
For more complex functions, the formula is only approximate because
second-order terms can become important.
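
A small numerical sketch of this in R, with a hypothetical x = 10 and SE = 1:

x  <- 10
se <- 1

x^2 + c(-1.96, 1.96) * (2 * x) * se  # first-derivative (delta-method) 95% CI for x^2: 60.8 to 139.2
(x + c(-1.96, 1.96) * se)^2          # squaring the CI limits for x instead: 64.6 to 143.0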

That's how I visualise the equations in my head anyway.  Best regards,
James

James G Wright PhD
Scientist
Wright Dose Ltd
Tel: 44 (0) 772 5636914


[NMusers] IIV on %CV-Scale - Standard Error (SE)

2008-02-15 Thread Ribbing, Jakob
Hi all,

I think that Paul stumbled on a rather important issue. The SE of the residual 
error may not be of primary interest, but the same as discussed under this 
thread also applies to the standard error of omega. (I changed the name of the 
subject since this thread now is about omega)

I prefer to report IIV on the %CV scale, i.e. sqrt(OMEGAnn) for a parameter 
with log-normal distribution. It then makes no sense to report the standard 
error on any other scale. For log-normally distributed parameters the relative 
SE of IIV then becomes:
sqrt(SE.OMEGAnn)/(2*sqrt(OMEGAnn))*100%

Notice the factor 2 in the denominator. I got this from Mats Karlsson who 
picked it up from France Mentré, but I have never seen the actual mathematical 
derivation for this formula. I think this is what Varun is doing in his e-mail 
a few hours ago. However, I am not sure; being illiterate I could not 
understand the derivation. Either way, if we are satisfied with the 
approximation of IIV as the square root of omega, the factor 2 in the 
approximation of the SE on the %CV-scale is exact enough.

If you would like to convince yourself that the factor 2 is correct (up to 3
significant digits), you can load the Splus function below and then run it
with different CVs, e.g.:
ratio(IIV=1)
ratio(IIV=0.5)

Regards

Jakob


"ratio" <- function (IIV.stdev=1) {
ncol <- 1000 #1000 Studies, in which IIV is estimated
ETAS <- rnorm(n=1000*ncol, 0, IIV.stdev)
ETA  <- matrix(data=ETAS, ncol=ncol)
IIVs.stds<- colStdevs(ETA) #Estimate of IIV on sd-scale
IIVs.vars<- colVars(ETA)   #Estimate of IIV on var-scale

SE.std  <- stdev(IIVs.stds)/sqrt(ncol)
SE.var  <- stdev(IIVs.vars)/sqrt(ncol)
CV.std  <- SE.std/IIV.stdev
CV.var  <- SE.var/(IIV.stdev^2)
print(paste("SE on Var scale:", SE.var))
print(paste("SE on Std scale:", SE.std))
print(paste("Ratio CV var, CV std:", CV.var/CV.std))
invisible()
}
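
For anyone without S-PLUS, a compact R analogue of the function above (a
sketch rather than a line-by-line translation, since colStdevs, colVars and
stdev are S-PLUS functions; here the relative SEs on the two scales are
computed directly and their ratio returned):

ratio <- function(IIV.stdev = 1, n.subj = 1000, n.studies = 1000) {
  # simulate n.studies hypothetical studies, each estimating IIV from n.subj ETAs
  ETA <- matrix(rnorm(n.subj * n.studies, 0, IIV.stdev), ncol = n.studies)
  IIV.sd  <- apply(ETA, 2, sd)             # per-study estimate of IIV on the SD scale
  IIV.var <- apply(ETA, 2, var)            # per-study estimate of IIV on the variance scale
  rse.sd  <- sd(IIV.sd)  / mean(IIV.sd)    # empirical relative SE on the SD scale
  rse.var <- sd(IIV.var) / mean(IIV.var)   # empirical relative SE on the variance scale
  rse.var / rse.sd                         # comes out close to 2
}
ratio(1)      # approximately 2
ratio(0.5)    # approximately 2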




From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of varun goel
Sent: 14 February 2008 23:07
To: [EMAIL PROTECTED]; NONMEM users forum
Subject: Re: [NMusers] Combined residual model and IWRES.

Dear Paul, 

You can use the delta method to compute the variance and expected value of a 
transformation, which is square in your case.

given y=theta^2


E(y)=theta^2
Var(y) = (2*theta)^2 * Var(theta); the factor (2*theta)^2 is the square of the
first derivative of y with respect to theta.

In your example theta is the standard deviation, whereas the error estimate is
the variance. I did not follow your values very well, so I ran a model with the
same reparameterization and got the following results.

theta=2.65, rse=27.2%
err=7.04; rse=54.4%

theta.1<-2.65
rse<-27.2 
var.theta.1<-(rse*theta.1/100)^2  ## = 0.51955 

err.1<-7.04
rse.err.1<-54.4#%
var.err.1<-(rse.err.1*err.1/100)^2 ##  = 14.66

## now from the delta method

e.err   <- theta.1^2                    ## 7.0225, close to 7.04
var.err <- (2*theta.1)^2 * var.theta.1  ## 14.59,  close to 14.66

Hope it helps

Varun Goel
PhD Candidate, Pharmacometrics
Experimental and Clinical Pharmacology
University of Minnesota