RE: [NMusers] backward integration from t-a to t

2014-01-16 Thread lgibian...@quantpharm.com
Hi Pavel,
You mentioned that the effect compartment did not help, and the model I
suggested is identical to an effect compartment. Maybe try something like a
transit-compartment model:

DADT(2)=C-K0*A(2)
DADT(3)=K0*A(2)-K0*A(3)
...
DADT(X)=K0*A(X-1)-K0*A(X)

AUCapprox=A(2)+...+A(X)

This will prolong the shape of AUCapprox(t). It could be a bit simpler and
smoother than a tlag implementation.
Leonid 
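Leonid's transit-chain idea can be illustrated outside NONMEM. The sketch below is plain Python/NumPy (not NM-TRAN), with an assumed unit-bolus, one-compartment concentration C(t)=exp(-ke*t) and illustrative rate constants; it shows that a longer chain delays and prolongs the AUCapprox(t) curve.

```python
import numpy as np

def auc_approx(n, ke=0.5, k0=0.3, t_end=40.0, dt=0.005):
    """Euler integration of a chain of n 'AUC' compartments driven by C(t)."""
    steps = int(t_end / dt)
    t = np.arange(steps + 1) * dt
    a = np.zeros(n)                     # states A(2)..A(X) of the transit chain
    total = np.zeros(steps + 1)
    for i in range(steps):
        c = np.exp(-ke * t[i])          # assumed one-compartment concentration
        da = np.empty(n)
        da[0] = c - k0 * a[0]           # DADT(2) = C - K0*A(2)
        da[1:] = k0 * (a[:-1] - a[1:])  # DADT(X) = K0*(A(X-1) - A(X))
        a = a + dt * da
        total[i + 1] = a.sum()          # AUCapprox = A(2)+...+A(X)
    return t, total

t, s1 = auc_approx(1)
t, s3 = auc_approx(3)
print(t[np.argmax(s1)], t[np.argmax(s3)])  # the 3-compartment chain peaks later
```

The rate constants here are arbitrary; the point is only the qualitative shift of the peak with chain length.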







Original email:
-
From: Pavel Belo non...@optonline.net
Date: Thu, 16 Jan 2014 13:05:54 -0500 (EST)
To: lgibian...@quantpharm.com, nmusers@globomaxnm.com
Subject: RE: [NMusers] backward integration from t-a to t


Hello Leonid,

Thank you for being helpful.  You got the main point.  AUC is a better
predictor than concentration, but it has to disappear very slowly but
surely.

A potential challenge is the biological meaning of this approach.  It will
be necessary to explain it to the biologists, who ask questions like "Why
do you use a 2-compartment PK model when the human body has so many
compartments?".

We will see!

Thanks,
Pavel



On Wed, Jan 15, 2014 at 01:19 AM, lgibian...@quantpharm.com wrote:

> Pavel,
> I think one can use equation
> DADT(2)=C-K0*A(2)
>
> where C is the drug concentration. When K0=0, A2 is the cumulative AUC.
> When K0>0, A2 represents something like the AUC for the interval prior to
> the current time. The length of the interval is proportional to 1/K0 (and
> equal to infinity when K0=0). Conceptually, K0 is the rate of "AUC
> elimination" from the system. PD can then be made dependent on A2, and the
> model would select the optimal value of K0. One interesting case for
> understanding the concept is when C is constant. Then A2=C/K0, while the
> AUC over some interval TAU is AUC=C*TAU. So roughly, A2 can be interpreted
> as the AUC over an interval of 1/K0. Leonid
>
>
> Original email:
> -
> From: Pavel Belo non...@optonline.net
> Date: Tue, 14 Jan 2014 13:45:18 -0500 (EST)
> To: robert.ba...@iconplc.com, nmusers@globomaxnm.com
> Subject: [NMusers] backward integration from t-a to t
>
>
>
>
>
> Dear Robert,
>
>
>
>
>
> Efficacy is frequently considered a function of AUC. (AUC is just an
> integral. It is obvious how to calculate AUC in any software that can
> solve ODEs.) A disadvantage of this model of efficacy is that the effect
> is irreversible, because the AUC of concentration can only increase; it
> cannot decrease. In many cases, a more meaningful model is one where AUC
> is calculated from time t-a to t (a kind of "moving average"), where t is
> time in the system of differential equations (variable T in NONMEM). There
> are 2 obvious ways to calculate AUC(t-a, t). The first is to do backward
> integration, which looks like a hard and resource-consuming way for
> NONMEM. The second one is to keep in memory the AUC for all time points
> used during the integration and calculate AUC(t-a,t) as AUC(t) - AUC(t-a),
> where AUC(t-a) can be interpolated using the two closest time points below
> and above t-a.
>
>
>
> Is there a way to access AUC for the past time points from the integration
> routine? It seems like an easy thing to do.
>
>
> Kind regards,
>
>
> Pavel
>
>
> 
> mail2web - Check your email from the web at
> http://link.mail2web.com/mail2web
>
>
>


myhosting.com - Premium Microsoft® Windows® and Linux web and application
hosting - http://link.myhosting.com/myhosting




RE: [NMusers] backward integration from t-a to t

2014-01-14 Thread lgibian...@quantpharm.com
Pavel,
I think one can use the equation

DADT(2)=C-K0*A(2)

where C is the drug concentration. When K0=0, A2 is the cumulative AUC. When
K0>0, A2 represents something like the AUC for the interval prior to the
current time. The length of the interval is proportional to 1/K0 (and equal
to infinity when K0=0). Conceptually, K0 is the rate of "AUC elimination"
from the system. PD can then be made dependent on A2, and the model would
select the optimal value of K0. One interesting case for understanding the
concept is when C is constant. Then A2=C/K0, while the AUC over some
interval TAU is AUC=C*TAU. So roughly, A2 can be interpreted as the AUC over
an interval of 1/K0.
Leonid
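Leonid's constant-concentration example is easy to verify analytically: with C constant, DADT(2)=C-K0*A(2) has the closed-form solution A2(t) = (C/K0)(1 - exp(-K0*t)). A minimal Python check with arbitrary illustrative values:

```python
import math

C, K0 = 2.0, 0.25   # illustrative constant concentration and "AUC elimination" rate

def a2(t):
    """Closed-form solution of dA2/dt = C - K0*A2 with A2(0) = 0."""
    return (C / K0) * (1.0 - math.exp(-K0 * t))

print(a2(60.0))   # approaches C/K0 = 8.0, i.e. AUC over a window of roughly 1/K0 = 4
```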


Original email:
-
From: Pavel Belo non...@optonline.net
Date: Tue, 14 Jan 2014 13:45:18 -0500 (EST)
To: robert.ba...@iconplc.com, nmusers@globomaxnm.com
Subject: [NMusers] backward integration from t-a to t





Dear Robert,




 


Efficacy is frequently considered a function of AUC.  (AUC is just an
integral. It is obvious how to calculate AUC in any software that can
solve ODEs.)  A disadvantage of this model of efficacy is that the effect
is irreversible, because the AUC of concentration can only increase; it
cannot decrease.  In many cases, a more meaningful model is one where AUC
is calculated from time t-a to t (a kind of "moving average"), where t is
time in the system of differential equations (variable T in NONMEM).
There are 2 obvious ways to calculate AUC(t-a, t).  The first is to do
backward integration, which looks like a hard and resource-consuming way
for NONMEM.  The second one is to keep in memory the AUC for all time
points used during the integration and calculate AUC(t-a,t) as
AUC(t) - AUC(t-a), where AUC(t-a) can be interpolated using the two closest
time points below and above t-a.

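Pavel's second approach can be prototyped outside NONMEM. The sketch below is plain Python/NumPy with an assumed mono-exponential concentration profile: it accumulates AUC(t) by the trapezoidal rule and obtains AUC(t-a, t) as AUC(t) - AUC(t-a), interpolating the past value linearly.

```python
import numpy as np

def windowed_auc(t_grid, conc, a):
    """AUC(t-a, t) = AUC(t) - AUC(t-a), with AUC(t-a) linearly interpolated."""
    # cumulative trapezoidal AUC at each grid point
    cum = np.concatenate([[0.0], np.cumsum(np.diff(t_grid) * (conc[1:] + conc[:-1]) / 2)])
    past = np.interp(t_grid - a, t_grid, cum, left=0.0)  # AUC(t-a); zero before t=a
    return cum - past

t = np.linspace(0.0, 10.0, 1001)
c = np.exp(-0.5 * t)              # assumed mono-exponential concentration
w = windowed_auc(t, c, a=2.0)
# unlike cumulative AUC, the windowed AUC rises, peaks, and then declines
print(w[-1] < w.max())            # -> True
```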

 


Is there a way to access AUC for the past time points from the integration
routine? It seems like an easy thing to do.

Kind regards,
Pavel




RE: [NMusers] Time-varing covariate

2013-08-24 Thread lgibian...@quantpharm.com
I think there was a typo; should it be:
IF(TIME.GT.OTIM) CCOV=OCOV+(T-OTIM)*(COV-OCOV)/(TIME-OTIM) ;CCOV is linear
 ?
Leonid
Original email:
-
From: Mats Karlsson mats.karls...@farmbio.uu.se
Date: Sat, 24 Aug 2013 10:02:00 +0200
To: william.s.den...@pfizer.com, ellen.siwei...@gmail.com, 
nmusers@globomaxnm.com
Subject: RE: [NMusers] Time-varing covariate


Dear Bill and Siwei,

 

Although the thought in Bill's reply is right, I think there is an error in
the code. NONMEM by default uses the present values to update from the
previous time. 

Further, it is possible to do this interpolation on the fly in the model
file without changes to the data set. 

 

$INPUT ID TIME DV COV ;Cov is time-varying covariate

$PK

IF(NEWIND.NE.2) OTIM=0  ;initialize variable to store old time

IF(NEWIND.NE.2) OCOV=0  ;initialize variable to store old covariate value

 

$DES

CCOV=OCOV

IF(TIME.GT.OTIM) CCOV=OCOV+(T-TIME)*(COV-OCOV)/(TIME-OTIM) ;CCOV is linear
interpolation between observed covariate values

 

$ERROR

OCOV =COV   ;store previous covariate value

OTIM  =TIME  ;store previous time

 

(NB haven't tested the code).

Best regards,

Mats

Mats Karlsson, PhD

Professor of Pharmacometrics

 

Dept of Pharmaceutical Biosciences

Faculty of Pharmacy

Uppsala University

Box 591

75124 Uppsala

 

Phone: +46 18 4714105

Fax + 46 18 4714003

 
www.farmbio.uu.se/research/researchgroups/pharmacometrics/

 

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On
Behalf Of Denney, William S.
Sent: 23 August 2013 20:09
To: siwei Dai; nmusers@globomaxnm.com
Subject: RE: [NMusers] Time-varing covariate

 

Hi Siwei,

 

If you are using an algebraic model (i.e. no differential equations), then
you can simply include it in your equation:

 

e.g. assuming that SBP is systolic blood pressure in your original data set:

 

EFF=THETA(1)+SBP*THETA(2)

 

If you have a differential equation model and you want the time varying
covariate to have an effect that is not a step change, you will need to
interpolate the covariate.  Within your data set, you'll need a column for
the next time and the next value of the time varying covariate.  Using the
same assumption that you have SBP in your data set as your time varying
covariate, you will want to make two new columns to allow for interpolation:

 

ID  TIME  NTIME  SBP  NSBP  DV

1   0     1      110  115   5

1   1     2      115  112   3

1   2     4      112  108   4

 

Then to use your parameter, you will need code like the following in your
$DES section to linearly interpolate:

 

$DES

...

;; Current SBP

CSBP = (NSBP - SBP)/(NTIME - TIME) * (T - TIME) + SBP

...

 

The parameter T is the current time for the differential equation solver
which will be somewhere between TIME and NTIME.  TIME is an important column
name for NONMEM.
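Bill's $DES line is ordinary linear interpolation. A minimal Python equivalent, using the values from the second record of the example data set (a sketch, not NONMEM code):

```python
def interp_cov(t, time, ntime, cov, ncov):
    """Linearly interpolate a covariate between (TIME, COV) and (NTIME, NCOV)."""
    return (ncov - cov) / (ntime - time) * (t - time) + cov

# record TIME=1, NTIME=2, SBP=115, NSBP=112; solver time T=1.5
print(interp_cov(1.5, 1.0, 2.0, 115.0, 112.0))  # -> 113.5
```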

 

Thanks,

 

Bill

 

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On
Behalf Of siwei Dai
Sent: Friday, August 23, 2013 12:01 PM
To: nmusers@globomaxnm.com
Subject: [NMusers] Time-varing covariate

 

Hi, Dear NMusers:

 

I want to add a time-varying covariate in my model; for example, blood
pressure or blood flow as covariates. But I am not sure how to do it. I have
seen some earlier threads discussing it, but they all use complicated
methods.

 

I am wondering if there are any new ways to do it in NM 7.2?  I see in the
user guide that EVID=4 can indicate physiological change. Is this what I
should use?

 

Thank you very much for any suggestions. 

 

Best regards,

 

Siwei








Re: [NMusers] Question about handling BLOQ data with mixture model

2012-05-25 Thread lgibian...@quantpharm.com
Andy,
It was just "comment to the comment" where there was a discrepancy.
Original 
values do sum to one, so this could not be the reason for the model
problem. 
Leonid


Original Message:
-
From: Andy Stein andy.st...@gmail.com
Date: Fri, 25 May 2012 14:45:07 -0400
To: lgibian...@quantpharm.com, nmusers@globomaxnm.com, yapingz2...@gmail.com
Subject: Re: [NMusers] Question about handling BLOQ data with mixture model


I wanted to follow up on the comments to Yaping's email. First, the
three probabilities below from the original code do in fact sum to 1.

   P(1)=THETA(8)/100

   P(2)=(1-THETA(8)/100)*THETA(9)/1000

   P(3)=(1-THETA(8)/100)*(1-THETA(9)/1000)


Note that: P(2) + P(3) = 1-THETA(8)/100


And thus P(1) + P(2) + P(3) = 1
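Andy's algebra can be checked numerically; the Python sketch below uses the initial estimates from the $THETA records quoted later in the thread (3.78 and 910) as illustrative values.

```python
def probs(th8, th9):
    """Mixture probabilities as parameterized in the original control stream."""
    p1 = th8 / 100
    p2 = (1 - th8 / 100) * th9 / 1000
    p3 = (1 - th8 / 100) * (1 - th9 / 1000)
    return p1, p2, p3

p = probs(3.78, 910.0)
print(sum(p))   # P(2)+P(3) collapse to 1-THETA(8)/100, so the total is 1
```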


Also, the model worked completely fine when the BLOQ part of the code was
left out and only the mixture was modeled.  That is what led us to think
that the combination of BLOQ with a mixture was causing the problem.


Andy

On Fri, May 25, 2012 at 1:07 PM, lgibian...@quantpharm.com <
lgibian...@quantpharm.com> wrote:

> The probabilities should sum to 1. A more standard way would be to use
>
> P(1) = 1/(1+THETA(8)+THETA(9))
>
> P(2) = THETA(8)/(1+THETA(8)+THETA(9))
>
> P(3) = THETA(9)/(1+THETA(8)+THETA(9))
>
> where THETA(8) and THETA(9) are any positive numbers
> Regards
> Leonid
>
> Original Message:
> -
> From: Martin Bergstrand martin.bergstr...@farmbio.uu.se
> Date: Fri, 25 May 2012 22:59:45 +0700
> To: yapingz2...@gmail.com, nmusers@globomaxnm.com
> Subject: RE: [NMusers] Question about handling BLOQ data with mixture
model
>
>
> Dear Yaping,
>
>
>
> I cannot see that you need to make any particular consideration, because
> you are applying a mixture model. CUMD is dependent on IPRED, which in its
> turn is dependent on the assigned mixture. That should be enough.
>
>
>
> However, I spot what must be an error in your way of defining your mixture
> probabilities. As it is now, your total probability does not sum up to 1.
> Why don't you parameterize it this way:
>
>
>
> $MIX
>
>   NSPOP=3
>
>   P(1) = THETA(8)/100
>
>   P(2) = (1-THETA(8)/100)*THETA(9)/100
>
>   P(3) = 1-THETA(8)/100 -THETA(9)/100
>
>
>
> $THETA (0, 3.78, 100)  ; PMIX1
>
> $THETA (80, 91, 100)   ; PMIX2/(1-PMIX1)
>
>
>
> I have kept the division of THETAs by 100 since I assume that you want
> estimates in %. However, the division by 1000 did not make sense to me,
> despite the correctly assigned THETA boundaries.
>
>
>
> Finally, a word of caution: be careful with the use of NONMEM reserved
> variables such as T (integration time in $DES) and F (default model
> prediction with some ADVANs). In my experience, things can go wrong when
> you use them outside the way they were intended; conflicts can occur.
>
>
>
> Kind regards,
>
>
>
> Martin Bergstrand, PhD
>
> Pharmacometrics Research Group
>
> Dept of Pharmaceutical Biosciences
>
> Uppsala University
>
> Sweden
>
> martin.bergstr...@farmbio.uu.se
>
>
>
> Visiting scientist:
>
> Mahidol-Oxford Tropical Medicine Research Unit,
>
> Bangkok, Thailand
>
>
>
>
>
> From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
> On
> Behalf Of Yaping Zhang
> Sent: den 25 maj 2012 02:04
> To: nmusers@globomaxnm.com
> Subject: [NMusers] Question about handling BLOQ data with mixture model
>
>
>
> Hello NMUsers,
>
>
>
> I am trying to analyze BLOQ data using the M3 method (Stuart Beal, Ways to
> Fit a PK Model with Some Data Below the Quantification Limit, 2001). The
> model I have is a mixture model to describe PD response of three
> subpopulations. The complete control stream is pasted below.
>
>
>
> I have implemented just the mixture model (no BLOQ) and just the BLOQ
> error model (no mixture), and each works fine with NONMEM 6. But the run
> crashed immediately with NONMEM 6 when including both BLOQ and the mixture
> model.
>
>
>
> I am wondering if I need to account for the mixture in the section of the
> code below when putting a mixture model together with the BLOQ error model
>
> IF (BLOQ.EQ.1) THEN
>
> F_FLAG=1
>
> Y =CUMD + something based on MIXNUM?
>
> ENDIF
>
>
>
> Any ideas are gratefully received!
>
>
>
> Many thanks,
>
> Yaping
>
>
>
>
>
> $PROB AMN107A2303
>
>
>
> $INPUT NUM=DROP STUD=DROP SUBJ=DROP ID AGE0=DROP SEX=DROP RACE=DROP
>
> DART=DROP ARM ACTT=DROP POP PPK=DROP COUN=DROP
>
> SOK OTIM=DROP TIME TVIS=DROP
>
> AMT=DROP DOSE=DROP SCHD=DROP AUC=DROP CMIN=DROP
>
> ODV=DROP MDV DV BLOQ STY=DROP EVDT=DROP
>

RE: [NMusers] Question about handling BLOQ data with mixture model

2012-05-25 Thread lgibian...@quantpharm.com
The probabilities should sum to 1. A more standard way would be to use

P(1) = 1/(1+THETA(8)+THETA(9)) 

P(2) = THETA(8)/(1+THETA(8)+THETA(9)) 

P(3) = THETA(9)/(1+THETA(8)+THETA(9)) 

where THETA(8) and THETA(9) are any positive numbers
Regards
Leonid
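Leonid's parameterization satisfies the constraint by construction, since the three terms share the denominator 1+THETA(8)+THETA(9). A quick Python sketch with arbitrary positive example values:

```python
def mix_probs(th8, th9):
    """Normalized mixture probabilities; valid for any positive th8, th9."""
    denom = 1.0 + th8 + th9
    return (1.0 / denom, th8 / denom, th9 / denom)

p = mix_probs(0.04, 9.5)   # arbitrary positive values
print(sum(p))              # always 1 up to rounding, with every P(i) in (0, 1)
```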

Original Message:
-
From: Martin Bergstrand martin.bergstr...@farmbio.uu.se
Date: Fri, 25 May 2012 22:59:45 +0700
To: yapingz2...@gmail.com, nmusers@globomaxnm.com
Subject: RE: [NMusers] Question about handling BLOQ data with mixture model


Dear Yaping,

 

I cannot see that you need to make any particular consideration, because you
are applying a mixture model. CUMD is dependent on IPRED, which in its turn
is dependent on the assigned mixture. That should be enough.

 

However, I spot what must be an error in your way of defining your mixture
probabilities. As it is now, your total probability does not sum up to 1.
Why don't you parameterize it this way:

 

$MIX

   NSPOP=3

   P(1) = THETA(8)/100

   P(2) = (1-THETA(8)/100)*THETA(9)/100

   P(3) = 1-THETA(8)/100 -THETA(9)/100

 

$THETA (0, 3.78, 100)  ; PMIX1

$THETA (80, 91, 100)   ; PMIX2/(1-PMIX1)

 

I have kept the division of THETAs by 100 since I assume that you want
estimates in %. However, the division by 1000 did not make sense to me,
despite the correctly assigned THETA boundaries.

 

Finally, a word of caution: be careful with the use of NONMEM reserved
variables such as T (integration time in $DES) and F (default model
prediction with some ADVANs). In my experience, things can go wrong when you
use them outside the way they were intended; conflicts can occur.

 

Kind regards,

 

Martin Bergstrand, PhD

Pharmacometrics Research Group 

Dept of Pharmaceutical Biosciences

Uppsala University

Sweden

martin.bergstr...@farmbio.uu.se

 

Visiting scientist:

Mahidol-Oxford Tropical Medicine Research Unit, 

Bangkok, Thailand

 

 

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On
Behalf Of Yaping Zhang
Sent: den 25 maj 2012 02:04
To: nmusers@globomaxnm.com
Subject: [NMusers] Question about handling BLOQ data with mixture model

 

Hello NMUsers,

 

I am trying to analyze BLOQ data using the M3 method (Stuart Beal, Ways to
Fit a PK Model with Some Data Below the Quantification Limit, 2001). The
model I have is a mixture model to describe PD response of three
subpopulations. The complete control stream is pasted below.

 

I have implemented just the mixture model (no BLOQ) and just the BLOQ error
model (no mixture), and each works fine with NONMEM 6. But the run crashed
immediately with NONMEM 6 when including both BLOQ and the mixture model.

 

I am wondering if I need to account for the mixture in the section of the
code below when putting a mixture model together with the BLOQ error model

IF (BLOQ.EQ.1) THEN

F_FLAG=1

Y =CUMD + something based on MIXNUM?

ENDIF

 

Any ideas are gratefully received!

 

Many thanks,

Yaping

 

 

$PROB AMN107A2303

 

$INPUT NUM=DROP STUD=DROP SUBJ=DROP ID AGE0=DROP SEX=DROP RACE=DROP 

DART=DROP ARM ACTT=DROP POP PPK=DROP COUN=DROP 

SOK OTIM=DROP TIME TVIS=DROP 

AMT=DROP DOSE=DROP SCHD=DROP AUC=DROP CMIN=DROP 

ODV=DROP MDV DV BLOQ STY=DROP EVDT=DROP

 

$DATA ../data/AMN2303_ENEST.csv ; currently modified with matlab and saved
with oocalc

 

IGNORE=@

IGNORE=(MDV.EQ.1)

IGNORE=(ID.EQ.66, ID.EQ.92, ID.EQ.335, ID.EQ.346, ID.EQ.348, ID.EQ.416,
ID.EQ.496, ID.EQ.527, ID.EQ.762, ID.EQ.790)

 

$PRED

mu   =THETA(1)*EXP(ETA(1))

AA   =THETA(2)*EXP(ETA(2))

alpha   =THETA(3)*EXP(ETA(3))

BB   =THETA(4)*EXP(ETA(4))

beta =THETA(5)*EXP(ETA(5))

InCC=THETA(6)+ETA(6)

gamma=THETA(7)*EXP(ETA(7))

 

T = TIME

IF (TIME.LE.0) T = 0

 

EST=MIXEST

IF (MIXNUM.EQ.1) F =AA*EXP(mu*T/8766)

IF (MIXNUM.EQ.2) F =AA*EXP(alpha*T/8766)+BB*EXP(beta*T/8766)

IF (MIXNUM.EQ.3) F=AA*EXP(alpha*T/8766)+BB*EXP(beta*T/8766)+EXP(InCC)*EXP(gamma*T/8766)

 

PROP=THETA(10)

W=SQRT(PROP*PROP)

 

IPRED=-2.8

IF(F.GT.0)IPRED =LOG10(F)

LLOQ=-2.5

DUM=(LLOQ-IPRED)/W

CUMD=PHI(DUM)

 

IF (BLOQ.EQ.0) THEN

F_FLAG=0

Y =IPRED+W*ERR(1)

ENDIF

 

IF (BLOQ.EQ.1) THEN

F_FLAG=1

Y =CUMD

ENDIF

 

$MIX

   NSPOP=3

   P(1)=THETA(8)/100

   P(2)=(1-THETA(8)/100)*THETA(9)/1000

   P(3)=(1-THETA(8)/100)*(1-THETA(9)/1000)

 

$THETA  (-10,-0.57,0); mu THETA(1)

$THETA  (0.0001,50.7,100); AA THETA(2)

$THETA  (-100,-14.2,0)  ; alpha THETA(3)

$THETA  (0.0001,0.196,10); BB THETA(4)

$THETA  (-10,-0.678,0)  ; beta THETA(5)

$THETA  (-100,-6.92,0)  ; InCC THETA(6)

$THETA  (0, 2.15)   ; gamma THETA(7)

$THETA  (0, 3.78, 100)   ; THETA(8)

$THETA  (800, 910, 1000)  ; THETA(9)

$THETA  (0.001,0.104, 5)   ; ERR THETA(10)

 

$OMEGA 0.001 FIX ; mu

$OMEGA 0.767 ; AA

$OMEGA 0.236

RE: [NMusers] Successful minimization and covariance

2012-05-24 Thread lgibian...@quantpharm.com
Ayyappa,
Since you already have bootstrap data, you may check a few things:
1. Take any of the runs with a successful covariance step and look at the
RSEs. Are there any exceeding, say, 60-70%? If yes, these parameters are not
supported by the data. If all RSEs look good, this would support the
validity of the model.
2. Compute eigenvalues of the correlation matrix of the bootstrap parameter 
estimates.  Roughly, the values above 1000 (for the ratio of the maximum to 
minimum of these eigenvalues) may indicate over-parameterization.
3. Create a scatter-plot matrix of bootstrap parameter estimates (N by N
matrix 
of plots where plot i-j is the 1000 parameter-i values plotted against 1000 
parameter-j values, or some variant of this diagnostics). If any of the 
parameters are strongly correlated, you will immediately see it on these
plots.
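Item 2 can be sketched in a few lines of NumPy. The example below uses mock bootstrap estimates (not real data) in which one parameter is nearly a copy of another; the eigenvalue ratio of the correlation matrix then far exceeds the rough threshold of 1000 mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
# mock bootstrap: 1000 replicates of 4 parameters, the 4th almost a copy of the 1st
base = rng.normal(size=(1000, 3))
est = np.column_stack([base, base[:, 0] + 1e-3 * rng.normal(size=1000)])

corr = np.corrcoef(est, rowvar=False)   # correlation matrix of the estimates
eig = np.linalg.eigvalsh(corr)
ratio = eig.max() / eig.min()
print(ratio > 1000)                     # -> True: flags over-parameterization
```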

If none of these diagnostics reveals any suspicious behavior, I would accept
the model as is.

Another place to look is outliers. A few points with unrealistic
concentrations may lead to minimization or COV failure. In your case, this
is not likely, since many bootstrap runs do not converge.

Yet another place to look is your matrix of random effects: too many random
effects may lead to non-convergence if these effects are not supported by
the data.

Do you have oral, IV, or mixed dosing? Was it done with differential
equations (ADVAN 6, 8, 9, 13) or with an exact solution (ADVAN 3-4)? Were
there any covariate effects in the model, and if yes, was there a sufficient
range of data to support these effects?

Leonid



Original Message:
-
From: Ayyappa Chaturvedula chaturvedul...@mercer.edu
Date: Thu, 24 May 2012 08:33:21 -0400
To: nmusers@globomaxnm.com
Subject: [NMusers] Successful minimization and covariance


Dear Group,
This is a topic that has been discussed, and different schools of thought
exist, to my knowledge. But I want to restate my case and get some opinions.
The question is how important it is to have successful minimization and
covariance if the diagnostics make sense. I have developed a two-compartment
model with Phase 3 trial data; minimization was successful, but the
covariance step was not. I went ahead and did a 1000-run bootstrap to get
confidence intervals for the parameters. 60% of the runs did not minimize
successfully, and many others did not have a successful covariance step. I
put together CIs from the runs with successful minimization and also from
all 1000 runs. There is no difference in the parameter estimates or the
confidence intervals (less than 5% change in numbers). The model diagnostics
look good, including VPC, NPDE plots, basic GOF, and a simulation that
explains another trial's data. Now, my question is: in this particular case,
do I have to worry further about making the covariance step successful and
increasing the number of runs that minimize successfully in the bootstrap,
even though I cannot see much difference in the parameter estimates and
diagnostics? My bottom line is not going to change in any way. I appreciate
your expert opinions.

Regards,
Ayyappa






Re: [NMusers] RE: Successful minimization and covariance

2012-05-24 Thread lgibian...@quantpharm.com
Dennis,
Yes, I've seen it as well. I also noticed that starting from the solution
(or very close to the solution) sometimes leads to non-convergence or
failure of the covariance step. In these cases, starting from initial
estimates randomly changed by about 10% from the solution may fix the
problem.
Leonid

Original Message:
-
From: Fisher Dennis fis...@plessthan.com
Date: Thu, 24 May 2012 08:53:30 -0700
To: nmusers@globomaxnm.com, n.holf...@auckland.ac.nz
Subject: Re: [NMusers] RE: Successful minimization and covariance


Nick

In fairness, Stu Beal advocated strongly for the position taken by Gianluca.
However, my own experience is that runs that fail to converge sometimes
(often?) converge after minor changes to the initial estimates.

Dennis

Dennis Fisher MD
P < (The "P Less Than" Company)
Phone: 1-866-PLessThan (1-866-753-7784)
Fax: 1-866-PLessThan (1-866-753-7784)
www.PLessThan.com

On May 24, 2012, at 7:51 AM, Nick Holford wrote:

> Gianluca,
> 
> What is your experimental evidence to allow you to conclude that failure
> of convergence is due to over-parameterization (instead of other things,
> like too large a value of NSIG)?
> 
> Nick
> 
> On 24/05/2012 4:34 p.m., Nucci, Gianluca wrote:
>> 
>> I would not worry about the validity of your bootstrapped CI (and I
>> would include all the runs), but I think you have to worry that your
>> model is seriously over-parameterized if 60% of bootstrapped runs fail to
>> converge. It does not mean it is a bad model – just that the data do not
>> permit fitting all the parameters; you should consider fixing something,
>> using prior information, or adding data from more intensively sampled
>> studies.
>> Best Regards
>> 
>> Gianluca
>> 
>> Gianluca Nucci, PhD
>> 
>> Clinical Pharmacology
>> 
>> Pfizer PharmaTherapeutics R&D
>> 
>> 620 Memorial Drive,
>> 
>> Cambridge, MA 02139
>> 
>> Room # 464
>> 
>> Office 617-551-3525
>> 
>> Mobile 860-405-4824
>> 
>> Fax 860-686-8225
>> 
>> *From:*owner-nmus...@globomaxnm.com
[mailto:owner-nmus...@globomaxnm.com] *On 
Behalf Of *Ayyappa Chaturvedula
>> *Sent:* Thursday, May 24, 2012 8:33 AM
>> *To:* nmusers@globomaxnm.com
>> *Subject:* [NMusers] Successful minimization and covariance
>> 
>> Dear Group,
>> 
>> This is a topic that has been discussed and different schools of
thinking 
exist to my knowledge. But, I want to restate my case and get some
opinions. The 
question is about how important to have successful minimization and
covariance 
if diagnostics make sense. I have developed a two compartment model with a
Phase 
3 trial data and minimization was successful but covariance step was not. I
went 
ahead and did a 1000 run bootstrap and wanted to get the confidence
intervals of 
parameters. There are 60% runs that are not successfully minimized and many 
other do not have covariance step successful. I put together CI from the
runs 
that have successful minimization and also including all 1000 runs. There
is no 
difference in the parameter estimate or the confidence interval (less than
5% 
change in numbers). The model diagnostics look good including VPC, NPDE
plots, 
basic gof and a simulation to explain another trial data. Now, my question
is in 
this particular case do I have to worry further to make the successful 
covariance step and increase the number of runs that gets successfully
minimized 
in the bootstrap even though I cannot see much difference in the parameter 
estimates, diagnostics? My bottom line is not going to change in anyway. I 
appreciate your expert opinions.
>> 
>> Regards,
>> 
>> Ayyappa
>> 
> 
> -- 
> Nick Holford, Professor Clinical Pharmacology
> 
> First World Conference on Pharmacometrics, 5-7 September 2012
> Seoul, Korea http://www.go-wcop.org
> 
> Dept Pharmacology&  Clinical Pharmacology, Bldg 505 Room 202D
> University of Auckland,85 Park Rd,Private Bag 92019,Auckland,New Zealand
> tel:+64(9)923-6730 fax:+64(9)373-7090 mobile:+64(21)46 23 53
> email: n.holf...@auckland.ac.nz
> http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
> 
> 
> 







Re: [NMusers] Successful minimization and covariance

2012-05-24 Thread lgibian...@quantpharm.com
Please, let the group know if/how you resolve the problem
Leonid

Original Message:
-
From: Ayyappa Chaturvedula chaturvedul...@mercer.edu
Date: Thu, 24 May 2012 12:27:03 -0400
To: lgibian...@quantpharm.com
Subject: Re: [NMusers] Successful minimization and covariance


Thank you for the suggestions. I will work on those. It is an orally
administered drug. We do have a good spread of data points to support a
2-compartment model, and prior knowledge of the drug supports this.


On May 24, 2012, at 12:21 PM, "lgibian...@quantpharm.com" 
 wrote:

> Ayyappa,
> Since you already have bootstrap data, you may check few things:
> 1. Take any of the runs which has successful covariance step, and look on
> the 
> RSEs. Are there any exceeding, say 60-70%? If yes, these parameters are
not 
> supported by the data. If all RSEs looks good, this would support validity
> of 
> the model
> 2. Compute eigenvalues of the correlation matrix of the bootstrap
parameter 
> estimates.  Roughly, the values above 1000 (for the ratio of the maximum
to 
> minimum of these eigenvalues) may indicate over-parameterization.
> 3. Create a scatter-plot matrix of bootstrap parameter estimates (N by N
> matrix 
> of plots where plot i-j is the 1000 parameter-i values plotted against
1000 
> parameter-j values, or some variant of this diagnostics). If any of the 
> parameters are strongly correlated, you will immediately see it on these
> plots.
> 
> If none of these diagnostics reveals any suspicious behavior, I would
> accept the 
> model as is.
> 
> Another place to look is outliers. Few points with unrealistic
> concentrations 
> may lead to minimization or COV failure. In your case, this is not likely
> since 
> many bootstrap runs do not converge.
> 
> Yet another place to look is your matrix of random effects: too many
> random effects may lead to non-convergence if these effects are not
> supported by the data.
> 
> Do you have oral, IV, or mixed dosing? Was it done with differential
> equations (ADVAN 6, 8, 9, 13) or with an exact solution (ADVAN 3-4)? Were
> there any covariate effects in the model, and if yes, was there a
> sufficient range of data to support these effects?
> 
> Leonid
> 
> 
> 
> Original Message:
> -
> From: Ayyappa Chaturvedula chaturvedul...@mercer.edu
> Date: Thu, 24 May 2012 08:33:21 -0400
> To: nmusers@globomaxnm.com
> Subject: [NMusers] Successful minimization and covariance
> 
> 
> Dear Group,
> This is a topic that has been discussed and different schools of thinking
> exist 
> to my knowledge.  But, I want to restate my case and get some opinions. 
> The 
> question  is about how important to have successful minimization and
> covariance 
> if diagnostics make sense.  I have developed  a two compartment model with
> a 
> Phase 3 trial data and minimization was successful but covariance step was
> not.  
> I went ahead and did a 1000 run bootstrap and wanted to get the
confidence 
> intervals of parameters.  There are 60% runs that are not successfully
> minimized 
> and many other do not have covariance step successful.   I put together CI
> from 
> the runs that have successful minimization and also including all 1000
> runs.  
> There is no difference in the parameter estimate or the confidence
interval 
> (less than 5% change in numbers).  The model diagnostics look good 
> including 
> VPC, NPDE plots, basic gof and a simulation to explain another trial
data. 
> Now, 
> my question is in this particular case do I have to worry further to make
> the 
> successful covariance step and increase the number of runs that gets 
> successfully minimized in  the bootstrap even though I cannot see much 
> difference in the parameter estimates, diagnostics?  My bottom line is not
> going 
> to change in anyway.  I appreciate your expert opinions.
> 
> Regards,
> Ayyappa
> 
> 
> 
> 







Re: [NMusers] Question regarding Calculation Process in $DES BLOCK

2011-06-06 Thread lgibian...@quantpharm.com
Li,
I think you can avoid using MTIME (which is more useful for the ADVANs with
analytical solutions) and use something like:

$DES
T1 = T-24*INT(T/24)   ;convert time to 0-24 hr period
KIN = KIN3
IF(T1.LE.TMAX) KIN=KIN2 ;24 > TMAX > TMIN defined in PK block
IF(T1.LE.TMIN) KIN=KIN1
DADT(1)= KIN-KOUT*A(1)


Leonid
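The time-of-day logic in this $DES fragment can be checked outside NONMEM. The Python sketch below uses illustrative TMIN, TMAX, and KIN values; math.floor matches NM-TRAN's INT for the non-negative times used here.

```python
import math

def kin(t, tmin=6.0, tmax=14.0, kin1=1.0, kin2=2.0, kin3=0.5):
    """Step-wise circadian production rate, driven by the 0-24 h clock time."""
    t1 = t - 24.0 * math.floor(t / 24.0)   # T1 = T - 24*INT(T/24)
    if t1 <= tmin:
        return kin1                        # 0 to TMIN
    if t1 <= tmax:
        return kin2                        # TMIN to TMAX
    return kin3                            # TMAX to 24

print(kin(5.0), kin(10.0), kin(20.0), kin(29.0))  # -> 1.0 2.0 0.5 1.0
```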

Original Message:
-
From: Li Li lili.uf2...@gmail.com
Date: Fri, 3 Jun 2011 17:04:09 -0400
To: alisonboeckm...@fastmail.fm, luann.phill...@cognigencorp.com, 
pmdiderich...@wequantify.com, nmusers@globomaxnm.com
Subject: Re: [NMusers] Question regarding Calculation Process in $DES BLOCK


Hi,

Thank you for all your help and explanation. I tried MTIME, MNEXT, and
MPAST. However, I got a warning: (WARNING 80) $PK SETS MTIME BUT NOT MTDIFF.
WHEN AN ELEMENT OF MTIME IS RESET, THEN $PK SHOULD ALSO SET MTDIFF=1.

I would like to separate the 0-24 hr period into 3 parts, each represented
by a Kin equation.

Here is my codes:
$PK
TMIN = THETA(3)*EXP(ETA(3)); time at minimum
TGAP = THETA(4)*EXP(ETA(4))   ; time from Tmin to Tmax
MTIME(1) = TMIN
MTIME(2) = TMIN+TGAP
$DES
T1 = T-24*INT(T/24)   ;convert time to 0-24 hr period
FLAG1  = 1-MNEXT(1)   ; =1 only during 0-Tmin period
FLAG2  = MPAST(1)-MPAST(2) ; =1 only during Tmin-Tmax period
FLAG3  = 1-MNEXT(2)   ; =1 only during Tmax-24 period

DADT(1)= KIN1*FLAG1+KIN2*FLAG2+KIN3*FLAG3-KOUT*A(1)

Are there any mistakes in my code? I don't think I need to set MTDIFF=1, as
I don't want to reset MTIME for each individual; am I right?

For MNEXT(1), it will equal 1 until time passes Tmin, right?

Thanks.

Li Li



On Fri, Jun 3, 2011 at 2:44 PM, Alison Boeckmann <
alisonboeckm...@fastmail.fm> wrote:

> My comments are attached in file luann.txt
>
> On Wed, 01 Jun 2011 17:57 -0400, "Luann Phillips"
>   wrote:
> > Alison,
> >
> > Thank you for the additional information. Especially the part about $DES
> > computing at the event time in a data record for output to the $TABLE. I
> > would like to make sure that I understand a specific point correctly.
> >
> > During a step from t=T1 to t=T2, NONMEM may still take on a value of
> > t=T2+i (i=a tiny number) and compute the equations at this time point.
> > When it is done, it goes back and computes the values at t=T2 for output
> > to the table file.
> >
> > So if you have an expression like the following in your $DES block,
> > would you still need to set flags to 'help' processing along at t=T2.
> > (context of example from a previous note from Li Li)
> >
> > Example:
> > integrating from TIME=T1 to TIME=T2 (which = a multiple of 24)
> >
> > RM=THETA(1)
> >
> > $DES
> >
> > TS=T-24*INT(T/24)
> > KIN=TS*RM
> > DADT(1) = -KIN*A(1)
> >
> > This function for Kin creates a cusp at every multiple of 24. So the
> > limit of Kin as you approach a multiple of 24 from the left is a maximum
> > and as you approach from the right the limit of Kin=0 (similar if not
> > same situation that occurs for an absorption alag). So if an integral
> > step of size h (within the advance from T1 to T2) encompasses a multiple
> > of 24 should flags be set to allow the integration routine to use
> > TS=T- 24*INT(time at beginning of interval h/24) for the full step?
> >
> > Or in terms of an ALAG situation:
> >
> > What happens if NONMEM is taking a step (size h within an advance from
> > T1 to T2) that encompasses the value of ALAG?
> >
> > Does it use DADT(1) = 0*A(1) until t=ALAG and then switch to
> > DADT(1)=-Ka*A(1) at t >= ALAG (creating a cusp within the interval h)?
> >
> > or
> >
> > Does it use DADT(1) = 0 until t = the end of the h interval (even though
> > it's a small bit past ALAG) and then switch to DADT(1)=-Ka*A(1) at the
> > end of the h interval?
> >
> > I really appreciate that you take the time to continue expanding our
> > knowledge about NONMEM.
> >
> > Best Regards,
> >
> > Luann Phillips
> > Director PK/PD
> > Cognigen Corporation
> > (716) 633-3463 ext. 236
> >
> >
> >
> >
> >
> >
> > Alison Boeckmann wrote:
> > > Here is a little background on how it works.
> > >
> > > ADVAN routines such as ADVAN6 use a subroutine from third party
sources
> > > to do the integration. For example, ADVAN6 calls DVERK from IMSL,
> > > ADVAN13 calls LSODA, etc. These subroutines are the ones that call
DES.
> > > They call DES with various values of T during the integration
> > > ("advance") from T1 to T2. (T1 and T2 are beginning and ending event
> > > time. Typically, these are the times on a pair of event records.) The
> > > integrating subroutine may decide it has enough information after a
> call
> > > with a value of T that is not exactly T2 (might be a little less or a
> > > little more.)
> > >
> > > A change was made with NONMEM V so that, after an advance, DES is
> called
> > > by the ADVAN routine itself (i.e., $DES statements are evaluated) at
> the
> > > exact value of the event time.
> > >
> > > From the NONMEM V Supplemental Guide of March 1998 (guides/supp.pdf):