Re: [NMusers] Clarification on performance of mu referencing

2023-08-23 Thread Bill Denney
Hi Sébastien,

My understanding is that keeping all thetas additive makes mu referencing
more efficient.  The underlying mu-referencing math is then accurate for
linear matrix formulations.
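As a small illustration (my own sketch, not taken from the training material
quoted below): on the log scale the two parameterizations are the same
covariate model, but the second one makes MU_5 linear in the THETAs
themselves, which is the form the mu-referencing math expects.  A quick R
check with made-up values:

# Illustrative R check (made-up values): the power model equals a linear
# model on the log scale after reparameterizing THETA(1) as its logarithm.
theta1 <- 5    # hypothetical typical CL at AGE = 1
theta2 <- 0.4  # hypothetical age exponent
age <- 40
mu_power  <- log(theta1 * age^theta2)        # MU_5 = LOG(TVCL): nonlinear in THETA(1)
mu_linear <- log(theta1) + theta2 * log(age) # MU_5 = LTVCL: linear in log(THETA(1)) and THETA(2)
all.equal(mu_power, mu_linear)               # TRUE: same model, different parameterization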

My understanding is not perfect for this, so I'll defer to others if there
is a different answer.

Thanks,

Bill


On Wed, Aug 23, 2023, 8:11 AM Sébastien Bihorel <
sebastien.biho...@regeneron.com> wrote:

> Hi
>
> Training material distributed during an ICON training contains the
> following verbatim statements about mu referencing
>
> "
> Code already defined as Typical Value (TV), actual value (as recommended
> by Beal) are easy to convert:
> – TVCL = THETA(1)*AGE**THETA(2)
> – MU_5 = LOG(TVCL)
> – CL = EXP(MU_5+ETA(5))
>
> Even better, linear relationship of all THETAS with MU’s:
> – LTVCL = THETA(1) + THETA(2)*LOG(AGE)
> – MU_5 = LTVCL
> – CL = EXP(MU_5+ETA(5))
> "
>
> Can someone comment on why the second parameterization is "even better"?
> (besides the fact that estimates don't need to be bounded)
>
> Thanks
>
> Sebastien Bihorel
>


RE: [NMusers] Condition number

2022-11-30 Thread Bill Denney
Hi everyone,



This has been a great discussion!



Bob:  I’d like to clarify something that Pete, Matt, Ken, and Leonid were
discussing about how the covariance matrix is calculated.  I believe that
NONMEM rescales the values for estimation and then reverses the rescaling
for reporting.  Is the covariance matrix calculated on the rescaled values
or on the final parameter estimate values?



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Bauer, Robert
*Sent:* Wednesday, November 30, 2022 1:53 PM
*To:* 'nmusers@globomaxnm.com' 
*Subject:* RE: [NMusers] Condition number



Hello all:

Reports of non-positive definiteness or negative eigenvalues are generated
during the analysis of the R matrix (decomposition and inversion), which
occurs before the correlation matrix is constructed.  Often, this is
caused by numerical imprecision.  If the R matrix step fails, the $COV step
fails to produce a final variance-covariance matrix, and of course, does
not produce a correlation matrix.  If the R matrix inversion step succeeds,
the variance-covariance matrix and its correlation matrix are produced, and
the correlation matrix is then assessed for its eigenvalues.  So, both the
R matrix (first step) and correlation matrix (second step) are decomposed
and assessed.



Robert J. Bauer, Ph.D.

Senior Director

Pharmacometrics R&D

ICON Early Phase

731 Arbor way, suite 100

Blue Bell, PA 19422

Office: (215) 616-6428

Mobile: (925) 286-0769

robert.ba...@iconplc.com

www.iconplc.com







RE: [NMusers] relation of numerical error to gfortran?

2022-03-31 Thread Bill Denney
Hi Tong,



Floating point instability is generally a problem for models that are hard
to fit (3 days with full parallelization sounds hard to fit).  It’s
discussed in the intro7.pdf document in section I.70, “Stable Model
Development for Monte Carlo Methods”.



Given that you’re using SAEM, it seems like there could be a division by
zero or near zero when trying to choose the next point to estimate and the
values are going toward infinity or negative infinity.  I’ve experienced
this in the past with problems that are not well-defined around the OFV
minimum.  For instance, if a parameter has a near-flat derivative in the
likelihood surface, it can cause these issues.



It's not a specific problem to changing from Intel Fortran to gfortran; it
could have happened with almost any platform change (changing operating
system, changing compiler, changing operating system or compiler version,
changing architecture [AMD vs Intel], and even changing architectures
within a brand [Intel i9 vs Xeon, for instance]).



In my experience, this usually points to a model that’s not well-defined.
I would try to slowly build the model back up: fix parameters, fit the
model, and then free the parameters again one by one to see which
parameter causes the estimation to slow down dramatically or triggers the
problem.  Or, subset your data: start with one study and build up from
there; sometimes it helps to identify a problem individual or problem
study and see if a different parameterization could help with that data
(e.g. do you need a different residual error for Phase 1 vs Phase 2
studies?).
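For example, a rough sketch of the data-subsetting idea in R (the STUDYID
column and the file names are assumptions, not taken from your dataset):

# Hypothetical sketch: write one dataset per study so the same control stream
# can be fit to each subset and compared for run time and stability.
dat <- read.csv("alldata.csv")
for (study in unique(dat$STUDYID)) {
  sub <- dat[dat$STUDYID == study, ]
  write.csv(sub, sprintf("data_study_%s.csv", study),
            row.names = FALSE, quote = FALSE, na = ".")
}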



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Tong Lu
*Sent:* Thursday, March 31, 2022 5:43 PM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] relation of numerical error to gfortran?



Hello All,



Has anyone experienced numerical errors when using NONMEM with gfortran but
not intel fortran?



We have a complex PKPD model which gave us strange individual fits with
NM7.5.1 (compiled with gfortran) but not NM7.4.3 (compiled with Intel
Fortran). When using NM7.5.1, there is an unexpected and abrupt change in the
time profile in some individuals, which is likely caused by a numerical
integration issue. This is a model with a large percentage of BLQ data, and it
has an extremely long run time (3 days with full parallelization). We used
NOHABORT in the SAEM estimation.



Somehow we failed to install NM7.5.1 and PsN 5.2.6 properly on our Linux
cluster (CentOS) when using Intel Fortran, so no
apples-to-apples comparison can be made (NM 7.5.1 with gfortran vs. NM 7.5.1
with Intel Fortran). I am wondering if you have experienced cases
where gfortran gave an issue but Intel Fortran was fine, for the same NONMEM
version.



The changes made in NM7.5.1 could also be related to this issue. Any
thoughts or experience around this would be very helpful.



Thanks a lot for your insights

Best,

Tong


RE: [NMusers] Time-varying input/flexibility to change input rate on the fly

2021-08-06 Thread Bill Denney
Hi Robin,

I don't think that I've seen an update.  That said, what I had then was a
very specific need for an unusual drug.  I've only seen this type of
issue once where it seemed to need time-dependent effects.  Generally,
effects similar-- but not identical-- to what I was experiencing at the time
are better modeled with simpler systems.  For example, adsorption to
infusion sets can almost always be modeled as a decrease in bioavailability
and/or a lag time (it's not typically time-dependent behavior).

I would assume that loss of part of a tablet or detachment of a patch could
be simply modeled as random variability (or a fixed effect) on
bioavailability.  Random pump malfunction would depend on how it
malfunctioned, but I would be wary of trying to model random malfunctions with
this more complex time-dependent bioavailability unless you had data on the
malfunction mechanism-- in which case I would suggest putting it into the
dataset as a different dosing record.

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Robin Michelet
Sent: Friday, August 6, 2021 3:38 PM
To: nmusers@globomaxnm.com
Subject: [NMusers] Time-varying input/flexibility to change input rate on
the fly

Dear all,

I was wondering if any progress has been made on the topic raised originally
by Bill Denney in 2018:

https://www.mail-archive.com/nmusers@globomaxnm.com/msg06990.html

Are there any simpler ways in NM 7.5 to adapt input (e.g. infusion
rates) in $DES during the integration step without adapting the dataset
itself? I.e. to model the malfunctioning of an infusion pump (at random),
the loss of part of a tablet, or the detachment of a patch?

Thank you! I could not reply to the original thread, which is why I just
linked to it.

--
Dr. ir. Robin Michelet
Senior scientist

Freie Universitaet Berlin
Institute of Pharmacy
Dept. of Clinical Pharmacy & Biochemistry Kelchstr. 31
12169 Berlin
Germany
Phone:  + 49 30 838 50659
Fax:  + 49 30 838 4 50656
Email: robin.miche...@fu-berlin.de
www.clinical-pharmacy.eu
https://fair-flagellin.eu/



RE: [NMusers] Assessment of elimination half life of mAb

2021-04-29 Thread Bill Denney
Hi Pete,

I agree that it is hard to communicate.  I like the general idea of C90 you
propose.  I tend to choose something in between your and Leonid's answers,
when possible.  I target an answer to "when is the pharmacodynamic effect
<5% of the maximum or therapeutic effect?"  It does require more than just
the PK, though.  And for the PK-only answer, I agree with Leonid and you:
targeting some smallish fraction of Cmax is often reasonable for similar
communication.

What I find is that clinicians typically try to understand when the drug has
washed out.  The answer that many have reasonably latched onto is that the
drug is washed out when 5 half-lives have passed.  That suggests that about
3% (2^-5) of the effect remaining is generally agreed as being washed out.

To Niurys's question about a citation for this, I don't have one either.
It's just a rule-of-thumb that I have tended to use.

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Bonate, Peter
Sent: Thursday, April 29, 2021 12:01 PM
To: Leonid Gibiansky ; Niurys.CS

Cc: nmusers@globomaxnm.com
Subject: RE: [NMusers] Assessment of elimination half life of mAb

I've never really been happy with this.  It's an unsatisfactory solution.
You have a nonlinear drug.  Let's assume you have an approved drug.  It's
given at some fixed dose.  The clinician wants to know what is the drug's
half-life so they can washout their patient and start them on some other
therapy.  We go back to them and say, we can't give you a half-life because
it's a nonlinear drug, but once the kinetics become linear the half-life is
X hours.  That is a terrible answer.  Maybe we need to come up with a new
term, call it C90, the time it takes for Cmax to decline by 90%.  That we
can do.  We don't even need an analytical solution, we can eyeball it.  We
could even get fancy and do it in a population model.  C90 - the time it
takes for Cmax to decline 90% in 90% of patients.  Of course, for nonlinear
drugs, C90 only holds for that dose. Change in dose results in a new C90.
Just a thought.

pete



Peter Bonate, PhD
Executive Director
Pharmacokinetics, Modeling, and Simulation (PKMS) Clinical Pharmacology and
Exploratory Development (CPED) Astellas
1 Astellas Way, N3.158
Northbrook, IL  60062
peter.bon...@astellas.com
(224) 619-4901


It’s been a while since I’ve had something here, but here is a Dad joke.

Question:  Do you know why the math book was sad?
Answer:  Because it had so many problems


-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Leonid Gibiansky
Sent: Thursday, April 29, 2021 9:54 AM
To: Niurys.CS 
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] Assessment of elimination half life of mAb

I am not aware of any papers specifically addressing the half-life issue,
but there are tons of original papers and tutorials on TMDD; just search the
web.

Thanks
Leonid

On 4/29/2021 9:48 AM, Niurys.CS wrote:
> Dear Leonid,
>
> Many thanks for clearing up my doubt. Can you suggest any paper where I
> can go into this topic in more depth?
> Best,
> Niurys
>
> On 28/04/2021 19:34, "Leonid Gibiansky"  wrote:
>
> There is no such thing as half-life of elimination for the nonlinear
> drug. But one can compute something like half-life:
>
> 1. Half-life of the linear part (defined by CL, V1, V2, Q): this
> defines the  half-life at high doses/high concentrations when
> nonlinear elimination is saturated.
>
> 2. Washout time: for the linear drug, 5 half-lives can be used to
> define washout time. During this time, concentrations drop
> approximately 2^5=32 times. So one can simulate the desired dosing
> (single dose or steady state), find the time from Cmax to Cmax/32
> and call it washout time (or time to Cmax/64 to be conservative)
>
> Thanks
> Leonid
>
>
> On 4/28/2021 5:17 PM, Niurys.CS wrote:
>
> Dear all
> I need some help to assess the elimination half life of a
> monoclonal antibody.
> The model that describes the data is a QSS approximation of TMDD
> with Rmax constant. The model includes two binding processes of
> mAb to its target: in central and peripheral compartments.
> Is there any specific equation to calculate lambda z and the
> elimination half life for each of the TMDD approximations?
> Thanks
> Niurys
>



RE: [NMusers] Variability in Dosing Rate (and amount)

2020-12-15 Thread Bill Denney
Hi Paul,



Martin’s ideas are great ones.  My first thought on the “clever coding”
would be to treat it like bioavailability.  You should be sure that you
split the total between days rather than estimating each day completely
separately.  I would think of doing it in general like:



; Fraction of chow consumed on the first day

FDAY1 = 1/(1+EXP(-ETA(1)))

; Fraction of chow consumed on the second and third days if there are only
two days of dosing

IF (NDAYS.EQ.2) THEN

  FDAY2=1-FDAY1

  FDAY3=0

ENDIF

; Fraction of chow consumed on the third day

IF (NDAYS.EQ.3) THEN

  FDAY2=(1-FDAY1)*(1/(1+EXP(ETA(2))))

  FDAY3=1-FDAY1-FDAY2

ENDIF

IF (CHOWDAY.EQ.1) F1=FDAY1

IF (CHOWDAY.EQ.2) F1=FDAY2

IF (CHOWDAY.EQ.3) F1=FDAY3



What the code does is ensure that the total dose among days is not greater
than the total dose measured.  (Note that the code was typed directly into
the email—there could be a typo in it, but it gives the intent.)  It
assumes that the dataset has columns setup as:



* AMT: the total dose as measured across the 2 or 3 days (not divided by
the number of days)

* NDAYS: the number of days where AMT was measured (i.e. 2 if it was
measured over 2 days and 3 if it was measured over three days)

* CHOWDAY: The day number in the set of days when AMT is measured (1, 2, or
3)



It requires that your ETAs are set up for inter-occasion variability (you
can find many examples of that with a web search).  It also requires that
you have a measurement or two of PK between each of these doses so that the
ETA values are estimable.  If you do not have PK between two doses (e.g.
after the dark period for Day 1), you may not be able to estimate the ETA
for that dose.
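As a quick sanity check on the intent of that code (an R sketch, not run
through NONMEM), the nested logit keeps each day's fraction between 0 and 1
and forces the fractions to sum to 1:

# R sketch mirroring the NONMEM code above: for any eta values, the per-day
# fractions are each in (0, 1) and always sum to 1.
split_days <- function(eta1, eta2, ndays) {
  fday1 <- 1 / (1 + exp(-eta1))
  if (ndays == 2) {
    fday2 <- 1 - fday1
    fday3 <- 0
  } else {
    fday2 <- (1 - fday1) * (1 / (1 + exp(eta2)))
    fday3 <- 1 - fday1 - fday2
  }
  c(FDAY1 = fday1, FDAY2 = fday2, FDAY3 = fday3)
}
split_days(eta1 = 0.3, eta2 = -0.5, ndays = 3)       # three positive fractions
sum(split_days(eta1 = 0.3, eta2 = -0.5, ndays = 3))  # 1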



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Paul Hutson
*Sent:* Tuesday, December 15, 2020 10:50 AM
*To:* Martin Bergstrand 
*Cc:* nmusers@globomaxnm.com
*Subject:* RE: [NMusers] Variability in Dosing Rate (and amount)



Thank you, Martin.  That is a great idea, yet I think you give me too much
credit to expect “clever coding”.

I’ll report back.  Be well.

Paul



Paul Hutson, PharmD, BCOP

Professor

UWisc School of Pharmacy

T: 608.263.2496

F: 608.265.5421



*From:* Martin Bergstrand 
*Sent:* Tuesday, December 15, 2020 8:51 AM
*To:* Paul Hutson 
*Cc:* nmusers@globomaxnm.com
*Subject:* Re: [NMusers] Variability in Dosing Rate (and amount)



Dear Paul,



I'm sorry for the late answer. Maybe you have already solved this issue by
now?



The approach that I would suggest is to implement the ingestion of the dose
as a zero-order infusion with an estimated duration and start.

   1. Set the dose time to the start of the 12 h dark period.
   2. Set the AMT data item to the total ingested drug amount.
   3. Set the RATE data item to '-2' (=> estimation of the duration (D) of the
   infusion into the compartment, D1 for CMT=1)
   4. Assuming that the dose is entered into CMT=1 you can in the NONMEM
   control file estimate ALAG1 and D1 governing the start and duration of an
   assumed constant ingestion.
   Note: you can consider different types of clever coding to limit the
   total ingestion within the 12 h dark period if you want.

This will of course be an approximation, as the ingestion likely isn't
constant. It should however be sufficiently flexible to fit your data
without biasing assumptions about the total dose/exposure.
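For illustration, dose records set up this way might look like the following
(an R sketch with hypothetical amounts; RATE = -2 is what tells NONMEM that
the duration is modeled in the control stream):

# Hypothetical dose records for one animal: the total measured amount is given
# as a zero-order input starting at each 12 h dark period, with RATE = -2 so
# that D1 (and optionally ALAG1) are estimated in $PK.
dose <- data.frame(
  ID   = 1,
  TIME = c(0, 24),   # start of each dark period
  AMT  = c(12, 15),  # total ingested amount measured for that period
  RATE = -2,
  CMT  = 1,
  EVID = 1,
  DV   = NA,
  MDV  = 1
)
dose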



Kind regards,



Martin Bergstrand, Ph.D.

Principal Consultant

Pharmetheus AB



martin.bergstr...@pharmetheus.com

www.pharmetheus.com






On Thu, Dec 10, 2020 at 5:32 AM Paul Hutson  wrote:

Dear Users, I hope that someone can suggest a paper or method for
addressing an issue with which I am grappling.

I am working on a mouse toxicokinetic study that has two basic cohorts.
One received a bolus gavage dose of known amount and time.  The other was
dosed by drug-laden chow.  The chow, and thus the drug ingested, was measured,
usually daily in the morning, but sometimes after 2-3 days.  The “daily
dose” of chow was averaged over the 12 hours of the daily dark period in
which the animals were considered to be eating their chow.  2-3 blood
samples were obtained from each animal, and the basic 2-compartment SAEM
IMP method is converging well on the gavage-only data.

Can the group suggest how to address the uncertainty in the rate of dosing
over the 12 hour dark period?  Of additional concern, and hard to deal with, is
the potential that nightly chow ingestion varied over a series of 1-3
days.  I don’t think that the 12 August 2020 thread on a random effect on
ALAG applies to this case.

Many thanks.

Paul



Paul Hutson, PharmD, BCOP

Professor

UWisc School of Pharmacy

Re: [NMusers] Install problem 7.5.0 on Ubuntu 19.10

2020-10-25 Thread Bill Denney
Hi DJ,

Richard and I have experienced the same issue.  I believe that simply
replacing libgfortran.so.3 with libgfortran.so will fix it, but we haven't
worked out exactly where to do it yet:

https://github.com/billdenney/Pharmacometrics-Docker/pull/4

Thanks,

Bill

On Sun, Oct 25, 2020, 11:56 AM Eleveld-Ufkes, DJ 
wrote:

> Hi Everyone,
>
>
> I seem to have an install issue for 7.5.0 on Ubuntu 19.10. The install
> fails during compilation, with compilemsgs.txt reporting:
>
>
> ./install: error while loading shared libraries: libgfortran.so.3: cannot
> open shared object file: No such file or directory
>
> Looking at my package installer, I see that I don't have the option to
> install libgfortran.so.3. I do have the file libgfortran.so.5 in that
> directory.
>
> Does anyone have any advice on getting NONMEM 7.5.0 installed on Ubuntu
> 19.10?
>
> warm regards,
>
> Douglas Eleveld
>
>
>
>


RE: [NMusers] M3 method - WRES, and CWRES

2020-09-01 Thread Bill Denney
Hi Mutaz,



Matt Hutmacher described it well here:
https://www.cognigen.com/nmusers/2010-April/2448.html



A very brief summary of his excellent post is that subjects with a
combination of censored (BLQ) and uncensored (above the LLOQ and below the
ULOQ) observations will be biased in their reporting of CWRES because you
cannot calculate CWRES for BLQ values.  (I say this before looking up what
MDVRES does.)



My guess, which Bob or someone else can confirm, is that the bias is
anticipated to be relatively small compared to the value of being able to
compare CWRES values with the other observations for a subject.  It does not
definitively mean that the results are unbiased (see Matt’s Tmax example),
but generally, including the previously omitted CWRES values is more useful
than excluding them from the calculation.



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Mu'taz Jaber
*Sent:* Tuesday, September 1, 2020 7:25 PM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] M3 method - WRES, and CWRES



All,



Back in April 2010, Sebastian Bihorel and Martin Bergstrand initiated a
discussion regarding using the M3 and M4 methods for handling BQL data and
how it seemed to be a bug that NONMEM wouldn't compute WRES for the entire
set of subject data records whenever a BQL was included (
https://www.cognigen.com/nmusers/2010-April/2445.html).  Tom Ludden
responded with the following post (
https://www.cognigen.com/nmusers/2010-April/2447.html):



This issue was discussed with Stuart Beal. He believed that weighted

residuals would be incorrect for an individual that had both continuous

dependent variables and a likelihood in the calculation of their

contribution to the objective function value, as is the case with his M3

or M4 BQL methods.  The code for both RES and WRES is intentionally

bypassed in these cases.



Since then, we now have easy functionality with the F_FLAG=1 condition of
the M3/M4 code in $ERROR to tack on MDVRES=1, which allows WRES and CWRES to
be calculated and made available in output tables.



My questions are: Is Stuart Beal's original concern still valid?  Do these
NONMEM updates give us appropriate WRES and CWRES for plotting purposes for
individuals whose records contain BQL data?



Thank you,



Mutaz Jaber

PhD student

University of Minnesota



---

*Mutaz M. Jaber, PharmD.*

PhD student, Pharmacometrics

Experimental and Clinical Pharmacology

University of Minnesota

717 Delaware St SE; Room 468

Minneapolis, MN 55414

Email: jaber...@umn.edu

Phone: +1 651-706-5202



*~ Stay curious*


RE: [NMusers] Variability on infusion duration

2020-08-05 Thread Bill Denney
Similar to Leonid's solution, you can try using an exponential distribution:

D1 = DUR*(1-EXP(-EXP(ETA(1))))

The exponential within an exponential gives left skew and ensures that D1 ≤
DUR.

For subjects who you know had an incomplete infusion duration, I would add
an indicator variable (1 if incomplete, 0 if full duration) so that the
subjects with complete duration have the known complete duration.

D1 = DUR*(1 - Incomplete*EXP(-EXP(ETA(1))))
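As a quick illustration of the shape this gives (an R sketch with an
arbitrary omega chosen only for the picture):

# R sketch: the distribution of D1/DUR = 1 - exp(-exp(eta)) for eta ~ N(0, omega)
# stays strictly below 1 (so D1 < DUR) and is skewed to the left.
set.seed(1)
eta <- rnorm(1e5, mean = 0, sd = 0.5)  # arbitrary omega for illustration
frac <- 1 - exp(-exp(eta))
range(frac)   # always between 0 and 1
median(frac)  # roughly 1 - exp(-1), about 0.63, when eta is centered at 0
hist(frac, breaks = 50, xlab = "D1/DUR", main = "Fraction of documented duration")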

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Leonid Gibiansky
Sent: Wednesday, August 5, 2020 12:51 PM
To: Patricia Kleiner ; nmusers@globomaxnm.com
Subject: Re: [NMusers] Variability on infusion duration

may be
D1=DUR*EXP(ETA(1))
IF(D1.GT.DocumentedInfusionDuration) D1=DocumentedInfusionDuration

On 8/5/2020 12:18 PM, Patricia Kleiner wrote:
> Dear all,
>
> I am developing a PK model for a drug administered as a long-term
> infusion of 48 hours using an elastomeric pump. End of infusion was
> documented, but sometimes the elastomeric pump was already empty at
> this time. Therefore variability of the concentration measurements
> observed at this time is quite high.
> To address this issue, I try to include variability on the infusion
> duration by assigning the RATE data item in my dataset to -2 and modeling
> the duration in the PK routine. Since the "true" infusion duration can
> only be shorter than the documented one, implementing IIV with a
> log-normal distribution
> (D1=DUR*EXP(ETA(1))) cannot describe the situation.
>
> I tried the following expression, where DUR is the documented
> infusion
> duration:
>
> D1=DUR-THETA(1)*EXP(ETA(1))
>
> It works but does not really describe the situation either, since I
> expect the deviations from my infusion duration to be left skewed. I
> was wondering if there are any other possibilities to incorporate
> variability in a more suitable way? All suggestions will be highly
> appreciated!
>
>
> Thank you very much in advance!
> Patricia
>
>
>



RE: [NMusers] Negative concentration from simulation

2020-06-02 Thread Bill Denney
Hi Nyein,

Negative concentrations can be expected from simulations if the model
includes additive residual error.  I assume that you mean additive and
proportional error when you say "combined error model".  If the error
structure does not include additive error, then we'd need to know more.

How you will handle them in analysis depends on the goals of the analysis.
Usually, you will either simply set negative values to zero or set all
values below the limit of quantification to zero.
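For example, a minimal post-processing sketch in R (the file name, the DV
column, and the LLOQ value are assumptions):

# Hypothetical post-processing of a simulation table: either clip negative
# simulated concentrations at zero, or set everything below the LLOQ to zero.
sim <- read.csv("simulated.csv")
lloq <- 0.1  # assumed assay lower limit of quantification
sim$DV_nonneg <- pmax(sim$DV, 0)                   # option 1: negatives to zero
sim$DV_lloq   <- ifelse(sim$DV < lloq, 0, sim$DV)  # option 2: below LLOQ to zero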

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On
Behalf Of Nyein Hsu Maung
Sent: Tuesday, June 2, 2020 2:13 PM
To: nmusers@globomaxnm.com
Subject: [NMusers] Negative concentration from simulation


Dear NONMEM users,
I tried to simulate a new dataset by using a previously published pop PK
model. Their model used a combined error model for residual
variability. After the simulation, I obtained two negative
concentrations. I would like to know if there is a proper way to handle
those negative concentrations, or if there is some coding to prevent
getting negative concentrations. Thanks.

Best regards,
Nyein Hsu Maung



RE: [NMusers] error message from xpose

2020-02-14 Thread Bill Denney
Hi Mark,



The tidyselect package has recently undergone some significant changes.  I
would guess that may have caused an issue with xpose which heavily uses the
tidyverse.  As Duy indicated, I’d suggest posting an issue to GitHub so
that Ben can help.



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Duy Tran
*Sent:* Friday, February 14, 2020 4:04 PM
*To:* Mark Sale 
*Cc:* nmusers@globomaxnm.com
*Subject:* Re: [NMusers] error message from xpose



Hi Mark,

Can you try specifying more of the arguments to the function
xpose::xpose_data? I typically include these below when I run xpose in R:

xpdb <- xpose_data(runno = "001", ext = ".lst", prefix = "run",
                   dir = "./msale/XXX/xpose")



If it doesn't work, can you post this issue on
https://github.com/UUPharmacometrics/xpose/issues for the xpose package
developer to help?



-Duy Tran







On Fri, Feb 14, 2020 at 12:41 PM Mark Sale  wrote:

I know this is a NONMEM list server, but I need help with xpose and hope you'll
indulge me. I am moving to the latest xpose (from xpose4). I ran a model (using
PsN execute); the model file is called run001.mod, and the $TABLE records are:



$TABLE  ID TIME TAD IPRED IWRES NOPRINT ONEHEADER FILE=sdtab001

$TABLE  ID K S2 KA CL ETA1 ETA2 ETA3 NOPRINT NOAPPEND ONEHEADER

FILE=patab001

$TABLE  ID STDY SEXN RACEN X XX NOPRINT NOAPPEND

ONEHEADER FILE=catab001

$TABLE  ID AGE  X X  X

NOPRINT NOAPPEND ONEHEADER FILE=cotab001

$TABLE  ID TIME DV EVID CWRES IWRES   IPRED NOPRINT NOAPPEND

ONEHEADER FILE=cwtab001



I'm trying to load the data in to xpose, with command:

xpdb <- xpose::xpose_data(runno = '001')



getting this ouput:



Looking for nonmem output tables.

Reading: sdtab001, patab001, catab001, cotab001, cwtab001 [$prob no.1]



Looking for nonmem output files

Reading: run001.ext, run001.phi

Warning messages:

1: No tidyselect variables were registered

2: Failed to create run summary. No tidyselect variables were registered

3: No tidyselect variables were registered



I think all the required files are there:



 Volume in drive E is New Volume

 Volume Serial Number is CC55-6FD7



 Directory of E:\msale\XXX\xpose



02/14/2020  03:23 PM   564,745 catab001

02/14/2020  03:23 PM   731,389 cotab001

02/14/2020  03:23 PM   675,841 cwtab001

02/14/2020  03:23 PM   675,841 patab001

02/14/2020  03:23 PM14 run001.cpu

02/14/2020  03:23 PM 3,615 run001.ext

02/14/2020  03:23 PM 1,857 run001.log

02/14/2020  03:23 PM18,201 run001.lst

02/14/2020  03:19 PM 3,620 run001.mod

02/14/2020  03:23 PM   173,746 run001.phi

02/14/2020  03:23 PM 1,197 run001.shk

02/14/2020  03:23 PM37,664 run001.shm

02/14/2020  03:23 PM20,400 run001.xml

02/14/2020  03:23 PM   509,197 sdtab001

  14 File(s)  3,417,327 bytes

   0 Dir(s)  59,120,144,384 bytes free



Based on a search for this error, it looks like it comes from dplyr. I'm
running dplyr version 0.8.4, xpose version 0.4.7, and R version 3.6.2 with
RStudio, under Windows 10.

I've tried it with and without the NOAPPEND in the $TABLE records.

Any suggestions would be appreciated.





Mark Sale M.D.

Senior Vice President, Pharmacometrics

Nuventra Inc.

2525 Meridian Parkway, Suite 200

Durham, NC 27713

Phone (919)-973-0383

ms...@nuventra.com 










RE: [NMusers] Using evid 0 before dosing

2019-11-06 Thread Bill Denney
Hi Carlos,

It is commonly used.  For most datasets, there will be at least one
observation that occurs before dosing to estimate the baseline value, and in
almost every scenario, the modeling dataset should mirror what actually
happened in the study.  So, there is no issue with it, and usually you will
have an EVID=0 record before the first dose.
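For example, a minimal sketch of that record layout (hypothetical values,
built in R only to show the column pattern):

# One subject: a pre-dose observation (EVID = 0 at TIME = 0), the first dose,
# and a post-dose observation.
data.frame(
  ID   = c(1, 1, 1),
  TIME = c(0, 0, 1),
  DV   = c(0.05, NA, 2.3),
  AMT  = c(NA, 100, NA),
  EVID = c(0, 1, 0),
  MDV  = c(0, 1, 0)
)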

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Carlos ST
Sent: Wednesday, November 6, 2019 10:03 AM
To: nmusers@globomaxnm.com
Subject: [NMusers] Using evid 0 before dosing

Dear NMUsers,

I would like advice on the best practice for using EVID=0 before dosing, which
is to say an observation just before a dose (to estimate the value in
that compartment just before the dosing event).

Thank you,

Carlos,



[NMusers] Installing NONMEM with Intel Fortran 2019.4

2019-05-14 Thread Bill Denney
Hi everyone,



With the recent discussion of installing NONMEM with Windows MPI, I have a
somewhat similar question on Linux:



Has anyone had success installing NONMEM with Intel Fortran 2019 update 4
on Linux?  I was updating Pharmacometrics-Docker (
https://github.com/billdenney/Pharmacometrics-Docker), and I hit a snag
that the current Intel MPI installation doesn’t appear to come with a
statically-linked libmpi.a.  Intel documentation suggests that it should be
there, but I don’t see it.  I asked about that on the Intel Forums
https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux-and-mac-os-x/topic/809563
.



Thanks,



Bill


RE: [NMusers] bootstrap

2019-05-10 Thread Bill Denney
Hi Niurys,

The simplest method to ensure a good bootstrap is often to simplify the data
file by removing rows that should not be used for the modeling before
running the bootstrap.  Notably, if you exclude an entire subject, either
based on the ID column or another column, usually the bootstrap will not work
correctly.

I believe that most if not all tools generate the bootstrap with new ID
column values (the ID is given a new sequential value based on sampling
order).  If you exclude an entire subject based on another column, then you
will not have the expected number of subjects in the analysis because all
the bootstrap tools that I know of don't account for exclusions when making
the new data file.
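For example, a sketch of that pre-filtering step in R (the FLAG column and
the file names are assumptions based on your description):

# Hypothetical sketch: drop the flagged outlier rows before bootstrapping so
# the control stream no longer needs the FLAG-based IGNORE statements.
dat <- read.csv("original_data.csv")
clean <- dat[dat$FLAG == 0, ]  # keep only the rows actually used in modeling
write.csv(clean, "bootstrap_data.csv", row.names = FALSE, quote = FALSE, na = ".")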

If this doesn't solve it, more information would help.  (What tool are you
using for the bootstrap?  What command line are you running?  Can you share
the model and a snippet of the data?)

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Niurys.CS
Sent: Friday, May 10, 2019 12:17 PM
To: nmusers 
Subject: [NMusers] bootstrap

Dear nmusers,


I have a big doubt. When I used the bootstrap to evaluate my model, I ran into
some bugs. In my code I use IGNORE statements based on FLAGS for some
outliers. I don't know whether the bootstrap will run well if I remove these
IGNORE statements. Can you give me some suggestions?


Regards

Niurys de Castro Suárez

-- 

MSc Niurys de Castro Suárez
Profesor Asistente Farmacometría
Investigador Agregado
Departamento Farmacia
Instituto de Farmacia y Alimentos, Universidad de La Habana Cuba "Una
estrella brilla en la hora de nuestro encuentro"



RE: [NMusers] NonMem and EVID=2

2019-04-23 Thread Bill Denney
Hi Johan,



Since I answered the original message, I’ll respond here, too, though with
a less specific answer at this point.  If you could provide the control
stream and data (or example of the data), that would help.



My guess is generally similar to my guess in 2016.  The integration steps
could be too long and the problem too stiff without the EVID=2 rows.
Usually, this is automatically detected by the integrator, but if there is
a major change in the problem dynamics (e.g. TMDD), the EVID=2 rows could
assist the integrator with an intermediate step.  This may show up in the
covariance step because the maximum likelihood estimate has good
convergence properties, but nearby there is a region of parameter space with
poor properties (such as the TMDD behavior changing significantly).  This could
also occur if you do not have bounds on parameters that require bounds
(such as a linear estimate of clearance which is not restricted to be > 0).
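For reference, a sketch of how such non-observation rows are often generated
(R, with a hypothetical one-hour grid and only the columns shown; a real
dataset would need its other columns carried along too):

# Hypothetical sketch: add EVID = 2 (MDV = 1) rows on a time grid per subject
# to give the integrator intermediate steps without adding observations.
dat <- read.csv("original_data.csv")
grid <- do.call(rbind, lapply(split(dat, dat$ID), function(d) {
  data.frame(ID = d$ID[1], TIME = seq(0, max(d$TIME), by = 1),
             DV = NA, AMT = NA, EVID = 2, MDV = 1)
}))
aug <- rbind(dat[, c("ID", "TIME", "DV", "AMT", "EVID", "MDV")], grid)
aug <- aug[order(aug$ID, aug$TIME), ]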



For a more specific answer, I think that I’d need to see the code and data.



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Johan Rosenborg
*Sent:* Tuesday, April 23, 2019 11:33 AM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] NonMem and EVID=2



Hello everybody,

In order to obtain predicted values in a PopPK analysis without covariates
at points in time with no actual values, I have inserted extra rows
indicated with EVID=2. The covariance step is completed when including
these extra rows, and the precision of the parameter estimates is adequate.
When omitting the extra rows {IGNORE=(EVID.EQ.2)}, I get exactly the same
parameter estimates and the covariance step can be completed with some
difficulty, but the precision of the parameter estimates is now inadequate. I
use METHOD=1, and just like Ahmed Abbas Suleiman experienced (
https://cognigencorp.com/nonmem/current/2013-February/4440.html), my
outcome did not differ between the two conditions when setting METHOD=0. I
saw that William Denney has responded to a similar question in 2016 (
https://cognigencorp.com/nonmem/current/2016-February/6085.html).

I cannot see any response to Ahmed’s comment; do you Bill or somebody else
have any idea why the outcomes differ with METHOD=1 but not with METHOD=0
in NonMem when including extra rows in the data set?

Thank you in advance and kind regards,

Johan


RE: [NMusers] VPCs confidence intervals?

2019-03-14 Thread Bill Denney
Hi Elena,



The intervals shown in VPCs are accurately called prediction intervals, not
confidence intervals.  The difference is that a prediction interval shows
what you would expect for the next individual in a study, while a confidence
interval shows what you would expect for the result of a statistic (often
confidence intervals of a mean are shown).  With many VPCs, the confidence
interval of the median and the confidence intervals of the 5th and 95th
percentiles are shown.



Also, when the lines indicate the median, 5th, and 95th percentiles of the
simulations, that is the 90% prediction interval since it is the middle 90%
of the data (not the 95% confidence interval).
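As a small numeric illustration of the distinction (an R sketch with
simulated data, not tied to any particular VPC tool):

# A 90% prediction interval is the middle 90% of simulated observations;
# a confidence interval describes a statistic (here the median) across
# replicate simulated studies.
set.seed(1)
nrep <- 500; nobs <- 100
sims <- matrix(rlnorm(nrep * nobs, meanlog = 1, sdlog = 0.4), nrow = nrep)

# 90% prediction interval: 5th and 95th percentiles of the pooled simulations
quantile(sims, probs = c(0.05, 0.95))

# 95% confidence interval of the median: spread of the per-replicate medians
quantile(apply(sims, 1, median), probs = c(0.025, 0.975))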



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Soto, Elena
*Sent:* Thursday, March 14, 2019 12:49 PM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] VPCs confidence intervals?



Dear all,



I have a question regarding visual predictive checks (VPCs).



Most VPCs used now include a line representing the median and the 5th and
95th percentiles of the data values, and an area around the same percentiles
that is commonly defined as the 95% confidence interval (of the simulations).



But is it correct, from the statistical point of view, to call this area a
confidence interval? And if this is not the case, how should we define
it?



Thanks,

Elena Soto







Elena Soto, PhD

Pharmacometrician

Pharmacometrics, Global Clinical Pharmacology

Global Product Development



Pfizer R&D UK Limited, IPC 096

CT13 9NJ, Sandwich, UK

Phone: +44 1304 644883


RE: [NMusers] Mailing list about pharmacometrics

2019-01-31 Thread Bill Denney
Hi Sebastien,

The best I know for general PMx questions is the ISoP message boards:
https://discuss.go-isop.org/

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf
Of Sebastien Bihorel
Sent: Thursday, January 31, 2019 11:01 AM
To: nmusers@globomaxnm.com
Subject: [NMusers] Mailing list about pharmacometrics


Hi,

Besides software-dedicated user lists like this one, are you aware of the
existence of a mailing list that discusses general pharmacometric questions?

-- 
Sebastien Bihorel



RE: [NMusers] IMPMAP behavior question

2018-08-23 Thread Bill Denney
Hi Bob, Leonid, and Mark,

 

Thanks for this interesting conversation!  I think that it explains some issues 
with models I’d not gotten to the bottom of in the past.

 

Bob, can these timeouts be reported to the user in the main output files?  Or
even better, on a timeout, could the timeout be automatically raised up to some
user-configurable multiple of the default, and if it happens again,
the model stopped with a message like “Parallel processing timeout, increase
TIMEOUT in the .pnm file or troubleshoot lost calculation nodes.”?

 

It seems like a run that ignores a subset of the data due to a timeout should
not quietly report results.

 

Thanks,

 

Bill 

 

From: owner-nmus...@globomaxnm.com  On Behalf Of 
Bauer, Robert
Sent: Thursday, August 23, 2018 2:49 PM
To: Mark Sale ; Leonid Gibiansky 
; nmusers@globomaxnm.com
Subject: RE: [NMusers] IMPMAP behavior question

 

Mark:

You would also likely see in the .phi file that the OBJ values may be 0 for 
those subjects not collected.

 

The solution is as Leonid said, increase TIMEOUT in the pnm file.

 

Robert J. Bauer, Ph.D.

Senior Director

Pharmacometrics R&D

ICON Early Phase

820 W. Diamond Avenue

Suite 100

Gaithersburg, MD 20878

Office: (215) 616-6428

Mobile: (925) 286-0769

  robert.ba...@iconplc.com

  www.iconplc.com

 

From: owner-nmus...@globomaxnm.com   
[mailto:owner-nmus...@globomaxnm.com] On Behalf Of Mark Sale
Sent: Thursday, August 23, 2018 11:37 AM
To: Leonid Gibiansky; nmusers@globomaxnm.com  
Subject: Re: [NMusers] IMPMAP behavior question

 

thanks Leonid,

I looked at that, and oscillation/sampling doesn't seem to be the issue: the mean
was 20804.0136597062, SD = 2.19872089635881.

And the OBJ is very stable in the minimization part, up until the last
iteration/covariance iteration.

It was run in parallel, and your comment about not waiting for all
the workers concerns me. There is a timeout event in the log file:

ITERATION   70

 STARTING SUBJECTS  1 TO8 ON MANAGER: OK

 STARTING SUBJECTS  9 TO   17 ON WORKER1: OK

 STARTING SUBJECTS 18 TO   32 ON WORKER2: OK

 STARTING SUBJECTS 33 TO   85 ON WORKER3: OK

 COLLECTING SUBJECTS1 TO8 ON MANAGER

 COLLECTING SUBJECTS   18 TO   32 ON WORKER2

 COLLECTING SUBJECTS9 TO   17 ON WORKER1

 COLLECTING SUBJECTS   33 TO   85 ON WORKER3

 ITERATION   70

 STARTING SUBJECTS  1 TO8 ON MANAGER: OK

 STARTING SUBJECTS  9 TO   17 ON WORKER1: OK

 STARTING SUBJECTS 18 TO   32 ON WORKER2: OK

 STARTING SUBJECTS 33 TO   85 ON WORKER3: OK

 COLLECTING SUBJECTS1 TO8 ON MANAGER

 TIMEOUT FROM WORKER1

 RESUBMITTING JOB TO LOCAL

 STARTING SUBJECTS  9 TO   17 ON MANAGER: OK

 COLLECTING SUBJECTS   18 TO   32 ON WORKER2

 

and no mention of collecting subjects 33 to 85 on worker 3, or subjects 9 to 17 
on worker 1.

 

So, that could be the problem.

Bob - thoughts?

 

 

 

 

 

Mark Sale M.D.

Senior Vice President, Pharmacometrics

Nuventra Pharma Sciences, Inc.

2525 Meridian Parkway, Suite 200

Durham, NC 27713

Phone (919)-973-0383

ms...@nuventra.com 

 


  _  

From: Leonid Gibiansky 
Sent: Thursday, August 23, 2018 11:14:51 AM
To: Mark Sale; nmusers@globomaxnm.com  
Subject: Re: [NMusers] IMPMAP behavior question 

 

Mark,

The IMPMAP procedure produces a run.cnv file. There you can find the mean and SD
of the OF (over the last few iterations that were considered for the convergence
stop). I use these numbers for covariate assessment as
iteration-to-iteration numbers oscillate and cannot be reliably compared.

Concerning the last-iteration OF drop, I cannot tell for sure, but I've
seen OF drops in some cases when the main manager does not wait for the
slaves to return the OF for their portion of the data. The pnm file has
parameters TIMEOUTI and TIMEOUT, and I would try to increase them and
see whether this fixes the problem.

Thanks
Leonid




On 8/23/2018 1:54 PM, Mark Sale wrote:
> I have a model that seems to be behaving strangely, looking for 
> interpretation help
> 
> 
> in model building, the OBJ is usually ~20900. Until this model, where, on the 
> covariance step (IMPMAP method) the OBJ drops 9000  points (20798 to 11837), 
> monitoring from output f

RE: [NMusers] Context-free lexer for NM-TRAN

2018-06-14 Thread Bill Denney
Hi Ruben,



I’m also interested in a lexer-parser for NONMEM.  The regexp-based ones
that I’ve used have typically had issues (I’ve tried about 4 different ones
including one that I wrote), and they are working for many but not all
models.  I’m unaware of a reasonably complete lexer-parser for NONMEM
(though I know of at least one non-public effort; I’ve contacted that
author to see if he is interested in joining this conversation).



I’ve wanted to build the abstract syntax tree for NONMEM to help with
computational model-building, and I’ve been looking into ANTLR as well.
Three questions:  Are you interested in collaborating on the parser (can
you create a GitHub project for it)?  Why ANTLRv3 instead of v4?  Do you
have a way to get an ANTLR parse tree into R?



Thanks



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Ruben Faelens
*Sent:* Thursday, June 14, 2018 8:55 AM
*To:* Tim Bergsma 
*Cc:* nmusers@globomaxnm.com
*Subject:* Re: [NMusers] Context-free lexer for NM-TRAN



Hi Tim,



Thanks for pointing to that.

Unfortunately, nonmemica uses regular expressions to simply split the
character stream into subsections.

This is not the way to go. As an example, nonmemica would get confused by
the following input:

$PROBLEM This is a problem with special $PK section

$PK ;Refer to $ERROR for more information

CL=THETA(1)

$ERROR

Y = W*F



Probably a contextual lexer is the way to go; fortunately ANTLRv3 has
functionality for this.



Kind regards,

Ruben



On Thu, Jun 14, 2018 at 12:42 PM Tim Bergsma 
wrote:



Hi Ruben.



Related: the CRAN package “nonmemica” has a function as.model() that parses
NONMEM control streams. Type “?nonmemica” at the R prompt after loading.
See also https://github.com/MikeKSmith/rspeaksnonmem .  Happy to discuss
further.



Kind regards,



Tim



*Tim Bergsma, PhD*

Associate Director

Certara Strategic Consulting

m. 860.930.9931

tim.berg...@certara.com



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Ruben Faelens
*Sent:* Thursday, June 14, 2018 4:33 AM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] Context-free lexer for NM-TRAN



Hi all,



Calling all computer scientists and computer language experts.

In my spare time, I am working on a lexer and parser for NM-Tran. Primarly
to teach myself about grammars and DSL, but perhaps something useful will
come out of this (e.g. a context-sensitive editor with code completion).



When lexing, I am having a hard time describing the keywords used by
nm-tran.

Let us take '.EQ.' as an example.

1) It seems that *.EQ.* is a keyword used to describe a comparison.

2) However, a filename could also be 'foo.eq.bar'

The same thing applies for keywords on the '$ESTIMATION' record. These
keywords could also be used as variable names.



Am I right in saying that NM-TRAN cannot be tokenized with a context-free
lexer? And that I should focus my efforts on building a lexer-less parser?
(Or building my own lexer-parser, see
https://en.wikipedia.org/wiki/The_lexer_hack )

I assume building a parser for NM-TRAN was already done in the DDMoRe
project, but I failed to find the source code...



Kind regards,

Ruben Faelens





RE: [NMusers] Cleaning Up After NONMEM

2018-06-08 Thread Bill Denney
Hi Mark,



To totally avoid them, not that I know of (others may know an option that
can help there).  With PsN, you can use -clean=4, and I think it will
remove everything.  If you have a run with a problem, you may end up with
no information to troubleshoot it, though.



Thanks,



Bill



*From:* Mark Tepeck 
*Sent:* Friday, June 8, 2018 4:36 PM
*To:* Bill Denney 
*Cc:* nmusers@globomaxnm.com
*Subject:* Re: [NMusers] Cleaning Up After NONMEM



Hi Bill,



Thank you for the tip. Is there any way to avoid those sub-directories?  For
example, could NONMEM clean them up automatically, or run those
temporary files in a cache space invisible to end users? Those
sub-directories increasingly eat up a lot of my disk space.



Mark







On Fri, Jun 8, 2018 at 3:56 PM, Bill Denney 
wrote:

Hi Mark,



The simplest answer that I know of is to use PsN (
https://uupharmacometrics.github.io/PsN/).  It runs NONMEM in a
subdirectory and will only bring the most useful files back into the main
directory.



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Mark Tepeck
*Sent:* Friday, June 8, 2018 3:43 PM
*To:* nmusers@globomaxnm.com
*Subject:* Re: [NMusers] FW: testing nmusers number 12001. Please ignore



Hi All,

Is there any native way for NONMEM to opt out of generating run
files/folders?  Right now, I use some tools to clean the NONMEM run
directory afterwards. However, it would be fantastic to have a NONMEM built-in
option to run "cleanly". Those temporary files and folders create a heavy
burden when storing and sharing the results.

Thank you,

Mark


RE: [NMusers] Cleaning Up After NONMEM

2018-06-08 Thread Bill Denney
Hi Mark,



The simplest answer that I know of is to use PsN (
https://uupharmacometrics.github.io/PsN/).  It runs NONMEM in a
subdirectory and will only bring the most useful files back into the main
directory.



Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com  *On
Behalf Of *Mark Tepeck
*Sent:* Friday, June 8, 2018 3:43 PM
*To:* nmusers@globomaxnm.com
*Subject:* Re: [NMusers] FW: testing nmusers number 12001. Please ignore



Hi All,

Is there any native way for NONMEM to opt out of generating run
files/folders?  Right now, I use some tools to clean the NONMEM run
directory afterwards. However, it would be fantastic to have a NONMEM built-in
option to run "cleanly". Those temporary files and folders create a heavy
burden when storing and sharing the results.

Thank you,

Mark


RE: [NMusers] Time-Varying Bioavailability on Zero-Order Infusion

2018-03-13 Thread Bill Denney
Hi Leonid,

The biology behind it is that during a long (many-day) infusion, there
appears to be adsorption to the infusion tubing and/or catheter.  The real
model I'm developing is more complex in the adsorption part (there may be
saturable adsorption, as shown by the dynamics in the first days), and I
want it to be accurate as a continuously time-varying IV bioavailability
because I'm trying to predict different infusion rates and durations.

The model mis-fit is both a dose-related apparent bioavailability change
(much simpler to implement than what is here) and a dose- and time-related
apparent change in bioavailability during the first portion of the dosing
due to the potential saturation of the adsorption.  The kinetics after the
end of the infusion all appear to be linear over a moderate-to-large dose
range, so I don't think that it's more complex human biology.

And for the current data set, the model runs quickly (it isn't that I'm
having to sit around forever for the solution).  The technical question was
whether there was some part of NONMEM that I didn't know about for controlling
infusion rates in the $DES block.  (Feature request to Bob and Alison: Maybe
in NONMEM 7.5, the user could set R1 = -1 in $PK and have continuous control
of R1 in $DES-- generalized to include all compartments.)

Thanks,

Bill

-Original Message-
From: Leonid Gibiansky 
Sent: Tuesday, March 13, 2018 9:34 PM
To: Sebastien Bihorel ; Bill Denney

Cc: NMUsers 
Subject: Re: [NMusers] Time-Varying Bioavailability on Zero-Order Infusion

Hi Bill,

I think the proposed original solution is the only one if you would like to
implement it exactly. Maybe it can be approximated somehow? What is the
real reason for this question? What is the biology behind the time-variant
IV bioavailability? Or what is the model mis-fit that you are trying to fix?

Leonid




On 3/13/2018 9:16 PM, Sebastien Bihorel wrote:
> Hi,
>
> I would suggest the following solution which should also work if you
> want to apply some covariate effect on bioavailability:
> * On the dataset side, set your RATE variable to -1 and store the
> actual infusion rates into another variable, eg IVRATE
> * On the model side:
> $PK
> ...
>
> ; assuming the IV infusion are made in compartment 1
> F1 = 
> R1 = F1*IVRATE
>
> Voila, NONMEM should take care of the dosing in the background as usual.
>
> Sebastien
>
> --
> --
> *From: *"Bill Denney" 
> *To: *"NMUsers" 
> *Sent: *Tuesday, March 13, 2018 8:58:41 PM
> *Subject: *[NMusers] Time-Varying Bioavailability on Zero-Order
> Infusion
>
> Hi NONMEMers,
>
> Is there a good way to assign a time-varying bioavailability on a
> zero-order rate of infusion in NONMEM?  The best I’ve been able to
> come up with is something like the below.  It seems like something
> that should be easier than what I’m doing below (I adjusted it from
> the real example as I was typing it into the email—I could have
> introduced a bug in the process).  And importantly, -9998 is well
> before any time in my database.
>
> (dosing into CMT=1 with an IV infusion)
>
> $MODEL
>
> COMP=(CENTRAL DEFDOSE DEFOBS); central
>
> COMP=(P1); peripheral 1
>
> COMP=(P2); peripheral 2
>
> $PK
>
>; Normal stuff and ...
>
>; Record the dosing time
>
>IF (NEWIND.LT.2) THEN
>
>  TDOSE = -
>
>  DOSEEND = -9998
>
>  DOSE = -999
>
>  DOSERATE = 0
>
>ENDIF
>
>IF ((EVID.EQ.1 .OR. EVID.EQ.4) .AND. RATE.GT.0) THEN
>
>  TDOSE = TIME
>
>  DOSEEND = TIME + AMT/RATE
>
>  DOSERATE=RATE
>
>  MTDIFF=1
>
>ENDIF
>
>MTIME(1)=TDOSE
>
>MTIME(2)=DOSEEND
>
>F1 = 0 ; Bioavailability is zero so that the $DES block has full
> control over the rate.
>
>RATEADJTAU=THETA(10)
>
>RATEADJMAX=THETA(11)
>
> $DES
>
>; Manually control the infusion
>
>RATEIN = 0
>
>IF (MTIME(1).LE.T .AND. T.LE.MTIME(2)) THEN
>
>  RATEADJCALC = RATEADJMAX * EXP(-(T - MTIME(1)) * RATEADJTAU)
>
>  RATEIN = DOSERATE - RATEADJCALC
>
>ENDIF
>
>DADT(1) = RATEIN - K10*A(1) - K12*A(1) + K21*A(2) - K13*A(1) +
> K31*A(3)
>
>DADT(2) = K12*A(1) - K21*A(2)
>
>DADT(3) =   K13*A(1) -
> K31*A(3)
>
> Thanks,
>
> Bill
>
>



RE: [NMusers] Time-Varying Bioavailability on Zero-Order Infusion

2018-03-13 Thread Bill Denney
Hi Sebastien,


Thanks for that suggestion!



Unfortunately, that will only update when I have observation records, so it
will change R1 in discrete steps at each observation (I could insert a lot
of observations, but it still has the same limitation).  I need it to
continuously vary—this solution does that, but I’d like to learn a simpler
way than what I suggested.



Thanks,



Bill



*From:* Sebastien Bihorel 
*Sent:* Tuesday, March 13, 2018 9:17 PM
*To:* Bill Denney 
*Cc:* NMUsers 
*Subject:* Re: [NMusers] Time-Varying Bioavailability on Zero-Order Infusion



Hi,



I would suggest the following solution which should also work if you want
to apply some covariate effect on bioavailability:

* On the dataset side, set your RATE variable to -1 and store the actual
infusion rates into another variable, eg IVRATE

* On the model side:

$PK

...



; assuming the IV infusion are made in compartment 1

F1 = 

R1 = F1*IVRATE



Voila, NONMEM should take care of the dosing in the background as usual.



Sebastien


--

*From: *"Bill Denney" 
*To: *"NMUsers" 
*Sent: *Tuesday, March 13, 2018 8:58:41 PM
*Subject: *[NMusers] Time-Varying Bioavailability on Zero-Order Infusion



Hi NONMEMers,



Is there a good way to assign a time-varying bioavailability on a zero-order
rate of infusion in NONMEM?  The best I’ve been able to come up with is
something like the below.  It seems like something that should be easier
than what I’m doing below (I adjusted it from the real example as I was
typing it into the email—I could have introduced a bug in the process).
And importantly, -9998 is well before any time in my database.



(dosing into CMT=1 with an IV infusion)



$MODEL

COMP=(CENTRAL DEFDOSE DEFOBS); central

COMP=(P1); peripheral 1

COMP=(P2); peripheral 2

$PK

  ; Normal stuff and ...

  ; Record the dosing time

  IF (NEWIND.LT.2) THEN

TDOSE = -

DOSEEND = -9998

DOSE = -999

DOSERATE = 0

  ENDIF

  IF ((EVID.EQ.1 .OR. EVID.EQ.4) .AND. RATE.GT.0) THEN

TDOSE = TIME

DOSEEND = TIME + AMT/RATE

DOSERATE=RATE

MTDIFF=1

  ENDIF

  MTIME(1)=TDOSE

  MTIME(2)=DOSEEND

  F1 = 0 ; Bioavailability is zero so that the $DES block has full control over the rate.



  RATEADJTAU=THETA(10)

  RATEADJMAX=THETA(11)

$DES

  ; Manually control the infusion

  RATEIN = 0

  IF (MTIME(1).LE.T .AND. T.LE.MTIME(2)) THEN

RATEADJCALC = RATEADJMAX * EXP(-(T - MTIME(1)) * RATEADJTAU)

RATEIN = DOSERATE - RATEADJCALC

  ENDIF

  DADT(1) = RATEIN - K10*A(1) - K12*A(1) + K21*A(2) - K13*A(1) + K31*A(3)

  DADT(2) = K12*A(1) - K21*A(2)

  DADT(3) =   K13*A(1) - K31*A(3)



Thanks,



Bill


[NMusers] Time-Varying Bioavailability on Zero-Order Infusion

2018-03-13 Thread Bill Denney
Hi NONMEMers,



Is there a good way to assign a time-varying bioavailability on a zero-order
rate of infusion in NONMEM?  The best I’ve been able to come up with is
something like the below.  It seems like something that should be easier
than what I’m doing below (I adjusted it from the real example as I was
typing it into the email—I could have introduced a bug in the process).
And importantly, -9998 is well before any time in my database.



(dosing into CMT=1 with an IV infusion)



$MODEL

COMP=(CENTRAL DEFDOSE DEFOBS); central

COMP=(P1); peripheral 1

COMP=(P2); peripheral 2

$PK

  ; Normal stuff and ...

  ; Record the dosing time

  IF (NEWIND.LT.2) THEN

TDOSE = -

DOSEEND = -9998

DOSE = -999

DOSERATE = 0

  ENDIF

  IF ((EVID.EQ.1 .OR. EVID.EQ.4) .AND. RATE.GT.0) THEN

TDOSE = TIME

DOSEEND = TIME + AMT/RATE

DOSERATE=RATE

MTDIFF=1

  ENDIF

  MTIME(1)=TDOSE

  MTIME(2)=DOSEEND

  F1 = 0 ; Bioavailability is zero so that the $DES block has full control over the rate.



  RATEADJTAU=THETA(10)

  RATEADJMAX=THETA(11)

$DES

  ; Manually control the infusion

  RATEIN = 0

  IF (MTIME(1).LE.T .AND. T.LE.MTIME(2)) THEN

RATEADJCALC = RATEADJMAX * EXP(-(T - MTIME(1)) * RATEADJTAU)

RATEIN = DOSERATE - RATEADJCALC

  ENDIF

  DADT(1) = RATEIN - K10*A(1) - K12*A(1) + K21*A(2) - K13*A(1) + K31*A(3)

  DADT(2) = K12*A(1) - K21*A(2)

  DADT(3) =   K13*A(1) - K31*A(3)



Thanks,



Bill


RE: [NMusers] $ABBR REPLACE Limitations?

2018-01-24 Thread Bill Denney
Hi Bob,



It looks like that was my issue.



Thanks,



Bill



*From:* Bauer, Robert [mailto:robert.ba...@iconplc.com]
*Sent:* Wednesday, January 24, 2018 12:35 PM
*To:* William Denney 
*Cc:* Luann Phillips ; nmusers <
nmusers@globomaxnm.com>
*Subject:* RE: [NMusers] $ABBR REPLACE Limitations?



Bill:

The straight substitutions should work for any variable.  Alison and I
discovered a minor glitch in nm74 (it works okay in nm73), where
substitution does not occur if there is a space between the name and the =
sign in the execution statement.  So, instead of



FCMTD1GUT = THETA(1)



try



FCMTD1GUT= THETA(1)



and see if that works.



Robert J. Bauer, Ph.D.

Senior Director

Pharmacometrics R&D

ICON Early Phase

820 W. Diamond Avenue

Suite 100

Gaithersburg, MD 20878

Office: (215) 616-6428

Mobile: (925) 286-0769

robert.ba...@iconplc.com

www.iconplc.com



*From:* William Denney [mailto:wden...@humanpredictions.com
]
*Sent:* Wednesday, January 24, 2018 4:08 AM
*To:* Bauer, Robert
*Cc:* Luann Phillips; nmusers
*Subject:* Re: [NMusers] $ABBR REPLACE Limitations?



Hi Bob,



Thanks for this confirmation.  I'm using 7.4.1.



For my other question, is there a limitation of $ABBR REPLACE which
prevents its use for parameters like F1 and ALAG1?



Thanks,



Bill


On Jan 24, 2018, at 00:43, Bauer, Robert  wrote:

This bug has been  fixed since NONMEM 7.4





Bug #1, nm730_bug_list.pdf in https://nonmem.iconplc.com/nonmem730

Two variables with names longer than six characters, and identical in the
first six characters, defined in $PK, and used in $DES, will be seen as the
same variable.  Use variable names that differ in the first six
characters. This occurs in NONMEM 7.1.0, 7.1.2, 7.2.0, and 7.3.0.  A
workaround is to move all assignment statements for variables whose first 6
characters match to $DES.



Robert J. Bauer, Ph.D.

Senior Director

Pharmacometrics R&D

ICON Early Phase

820 W. Diamond Avenue

Suite 100

Gaithersburg, MD 20878

Office: (215) 616-6428

Mobile: (925) 286-0769

robert.ba...@iconplc.com

www.iconplc.com



*From:* owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com
] *On Behalf Of *Bill Denney
*Sent:* Tuesday, January 23, 2018 4:40 PM
*To:* Luann Phillips
*Cc:* nmusers
*Subject:* RE: [NMusers] $ABBR REPLACE Limitations?



Hi Luann,



I’m not having similar problems with other parameters with >6 characters.
I had a similar issue with ALAG1, so I think that there are limitations on
what types of variables you can remap.  (Perhaps Allison or Bob could
confirm?)



Thanks,



Bill



*From:* Luann Phillips [mailto:lu...@cognigencorp.com]
*Sent:* Tuesday, January 23, 2018 3:51 PM
*To:* Bill Denney 
*Cc:* nmusers 
*Subject:* Re: [NMusers] $ABBR REPLACE Limitations?



Bill,



I know at one time NM had a bug (haven't checked to see if it has been
fixed) that would cause variables with the first 6 characters the same to
be interpreted as the same variable even when later characters differed. I
wonder if this is a similar issue?

Have you tried a shorter variable name, such as FCD1GUT=F1?



Just a thought,

Luann


------

*From: *"Bill Denney" 
*To: *"nmusers" 
*Sent: *Tuesday, January 23, 2018 2:15:32 PM
*Subject: *[NMusers] $ABBR REPLACE Limitations?



Hi,



I’m working in NONMEM 7.4.1 with a model where I need to track 4 drugs
simultaneously.  To increase the model quality, I’m making heavy use of
$ABBR REPLACE to keep track of THETA, ETA, and CMT numbers.  I traced an
issue with my model where there was no gradient on a bioavailability term
to the fact that this substitution didn’t seem to be working:



$ABBR REPLACE FCMTD1GUT=F1

…

$PK

…

FCMTD1GUT = THETA(1)



FCMTD1GUT didn’t seem to be effectively replaced to F1, so the
bioavailability parameters had no effect.  When I changed it to just
directly name F1, it worked as expected.



$PK

…

F1 = THETA(1)



It’s not obvious from the $ABBR REPLACE section of the manual that this is
a limitation of $ABBR REPLACE.  Is this an issue or am I missing something?



Thanks,



Bill


--

*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com







RE: [NMusers] $ABBR REPLACE Limitations?

2018-01-23 Thread Bill Denney
Hi Luann,



I’m not having similar problems with other parameters with >6 characters.
I had a similar issue with ALAG1, so I think that there are limitations on
what types of variables you can remap.  (Perhaps Alison or Bob could
confirm?)



Thanks,



Bill



*From:* Luann Phillips [mailto:lu...@cognigencorp.com]
*Sent:* Tuesday, January 23, 2018 3:51 PM
*To:* Bill Denney 
*Cc:* nmusers 
*Subject:* Re: [NMusers] $ABBR REPLACE Limitations?



Bill,



I know at one time NM had a bug (haven't checked to see if it has been
fixed) that would cause variables with the first 6 characters the same to
be interpreted as the same variable even when later characters differed. I
wonder if this is a similar issue?

Have you tried a shorter variable name, such as FCD1GUT=F1?



Just a thought,

Luann


--

*From: *"Bill Denney" 
*To: *"nmusers" 
*Sent: *Tuesday, January 23, 2018 2:15:32 PM
*Subject: *[NMusers] $ABBR REPLACE Limitations?



Hi,



I’m working in NONMEM 7.4.1 with a model where I need to track 4 drugs
simultaneously.  To increase the model quality, I’m making heavy use of
$ABBR REPLACE to keep track of THETA, ETA, and CMT numbers.  I traced an
issue with my model where there was no gradient on a bioavailability term
to the fact that this substitution didn’t seem to be working:



$ABBR REPLACE FCMTD1GUT=F1

…

$PK

…

FCMTD1GUT = THETA(1)



FCMTD1GUT didn’t seem to be effectively replaced to F1, so the
bioavailability parameters had no effect.  When I changed it to just
directly name F1, it worked as expected.



$PK

…

F1 = THETA(1)



It’s not obvious from the $ABBR REPLACE section of the manual that this is
a limitation of $ABBR REPLACE.  Is this an issue or am I missing something?



Thanks,



Bill


--

*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com



RE: [NMusers] CMT Remapping

2018-01-23 Thread Bill Denney
Hi Leonid,

Thanks, you're right that I'm making it more complicated than needed.  I
like the CMT1, CMT2, ... CMTALL solution.

Related to my other post this afternoon with $ABBR, I'm trying to minimize
parameter name recoding, too.
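
For anyone following this thread, a rough sketch of the data-side step in R
(untested; the column names and the 6/11/12/13 mapping are just the ones from
my CMTMAP example below, so adjust to the real dataset):

library(dplyr)

dat <- read.csv("alldrugs.csv")  # assumed name of the master data file

dat <- dat %>%
  mutate(
    CMTALL = CMT,
    # Drug 1: master compartments 6, 11, 12, 13 become compartments 1-4 for the ADVAN
    CMT1 = case_when(
      CMT == 6  ~ 1,
      CMT == 11 ~ 2,
      CMT == 12 ~ 3,
      CMT == 13 ~ 4,
      TRUE      ~ NA_real_
    )
    # ... CMT2-CMT4 would be defined the same way for the other drugs
  )

write.csv(dat, "alldrugs_cmt.csv", row.names = FALSE, quote = FALSE, na = ".")

Each single-drug control stream can then read its own column as CMT in $INPUT
(e.g., CMT1=CMT) and drop the other drugs' records with IGNORE/ACCEPT as you
describe.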

Thanks,

Bill

-Original Message-
From: Leonid Gibiansky [mailto:lgibian...@quantpharm.com]
Sent: Tuesday, January 23, 2018 5:46 PM
To: Bill Denney ; nmusers@globomaxnm.com
Subject: Re: [NMusers] CMT Remapping

Hi Bill,

I do not think this is possible, but you are making things more complicated
than needed. You can slice the data set as needed using IGNORE/ACCEPT
options. For analytical ADVAN models, you often do not need CMT item, and if
needed, you can introduce extra columns CMT1, CMT2, CMT3, and CMT4, for each
of the 4 drugs, and CMTALL for the combined model. So the only thing that
would be needed is to re-code parameter names and theta-eta-eps indices when
you combine 4 models in one. This step would be needed anyway.

Best
Leonid




On 1/23/2018 2:26 PM, Bill Denney wrote:
> Hi,
>
> In a follow-up to the previous email, I’m still working on the same
> model where I need to track 4 drugs simultaneously.
>
> For this, I’m working from a single data file, and my initial modeling
> is drug 1 (first control stream), drug 2 (second control stream), drug
> 3 (third control stream), drug 4 (fourth control stream).  As I
> progress through modeling, I intend to combine to a single control
> stream with all 4 drugs.
>
> I’d like to be able to speed up my modeling by taking advantage of the
> algebraic ADVAN/TRANS combinations while feasible, but I would also
> like to keep working within the same data file.  Is there any method
> to remap compartment numbers in a model file to an ADVAN-expected
> compartment?
> I’d like to do something like the following (which doesn’t work):
>
> $SUBROUTINES ADVAN4 TRANS4 CMTMAP=6,11,12,13
>
> Where CMTMAP would remap compartment number 6=1, 11=2, 12=3, and 13=4
> for the expected ADVAN numbering.
>
> Thanks,
>
> Bill
>
> --
>
> *William S. Denney, PhD*
> Chief Scientist, Human Predictions LLC
> +1-617-899-8123
> wden...@humanpredictions.com
>
>



[NMusers] CMT Remapping

2018-01-23 Thread Bill Denney
Hi,



In a follow-up to the previous email, I’m still working on the same model
where I need to track 4 drugs simultaneously.



For this, I’m working from a single data file, and my initial modeling is
drug 1 (first control stream), drug 2 (second control stream), drug 3
(third control stream), drug 4 (fourth control stream).  As I progress
through modeling, I intend to combine to a single control stream with all 4
drugs.



I’d like to be able to speed up my modeling by taking advantage of the
algebraic ADVAN/TRANS combinations while feasible, but I would also like to
keep working within the same data file.  Is there any method to remap
compartment numbers in a model file to an ADVAN-expected compartment?  I’d
like to do something like the following (which doesn’t work):



$SUBROUTINES ADVAN4 TRANS4 CMTMAP=6,11,12,13



Where CMTMAP would remap compartment number 6=1, 11=2, 12=3, and 13=4 for
the expected ADVAN numbering.



Thanks,



Bill


--

*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com



[NMusers] $ABBR REPLACE Limitations?

2018-01-23 Thread Bill Denney
Hi,



I’m working in NONMEM 7.4.1 with a model where I need to track 4 drugs
simultaneously.  To increase the model quality, I’m making heavy use of
$ABBR REPLACE to keep track of THETA, ETA, and CMT numbers.  I traced an
issue with my model where there was no gradient on a bioavailability term
to the fact that this substitution didn’t seem to be working:



$ABBR REPLACE FCMTD1GUT=F1

…

$PK

…

FCMTD1GUT = THETA(1)



FCMTD1GUT didn’t seem to be effectively replaced to F1, so the
bioavailability parameters had no effect.  When I changed it to just
directly name F1, it worked as expected.



$PK

…

F1 = THETA(1)



It’s not obvious from the $ABBR REPLACE section of the manual that this is
a limitation of $ABBR REPLACE.  Is this an issue or am I missing something?



Thanks,



Bill


--

*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com



Re: [NMusers] Failure to execute NONMEM from R

2017-12-20 Thread Bill Denney
Hi Matthew,

Since everything is working when run directly from the command line (cmd), I 
would guess that it's something like a path issue.

Check your path at the command prompt (type "path") and in RStudio 
("Sys.getenv("PATH")").

Thanks,

Bill

> On Dec 20, 2017, at 21:54, HUI, Ka Ho  wrote:
> 
> Dear NMusers,
>  
> Recently I need to execute NONMEM 7.4 from Rstudio terminal. I have tried the 
> following scripts:
> (1) `system2("execute FIT_01.mod")`   # Through PsN, using `system2`
> (2) `shell("execute FIT_01.mod")`   # Through PsN, using `shell`
> (3) `write("execute FIT_01.mod", "FIT_01.bat")`
>     `shell("FIT_01.bat")`   # Through PsN, using `shell` to call a batch file
> (4) `write("execute FIT_01.mod", "FIT_01.bat")`
>     `shell.exec("FIT_01.bat")`   # Through PsN, using `shell.exec` to call a batch file
> (5) The above commands, but call through C:/nm74g64/run/nmfe74.bat directly
>  
> The above were also tested using the test dataset `CONTROL5`
>  
> Results:
> (1) gave me the `running command '"execute FIT_01.mod"' had status 127` warning
> without any execution
>  
> (2)-(4) gave me the following messages:
> Starting 1 NONMEM executions. 1 in parallel.
> S:1 ..
> All executions started.
> Starting NMTRAN
> NMtran failed. There is no output for model 1. Contents of FMSG:
> Not restarting this model.
> F:1 ..
> execute done
> 
> (5) similar to (1) but the status/error code became 107
>  
> None of the above has led to a successful NONMEM run. But the above command 
> lines work when outside R (i.e. through `cmd` +/- PsN)
> Is there anyone experiencing the same issue?
>  
> System/software info:
> OS: Windows 10
> NONMEM v7.4
> R3.4.2
> Rstudio v1.1.383
>  
> Best regards,
> Matthew Hui
> PhD Student
> School of Pharmacy
> Faculty of Medicine
> The Chinese University of Hong Kong
>


RE: [NMusers] Use of ACCEPT in $DATA

2017-08-24 Thread Bill Denney
Hi Dennis,



I don’t have an elegant solution for you (and I’ve been pining for the use
of combined Boolean operations like “TIME.GT.5.9.AND.TIME.LT.6.1” for a
long time).



An inelegant solution could be to run the model once with a write statement
to see if you can identify the value like 6.001 and use it.  That would
probably be fragile to different processor/compiler/math library
combinations, so I’d probably end up making the additional indicator column
for certainty.
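
If it helps, the indicator-column version is only a few lines of R (a sketch;
the SUBSET column name and the 0.1 tolerance are arbitrary, and it assumes dose
records should always be kept):

d <- read.csv("data.csv")        # placeholder file name
nominal <- c(0, 1, 2, 4, 6, 24)  # nominal times to keep
d$SUBSET <- as.integer(sapply(d$TIME, function(t) any(abs(t - nominal) < 0.1)))
d$SUBSET[d$EVID != 0] <- 1L      # keep dose records regardless of time
write.csv(d, "data_subset.csv", row.names = FALSE, quote = FALSE)

and then ACCEPT=(SUBSET.EQ.1) in $DATA does the rest.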


Thanks,



Bill



*From:* owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] *On
Behalf Of *Dennis Fisher
*Sent:* Thursday, August 24, 2017 6:16 PM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] Use of ACCEPT in $DATA



NONMEM 7.4.1



Colleagues



I am trying to use the ACCEPT option in $DATA in order to select a subset
of records (to evaluate the impact of the # of samples/subject on
confidence intervals).



I used the following code:

   ACCEPT=(TIME=0, TIME=1, TIME=2, TIME=4, TIME=6, TIME=24)



NMTRAN then creates a dataset but — to my surprise — TIME=6 is not in the
dataset (all the others are).



I am copying the first few rows of the input dataset so that you can see
what is being provided to NMTRAN:



ID,AGE,MONTHS,SEX,WT,AMT,RATE,*TIME*
,EVID,MDV,REPLICATE,IPRED,CWRES,DV,PRED,RES,WRES
1101,12,144,1,30.054,210.38,841.51,0,1,1,1,0,0,0,0,0,0
1101,12,144,1,30.054,0,0,1,0,0,1,187.42,0,179.28,199.26,-19.979,0
1101,12,144,1,30.054,0,0,2,0,0,1,180.92,0,187.92,194.09,-6.1659,0
1101,12,144,1,30.054,0,0,4,0,0,1,169.84,0,177.66,184.37,-6.712,0
1101,12,144,1,30.054,0,0,*6*,0,0,1,160.61,0,153.43,175.39,-21.96,0



The underlined / boldfaced value (6) in the final row is the problem.



I assume that NMTRAN is reading that value as something other than 6.0
(e.g., 6.01) and thereby omitting it.



I have reviewed NMHELP to see if there is some other way to accomplish
this.  Ideally, there would be something like:

   TIME.GT.5.9.AND.TIME.LT.6.1

but that does not appear to be supported.



The alternative is to modify the dataset to include many possible MDV/EVID
columns.  However, it would be more elegant to do this in the control
stream.

Or, if there is some way to find out the exact value that NMTRAN sees, I
could specify that value.



Any help would be appreciated.



Dennis



Dennis Fisher MD
P < (The "P Less Than" Company)
Phone / Fax: 1-866-PLessThan (1-866-753-7784)
www.PLessThan.com 


[NMusers] ISoP New England: Novel Drug Modalities and Cookout! August 29 (Sign-up link!)

2017-08-12 Thread Bill Denney


Novel Drug Modalities, PMx Practice, and Cookout!  (Sign-up Link Below!)

ISoP New England will be hosting an event at AstraZeneca on August 29.  The
event will include a combination of scientific presentations, a cookout,
and yard games.

Two scientific sessions will be part of the event Unusual Drug Modalities
and a general pharmacometrics session.  If you would like to present some
of your work, please submit an abstract by August 14 to
new.engl...@go-isop.org.
Abstracts should be 250 words or less for a 20 to 30 minute presentation.

The draft details of the event are:
*Date and Time:*  August 29, 1pm to 7pm (rain or shine)
*Location:*  AstraZeneca, 35 Gatehouse Dr, Waltham, MA 02451 (parking and
public transportation available)
*Content:*  Food, drinks, and science!
*Cost:*  $25/person, registration at
http://www.go-isop.org/isop-new-england-local-event--aug-29-2017





--

*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com



Re: [NMusers] DEFDOSE/DEFOBS Required for Many Compartments?

2017-05-01 Thread Bill Denney

Hi Alison,

Thanks for the confirmation that I wasn't missing something on my end.  
For this model, DEFOBS,DEFDOSE works, but I'll remember PCMT in case 
it's needed in the future.
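
For the archives, the PCMT route would just be a dataset tweak plus adding PCMT
to $INPUT; an untested sketch in R (the file names are placeholders, and 39 is
the central compartment in this model):

d <- read.csv("pkdata.csv")
d$PCMT <- 39  # prediction compartment to associate with the dose records
write.csv(d, "pkdata_pcmt.csv", row.names = FALSE, quote = FALSE)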


Thanks,

Bill

On 5/1/2017 3:28 PM, Alison Boeckmann wrote:

Bill,
Here's how it looks to me right now.

I put together a little test case:
$INPUT  ID DOSE=AMT TIME CP=DV WT CMT EVID
$MODEL
COMP=(One,INITIALOFF,NODOSE)
COMP=(PKCENT)

I ran this with a data set in which CMT=2 for both dose and observation
records.
PREDPP gives me the same error message that you reported:
DATA REC1: COMPARTMENT ASSOCIATED WITH THE PREDICTION IS OFF

NMTRAN makes the assignment of "Default for observations"
inappropriately:

COMPT. NO.   FUNCTION   INITIAL    ON/OFF     DOSE      DEFAULT    DEFAULT
                        STATUS     ALLOWED    ALLOWED   FOR DOSE   FOR OBS.
   1         ONE        OFF        YES        NO        NO         YES
   2         PKCENT     ON         YES        YES       YES        NO
   3         OUTPUT     OFF        YES        NO        NO         NO

The PCMT data item is needed with dose records to identify the
prediction compartment associated with the dose record, and, when PCMT
is not
defined, this is defaulting to "Default for observations," which is
compartment 1,
which is initially off.

The DEFAULT FOR OBS compartment should have the attribute "INITIAL ON".

Your fix is a good one (specify both DEFDOSE and DEFOBS).
Another fix is to supply the PCMT data item in the data set, and (in my
case) set PCMT=2

This is something that can be fixed with a future version of NONMEM (it's
too late for 7.4).

-- Alison

On Mon, May 1, 2017, at 09:33 AM, Bill Denney wrote:

Hi,



I have a model with many compartments (42 different compartments are
defined).  It is currently setup this way to future-proof the dataset in
case we want to combine analyses of many drugs (about 20 different drugs
with central/peripheral compartments) and several endpoints.

In the current model, I'm only fitting the PK of one drug, so only two of
the compartments are enabled:  compartments 39 and 42.  After the IGNORE
statement, the data is relatively small with 400 records between 16
subjects, and all the data are in compartment 39 (the central compartment
for the current drug).

CMT, EVID, and MDV (along with everything else) is correctly specified.
When I run the model with CMT 39 defined by:

COMP=(PKCENT)  ; 39 central PK compartment (ug/mL)

the model fails with the error "COMPARTMENT ASSOCIATED WITH THE PREDICTION
IS OFF" noted for all the dosing rows, but when I specify that compartment
39 is the default observation and dose compartment (due to IV dosing), the
model works.

COMP=(PKCENT,DEFOBS,DEFDOSE)  ; 39 central PK compartment (ug/mL)

I thought that when CMT and EVID are specified, DEFOBS and DEFDOSE are not
important.

Does anyone know why adding DEFOBS and DEFDOSE fixed my model?

Thanks,

Bill



*William S. Denney, PhD*
Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com



--
   Alison Boeckmann
   alisonboeckm...@fastmail.fm




*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com





[NMusers] DEFDOSE/DEFOBS Required for Many Compartments?

2017-05-01 Thread Bill Denney

Hi,

I have a model with many compartments (42 different compartments are 
defined).  It is currently setup this way to future-proof the dataset in 
case we want to combine analyses of many drugs (about 20 different drugs 
with central/peripheral compartments) and several endpoints.


In the current model, I'm only fitting the PK of one drug, so only two 
of the compartments are enabled:  compartments 39 and 42.  After the 
IGNORE statement, the data is relatively small with 400 records between 
16 subjects, and all the data are in compartment 39 (the central 
compartment for the current drug).


CMT, EVID, and MDV (along with everything else) is correctly specified.  
When I run the model with CMT 39 defined by:


COMP=(PKCENT)  ; 39 central PK compartment (ug/mL)

the model fails with the error "COMPARTMENT ASSOCIATED WITH THE 
PREDICTION IS OFF" noted for all the dosing rows, but when I specify 
that compartment 39 is the default observation and dose compartment (due 
to IV dosing), the model works.


COMP=(PKCENT,DEFOBS,DEFDOSE)  ; 39 central PK compartment (ug/mL)

I thought that when CMT and EVID are specified, DEFOBS and DEFDOSE are 
not important.


Does anyone know why adding DEFOBS and DEFDOSE fixed my model?

Thanks,

Bill


*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com





Re: [NMusers] Generating TAD with ADDL dosing format

2016-12-20 Thread Bill Denney

Hi Camila,

It sounds like you've got two questions here-- one related to NONMEM and 
one related to the other program you're using to create your VPC.  The 
NONMEM question appears to be "How do I get TAD in my data without 
changing my dataset?"  The second question appears to be "How to I 
stratify my VPC based on that new TAD variable instead of TIME?"


For the second question, what program are you using for VPC?

For the first question, others may have a more elegant answer, but I 
think that you're right:  ADDL makes many dose-related events 
difficult.  The simplest answer is to revise your dataset adding a TAD 
column.  If you can't do that, then something like this will work 
assuming that you only have one dosing record per subject.  (Note that 
the code below was typed directly into email, so it may have typos.)


; Initialize at the first record of each subject so that TAD is -1 before any
; dose record for the subject
IF (NEWIND.LT.2) THEN
  DOSETIME = -1
  ADDLREC = 0
  IIREC = 0
ENDIF
; Capture the (most recent) dosing information for this subject
IF (EVID.EQ.1 .OR. EVID.EQ.4) THEN
  DOSETIME = TIME
  ADDLREC = ADDL
  IIREC = II
ENDIF
; Calculate TAD from the single dose record for a subject in the data set
; assuming that there is only one dose record per subject or that the dose
; records occur such that the most recent dose record is the only one
; important for calculating TAD for a subject.  This assumption would not
; hold if there is a dose record with ADDL that has a dose record for a time
; in the middle of those ADDL doses.

IF (DOSETIME.LT.0) THEN
  ; This subject has not received a dose yet, set TAD to -1
  TAD = -1
ELSEIF ((TIME-DOSETIME) .LT. ((ADDLREC+1)*IIREC)) THEN
  ; This subject is within the ADDL doses for this dose record,
  ; calculate time since the most recent dose.
  TAD = (TIME-DOSETIME) - INT((TIME-DOSETIME)/IIREC)*IIREC
ELSE
  ; This subject is after the last ADDL dose (given at DOSETIME+ADDLREC*IIREC),
  ; calculate time since that final dose.
  TAD = (TIME-DOSETIME) - ADDLREC*IIREC
ENDIF
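
If revising the dataset is an option (the simplest answer above), the same
logic is easy on the data side too.  An untested R sketch, assuming standard
column names (ID, TIME, EVID, ADDL, II) and, as above, one dose record per
subject:

library(dplyr)

add_tad <- function(d) {
  d %>%
    group_by(ID) %>%
    mutate(
      DOSETIME = ifelse(any(EVID %in% c(1, 4)), TIME[EVID %in% c(1, 4)][1], NA_real_),
      ADDLREC  = ifelse(any(EVID %in% c(1, 4)), ADDL[EVID %in% c(1, 4)][1], 0),
      IIREC    = ifelse(any(EVID %in% c(1, 4)), II[EVID %in% c(1, 4)][1], 0),
      # index of the most recent dose (0 to ADDLREC), guarding against II = 0
      NDOSE    = ifelse(IIREC > 0, pmin(floor((TIME - DOSETIME)/IIREC), ADDLREC), 0),
      TAD      = ifelse(is.na(DOSETIME) | TIME < DOSETIME,
                        -1,
                        TIME - (DOSETIME + NDOSE*IIREC))
    ) %>%
    ungroup() %>%
    select(-DOSETIME, -ADDLREC, -IIREC, -NDOSE)
}

# dat <- add_tad(read.csv("pkdata.csv"))   # "pkdata.csv" is a placeholder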

Thanks,

Bill


On 12/20/2016 6:45 AM, de Almeida, Camila wrote:


Hello,

I was wondering if I could get some guidance from this great group. My 
issue is primarily with some diagnostic analysis, but this is taking 
me back to an old NONMEM problem.


My aim is to run a VPC on a model I implemented, and if possible 
change the idv to TAD instead of TIME. The reason for that is the VPC 
graph based on TIME looks dreadful as the data is sparse and from 
different studies of different lengths.


I’m having issues generating the TAD output column from my NONMEM run. 
I naively assumed I could easily do that, but looking at the NONMEM 
archives it seems this gets tricky when your dosing events are written 
using ADDL. Has anyone ever managed to find a solution for this? And 
if not, is there an alternative way to run the VCP on TAD, do we 
really need to get this column from NONMEM’s output?


Thanks all,

*Camila de Almeida, PhD*

PKPD Scientist,

*Modelling & Simulation, IMED Oncology DMPK*


*AstraZeneca UK Limited*

*R&D, Innovative Medicines*


Please consider the environment before printing this e-mail



AstraZeneca UK Limited is a company incorporated in England and Wales 
with registered number:03674842 and its registered office at 1 Francis 
Crick Avenue, Cambridge Biomedical Campus, Cambridge, CB2 0AA.


This e-mail and its attachments are intended for the above named 
recipient only and may contain confidential and privileged 
information. If they have come to you in error, you must not copy or 
show them to anyone; instead, please reply to this e-mail, 
highlighting the error to the sender and then immediately delete the 
message. For information about how AstraZeneca UK Limited and its 
affiliates may process information, personal data and monitor 
communications, please see our privacy notice at www.astrazeneca.com 






*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com


Re: [NMusers] Rcpp alongside NONMEM

2016-11-03 Thread Bill Denney

Hi Mike,

Similar to Itziar, I've run into near-equivalent problems.  I've not 
tested this as a solution, but my thought is to setup a specific batch 
file that runs NONMEM for me, and at the top of the batch file it 
updates the PATH variable to point to the NONMEM copy of gfortran, and 
at the end, it returns the PATH to its original state.


Currently, I don't have similar problems with my Docker setup since 
NONMEM and gfortran are within the Docker container and can't reasonably 
interact with the outside environment.


Thanks,

Bill

On 11/3/2016 1:48 PM, Itziar Irurzun Arana wrote:

Hi Mike,

I had the same problem some months ago and what I finally did was 
to update the path environment variable for the current R session only 
with the following code:


path <- "C:\\RBuildTools" #Write the path to your Rtools or 
RBuildTools folder

rtools <- paste(path, "\\bin", sep = "")
gcc <- paste(path, "\\gcc-4.6.3\\bin", sep = "")
path <- strsplit(Sys.getenv("PATH"), ";")[[1]]
new_path <- c(rtools, gcc, path)
new_path <- new_path[!duplicated(tolower(new_path))]
Sys.setenv(PATH = paste(new_path, collapse = ";"))


I always run this before using simulation tools which rely on Rcpp and 
it works. When you close your R session this information is lost, so 
you wouldn't have problems with other compilers. Then you can leave 
the NONMEM gfortran compiler in the first place on the PATH.


If you have more questions, don't hesitate to ask.

Itziar Irurzun

2016-11-03 17:42 GMT+01:00 Smith, Mike K >:


Hi,

My colleagues and I are running into problems using modelling and
simulation tools which rely on Rcpp (e.g. Stan, mrgsolve, PKPDsim)
alongside an existing NONMEM installation. The problem is that we
have TWO versions of compilers installed – one for NONMEM and one
from the Rtools set.

For NONMEM we’re using gfortran 4.5.0 installed in C:\Program
Files (x86)\gfortran\libexec\gcc\i586-pc-mingw32\4.5.0. (Running
Windows 8).

When Rtools installs  it allows me to configure where the
RBuildTools sits on the PATH, but if I put it first (as
recommended) then it stops NONMEM picking up the gfortran compiler
above, and NONMEM doesn’t run successfully. But if I put the
RBuildTools further down the path, then it doesn’t set up c++ as a
command line executable.

If anyone has experience at making these play nicely together
without having to hack the PATH, I’d be really interested to hear
from you. While hacking PATH variables is fine individually, it’s
not a recipe that’s easily rolled out and supported across an
organisation… (Although I’d still like to know how you might hack
the PATH info to get this to work!).

M

*Mike K. Smith*
/Pharmacometrics/

/Pfizer WRD, Sandwich//(IPC 096)/
Tel: +44 (0)1304 643561 

LEGAL NOTICE

Unless expressly stated otherwise, this message is confidential
and may be privileged. It is intended for the addressee(s) only.
Access to this e-mail by anyone else is unauthorised. If you are
not an addressee, any disclosure or copying of the contents of
this e-mail or any action taken (or not taken) in reliance on it
is unauthorised and may be unlawful. If you are not an addressee,
please inform the sender immediately.

Pfizer Limited is registered in England under No. 526209 with its
registered office at Ramsgate Road, Sandwich, Kent CT13 9NJ




--

Itziar Irurzun Arana, PhD Student
Department of Pharmacy and Pharmaceutical Technology
School of Pharmacy, University of Navarra
Pamplona, Spain



*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com





[NMusers] Dynamically Sized MPI NONMEM Runs?

2016-09-19 Thread Bill Denney

Hi,

I've been setting up Docker for NONMEM with NMQual and PsN 
(https://github.com/billdenney/Pharmacometrics-Docker). The docker 
containers work with MPI in a static fashion-- as an example, I can 
start a run with 4 cores, but I can't add or remove cores from the run.


MPI appears to have the ability to dynamically size jobs [1], but I 
don't see a way to do this within NONMEM.  I'd like to maximize my 
NONMEM license use by always using all the licensed cores.  I recognize 
that I can just run jobs sequentially using 4 cores each, but if one 
model takes significantly longer than the others, I'd like the others to 
go ahead and finish while the long-running job works.  Examples are:


Example 1:  I start 1 NONMEM job with 4 parallel threads, and all 
threads are in use through completion of the job (this currently works).


Example 2:  I start 4 NONMEM jobs with 1 thread each, and each job runs 
to completion (this currently works).


Example 3:  I start 4 NONMEM jobs with up to 4 threads each, but I want 
to stay within my license, so I only want 1 core used until some of the 
jobs finish.  Job 1 completes first, so I'd like to have Job 2 expand to 
2 cores.  Job 2 completes, so I'd like to have Job 3 and Job 4 expand to 
2 cores each.  Job 3 completes, so I'd like to have Job 4 expand to 4 
cores.  (This doesn't work right now.)


Example 4:  I start 3 NONMEM jobs with up to 4 threads each, but I want 
to stay within my license, so I want Job 1 to start with 2 cores and 
Jobs 3 and 4 to start with 1 core.  As jobs finish, they expand 
similarly to Example 3.


[1] 
http://www.netlib.org/utk/people/JackDongarra/WEB-PAGES/SPRING-2012/Lect05-dynamicprocesses.pdf


Thanks,

Bill


*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com


[NMusers] SAEM Problem without Convergence Reports OFV

2016-08-06 Thread Bill Denney

Hi,

I have a challenging PD model that I recently completed working on.  In 
some of the runs, I got a likelihood value when I think NONMEM should 
have aborted without providing the likelihood.  The complete story is below.


I was using SAEM, and during some of the runs, I got the following error 
at the bottom of the .lst file:


0PROGRAM TERMINATED BY FNLETA
0PRED EXIT CODE = 1
0INDIVIDUAL NO.   1   ID= 1.00E+00 (WITHIN-INDIVIDUAL) 
DATA REC NO.   2

 THETA=
   NaN        NaN        4.75E+00  -3.91E+00  -3.30E+00  -3.30E+00
   NaN        3.00E-01  -1.00E+00   0.00E+00
   NaN        NaN        NaN       -2.00E+00   0.00E+00   NaN
   NaN        NaN       -2.00E+00   0.00E+00

   NaN        0.00E+00   0.00E+00   0.00E+00   0.00E+00
 OCCURS DURING SEARCH FOR ETA AT INITIAL VALUE, ETA=0
 F OR DERIVATIVE RETURNED BY PRED IS INFINITE (INF) OR NOT A NUMBER (NAN).
0PROGRAM TERMINATED BY FNLETA
 MESSAGE ISSUED FROM TABLE STEP
 #CPUT: Total CPU Time in Seconds, 4091.372
Stop Time:
Fri Jul  1 21:32:38 UTC 2016

This is expected due to the model convergence properties.  At the point 
I ran this model during model development, I was still developing my 
parameter and calculation protections.  But, the issues is that NONMEM 
kept running to the end of the model through all SAEM burn-in and 
accumulation iterations:


 Stochastic/Burn-in Mode
 iteration-1000  SAEMOBJ=   11412.435172982327
 iteration -999  SAEMOBJ=   0.
 iteration -998  SAEMOBJ=   0.
(etc.)

The continued run could potentially be expected because my $ESTIMATION 
record specified NOABORT:


$ESTIMATION SIGL=9 NSIG=3 NOABORT METHOD=SAEM INTERACTION LAPLACIAN
NBURN=1000 CTYPE=3 GRD=TS(1-2) NITER=1 IACCEPT=0.2
ISAMPLE=5 PRINT=1 NOPRIOR=1 SEED=5 MSFO=run15.msf
$COVARIANCE PRINT=E

But the real issue is that I got a SAEM final value of the likelihood 
function which should have been NAN if reported at all:


 

  
 STOCHASTIC APPROXIMATION EXPECTATION-MAXIMIZATION (NO PRIOR)
 #OBJT:**FINAL VALUE OF LIKELIHOOD FUNCTION**

 #OBJV: 0.000

What are people's thoughts?  Is this expected because of the NOABORT (or 
another reason) or should it be updated in a future NONMEM version?


Thanks,

Bill


*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com





Re: [NMusers] Laplacian, BQL and initial estimates

2016-07-17 Thread Bill Denney

Hi Mark,


Might you have more than one $EST step where Laplacian is not used in 
one of them?  And, can you share your $EST step options?



Thanks,


Bill


On 7/17/2016 4:28 PM, Mark Sale wrote:


Question:

NONMEM has a function to run a grid search to obtain initial estimates 
for THETA. But, when I try to do this with BQL data/Laplacian I get 
this run time error message:


0PROGRAM TERMINATED BY OBJ
 WITH INDIVIDUAL 1   ID= 1.00E+00   DATA REC. 13
 WITH NMPR17, LAPLACIAN ESTIMATION METHOD MUST BE USED

It seems the Laplacian isn't used for the grid search for initial 
estimates (even though specified in $EST).


any ideas?




Mark Sale M.D.

Vice President, Modeling and Simulation

Nuventra Pharma Sciences, Inc.
2525 Meridian Parkway, Suite 280
Durham, NC 27713

Phone (919)-973-0383

ms...@nuventra.com 

CONFIDENTIALITY NOTICE The information in this transmittal (including 
attachments, if any) may be privileged and confidential and is 
intended only for the recipient(s) listed above. Any review, use, 
disclosure, distribution or copying of this transmittal, in any form, 
is prohibited except by or on behalf of the intended recipient(s). If 
you have received this transmittal in error, please notify me 
immediately by reply email and destroy all copies of the transmittal.





*William S. Denney, PhD*

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com


[NMusers] Introduction to PKNCA: Automation of Noncompartmental Analysis in R

2016-05-23 Thread Bill Denney

Hello,

In the ISoP study group webinar series, I will be giving a webinar on 
using the PKNCA R library this Wednesday at noon US Eastern time.  If 
you're interested in automation of NCA in your R-based workflow with 
open-sourced software, check it out!


Thank you,

Bill

*Title:* Introduction to PKNCA: Automation of Noncompartmental Analysis in R
*General Topic:* Introduction to PKNCA: Automation of Noncompartmental Analysis in R
*Presenter:* William S. Denney
*Date(s):* 25-May-2016
*Time:* Noon EDT
*Livelink Feed:* http://www.youtube.com/watch?v=WCmFrheYtcc
*Background Resources and Code for the Audience:* https://cran.r-project.org/web/packages/PKNCA
*Discussion During and after the Presentation:* http://discuss.go-isop.org/t/introduction-to-pknca-automation-of-noncompartmental-analysis-in-r-may-25th-noon-edt/558
*


*Background*

The PKNCA R package is designed to perform all noncompartmental analysis 
(NCA) calculations for pharmacokinetic (PK) data. The package is broadly 
separated into two parts (calculation and summary) with some additional 
housekeeping functions.


The primary and secondary goals of the PKNCA package are to 1) only give 
correct answers to the specific questions being asked and 2) automate as 
much as possible to simplify the task of the analyst. When automation 
would leave ambiguity or make a choice that the analyst may have an 
alternate preference for, it is either not used or is possible to override.
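
For anyone who wants to try it before the webinar, the basic workflow looks
roughly like this (a minimal sketch following the CRAN vignette conventions;
the example data and column names are made up, and argument details may differ
between package versions):

library(PKNCA)

d_conc <- data.frame(subject = 1, time = c(0, 1, 2, 4, 6, 24),
                     conc = c(0, 10, 8, 5, 3, 0.5))
d_dose <- data.frame(subject = 1, time = 0, dose = 100)

o_conc <- PKNCAconc(d_conc, conc ~ time | subject)
o_dose <- PKNCAdose(d_dose, dose ~ time | subject)
o_data <- PKNCAdata(o_conc, o_dose)  # intervals can also be specified explicitly
results <- pk.nca(o_data)
summary(results)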


[NMusers] Minimizing NONMEM Installation Size

2016-05-09 Thread Bill Denney

Hi,

I'm installing NONMEM 7.3.0 in a Docker container under Ubuntu Linux 
with gfortran.  To minimize network bandwidth and as a general best 
practice, I want to minimize the image size.  To do so, I deleted the 
installation media after installation is complete, and I've also deleted 
as many files as possible from the installation directory.  That change 
has cut the size of the image almost in half.


The files I deleted from the installation directory are listed below.  
Does anyone else know if I can cut more deeply (deleting more files from 
the installation directory)?


  examples/
  guides/
  help/
  html/
  *.pdf
  *.txt
  *.zip
  SETUP*
  run/*.bat
  run/*.EXE
  run/*.LNK
  run/CONTROL*
  run/DATA*
  run/REPORT*
  run/fpiwin*
  run/mpiwin*
  run/FCON
  run/FDATA
  run/FREPORT
  run/FSIZES
  run/FSTREAM
  run/FSUBS
  run/INTER
  run/garbage.out
  run/gfortran.txt
  util/*.LNK
  util/*.bat
  util/*.exe
  util/*~
  util/CONTROL*
  util/F*
  util/DATA3
  util/ERROR1
  util/INTER
  util/finish_Darwin*
  util/finish_Linux_f95
  util/finish_Linux_g95
  util/finish_SunOS*)
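
In case it helps anyone scripting the same trim (for example in a Dockerfile
build step), a rough R sketch; the directory is a placeholder and the pattern
list is abridged from the full list above:

nm_dir <- "/opt/NONMEM/nm730"  # placeholder installation directory
patterns <- c("examples", "guides", "help", "html", "*.pdf", "*.txt", "*.zip",
              "SETUP*", "run/*.bat", "run/*.EXE", "run/*.LNK", "run/CONTROL*",
              "run/DATA*", "run/REPORT*", "util/*.bat", "util/*.exe")
unlink(Sys.glob(file.path(nm_dir, patterns)), recursive = TRUE)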

Thanks,

Bill



Re: [NMusers] MMRM implementation in nonmem?

2016-04-12 Thread Bill Denney

Hi Matts,

In my experience, a typical MMRM model is made of many unconnected model 
parameters usually with an unstructured variance/covariance matrix of 
residual error and a single inter-individual parameter. It would be 
relatively painful to copy and paste everything, but the general model 
could look like this (this was typed directly into email, so it may not 
work precisely).


Thanks,

Bill

$PRED
USE11 = 0 ; Trial arm 1, time point 1
USE12 = 0 ; Trial arm 1, time point 2
USE21 = 0 ; Trial arm 2, time point 1
USE22 = 0 ; Trial arm 2, time point 2
IF (TIME .EQ. 0 .AND. ARM .EQ. 1) USE11 = 1
IF (TIME .EQ. 1 .AND. ARM .EQ. 1) USE12 = 1
IF (TIME .EQ. 0 .AND. ARM .EQ. 2) USE21 = 1
IF (TIME .EQ. 1 .AND. ARM .EQ. 2) USE22 = 1

EFFECT = USE11 * THETA(1) + USE12 * THETA(2) + USE21 * THETA(3) + USE22 * THETA(4) + ETA(1)
Y = EFFECT + USE11 * EPS(1) + USE12 * EPS(2) + USE21 * EPS(3) + USE22 * EPS(4)


$SIGMA BLOCK(4)
0.1
0.01 0.1
0.01 0.01 0.1
0.01 0.01 0.01 0.1

$THETA
1
1
1
1

$OMEGA
0.1



On 4/12/2016 7:10 PM, Matts Kågedal wrote:

Hi all,
Would any one be willing to share a nonmem control stream where mixed 
model repeated measures (MMRM) is implemented.


Best,
Matts Kagedal