Hi All,
Simon Davis - It is so exciting to see the release of Phoenix® 8.1. I will
check out its features.
Ruben Faelens - I have been using PsN for years but never thought that PsN
would help with the BOM issue. That workaround will mostly benefit our PsN
users.
Robert Bauer - This is absol
Mark:
NONMEM does not use or interpret extended UTF-8 codes, but perhaps you mean that
NONMEM could filter out bytes > 127 when reading in data files that are UTF-8
encoded, and process only bytes <= 127. This can certainly be done, and I can
add it to the list of improvements for the next release.
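In the meantime, a data file can be filtered before NONMEM ever reads it. Below
is a minimal Perl sketch of such a preprocessing filter (the standalone-script
framing and the file handling are my own illustrative assumptions, not NONMEM
code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Strip a UTF-8 byte order mark and any other bytes > 127 from a
    # data file before handing it to NONMEM. Operates on raw bytes.
    my ($in, $out) = @ARGV;
    die "usage: filter.pl IN OUT\n" unless defined $in && defined $out;
    open my $fh_in,  '<:raw', $in  or die "Cannot open $in: $!";
    open my $fh_out, '>:raw', $out or die "Cannot open $out: $!";

    my $first = 1;
    while (my $line = <$fh_in>) {
        # The UTF-8 BOM is the byte sequence EF BB BF at the start of the file.
        $line =~ s/^\xEF\xBB\xBF// if $first;
        $first = 0;
        # Delete any remaining bytes above 127 (non-ASCII).
        $line =~ tr/\x00-\x7F//cd;
        print {$fh_out} $line;
    }
    close $fh_in;
    close $fh_out;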
I take the post-hoc PK parameter table, read it into SAS, and execute the ODE
model with small time steps using PROC MODEL. I then sort the simulated results
by concentration in descending order and select the first observation, which
gives Cmax and Tmax.
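The selection step itself is simple; here is a rough Perl sketch of the same
idea, with placeholder times and concentrations standing in for the PROC MODEL
output (the numbers below are assumptions, not real results):

    use strict;
    use warnings;

    # Given simulated (time, concentration) pairs from a fine-grid ODE
    # run, sort by concentration in descending order and keep the first
    # record; its fields are Tmax and Cmax.
    my @profile = (
        [0.0, 0.00], [0.5, 3.10], [1.0, 5.40],
        [1.5, 5.90], [2.0, 5.60], [4.0, 3.80],
    );

    my ($best) = sort { $b->[1] <=> $a->[1] } @profile;
    my ($tmax, $cmax) = @{$best};
    printf "Cmax = %.2f at Tmax = %.2f\n", $cmax, $tmax;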
Precision is based solely on the number of time points in the DATA step. The
table o
Mark, et al., I’ve been following this conversation on NM Users with interest.
When you asked about a more user-friendly interface, I wondered if you had ever
looked at Phoenix WinNonlin’s NLME module and its capabilities. I can’t send
you the attachments on the NM user list, but as we just rel
Dear Mark,
It is definitely possible to add functionality in PsN to remove the BoM from
a CSV file automatically.
I suggest you add it in lib/tool/modelfit.pm on line #2934 (sub
copy_model_and_input; search for invocations of the cp() function).
Either you program something manually to detect a BoM,
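A manual check along those lines might look like the following sketch
(illustrative only, not actual PsN code; the sub name strip_bom_if_present is
my own invention):

    use strict;
    use warnings;

    # Peek at the first three bytes of the copied CSV and rewrite the
    # file without them if they match the UTF-8 BOM (EF BB BF).
    sub strip_bom_if_present {
        my ($file) = @_;
        open my $fh, '<:raw', $file or die "Cannot open $file: $!";
        read $fh, my $head, 3;
        return 0 unless defined $head && $head eq "\xEF\xBB\xBF";
        local $/;                                # slurp the remainder
        my $rest = <$fh>;
        close $fh;
        open my $out, '>:raw', $file or die "Cannot write $file: $!";
        print {$out} defined $rest ? $rest : '';
        close $out;
        return 1;                                # BoM found and removed
    }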