Alas, it was a long shot, but good to know it won't work. Thanks Ron/Erik. I also wouldn't have thought I/O would outstrip CPU, given how computationally intensive NONMEM runs are.
On Mon, Jun 22, 2015 at 3:30 PM Ron Keizer <[email protected]> wrote:

>> The problem with those solutions is they don't actually address the I/O
>> concern.
>
> Yes, only ICON can address disk I/O; the solutions from Erik and me
> concern the file size. It really doesn't increase I/O relevantly.
>
>> Bill, a little bit of an out-there solution (maybe) could be to create a
>> dummy file
>
> Won't work: NONMEM will crash when blocking file access.
>
> Ron
>
>> Devin
>>
>> On Mon, Jun 22, 2015 at 2:42 PM <[email protected]> wrote:
>>
>>> Dear Bill and Ron,
>>>
>>> I was thinking about:
>>>
>>> $ ln -s /dev/null <control_stream>.log
>>>
>>> (after deleting the log if one already exists)
>>>
>>> Data written to the log file is actually written to the null device,
>>> which discards the data; two tests seem to indicate that it could work.
>>>
>>> Best regards,
>>>
>>> Erik
>>> ------------------------------
>>> From: [email protected] [[email protected]] on
>>> behalf of Ron Keizer [[email protected]]
>>> Sent: Monday, June 22, 2015 7:32 PM
>>> To: Bill Gillespie
>>> Cc: [email protected]
>>> Subject: Re: [NMusers] Humongous log file with parallel NONMEM
>>>
>>> Hi Bill,
>>>
>>> a simple hack is to delete the file continually, by running the
>>> following command before you start NONMEM:
>>>
>>> watch -n 60 'find . -name *.log -delete' &>/dev/null &
>>>
>>> Notes:
>>> - will delete all log files every minute
>>> - I'm using 'find ...' instead of just 'rm *.log' here to ensure that
>>>   log files in subfolders will also be deleted, e.g. useful when running via PsN
>>> - run the command only once; the watch process will stay active
>>> - only works on Linux
>>>
>>> best regards,
>>> Ron
>>>
>>> ----------------------------------------------
>>> Ron Keizer, PharmD PhD
>>> Pirana Software & Consulting BV
>>> California / the Netherlands
>>> www.pirana-software.com
>>> ----------------------------------------------
>>>
>>> On Mon, Jun 22, 2015 at 6:22 AM, Bill Gillespie <[email protected]> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I'm running NONMEM (METHOD = BAYES) in parallel on 32 cores and it
>>>> generates a humongous log file with repeated entries like the following:
>>>>
>>>> ITERATION -577
>>>> STARTING SUBJECTS 1 TO 4 ON MANAGER: OK
>>>> STARTING SUBJECTS 5 TO 8 ON WORKER1: OK
>>>> STARTING SUBJECTS 9 TO 11 ON WORKER2: OK
>>>> STARTING SUBJECTS 12 TO 15 ON WORKER3: OK
>>>> STARTING SUBJECTS 16 TO 18 ON WORKER4: OK
>>>> STARTING SUBJECTS 19 TO 20 ON WORKER5: OK
>>>> STARTING SUBJECTS 21 TO 24 ON WORKER6: OK
>>>> STARTING SUBJECTS 25 TO 27 ON WORKER7: OK
>>>> STARTING SUBJECTS 28 TO 29 ON WORKER8: OK
>>>> STARTING SUBJECTS 30 TO 32 ON WORKER9: OK
>>>> STARTING SUBJECTS 33 TO 35 ON WORKER10: OK
>>>> STARTING SUBJECTS 36 TO 39 ON WORKER11: OK
>>>> STARTING SUBJECTS 40 TO 42 ON WORKER12: OK
>>>> STARTING SUBJECTS 43 TO 46 ON WORKER13: OK
>>>> STARTING SUBJECTS 47 TO 50 ON WORKER14: OK
>>>> STARTING SUBJECTS 51 TO 53 ON WORKER15: OK
>>>> STARTING SUBJECTS 54 TO 58 ON WORKER16: OK
>>>> STARTING SUBJECTS 59 TO 62 ON WORKER17: OK
>>>> STARTING SUBJECTS 63 TO 66 ON WORKER18: OK
>>>> STARTING SUBJECTS 67 TO 70 ON WORKER19: OK
>>>> STARTING SUBJECTS 71 TO 71 ON WORKER20: OK
>>>> STARTING SUBJECTS 72 TO 74 ON WORKER21: OK
>>>> STARTING SUBJECTS 75 TO 77 ON WORKER22: OK
>>>> STARTING SUBJECTS 78 TO 80 ON WORKER23: OK
>>>> STARTING SUBJECTS 81 TO 84 ON WORKER24: OK
>>>> STARTING SUBJECTS 85 TO 86 ON WORKER25: OK
>>>> STARTING SUBJECTS 87 TO 88 ON WORKER26: OK
>>>> STARTING SUBJECTS 89 TO 90 ON WORKER27: OK
>>>> STARTING SUBJECTS 91 TO 93 ON WORKER28: OK
>>>> STARTING SUBJECTS 94 TO 96 ON WORKER29: OK
>>>> STARTING SUBJECTS 97 TO 99 ON WORKER30: OK
>>>> STARTING SUBJECTS 100 TO 103 ON WORKER31: OK
>>>> COLLECTING SUBJECTS 1 TO 4 ON MANAGER
>>>> COLLECTING SUBJECTS 5 TO 8 ON WORKER1
>>>> COLLECTING SUBJECTS 9 TO 11 ON WORKER2
>>>> COLLECTING SUBJECTS 12 TO 15 ON WORKER3
>>>> COLLECTING SUBJECTS 16 TO 18 ON WORKER4
>>>> COLLECTING SUBJECTS 19 TO 20 ON WORKER5
>>>> COLLECTING SUBJECTS 21 TO 24 ON WORKER6
>>>> COLLECTING SUBJECTS 25 TO 27 ON WORKER7
>>>> COLLECTING SUBJECTS 28 TO 29 ON WORKER8
>>>> COLLECTING SUBJECTS 30 TO 32 ON WORKER9
>>>> COLLECTING SUBJECTS 33 TO 35 ON WORKER10
>>>> COLLECTING SUBJECTS 36 TO 39 ON WORKER11
>>>> COLLECTING SUBJECTS 40 TO 42 ON WORKER12
>>>> COLLECTING SUBJECTS 43 TO 46 ON WORKER13
>>>> COLLECTING SUBJECTS 47 TO 50 ON WORKER14
>>>> COLLECTING SUBJECTS 51 TO 53 ON WORKER15
>>>> COLLECTING SUBJECTS 54 TO 58 ON WORKER16
>>>> COLLECTING SUBJECTS 59 TO 62 ON WORKER17
>>>> COLLECTING SUBJECTS 63 TO 66 ON WORKER18
>>>> COLLECTING SUBJECTS 67 TO 70 ON WORKER19
>>>> COLLECTING SUBJECTS 71 TO 71 ON WORKER20
>>>> COLLECTING SUBJECTS 72 TO 74 ON WORKER21
>>>> COLLECTING SUBJECTS 75 TO 77 ON WORKER22
>>>> COLLECTING SUBJECTS 78 TO 80 ON WORKER23
>>>> COLLECTING SUBJECTS 81 TO 84 ON WORKER24
>>>> COLLECTING SUBJECTS 85 TO 86 ON WORKER25
>>>> COLLECTING SUBJECTS 87 TO 88 ON WORKER26
>>>> COLLECTING SUBJECTS 89 TO 90 ON WORKER27
>>>> COLLECTING SUBJECTS 91 TO 93 ON WORKER28
>>>> COLLECTING SUBJECTS 94 TO 96 ON WORKER29
>>>> COLLECTING SUBJECTS 97 TO 99 ON WORKER30
>>>> COLLECTING SUBJECTS 100 TO 103 ON WORKER31
>>>>
>>>> The result is a lot of disk I/O and a file in the GB+ range. It
>>>> dwarfs the file containing the MCMC samples. Is there some way to suppress
>>>> that file or reduce what gets written to it?
>>>>
>>>> Thanks,
>>>> Bill
>>>>
>>>> William R Gillespie, VP Strategic Modeling & Simulation
>>>> Metrum Research Group LLC
>>>> 2 Tunxis Road, Tariffville, CT 06081
>>>> Direct & FAX: 919-371-2786, Main: 860-735-7043
>>>> [email protected]
>>>> www.metrumrg.com
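Erik's `/dev/null` symlink suggestion above can be scripted ahead of a run. A minimal sketch, assuming the log file is named after the control stream (`run1.log` here is a hypothetical stand-in for `<control_stream>.log`); he notes only that two quick tests suggest it works, so treat it as an experiment rather than a supported option:

```shell
# Redirect the NONMEM log to the null device (Erik's trick above).
# 'run1.log' is a stand-in for your <control_stream>.log name.
cd "$(mktemp -d)"          # scratch directory just for this demo
log=run1.log

rm -f "$log"               # delete any existing log first
ln -s /dev/null "$log"     # subsequent writes are silently discarded

# Demonstration: data written through the symlink vanishes.
echo "ITERATION -577" >> "$log"
[ -s "$log" ] || echo "log stayed empty"
```

Unlike the periodic-deletion hack, nothing ever hits the disk, so this also avoids the I/O itself rather than just the accumulated file size.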
