Re: [gmx-users] restarting a simulation

2020-04-07 Thread Justin Lemkul
On 4/7/20 11:47 AM, Daniel Burns wrote: Do you also have an "md10.cpt" in addition to the "md10_prev.cpt"? If so, use the md10.cpt file in your command instead. If you don't have it, try renaming your "_prev.cpt" file to just "md10.cpt". The issue is not with the naming of the ...
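A minimal sketch of the restart being suggested, assuming an intact md10.cpt sits next to md10_prev.cpt in the working directory (file names taken from the thread):

  # point -cpi at the regular checkpoint rather than the _prev backup;
  # md10_prev.cpt is only the previous snapshot kept in case md10.cpt is damaged
  mpirun gmx_mpi mdrun -s md10.tpr -cpi md10.cpt -append

With -append, mdrun also expects the output files recorded in the checkpoint to be present under the same names, which appears to be what the truncated "Inconsistency in user input" error in the original report below is complaining about.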

Re: [gmx-users] restarting a simulation

2020-04-07 Thread Daniel Burns
Do you also have an "md10.cpt" in addition to the "md10_prev.cpt"? If so, use the md10.cpt file in your command instead. If you don't have it, try renaming your "_prev.cpt" file to just "md10.cpt". Dan. On Mon, Apr 6, 2020 at 11:06 AM Sadaf Rani wrote: > Dear Gromacs users > I am ...

Re: [gmx-users] restarting a simulation

2020-04-07 Thread Quyen V. Vu
Hi, Have you read and tried what mdrun tells you, such as the -deffnm option? In recent versions of GROMACS, if you do not want to append the output to the old output files, use the -noappend option. On Mon, Apr 6, 2020 at 6:06 PM Sadaf Rani wrote: > Dear Gromacs users > I am restarting a simulation with the fol...
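A sketch of the alternative Quyen mentions, assuming the run keeps the md10.* names from the thread; -noappend makes mdrun write new, part-numbered output files instead of appending to the old ones:

  # start from the checkpoint but write md10.part0002.* output files
  # rather than locking and appending to the existing outputs
  mpirun gmx_mpi mdrun -s md10.tpr -cpi md10.cpt -deffnm md10 -noappend

The part-numbered segments can later be joined with gmx trjcat (and gmx eneconv for energy files) if a single continuous trajectory is needed.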

[gmx-users] restarting a simulation

2020-04-06 Thread Sadaf Rani
Dear Gromacs users, I am restarting a simulation with the following command: mpirun gmx_mpi mdrun -s md10.tpr -cpi md10_prev.cpt -append However, I am getting the following error message. All the files named below are present in my directory, but it still complains the same. Inconsistency in user input: ...
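One way to see what the checkpoint actually expects (a sketch, assuming the gmx dump tool is available in the installation; the full error text is truncated above):

  # print the checkpoint contents, including the simulation step/time and the
  # output files it was written against, which is what appending is checked against
  gmx dump -cp md10_prev.cpt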

Re: [gmx-users] Restarting a simulation: failed to lock the log file

2019-01-07 Thread Kutzner, Carsten
> On 7. Jan 2019, at 11:55, morpheus wrote: > > Hi, > > I am running simulations on a cluster that terminates jobs after a hard > wall clock time limit. Normally this is not a problem as I just restart the > simulations using -cpi state.cpt, but for the last batch of simulations I > got (for m...

[gmx-users] Restarting a simulation: failed to lock the log file

2019-01-07 Thread morpheus
Hi, I am running simulations on a cluster that terminates jobs after a hard wall clock time limit. Normally this is not a problem as I just restart the simulations using -cpi state.cpt, but for the last batch of simulations I got (for most but not all of them) the error: "Failed to lock: 1uao.md.l...
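A common workaround when the log cannot be locked (a sketch with illustrative file names, since the replies in this thread are truncated) is to restart without appending, so mdrun writes a fresh part-numbered log instead of taking a lock on the old one:

  # hypothetical names; -noappend avoids locking the existing .log and
  # writes e.g. md.part0002.log / md.part0002.xtc next to the old files
  gmx mdrun -s md.tpr -cpi state.cpt -deffnm md -noappend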

Re: [gmx-users] Restarting a simulation when checkpoint files are corrupted

2017-03-17 Thread David Dotson
Excellent! Thanks, Mark! Yeah, I already had a discussion with the sysadmin of this particular system; it was definitely not something that should happen. Cheers! David On 03/17/2017 10:11 AM, Mark Abraham wrote: > Hi, > On Fri, Mar 17, 2017 at 6:00 PM David Dotson wrote: >> Greetings, ...

Re: [gmx-users] Restarting a simulation when checkpoint files are corrupted

2017-03-17 Thread Mark Abraham
Hi, On Fri, Mar 17, 2017 at 6:00 PM David Dotson wrote: > Greetings, > > I have a simulation that has been running for a long time, with many > trajectory segments (counting up to about 190). One of the segments ran on > a cluster that experienced a filesystem outage such that some of the files

[gmx-users] Restarting a simulation when checkpoint files are corrupted

2017-03-17 Thread David Dotson
Greetings, I have a simulation that has been running for a long time, with many trajectory segments (counting up to about 190). One of the segments ran on a cluster that experienced a filesystem outage such that some of the files for that run were corrupted, including its checkpoint files (both ...
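If both the checkpoint and its _prev backup turn out to be unreadable, one recovery path (a sketch under assumed file names, not necessarily what was advised in the truncated reply above) is to rebuild a run input from the last intact full-precision trajectory frame and continue from there:

  # grompp can take coordinates and velocities from a full-precision
  # trajectory (-t); -time selects the frame at or after the given time in ps
  gmx grompp -f md.mdp -c conf.gro -p topol.top -t traj.trr -time 95000 -o md_continue.tpr
  gmx mdrun -s md_continue.tpr -deffnm md_continue

Note that this is a new run rather than an exact continuation, so the resulting segments would need to be concatenated afterwards (e.g. with gmx trjcat) and the step/time accounting checked by hand.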