On 4/7/20 11:47 AM, Daniel Burns wrote:
Do you also have an "md10.cpt" in addition to "md10_prev.cpt"? If so,
replace "md10_prev.cpt" in your command with the md10.cpt file. If you don't
have it, try renaming your "_prev.cpt" file to "md10.cpt".
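As a sketch, the corrected restart per the advice above would point -cpi at the current checkpoint rather than the _prev backup (file names are the ones from this thread):

```shell
# Restart from the current checkpoint (md10.cpt), not the backup
# (md10_prev.cpt), which mdrun keeps only as a fallback copy.
mpirun gmx_mpi mdrun -s md10.tpr -cpi md10.cpt -append
```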
Dan
Hi,
Have you read and tried what mdrun tells you, e.g. about the -deffnm option?
In recent versions of GROMACS, if you do not want to append the output to the
old output files, use the -noappend option.
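As an illustration of what -noappend does to the file names (the part-numbering convention below is an assumption based on GROMACS's usual behaviour; the first run's files carry no part suffix), each restart writes new files instead of appending:

```shell
# Names that successive "mdrun -noappend" restarts would produce for the
# log file; trajectory and energy files get the same .partNNNN suffix.
for part in 2 3 4; do
  printf 'md10.part%04d.log\n' "$part"
done
```

Because each restart gets its own files, mdrun never has to open or lock the previous run's output.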
On Mon, Apr 6, 2020 at 6:06 PM Sadaf Rani wrote:
Dear Gromacs users
I am restarting a simulation with the following command:
mpirun gmx_mpi mdrun -s md10.tpr -cpi md10_prev.cpt -append
However, I am getting the following error message. All the files named below
are present in my directory, but it still complains:
Inconsistency in user input:
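When mdrun rejects an append like this, one way to investigate (a sketch; requires a GROMACS install) is to dump the checkpoint and compare the output file names it recorded against the files actually in the directory:

```shell
# Show the checkpoint header, including the output files it expects
# to append to; mismatches with the directory contents cause this error.
gmx dump -cp md10_prev.cpt | head -n 40
```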
On 7. Jan 2019, at 11:55, morpheus wrote:
Hi,
I am running simulations on a cluster that terminates jobs after a hard
wall clock time limit. Normally this is not a problem as I just restart the
simulations using -cpi state.cpt but for the last batch of simulations I
got (for most but not all of them) the error:
"Failed to lock: 1uao.md.l
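When appending, mdrun takes a lock on the old log file, which can fail on some cluster filesystems after a hard kill. A common workaround (a sketch; topol.tpr is a placeholder name, state.cpt is from the message above) is to restart without appending:

```shell
# Restarting with -noappend avoids locking the old log file entirely;
# output goes to fresh .partNNNN files that can be concatenated later.
mpirun gmx_mpi mdrun -s topol.tpr -cpi state.cpt -noappend
```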
Excellent! Thanks Mark! Yeah, already had a discussion with the sysadmin of
this particular system; it was definitely not something that should happen.
Cheers!
David
On 03/17/2017 10:11 AM, Mark Abraham wrote:
Hi,
On Fri, Mar 17, 2017 at 6:00 PM David Dotson wrote:
Greetings,
I have a simulation that has been running for a long time, with many trajectory
segments (counting up to about 190). One of the segments ran on a cluster that
experienced a filesystem outage such that some of the files for that run were
corrupted, including its checkpoint files (both