On 12/24/12 11:54 AM, Ankita naithani wrote:
Hi again,

Sorry for the repeated email.

When I ran the mdrun command, I got an error of

"Attempting to read a checkpoint file of version 13 with code of version 12"

Can anyone please help me with this error too?


This means you are not using the same version of the code as the previous run; the checkpoint file was written by a newer version of mdrun than the one that is now trying to read it.
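
You can confirm which version each installation is with the version flag (this should work on any 4.x mdrun):

mdrun -version

If the versions differ, continuing with the same (or a newer) build than the one that wrote the checkpoint is the straightforward fix.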

On Mon, Dec 24, 2012 at 4:35 PM, Ankita naithani
<ankitanaith...@gmail.com> wrote:
Hi,

I am running a protein simulation for 70 ns. I had an MPI run, but due
to my time constraints on the server, it stopped after 28 ns. Now I
want to continue the simulation from the same point, up until I use up
my server time.

I just wanted to confirm that there are two checkpoint files written,
md.cpt and md_prev.cpt. It would be really helpful if anyone could
advise as to which file would be better to choose to continue the
simulation.


Checkpoint files are written every 15 minutes by default and the names are recycled between (prefix).cpt and (prefix)_prev.cpt to indicate the most recently saved state and the previous one. Either can be used for continuing the stopped run, but the current state is most advantageous to avoid wasted time. The previous state is saved as a backup in case of a corrupted checkpoint file or frame in the trajectory that would require starting from a previous point.
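
If you want to see what state each file actually holds before restarting, gmxdump can print the checkpoint contents (assuming a 4.x installation; the step and time should appear near the top of the output):

gmxdump -cp md.cpt | head
gmxdump -cp md_prev.cpt | head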

Also, I wanted a confirmation that if I use:

mdrun -s topol.tpr -cpi md.cpt -append

Do I also need to add -deffnm md?


That depends entirely upon the file names present and what the original invocation of mdrun was.
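
For example, if the original run was started with -deffnm md (an assumption here), the continuation should use the same naming so that mdrun finds and appends to the existing output files:

mdrun -deffnm md -cpi md.cpt -append

If the original run instead named its outputs with explicit options (-o, -x, -e, -g, etc.), repeat those same options on the restart so the file names match what is recorded in the checkpoint.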

And if I run mdrun, would it then continue from, say, 28 ns up until
the time specified in the .mdp file? The reason I wanted to confirm
this is that before submitting it to the server, I ran it on my local
machine, and the log file shows step 0 and Time 0.0000. Does that mean
it is starting the simulation from scratch? I had expected it to show
the step from wherever it exited last and continue from there.


The run will start from scratch if the checkpoint file cannot be found or read for whatever reason. If you are trying to run on a local machine with a different number of processors, for instance, the checkpoint state will not be the same, so the run will start over again.
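
One quick check after a restart attempt is to search the new log for checkpoint messages (assuming the log is named md.log); mdrun normally notes when it reads a checkpoint:

grep -i checkpoint md.log

If there is no mention of the checkpoint being read and the output starts at step 0, the .cpt file was not picked up.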

Note that most of this information is online for quick reference:

http://www.gromacs.org/Documentation/How-tos/Doing_Restarts#Version_4.x

-Justin

--
========================================

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
