Hi all,

I get the error message below when extending a simulation with the following command:

mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt -append

The previous part of the simulation finished successfully. I wonder why pre.log cannot be locked, and what the strange "Function not implemented" message means.
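
If the cause is that the filesystem I am writing to does not support the locking call mdrun uses when appending (my guess, since "Function not implemented" usually comes from an unsupported system call), would restarting without appending be a reasonable workaround? For example:

mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt -noappend

As far as I understand, with -noappend the continuation output goes to new part-numbered files (e.g. pre.part0002.log) instead of being appended to the existing ones.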

Any suggestions are appreciated!

*********************************************************************
Getting Loaded...
Reading file pre.tpr, VERSION 4.5.3 (single precision)

Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010

-------------------------------------------------------
Program mdrun, VERSION 4.5.3
Source code file: checkpoint.c, line: 1750

Fatal error:
*Failed to lock: pre.log. Function not implemented.*
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

"It Doesn't Have to Be Tip Top" (Pulp Fiction)

Error on node 0, will try to stop all the nodes
Halting parallel program mdrun on CPU 0 out of 64

gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpiexec has exited due to process rank 0 with PID 32758 on
