The only real way to troubleshoot this kind of problem is for someone
here to run your system on their own PC and see the problem with their
own eyes.
As no one has confirmed the same issue as yours, it is most likely that
the cause of the problem lies outside the GROMACS code. Either something is
wrong with yo
Run scripts and log files would be a good start!
On Feb 27, 2014 1:39 PM, "Marcelo Depólo" wrote:
Dear Dr,
Which details or files do you need? I would be very happy to help resolve
this question by posting any files that you request.
2014-02-23 22:21 GMT+01:00 Dr. Vitaly Chaban :
You have not provided all the details. As was pointed out at the very
beginning, most likely you have incorrect parallelism in this case.
Can you post all the files you obtain, for people to inspect?
Dr. Vitaly V. Chaban
On Sun, Feb 23, 2014 at 9:04 PM, Marcelo Depólo wrote:
Justin, as far as I can tell, the next log file starts at 0 ps, which would
mean that it is re-starting for some reason. At first, I imagined that it
was only splitting the data among files due to some kind of size limit, as
you said, but when I tried to concatenate the trajectories, it gives me a
no
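For reference, GROMACS 4.6 ships trjcat for joining trajectory pieces. The following is only a command fragment with placeholder file names; the '#'-wrapped backup files would need quoting or renaming first:

```shell
# Join the pieces in time order; -settime prompts for each piece's start
# time, which matters here because the restarted pieces begin at 0 ps.
# part1.trr etc. are placeholders for the renamed backup files.
trjcat -f part1.trr part2.trr part3.trr -o whole.trr -settime
```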
OK, that rules that problem out, but please don't simplify and approxim
On Sun, Feb 23, 2014 at 6:48 PM, Marcelo Depólo wrote:
Justin, the other runs with the very same binary do not produce the same
problem.
Mark, I just omitted the _mpi of the line here, but it was compiled as _mpi.
My log file top:
Gromacs version:    VERSION 4.6.1
Precision:          single
Memory model:       64 bit
MPI library:
On 2/23/14, 12:10 PM, Marcelo Depólo wrote:
Pretty sure. I ran other simulations in the same system and they worked just
fine.
About the frames, each file contains a different number of frames, apparently
random (one file contains 400 ns of data and another contains 10 ns).
Normally an MPI-enabled mdrun would be named mdrun_mpi, and running a
non-MPI mdrun would produce symptoms like yours, depending on exactly how
your filesystem chooses to do things, so Justin and Vitaly's theory is sound.
Look at the top section of your .log file to see what mdrun thinks about MPI!
Mark
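Mark's check can be scripted. The snippet below recreates the log-header excerpt posted elsewhere in this thread and filters the identifying lines; on a real run you would replace the here-doc with `head -n 30 prt.log` (the log file name is an assumption):

```shell
# Header excerpt as posted in this thread; on a real machine, replace the
# here-doc with:  head -n 30 prt.log
cat <<'EOF' > log_top.txt
Gromacs version:    VERSION 4.6.1
Precision:          single
Memory model:       64 bit
MPI library:
EOF

# A real MPI build names its MPI library on that line; a serial or
# thread-MPI build typically reports "none" or "thread_mpi" instead.
grep -i -E 'version|precision|mpi library' log_top.txt
```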
On 2/23/14, 11:32 AM, Marcelo Depólo wrote:
Maybe I should explain it better.
I am using "*mpirun -np 24 mdrun -s prt.tpr -e prt.edr -o prt.trr*", pretty
much a standard line. This job in a batch creates the outputs and, after
some (random) time, a backup is made and new files are written, but the
job itself does not finish.
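Justin and Vitaly's theory (several independent serial mdrun copies all writing the same file names) can be mimicked in plain shell. This is an illustrative sketch, not GROMACS code; it reimplements the backup-before-overwrite rule, and the file name prt.edr comes from the command line above:

```shell
#!/bin/sh
# Mimic GROMACS's backup rule: an existing "prt.edr" is renamed to
# "#prt.edr.N#" (lowest free N) before a new one is written. Several
# independent serial processes writing the same names do exactly this to
# each other, producing the numbered files described in the thread.
backup_and_write() {
    name=$1
    if [ -f "$name" ]; then
        n=1
        while [ -f "#$name.$n#" ]; do
            n=$((n + 1))
        done
        mv "$name" "#$name.$n#"
    fi
    echo "energy data" > "$name"
}

# Three "processes" writing in sequence:
for i in 1 2 3; do
    backup_and_write prt.edr
done
ls '#prt.edr.'*'#'    # prints #prt.edr.1# and #prt.edr.2#
```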
2014-02-23 17:54 GMT+01:00 Dr. Vitaly Chaban :
Are you sure that your binary is parallel?
How many frames do those trajectory files contain?
Dr. Vitaly V. Chaban
On 2/23/14, 11:00 AM, Marcelo Depólo wrote:
But it is not quite happening simultaneously, Justin.
It is producing them one after another and, consequently, backing up the files.
You'll have to provide the exact commands you're issuing. Likely you're leaving
the output names to the default, whi
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://ww
On 2/23/14, 10:43 AM, Marcelo Depólo wrote:
Hey,
I am running this 1000 ns simulation, but for some reason mdrun is backing up
the data into multiple files (.edr.1# - .edr.9#, for instance).
Is this normal behavior?
No, that means rather than launching a parallel mdrun process, you're running