Re: [gmx-users] Problem using MPIRUN MDRUN

2009-02-16 Thread Mark Abraham

fpri...@nimr.mrc.ac.uk wrote:

I did not stop the simulation manually, and it did not give a segmentation fault 
message but something else that I have never seen before (and, to be honest, I 
still do not understand the error message).
Now I'm running the dynamics on my own machine instead of the cluster, and there 
are no problems at all. This means that the problem is not related to my protein 
or to the simulation parameters; it must be somewhere else, in the mpirun command 
line or something like that.


OK, so there's probably some problem with file system availability on 
the cluster, and/or returning output files to the user. We can't help 
you there.


This sort of detail and differential diagnosis would have been a good 
thing to say the first time you described your problem. See the final 
link here - http://wiki.gromacs.org/index.php/Support



grompp_mpi_d -f md.mdp -c file_pr.gro -p file.top -o file.tpr -np 32
mpirun -np 32 /dms/prog/bin/mdrun_mpi_s -s file.tpr -o file.trr -c file.gro  
-np 32


This looks like asking for trouble if _s and _d are your suffixes for 
single and double precision: don't mix them. If you've simply retyped them 
incorrectly here, then that's not helpful information for us. Computers are 
literal, and we can only help you work out what you've told the machine wrongly 
if you tell us the same thing, so write a script and/or copy-paste your commands 
so that your runs are reproducible.
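
For example, something like this minimal sketch (assuming the double-precision 
_d binaries from your grompp line and an mpirun that accepts -np; adjust names 
and paths to your setup):

 #!/bin/sh
 # Run grompp and mdrun with matching (double-precision) binaries,
 # and capture stdout/stderr so the output can be posted later.
 NP=32
 grompp_mpi_d -f md.mdp -c file_pr.gro -p file.top -o file_md.tpr -np $NP
 mpirun -np $NP /dms/prog/bin/mdrun_mpi_d -s file_md.tpr -o file_md.trr \
   -c file_md.gro -np $NP > md.out 2> md.err

Then you can post the exact script plus md.out and md.err, and everyone is 
looking at the same thing.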


Mark


[gmx-users] Problem using MPIRUN MDRUN

2009-02-13 Thread fprisch
Dear users,

I'm trying to run GROMACS v3.3.3 on a cluster of 4 machines with 8 CPUs each. 
I've checked the mailing list and the manual to find the correct way to write 
the command line, and I've found that the commands listed below are supposed to 
be correct and to work.
 grompp_mpi_d -f md.mdp -c file_pr.gro -p file.top -o file_md.tpr -np 32
 mpirun -np 32 /dms/prog/bin/mdrun_mpi_d -s file_md.tpr -o file_md.trr -c 
 file_md.gro -np 32

The dynamics starts, but after a few minutes it crashes and I get this error 
message:

One of the processes started by mpirun has exited with a nonzero exit code.  
This typically indicates that the process finished in error. If your process 
did not finish in error, be sure to include a return 0 or exit(0) in your C 
code before exiting the application.
PID 24320 failed on node n0 (195.195.124.134) with exit status 1.

Can anyone help me sort out this problem? Does anyone know what I'm doing wrong?

Thanks a lot,

Filippo


RE: [gmx-users] Problem using MPIRUN MDRUN

2009-02-13 Thread osmair oliveira

Hi Filippo,

Have you checked your trajectory (*.trr or *.xtc) and *.log?
Your system may have 'exploded'...
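
For example (a rough sketch; substitute whatever file names your run actually 
produced):

 gmxcheck -f file_md.trr    # sanity-check the frames written so far
 tail -n 50 *.log           # last reported energies and warnings
 ls step*.pdb               # mdrun writes these when it detects a problem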

Osmair


 Date: Fri, 13 Feb 2009 13:27:09 +
 From: fpri...@nimr.mrc.ac.uk
 To: gmx-users@gromacs.org
 Subject: [gmx-users] Problem using MPIRUN MDRUN
 

_
Windows Live Messenger. O melhor em multitarefa.
http://www.microsoft.com/windows/windowslive/products/messenger.aspx___
gmx-users mailing listgmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] Problem using MPIRUN MDRUN

2009-02-13 Thread fprisch
Thanks a lot, I'll check stderr and stdout, because I think the log file is fine 
(it looks like a normal run that was interrupted manually). The system has not 
exploded, since the run generated neither a .gro file nor any step files.
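
(When I rerun it I'll capture both streams, something like this, which is just 
my guess at the right redirection for our queue:

 mpirun -np 32 /dms/prog/bin/mdrun_mpi_d -s file_md.tpr -o file_md.trr -c file_md.gro -np 32 > md.out 2> md.err

and then look at md.out and md.err.)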


Re: [gmx-users] Problem using MPIRUN MDRUN

2009-02-13 Thread Mark Abraham

fpri...@nimr.mrc.ac.uk wrote:

Thanks a lot, I'll check stderr and stdout, because I think the log file is fine 
(it looks like a normal run that was interrupted manually).


If you've manually interrupted the simulation, then you cannot expect 
the buffered I/O to be correctly formed.



The system has not exploded, since the run generated neither a .gro file nor any 
step files.


If there's no final coordinate file (.gro by default), then the 
simulation did not complete correctly.


Mark


[gmx-users] problem with mpirun

2008-09-02 Thread huifang liu
Hi, Gromacs users,

   This command runs normally:

 grompp_ompi -np 6 -f pr_10_200.mdp -c after_em_newton_1000.gro -p all.top -o pr_10_100.tpr -po mdrun_pr_10_100.mdp

But when I run the next command:

 mpirun -np 6 mdrun_ompi -s pr_10_100.tpr -c after_pr_10_100.gro -o after_pr_10_100.trr -e after_pr_10_100.edr -g after_pr_10_100.log -v

it gives the following output:

Wrote pdb files with previous and current coordinates
step 0
[node1:13598] *** Process received signal ***
[node1:13598] Signal: Segmentation fault (11)
[node1:13598] Signal code: Address not mapped (1)
[node1:13598] Failing at address: 0x2cc3f38
[node1:13600] *** Process received signal ***
[node1:13600] Signal: Segmentation fault (11)
[node1:13600] Signal code: Address not mapped (1)
[node1:13600] Failing at address: 0x6063518
[node1:13602] *** Process received signal ***
[node1:13602] Signal: Segmentation fault (11)
[node1:13602] Signal code: Address not mapped (1)
[node1:13602] Failing at address: 0xc6519968
[node1:13599] *** Process received signal ***
[node1:13599] Signal: Segmentation fault (11)
[node1:13599] Signal code: Address not mapped (1)
[node1:13599] Failing at address: 0x55dabc8
[node1:13601] *** Process received signal ***
[node1:13601] Signal: Segmentation fault (11)
[node1:13601] Signal code: Address not mapped (1)
[node1:13601] Failing at address: 0x21ae2148
[node1:13598] [ 0] /lib64/tls/libpthread.so.0 [0x3078e0c5b0]
[node1:13598] [ 1] mdrun_ompi(inl3100+0x248) [0x524b58]
[node1:13598] [ 2] mdrun_ompi(do_fnbf+0xfe7) [0x4a3d97]
[node1:13598] [ 3] mdrun_ompi(force+0x120) [0x4432f0]
[node1:13598] [ 4] mdrun_ompi(do_force+0xb8b) [0x471afb]
[node1:13598] [ 5] mdrun_ompi(do_md+0x139f) [0x426fdf]
[node1:13598] [ 6] mdrun_ompi(mdrunner+0xb9c) [0x42a6dc]
[node1:13598] [ 7] mdrun_ompi(main+0x1dd) [0x42aabd]
[node1:13598] [ 8] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
[0x307851c3fb]
[node1:13598] [ 9] mdrun_ompi [0x412e8a]
[node1:13598] *** End of error message ***
[node1:13600] [ 0] /lib64/tls/libpthread.so.0 [0x3078e0c5b0]
[node1:13600] [ 1] mdrun_ompi(inl3120+0x4a7) [0x525bb7]
[node1:13600] [ 2] mdrun_ompi(do_fnbf+0xe96) [0x4a3c46]
[node1:13600] [ 3] mdrun_ompi(force+0x120) [0x4432f0]
[node1:13600] [ 4] mdrun_ompi(do_force+0xb8b) [0x471afb]
[node1:13600] [ 5] mdrun_ompi(do_md+0x139f) [0x426fdf]
[node1:13600] [ 6] mdrun_ompi(mdrunner+0xb9c) [0x42a6dc]
[node1:13600] [ 7] mdrun_ompi(main+0x1dd) [0x42aabd]
[node1:13600] [ 8] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
[0x307851c3fb]
[node1:13600] [ 9] mdrun_ompi [0x412e8a]
[node1:13600] *** End of error message ***
[node1:13602] [ 0] /lib64/tls/libpthread.so.0 [0x3078e0c5b0]
[node1:13602] [ 1] mdrun_ompi(inl3120+0x4a7) [0x525bb7]
[node1:13602] [ 2] mdrun_ompi(do_fnbf+0xe96) [0x4a3c46]
[node1:13602] [ 3] mdrun_ompi(force+0x120) [0x4432f0]
[node1:13602] [ 4] mdrun_ompi(do_force+0xb8b) [0x471afb]
[node1:13602] [ 5] mdrun_ompi(do_md+0x139f) [0x426fdf]
[node1:13602] [ 6] mdrun_ompi(mdrunner+0xb9c) [0x42a6dc]
[node1:13602] [ 7] mdrun_ompi(main+0x1dd) [0x42aabd]
[node1:13602] [ 8] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
[0x307851c3fb]
[node1:13602] [ 9] mdrun_ompi [0x412e8a]
[node1:13602] *** End of error message ***
[node1:13599] [ 0] /lib64/tls/libpthread.so.0 [0x3078e0c5b0]
[node1:13599] [ 1] mdrun_ompi(inl3100+0x248) [0x524b58]
[node1:13599] [ 2] mdrun_ompi(do_fnbf+0xfe7) [0x4a3d97]
[node1:13599] [ 3] mdrun_ompi(force+0x120) [0x4432f0]
[node1:13599] [ 4] mdrun_ompi(do_force+0xb8b) [0x471afb]
[node1:13599] [ 5] mdrun_ompi(do_md+0x139f) [0x426fdf]
[node1:13599] [ 6] mdrun_ompi(mdrunner+0xb9c) [0x42a6dc]
[node1:13599] [ 7] mdrun_ompi(main+0x1dd) [0x42aabd]
[node1:13599] [ 8] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
[0x307851c3fb]
[node1:13599] [ 9] mdrun_ompi [0x412e8a]
[node1:13599] *** End of error message ***
[node1:13601] [ 0] /lib64/tls/libpthread.so.0 [0x3078e0c5b0]
[node1:13601] [ 1] mdrun_ompi(inl3120+0x4a7) [0x525bb7]
[node1:13601] [ 2] mdrun_ompi(do_fnbf+0xe96) [0x4a3c46]
[node1:13601] [ 3] mdrun_ompi(force+0x120) [0x4432f0]
[node1:13601] [ 4] mdrun_ompi(do_force+0xb8b) [0x471afb]
[node1:13601] [ 5] mdrun_ompi(do_md+0x139f) [0x426fdf]
[node1:13601] [ 6] mdrun_ompi(mdrunner+0xb9c) [0x42a6dc]
[node1:13601] [ 7] mdrun_ompi(main+0x1dd) [0x42aabd]
[node1:13601] [ 8] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
[0x307851c3fb]
[node1:13601] [ 9] mdrun_ompi [0x412e8a]
[node1:13601] *** End of error message ***
[node1:13603] *** Process received signal ***
[node1:13603] Signal: Segmentation fault (11)
[node1:13603] Signal code: Address not mapped (1)
[node1:13603] Failing at address: 0x2bb7c08
[node1:13603] [ 0] /lib64/tls/libpthread.so.0 [0x3078e0c5b0]
[node1:13603] [ 1] mdrun_ompi(inl3100+0x248) [0x524b58]
[node1:13603] [ 2] mdrun_ompi(do_fnbf+0xfe7) [0x4a3d97]
[node1:13603] [ 3] mdrun_ompi(force+0x120) [0x4432f0]
[node1:13603] [ 4] mdrun_ompi(do_force+0xb8b) [0x471afb]
[node1:13603] [ 5] mdrun_ompi(do_md+0x139f) [0x426fdf]
[node1:13603] [ 6] mdrun_ompi(mdrunner+0xb9c) [0x42a6dc]