Re: [gmx-users] mpirun: cannot open .tpr file

2013-03-03 Thread Justin Lemkul



On 3/3/13 1:12 AM, Ewaru wrote:

Hi,

I'm trying to test mpirun, so I tried the NPT equilibration from this
tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/07_equil2.html

I ran this command: mpirun --np 10 --hostfile ~/hostfile mdrun_mpi -npme 3
-deffnm npt

and obtained an error saying "Cannot open file: npt.tpr". However, if I run
it with just mdrun, it works fine. Can someone please advise what went wrong?
:(



The error means that npt.tpr does not exist in the working directory.

-Justin
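A quick way to act on that is to check the working directory before launching,
and to regenerate the run input with grompp if it is missing. A minimal sketch,
assuming the tutorial's file names (npt.mdp, nvt.gro, nvt.cpt, topol.top) and
that every host in ~/hostfile sees the same path:

# run these in the directory you launch mpirun from
pwd                    # is this really where npt.tpr was written?
ls -l npt.tpr          # does the run input exist here?

# if it is missing, regenerate it first (file names as in the tutorial)
grompp -f npt.mdp -c nvt.gro -t nvt.cpt -p topol.top -o npt.tpr

# then launch from this same directory; with a hostfile spanning several
# machines, npt.tpr must be reachable at the same path on every node
mpirun -np 10 --hostfile ~/hostfile mdrun_mpi -npme 3 -deffnm npt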


The output from mpirun is as below:

NNODES=10, MYRANK=2, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=3, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=6, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=1, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=7, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=4, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=9, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=5, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=0, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NODEID=0 argc=5
NODEID=4 argc=5
NNODES=10, MYRANK=8, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NODEID=2 argc=5
NODEID=1 argc=5
NODEID=3 argc=5
  :-)  G  R  O  M  A  C  S  (-:

NODEID=5 argc=5
NODEID=9 argc=5
NODEID=6 argc=5
NODEID=8 argc=5
Gromacs Runs One Microsecond At Cannonball Speeds

 :-)  VERSION 4.5.4  (-:

 Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
   Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra,
 Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff,
NODEID=7 argc=5
Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
 Michael Shirts, Alfons Sijbers, Peter Tieleman,

Berk Hess, David van der Spoel, and Erik Lindahl.

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2010, The GROMACS development team at
 Uppsala University & The Royal Institute of Technology, Sweden.
 check out http://www.gromacs.org for more information.

  This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License
  as published by the Free Software Foundation; either version 2
  of the License, or (at your option) any later version.

   :-)  mdrun_mpi  (-:

Option     Filename   Type          Description

  -s       npt.tpr    Input         Run input file: tpr tpb tpa
  -o       npt.trr    Output        Full precision trajectory: trr trj cpt
  -x       npt.xtc    Output, Opt.  Compressed trajectory (portable xdr format)
-cpi       npt.cpt    Input, Opt.   Checkpoint file
-cpo       npt.cpt    Output, Opt.  Checkpoint file
  -c       npt.gro    Output        Structure file: gro g96 pdb etc.
  -e       npt.edr    Output        Energy file
  -g       npt.log    Output        Log file
-dhdl      npt.xvg    Output, Opt.  xvgr/xmgr file
-field     npt.xvg    Output, Opt.  xvgr/xmgr file
-table     npt.xvg    Input, Opt.   xvgr/xmgr file
-tablep    npt.xvg    Input, Opt.   xvgr/xmgr file
-tableb    npt.xvg    Input, Opt.   xvgr/xmgr file
-rerun     npt.xtc    Input, Opt.   Trajectory: xtc trr trj gro g96 pdb cpt
-tpi       npt.xvg    Output, Opt.  xvgr/xmgr file
-tpid      npt.xvg    Output, Opt.  xvgr/xmgr file
 -ei       npt.edi    Input, Opt.   ED sampling input
 -eo       npt.edo    Output, Opt.  ED sampling output
  -j       npt.gct    Input, Opt.   General coupling stuff
 -jo       npt.gct    Output, Opt.  General coupling stuff
-ffout     npt.xvg    Output, Opt.  xvgr/xmgr file
-devout    npt.xvg    Output, Opt.  xvgr/xmgr file
-runav     npt.xvg    Output, Opt.  xvgr/xmgr file
 -px       npt.xvg    Output, Opt.  xvgr/xmgr file
 -pf       npt.xvg    Output, Opt.  xvgr/xmgr file
-mtx       npt.mtx    Output, Opt.  Hessian matrix
 -dn       npt.ndx    Output, Opt.  Index file
-multidir  npt        Input, Opt., Mult.  Run directory

Option       Type   Value       Description
------------------------------------------------------------
-[no]h       bool   no          Print help info and quit
-[no]version bool   no          Print version info and quit
-nice        int    0           Set the nicelevel
-deffnm      string npt         Set the default filename for all file options
-xvg         enum   xmgrace     xvg plot formatting: xmgrace, xmgr or none
-[no]pd      bool   no          Use particle decomposition
-dd          vector 0 0 0       Domain decomposition grid, 0 is optimize
-npme        int    3           Number of separate nodes to be used for PME,
                                -1 is guess
-ddorder     enum   interleave  DD node order: interleave, pp_pme or cartesian
-[no]ddcheck bool   yes         Check for all bonded interactions with DD

[gmx-users] mpirun: cannot open .tpr file

2013-03-02 Thread Ewaru
Hi,

I'm trying to test mpirun, so I tried the NPT equilibration from this
tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/07_equil2.html

I ran this command: mpirun --np 10 --hostfile ~/hostfile mdrun_mpi -npme 3
-deffnm npt

and obtained an error saying "Cannot open file: npt.tpr". However, if I run
it with just mdrun, it works fine. Can someone please advise what went wrong?
:(

The output from mpirun is as below:

NNODES=10, MYRANK=2, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=3, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=6, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=1, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=7, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=4, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=9, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=5, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NNODES=10, MYRANK=0, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NODEID=0 argc=5
NODEID=4 argc=5
NNODES=10, MYRANK=8, HOSTNAME=wizztjh-Presario-CQ42-Notebook-PC
NODEID=2 argc=5
NODEID=1 argc=5
NODEID=3 argc=5
 :-)  G  R  O  M  A  C  S  (-:

NODEID=5 argc=5
NODEID=9 argc=5
   NODEID=6 argc=5
NODEID=8 argc=5
Gromacs Runs One Microsecond At Cannonball Speeds

:-)  VERSION 4.5.4  (-:

Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
  Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra, 
Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff, 
NODEID=7 argc=5
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz, 
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2010, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  mdrun_mpi  (-:

Option     Filename   Type          Description

  -s       npt.tpr    Input         Run input file: tpr tpb tpa
  -o       npt.trr    Output        Full precision trajectory: trr trj cpt
  -x       npt.xtc    Output, Opt.  Compressed trajectory (portable xdr format)
-cpi       npt.cpt    Input, Opt.   Checkpoint file
-cpo       npt.cpt    Output, Opt.  Checkpoint file
  -c       npt.gro    Output        Structure file: gro g96 pdb etc.
  -e       npt.edr    Output        Energy file
  -g       npt.log    Output        Log file
-dhdl      npt.xvg    Output, Opt.  xvgr/xmgr file
-field     npt.xvg    Output, Opt.  xvgr/xmgr file
-table     npt.xvg    Input, Opt.   xvgr/xmgr file
-tablep    npt.xvg    Input, Opt.   xvgr/xmgr file
-tableb    npt.xvg    Input, Opt.   xvgr/xmgr file
-rerun     npt.xtc    Input, Opt.   Trajectory: xtc trr trj gro g96 pdb cpt
-tpi       npt.xvg    Output, Opt.  xvgr/xmgr file
-tpid      npt.xvg    Output, Opt.  xvgr/xmgr file
 -ei       npt.edi    Input, Opt.   ED sampling input
 -eo       npt.edo    Output, Opt.  ED sampling output
  -j       npt.gct    Input, Opt.   General coupling stuff
 -jo       npt.gct    Output, Opt.  General coupling stuff
-ffout     npt.xvg    Output, Opt.  xvgr/xmgr file
-devout    npt.xvg    Output, Opt.  xvgr/xmgr file
-runav     npt.xvg    Output, Opt.  xvgr/xmgr file
 -px       npt.xvg    Output, Opt.  xvgr/xmgr file
 -pf       npt.xvg    Output, Opt.  xvgr/xmgr file
-mtx       npt.mtx    Output, Opt.  Hessian matrix
 -dn       npt.ndx    Output, Opt.  Index file
-multidir  npt        Input, Opt., Mult.  Run directory

Option       Type   Value       Description
------------------------------------------------------------
-[no]h       bool   no          Print help info and quit
-[no]version bool   no          Print version info and quit
-nice        int    0           Set the nicelevel
-deffnm      string npt         Set the default filename for all file options
-xvg         enum   xmgrace     xvg plot formatting: xmgrace, xmgr or none
-[no]pd      bool   no          Use particle decomposition
-dd          vector 0 0 0       Domain decomposition grid, 0 is optimize
-npme        int    3           Number of separate nodes to be used for PME,
                                -1 is guess
-ddorder     enum   interleave  DD node order: interleave, pp_pme or cartesian
-[no]ddcheck bool   yes         Check for all bonded interactions with DD
-rdd         real   0           The maximum distance for bonded interactions
                                with DD (nm), 0 is determined from initial
                                coordinates

Re: [gmx-users] mpirun error

2012-11-20 Thread Justin Lemkul



On 11/20/12 3:55 PM, Parisa Rahmani wrote:

Yes, I ran two test simulations, one with 5 CPUs and another with 1.

5 CPU simulation:
step 50380, will finish at Wed Nov 21 01:46:25 2012
step 50020, will finish at Wed Nov 21 01:48:36 2012
step 50320, will finish at Wed Nov 21 01:46:49 2012
step 50270, will finish at Wed Nov 21 01:47:07 2012
Time command:
real 153m1.968s
user 0m0.472s
sys 0m2.072s

1 CPU simulation (started almost 5 minutes later):
step 56000, will finish at Wed Nov 21 02:02:07 2012
Time command:
real 177m25.541s
user 177m23.041s
sys 0m0.352s



It appears to me that mdrun is functioning correctly, but overall performance is 
based on how large the system is and how good the hardware is.  It just seems to 
me that you're not getting particularly great scaling.


-Justin


On Tue, Nov 20, 2012 at 8:06 PM, Justin Lemkul  wrote:




On 11/20/12 8:43 AM, Parisa Rahmani wrote:


Thanks for your reply.
I have also tried installing with the _mpi suffix.
Here is the output of ldd:

gromacs3.3/bin$ ldd mdrun_mpi
linux-vdso.so.1 =>  (0x7fff4658c000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f7d7afe9000)
libfftw3f.so.3 => /usr/lib/libfftw3f.so.3 (0x7f7d7ac76000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f7d7a9f3000)
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7f7d7a603000)
libopa.so.1 => /usr/lib/libopa.so.1 (0x7f7d7a402000)
libmpl.so.1 => /usr/lib/libmpl.so.1 (0x7f7d7a1fd000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f7d79ff5000)
libcr.so.0 => /usr/lib/libcr.so.0 (0x7f7d79deb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f7d79bce000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f7d79847000)
/lib64/ld-linux-x86-64.so.2 (0x7f7d7b216000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f7d79642000)

gromacs3.3/bin$ ldd mdrun_mpi | grep mpi
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7fc78fb5c000)
It seems that GROMACS has been compiled with MPICH.



Does this executable still give the error listed below?  Performance is
one thing, errors are another.  You may not necessarily obtain great
scaling, depending on the contents of the system.

-Justin




On Tue, Nov 20, 2012 at 4:15 AM, Justin Lemkul  wrote:




On 11/19/12 12:09 PM, Parisa Rahmani wrote:

Dear gmx users,

I have a problem running parallel jobs on my Debian system (OpenMPI
installed on it):
Linux debian 3.2.0-1-amd64 #1 SMP, UTC 2012 x86_64 GNU/Linux
I am using gmx 3.3.3 because of the lambda dynamics method implemented
in it.

As far as I know, in gmx 3.x the number of processors supplied to the
subsequent mdrun needed to match the input file, but when I use
"grompp -np 6" and "mpirun -np 6 mdrun" the following error appears:

ERROR: run input file md.tpr was made for 6 nodes,
while mdrun expected it to be for 1 nodes.

Searching the mailing list I found similar problems, but none of the
solutions worked for my case.

Without the -np option in grompp the error disappears, and then with each
of these commands:

1) mpirun -np 6 mdrun -deffnm md
2) mpirun -np 6 mdrun -deffnm md -N 6
3) mpirun -np 6 mdrun -np 6 -deffnm md
4) mdrun -np 6 -s md -N 6

it uses 6 processors (each one at nearly 100%), but the simulation
time is the same as for 1 processor.

I have no problem with parallel jobs on our cluster (gmx 3.3.3 and
OpenMPI) with the following commands:

grompp -np 6 -f ...
mpiexec mdrun ...   (the number of processors is specified in the batch file)

but I can't run it on my 6-core system.

Also, I have no problem with the newer version of gmx (4.5.x), but I
have to use this version. I hope someone can help me.


Have you properly compiled an MPI-enabled mdrun?  The default executable
name should be mdrun_mpi.  It should be linked against libmpi, so running
ldd on the mdrun executable should tell you.

-Justin

--
====


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




====


Re: [gmx-users] mpirun error

2012-11-20 Thread Parisa Rahmani
Yes, I ran two test simulations, one with 5 CPUs and another with 1.

5 CPU simulation:
step 50380, will finish at Wed Nov 21 01:46:25 2012
step 50020, will finish at Wed Nov 21 01:48:36 2012
step 50320, will finish at Wed Nov 21 01:46:49 2012
step 50270, will finish at Wed Nov 21 01:47:07 2012
Time command:
real 153m1.968s
user 0m0.472s
sys 0m2.072s

1 CPU simulation (started almost 5 minutes later):
step 56000, will finish at Wed Nov 21 02:02:07 2012
Time command:
real 177m25.541s
user 177m23.041s
sys 0m0.352s

On Tue, Nov 20, 2012 at 8:06 PM, Justin Lemkul  wrote:

>
>
> On 11/20/12 8:43 AM, Parisa Rahmani wrote:
>
>> Thanks for your reply.
>> I have also tried installing with the _mpi suffix.
>> Here is the output of ldd:
>>
>> gromacs3.3/bin$ ldd mdrun_mpi
>> linux-vdso.so.1 =>  (0x7fff4658c000)
>> libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f7d7afe9000)
>> libfftw3f.so.3 => /usr/lib/libfftw3f.so.3 (0x7f7d7ac76000)
>> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f7d7a9f3000)
>> libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7f7d7a603000)
>> libopa.so.1 => /usr/lib/libopa.so.1 (0x7f7d7a402000)
>> libmpl.so.1 => /usr/lib/libmpl.so.1 (0x7f7d7a1fd000)
>> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f7d79ff5000)
>> libcr.so.0 => /usr/lib/libcr.so.0 (0x7f7d79deb000)
>> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
>> (0x7f7d79bce000)
>> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f7d79847000)
>> /lib64/ld-linux-x86-64.so.2 (0x7f7d7b216000)
>> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f7d79642000)
>>
>> gromacs3.3/bin$ ldd mdrun_mpi | grep mpi
>> libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7fc78fb5c000)
>> It seems that GROMACS has been compiled with MPICH.
>>
>>
> Does this executable still give the error listed below?  Performance is
> one thing, errors are another.  You may not necessarily obtain great
> scaling, depending on the contents of the system.
>
> -Justin
>
>
>>
>> On Tue, Nov 20, 2012 at 4:15 AM, Justin Lemkul  wrote:
>>
>>
>>>
>>> On 11/19/12 12:09 PM, Parisa Rahmani wrote:
>>>
>>> Dear gmx users,

 I have a problem running parallel jobs on my Debian system (OpenMPI
 installed on it):
 Linux debian 3.2.0-1-amd64 #1 SMP, UTC 2012 x86_64 GNU/Linux
 I am using gmx 3.3.3 because of the lambda dynamics method implemented
 in it.

 As far as I know, in gmx 3.x the number of processors supplied to the
 subsequent mdrun needed to match the input file, but when I use
 "grompp -np 6" and "mpirun -np 6 mdrun" the following error appears:

 ERROR: run input file md.tpr was made for 6 nodes,
 while mdrun expected it to be for 1 nodes.

 Searching the mailing list I found similar problems, but none of the
 solutions worked for my case.

 Without the -np option in grompp the error disappears, and then with each
 of these commands:

 1) mpirun -np 6 mdrun -deffnm md
 2) mpirun -np 6 mdrun -deffnm md -N 6
 3) mpirun -np 6 mdrun -np 6 -deffnm md
 4) mdrun -np 6 -s md -N 6

 it uses 6 processors (each one at nearly 100%), but the simulation
 time is the same as for 1 processor.

 I have no problem with parallel jobs on our cluster (gmx 3.3.3 and
 OpenMPI) with the following commands:

 grompp -np 6 -f ...
 mpiexec mdrun ...   (the number of processors is specified in the batch file)

 but I can't run it on my 6-core system.

 Also, I have no problem with the newer version of gmx (4.5.x), but I
 have to use this version. I hope someone can help me.


>>> Have you properly compiled an MPI-enabled mdrun?  The default executable
>>> name should be mdrun_mpi.  It should be linked against libmpi, so running
>>> ldd on the mdrun executable should tell you.
>>>
>>> -Justin
>>>
>>> --
>>> ====
>>>
>>>
>>> Justin A. Lemkul, Ph.D.
>>> Research Scientist
>>> Department of Biochemistry
>>> Virginia Tech
>>> Blacksburg, VA
>>> jalemkul[at]vt.edu | (540) 231-9080
>>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>>>
>>> ====
>>>

Re: [gmx-users] mpirun error

2012-11-20 Thread Justin Lemkul



On 11/20/12 8:43 AM, Parisa Rahmani wrote:

Thanks for your reply.
I have also tried installing with the _mpi suffix.
Here is the output of ldd:

gromacs3.3/bin$ ldd mdrun_mpi
linux-vdso.so.1 =>  (0x7fff4658c000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f7d7afe9000)
libfftw3f.so.3 => /usr/lib/libfftw3f.so.3 (0x7f7d7ac76000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f7d7a9f3000)
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7f7d7a603000)
libopa.so.1 => /usr/lib/libopa.so.1 (0x7f7d7a402000)
libmpl.so.1 => /usr/lib/libmpl.so.1 (0x7f7d7a1fd000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f7d79ff5000)
libcr.so.0 => /usr/lib/libcr.so.0 (0x7f7d79deb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f7d79bce000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f7d79847000)
/lib64/ld-linux-x86-64.so.2 (0x7f7d7b216000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f7d79642000)


gromacs3.3/bin$ ldd mdrun_mpi | grep mpi
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7fc78fb5c000)
It seems that GROMACS has been compiled with MPICH.



Does this executable still give the error listed below?  Performance is one 
thing, errors are another.  You may not necessarily obtain great scaling, 
depending on the contents of the system.


-Justin




On Tue, Nov 20, 2012 at 4:15 AM, Justin Lemkul  wrote:




On 11/19/12 12:09 PM, Parisa Rahmani wrote:


Dear gmx users,

I have a problem running parallel jobs on my Debian system (OpenMPI
installed on it):
Linux debian 3.2.0-1-amd64 #1 SMP, UTC 2012 x86_64 GNU/Linux
I am using gmx 3.3.3 because of the lambda dynamics method implemented
in it.

As far as I know, in gmx 3.x the number of processors supplied to the
subsequent mdrun needed to match the input file, but when I use
"grompp -np 6" and "mpirun -np 6 mdrun" the following error appears:

ERROR: run input file md.tpr was made for 6 nodes,
while mdrun expected it to be for 1 nodes.

Searching the mailing list I found similar problems, but none of the
solutions worked for my case.

Without the -np option in grompp the error disappears, and then with each
of these commands:

1) mpirun -np 6 mdrun -deffnm md
2) mpirun -np 6 mdrun -deffnm md -N 6
3) mpirun -np 6 mdrun -np 6 -deffnm md
4) mdrun -np 6 -s md -N 6

it uses 6 processors (each one at nearly 100%), but the simulation
time is the same as for 1 processor.

I have no problem with parallel jobs on our cluster (gmx 3.3.3 and
OpenMPI) with the following commands:

grompp -np 6 -f ...
mpiexec mdrun ...   (the number of processors is specified in the batch file)

but I can't run it on my 6-core system.

Also, I have no problem with the newer version of gmx (4.5.x), but I
have to use this version. I hope someone can help me.



Have you properly compiled an MPI-enabled mdrun?  The default executable
name should be mdrun_mpi.  It should be linked against libmpi, so running
ldd on the mdrun executable should tell you.

-Justin

--
====

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

====



--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] mpirun error

2012-11-20 Thread Parisa Rahmani
Thanks for your reply.
I have also tried installing with the _mpi suffix.
Here is the output of ldd:

gromacs3.3/bin$ ldd mdrun_mpi
linux-vdso.so.1 =>  (0x7fff4658c000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f7d7afe9000)
libfftw3f.so.3 => /usr/lib/libfftw3f.so.3 (0x7f7d7ac76000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f7d7a9f3000)
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7f7d7a603000)
libopa.so.1 => /usr/lib/libopa.so.1 (0x7f7d7a402000)
libmpl.so.1 => /usr/lib/libmpl.so.1 (0x7f7d7a1fd000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f7d79ff5000)
libcr.so.0 => /usr/lib/libcr.so.0 (0x7f7d79deb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f7d79bce000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f7d79847000)
/lib64/ld-linux-x86-64.so.2 (0x7f7d7b216000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f7d79642000)


gromacs3.3/bin$ ldd mdrun_mpi | grep mpi
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7fc78fb5c000)
It seems that GROMACS has been compiled with MPICH.



On Tue, Nov 20, 2012 at 4:15 AM, Justin Lemkul  wrote:

>
>
> On 11/19/12 12:09 PM, Parisa Rahmani wrote:
>
>> Dear gmx users,
>>
>> I have a problem running parallel jobs on my Debian system (OpenMPI
>> installed on it):
>> Linux debian 3.2.0-1-amd64 #1 SMP, UTC 2012 x86_64 GNU/Linux
>> I am using gmx 3.3.3 because of the lambda dynamics method implemented
>> in it.
>>
>> As far as I know, in gmx 3.x the number of processors supplied to the
>> subsequent mdrun needed to match the input file, but when I use
>> "grompp -np 6" and "mpirun -np 6 mdrun" the following error appears:
>>
>> ERROR: run input file md.tpr was made for 6 nodes,
>> while mdrun expected it to be for 1 nodes.
>>
>> Searching the mailing list I found similar problems, but none of the
>> solutions worked for my case.
>>
>> Without the -np option in grompp the error disappears, and then with each
>> of these commands:
>>
>> 1) mpirun -np 6 mdrun -deffnm md
>> 2) mpirun -np 6 mdrun -deffnm md -N 6
>> 3) mpirun -np 6 mdrun -np 6 -deffnm md
>> 4) mdrun -np 6 -s md -N 6
>>
>> it uses 6 processors (each one at nearly 100%), but the simulation
>> time is the same as for 1 processor.
>>
>> I have no problem with parallel jobs on our cluster (gmx 3.3.3 and
>> OpenMPI) with the following commands:
>>
>> grompp -np 6 -f ...
>> mpiexec mdrun ...   (the number of processors is specified in the batch file)
>>
>> but I can't run it on my 6-core system.
>>
>> Also, I have no problem with the newer version of gmx (4.5.x), but I
>> have to use this version. I hope someone can help me.
>>
>>
> Have you properly compiled an MPI-enabled mdrun?  The default executable
> name should be mdrun_mpi.  It should be linked against libmpi, so running
> ldd on the mdrun executable should tell you.
>
> -Justin
>
> --
> ====
>
> Justin A. Lemkul, Ph.D.
> Research Scientist
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> ====


Re: [gmx-users] mpirun error

2012-11-19 Thread Justin Lemkul



On 11/19/12 12:09 PM, Parisa Rahmani wrote:

Dear gmx users,

I have a problem running parallel jobs on my Debian system (OpenMPI
installed on it):
Linux debian 3.2.0-1-amd64 #1 SMP, UTC 2012 x86_64 GNU/Linux
I am using gmx 3.3.3 because of the lambda dynamics method implemented
in it.

As far as I know, in gmx 3.x the number of processors supplied to the
subsequent mdrun needed to match the input file, but when I use
"grompp -np 6" and "mpirun -np 6 mdrun" the following error appears:

ERROR: run input file md.tpr was made for 6 nodes,
while mdrun expected it to be for 1 nodes.

Searching the mailing list I found similar problems, but none of the
solutions worked for my case.

Without the -np option in grompp the error disappears, and then with each
of these commands:

1) mpirun -np 6 mdrun -deffnm md
2) mpirun -np 6 mdrun -deffnm md -N 6
3) mpirun -np 6 mdrun -np 6 -deffnm md
4) mdrun -np 6 -s md -N 6

it uses 6 processors (each one at nearly 100%), but the simulation
time is the same as for 1 processor.

I have no problem with parallel jobs on our cluster (gmx 3.3.3 and
OpenMPI) with the following commands:

grompp -np 6 -f ...
mpiexec mdrun ...   (the number of processors is specified in the batch file)

but I can't run it on my 6-core system.

Also, I have no problem with the newer version of gmx (4.5.x), but I
have to use this version. I hope someone can help me.



Have you properly compiled an MPI-enabled mdrun?  The default executable name 
should be mdrun_mpi.  It should be linked against libmpi, so running ldd on the 
mdrun executable should tell you.
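A hedged sketch of those two checks for GROMACS 3.3.x follows (the file names
are placeholders, and the note about mixing MPI implementations is an
editorial addition, not from the thread):

# 1) confirm which MPI library the binary is actually linked against
ldd $(which mdrun_mpi) | grep -i mpi

# 2) keep the grompp processor count and the launched rank count equal, and
#    launch with the mpirun/mpiexec of the same MPI the binary was built
#    with; an MPICH-linked mdrun started by Open MPI's mpirun will see every
#    process as rank 0 of 1, which produces exactly the 6-vs-1 node error
grompp -np 6 -f md.mdp -c conf.gro -p topol.top -o md.tpr
mpirun -np 6 mdrun_mpi -np 6 -deffnm md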


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] MPIRUN issue for CHARMM36 FF

2012-05-11 Thread Anirban
Hi ALL,

I am trying to simulate a membrane protein system using the CHARMM36 FF with
GROMACS 4.5.5 on a parallel cluster running MPI. The system consists of
around 117,000 atoms. The job runs fine on 5 nodes (5X12=120 cores) using
mpirun and gives proper output. But whenever I try to submit it on more
than 5 nodes, the job gets killed with the following error:

-

starting mdrun 'Protein'
5000 steps, 10.0 ps.

NOTE: Turning on dynamic load balancing

Fatal error in MPI_Sendrecv: Other MPI error
Fatal error in MPI_Sendrecv: Other MPI error
Fatal error in MPI_Sendrecv: Other MPI error

=
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 256
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
=
[proxy:0:0@cn034] HYD_pmcd_pmip_control_cmd_cb
(./pm/pmiserv/pmip_cb.c:906): assert (!closed) failed
[proxy:0:0@cn034] HYDT_dmxu_poll_wait_for_event
(./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0@cn034] main (./pm/pmiserv/pmip.c:214): demux engine error
waiting for event
.
.
.
--

Why is this happening? Is it related to DD and PME? How can I solve it? Any
suggestion is welcome.


Thanks and regards,

Anirban

Re: [gmx-users] mpirun

2012-03-29 Thread Mark Abraham

On 29/03/2012 8:09 PM, cuong nguyen wrote:

Dear Gromacs Users,

as in the manual, I tried to run the simulation on 4 processors and
used the command as follows:
mpirun -np 4 mdrun_mpi -s NVT_50ns -o NVT_50ns -c NVT_50ns.g96 -x
NVT_50ns -e NVT_50ns -g NVT_50ns -v

Then I got the error:
mpirun was unable to launch the specified application as it could not
find an executable:

Executable: mdrun_mpi
Please help me to deal with this problem.


http://www.gromacs.org/Documentation/Installation_Instructions#Getting_access_to_GROMACS_after_installation 



Mark

Re: [gmx-users] mpirun

2012-03-29 Thread TH Chew
Where did you install your Gromacs? Most likely the executables are not in
the PATH.
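If that is the case, a quick check might look like this (a sketch; the
/usr/local/gromacs prefix is only an assumed example, not taken from the
thread):

# is an MPI-enabled mdrun visible to this shell?
which mdrun_mpi || echo "mdrun_mpi not found in PATH"

# for a source install, sourcing the GMXRC script that GROMACS installs puts
# its binaries on the PATH (adjust the prefix to the actual install location)
source /usr/local/gromacs/bin/GMXRC
which mdrun_mpi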

On Thu, Mar 29, 2012 at 5:09 PM, cuong nguyen  wrote:

> Dear Gromacs Users,
>
> as in the manual, I tried to run the simulation on 4 processors and used
> the command as follows:
> mpirun -np 4 mdrun_mpi -s NVT_50ns -o NVT_50ns -c NVT_50ns.g96 -x
> NVT_50ns -e NVT_50ns -g NVT_50ns -v
>
> Then I got the error:
> mpirun was unable to launch the specified application as it could not
> find an executable:
>
> Executable: mdrun_mpi
> Please help me to deal with this problem.
>
> Best regards,
>
>
> Nguyen Van Cuong
> PhD student - Curtin University of Technology
> Mobile: (+61) 452213981
>
>
>



-- 
Regards,
THChew

[gmx-users] mpirun

2012-03-29 Thread cuong nguyen
Dear Gromacs Users,

as in the manual, I tried to run the simulation on 4 processors and used
the command as follows:
mpirun -np 4 mdrun_mpi -s NVT_50ns -o NVT_50ns -c NVT_50ns.g96 -x NVT_50ns
-e NVT_50ns -g NVT_50ns -v

Then I got the error:
mpirun was unable to launch the specified application as it could not find
an executable:

Executable: mdrun_mpi
Please help me to deal with this problem.

Best regards,


Nguyen Van Cuong
PhD student - Curtin University of Technology
Mobile: (+61) 452213981

Re: [gmx-users] mpirun with remd

2011-11-14 Thread Mark Abraham

On 15/11/2011 12:22 AM, 杜波 wrote:

dear teacher,
when I do REMD, I use the command:
mpirun n8,8,8,8,9,10,15,16,16,20,20,22 -np 24 
/export/software/bin/mdrun_mpi_4.5.5 -multidir ./0/ ./1/ ./2/ ./3/ 
./4/ ./5/ ./6/ ./7/ ./8/ ./9/ ./10/ ./11/ -replex 1000 -nice 0 -s 
pmma.tpr -o md -c after_md -pd -table table.xvg -tableb table.xvg -v 
>& log1 &


node16:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14411 KD 16 0 51528 3552 2728 R 67.6 0.1 6:47.63 mdrun_mpi_4.5.5
14409 KD 15 0 51528 3612 2788 S 25.9 0.1 1:22.06 mdrun_mpi_4.5.5
14408 KD 15 0 51516 3396 2580 S 22.2 0.1 1:10.81 mdrun_mpi_4.5.5
14410 KD 15 0 51508 3212 2408 S 20.6 0.1 1:07.22 mdrun_mpi_4.5.5

and we have 4 CPUs on node16, and there are no other programs running
except the system's.

Why is the %CPU of only one thread (14411) high?


We don't really know enough to say. We don't know that this percentage 
is a valid metric on your system or that your mpirun invocation is 
valid. Set up a run that should take a few minutes and compare the time 
taken in serial with one or two processors, and with REMD systems of 
various numbers of replicas. Now you can start to track down what, if 
any, problem is occurring.
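A minimal way to run that comparison, assuming a short test input called
test.tpr (the file and output names here are placeholders):

# serial baseline
time mpirun -np 1 mdrun_mpi -s test.tpr -deffnm test_np1

# the same input on 2 and 4 ranks; compare the wall-clock times, or the
# ns/day reported in the Performance section at the end of each .log file
time mpirun -np 2 mdrun_mpi -s test.tpr -deffnm test_np2
time mpirun -np 4 mdrun_mpi -s test.tpr -deffnm test_np4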


Mark



[gmx-users] mpirun with remd

2011-11-14 Thread 杜波
dear teacher,
when I do REMD, I use the command:
mpirun n8,8,8,8,9,10,15,16,16,20,20,22 -np 24
/export/software/bin/mdrun_mpi_4.5.5 -multidir ./0/  ./1/ ./2/ ./3/ ./4/
./5/ ./6/ ./7/ ./8/ ./9/ ./10/ ./11/ -replex 1000 -nice 0 -s pmma.tpr -o md
-c after_md -pd -table table.xvg -tableb table.xvg -v >& log1 &

node16:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
14411 KD16   0 51528 3552 2728 R 67.6  0.1   6:47.63 mdrun_mpi_4.5.5
14409 KD15   0 51528 3612 2788 S 25.9  0.1   1:22.06 mdrun_mpi_4.5.5
14408 KD15   0 51516 3396 2580 S 22.2  0.1   1:10.81 mdrun_mpi_4.5.5
14410 KD15   0 51508 3212 2408 S 20.6  0.1   1:07.22 mdrun_mpi_4.5.5

and we have 4 CPUs on node16, and there are no other programs running except
the system's.
Why is the %CPU of only one thread (14411) high?

thanks!
regards,
Bo Du
Department of Polymer Science and Engineering,
School of Chemical Engineering and technology,
Tianjin University, Weijin Road 92, Nankai District 300072,
Tianjin City P. R. China
Tel/Fax: +86-22-27404303
E-mail: 2008d...@gmail.com

Re: [gmx-users] mpirun and no pbc

2011-07-19 Thread Justin A. Lemkul



shivangi nangia wrote:

Dear gmx-users,

I am heating my system at 300 K.

I have set the pbc conditions as "no"

I get the following error:
---
Program mdrun_mpi, VERSION 4.0.5
Source code file: domdec.c, line: 5436

Fatal error:
pbc type no is not supported with domain decomposition,
use particle decomposition: mdrun -pd
---

I tried looking for a similar option as "-pd" for parallel runs, but 
could not find any.




The error is telling you to use particle decomposition mode, i.e.:

mdrun -pd (other options...)

-Justin
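With an MPI-enabled build that would look something like the following (the
rank count and -deffnm name are only examples, not from the thread):

# particle decomposition instead of the default domain decomposition,
# which is what pbc = no requires in this GROMACS version
mpirun -np 4 mdrun_mpi -pd -deffnm md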


The md.mdp is:
; Run parameters
integrator  = md ; leap-frog integrator
nsteps  = 50 ; 2 * 50 = 1000 ps, 1 ns
dt= 0.002 ; 2 fs
; Output control
nstxout = 1000  ; save coordinates every 2 ps
nstvout = 1000  ; save velocities every 2 ps
nstxtcout   = 1000  ; xtc compressed trajectory output every 2 ps
nstenergy   = 1000  ; save energies every 2 ps
nstlog  = 1000  ; update log file every 2 ps
; Bond parameters
continuation   = yes; Restarting after NPT
constraint_algorithm = lincs  ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter  = 1  ; accuracy of LINCS
lincs_order = 4  ; also related to accuracy
; Neighborsearching
ns_type = simple  ; search neighboring grid cells
nstlist = 5  ; 10 fs
rlist= 1.0; short-range neighborlist cutoff (in nm)
rcoulomb = 1.0; short-range electrostatic cutoff (in nm)
rvdw = 1.0; short-range van der Waals cutoff (in nm)
; Electrostatics
;coulombtype = PME; Particle Mesh Ewald for long-range electrostatics
;pme_order   = 4  ; cubic interpolation
;fourierspacing = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein   ; two coupling groups - more accurate
tau_t= 0.1 0.1   ; time constant, in ps
ref_t= 400400   ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl  = no
pcoupltype  = isotropic ; uniform scaling of box vectors
tau_p= 2.0; time constant, in ps
ref_p= 1.0; reference pressure, in bar
compressibility = 4.5e-5   ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc  = no; no periodic boundary conditions
; Dispersion correction
DispCorr = no  ; account for cut-off vdW scheme
; Velocity generation
gen_vel = no ; Velocity generation is off
;comm_mode
comm_mode = ANGULAR

Kindly guide regarding this.

Thanks,
SN



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] mpirun and no pbc

2011-07-19 Thread shivangi nangia
Dear gmx-users,

I am heating my system at 300 K.

I have set the pbc conditions as "no"

I get the following error:
---
Program mdrun_mpi, VERSION 4.0.5
Source code file: domdec.c, line: 5436

Fatal error:
pbc type no is not supported with domain decomposition,
use particle decomposition: mdrun -pd
---

I tried looking for a similar option as "-pd" for parallel runs, but could
not find any.

The md.mdp is:
; Run parameters
integrator  = md ; leap-frog integrator
nsteps  = 50 ; 2 * 50 = 1000 ps, 1 ns
dt= 0.002 ; 2 fs
; Output control
nstxout = 1000  ; save coordinates every 2 ps
nstvout = 1000  ; save velocities every 2 ps
nstxtcout   = 1000  ; xtc compressed trajectory output every 2 ps
nstenergy   = 1000  ; save energies every 2 ps
nstlog  = 1000  ; update log file every 2 ps
; Bond parameters
continuation   = yes; Restarting after NPT
constraint_algorithm = lincs  ; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds) constrained
lincs_iter  = 1  ; accuracy of LINCS
lincs_order = 4  ; also related to accuracy
; Neighborsearching
ns_type = simple  ; search neighboring grid cells
nstlist = 5  ; 10 fs
rlist= 1.0; short-range neighborlist cutoff (in nm)
rcoulomb = 1.0; short-range electrostatic cutoff (in nm)
rvdw = 1.0; short-range van der Waals cutoff (in nm)
; Electrostatics
;coulombtype = PME; Particle Mesh Ewald for long-range electrostatics
;pme_order   = 4  ; cubic interpolation
;fourierspacing = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Non-Protein   ; two coupling groups - more accurate
tau_t= 0.1 0.1   ; time constant, in ps
ref_t= 400400   ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl  = no
pcoupltype  = isotropic ; uniform scaling of box vectors
tau_p= 2.0; time constant, in ps
ref_p= 1.0; reference pressure, in bar
compressibility = 4.5e-5   ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc  = no; no periodic boundary conditions
; Dispersion correction
DispCorr = no  ; account for cut-off vdW scheme
; Velocity generation
gen_vel = no ; Velocity generation is off
;comm_mode
comm_mode = ANGULAR

Kindly guide regarding this.

Thanks,
SN

Re: [gmx-users] mpirun error?

2011-02-16 Thread Justin A. Lemkul



Justin Kat wrote:

Dear Gromacs,

My colleague has attempted to issue this command:


mpirun -np 8 (or 7) mdrun_mpi .. (etc)


According to him, he gets the following error message:


MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD  
with errorcode -1.


NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on   
exactly when Open MPI kills them.

--


--- 
Program mdrun_mpi, VERSION 4.0.7

Source code file: domdec.c, line: 5888

Fatal error:
There is no domain decomposition for 7 nodes that is compatible with the 
given box and a minimum cell size of 0.955625 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS 
settings



However, when he uses say, -np 6, he seems to get no error. Any insight 
on why this might be happening?




When any error comes up, the first port of call should be the Gromacs site, 
followed by a mailing list search.  In this case, the website works quite nicely:


http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
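In short, the rank count has to map onto a domain-decomposition grid whose
cells stay above the minimum cell size. Two hedged ways around it (the rank
counts are only examples):

# pick a rank count that factorizes into a grid the box can accommodate
# (e.g. 8 = 2x2x2 often works where a prime count such as 7 does not)
mpirun -np 8 mdrun_mpi -deffnm md

# or set the DD grid explicitly instead of letting mdrun guess it
mpirun -np 8 mdrun_mpi -dd 2 2 2 -deffnm md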


Also, when he saves the output to a file, sometimes he sees the following:


NOTE: Turning on dynamic load balancing


Is this another flag that might be causing the crash? What does that 
line mean?


See the manual and/or Gromacs 4 paper for an explanation of dynamic load 
balancing.  This is a normal message.


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] mpirun error?

2011-02-16 Thread Justin Kat
Dear Gromacs,

My colleague has attempted to issue this command:


mpirun -np 8 (or 7) mdrun_mpi .. (etc)


According to him, he gets the following error message:


MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--


---
Program mdrun_mpi, VERSION 4.0.7
Source code file: domdec.c, line: 5888

Fatal error:
There is no domain decomposition for 7 nodes that is compatible with the
given box and a minimum cell size of 0.955625 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS
settings


However, when he uses say, -np 6, he seems to get no error. Any insight on
why this might be happening?

Also, when he saves the output to a file, sometimes he sees the following:


NOTE: Turning on dynamic load balancing


Is this another flag that might be causing the crash? What does that line
mean?

Thanks!
Justin

Re: [gmx-users] MPIRUN on Ubuntu

2010-12-27 Thread Hassan Shallal
I think you may need to type:
module load openmpi/gnu
Then:
mpirun mdrun_mpi -v ... you know the rest


Hassan

On Dec 27, 2010, at 9:44 AM, גדעון לפידות  wrote:

> Hi all,
> I have recently installed Ubuntu on my computer (i5 processor) and installed
> GROMACS 4.0.7. I have installed OpenMPI and FFTW, but when using the mpirun
> command, instead of getting parallel processes it simply runs the same job
> four times simultaneously. How do I make the necessary adjustments?
> Thanks,
>  Gideon

Re: [gmx-users] MPIRUN on Ubuntu

2010-12-27 Thread Linus Östberg
In order to use MPI on Ubuntu with the distribution-supplied package,
you need to use a combination of mpirun and mdrun_mpi, e.g.

mpirun -np 2 mdrun_mpi -deffnm md

to run on two cores.

On Mon, Dec 27, 2010 at 7:04 PM, Justin A. Lemkul  wrote:
>
>
> גדעון לפידות wrote:
>>
>> Hi all,
>> I have recently installed Ubuntu on my computer (i5 processor) and
>> installed GROMACS 4.0.7. I have installed OpenMPI and FFTW, but when using
>> the mpirun command, instead of getting parallel processes it simply runs the
>> same job four times simultaneously. How do I make the necessary adjustments?
>
> Properly compile an MPI-enabled mdrun.  Since you've provided no detail on
> how you did the installation, the only thing to suggest is that you've done
> something wrong.  Follow the installation guide:
>
> http://www.gromacs.org/Downloads/Installation_Instructions
>
> Alternatively, use the newest version of Gromacs (4.5.3), which uses
> threading for parallelization instead of requiring external MPI support.
>
> -Justin
>
>> Thanks,
>>  Gideon
>>
>
> --
> 
>
> Justin A. Lemkul
> Ph.D. Candidate
> ICTAS Doctoral Scholar
> MILES-IGERT Trainee
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> 


Re: [gmx-users] MPIRUN on Ubuntu

2010-12-27 Thread Justin A. Lemkul



גדעון לפידות wrote:

Hi all,
I have recently installed Ubuntu on my computer (i5 processor) and
installed GROMACS 4.0.7. I have installed OpenMPI and FFTW, but when
using the mpirun command, instead of getting parallel processes it simply
runs the same job four times simultaneously. How do I make the necessary
adjustments?


Properly compile an MPI-enabled mdrun.  Since you've provided no detail on how 
you did the installation, the only thing to suggest is that you've done 
something wrong.  Follow the installation guide:


http://www.gromacs.org/Downloads/Installation_Instructions

Alternatively, use the newest version of Gromacs (4.5.3), which uses threading 
for parallelization instead of requiring external MPI support.
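With those threaded builds a single command on one machine is usually enough;
a sketch (the thread count and -deffnm name are only examples):

# GROMACS 4.5.x with built-in thread-MPI: no external mpirun needed
mdrun -nt 4 -deffnm md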


-Justin


Thanks,
 Gideon



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] MPIRUN on Ubuntu

2010-12-27 Thread גדעון לפידות
Hi all,
I have recently installed Ubuntu on my computer (i5 processor) and installed
GROMACS 4.0.7. I have installed OpenMPI and FFTW, but when using the mpirun
command, instead of getting parallel processes it simply runs the same job
four times simultaneously. How do I make the necessary adjustments?
Thanks,
 Gideon

[gmx-users] mpirun

2010-11-09 Thread fahimeh baftizadeh
Hello,

I am using metadynamics implemented in GROMACS, doing a normal one-dimensional
test on a simple system. I see that the results using only one processor or
more than one processor are different.
I always use the same starting .tpr file, so is it normal to get different
results (in my case, different values of the collective variables)?

In the beginning they are exactly equal, but after some steps they diverge!

Fahimeh




Re: [gmx-users] mpirun problem

2008-06-16 Thread Carsten Kutzner

ha salem wrote:

dear carsten
I have 11 particles; the CPU cores have 90% usage on one machine, but when
I run the same calculation on 2 machines the CPU usage of the cores drops
to 20%. My LAN is 100 Mbps.

Do you mean that with a Gigabit LAN the CPU usage would increase to 90%?

From the benchmarks I have seen, I can say that you cannot expect any
speedup if your computers are only connected with 100 Mbps. You will
need at least 1000 Mbps, or better, InfiniBand/Myrinet.

Carsten


thank you

--- On *Sun, 6/15/08, Carsten Kutzner /<[EMAIL PROTECTED]>/* wrote:

From: Carsten Kutzner <[EMAIL PROTECTED]>
Subject: Re: [gmx-users] mpirun problem
To: [EMAIL PROTECTED], "Discussion list for GROMACS users"

Date: Sunday, June 15, 2008, 7:30 PM

On 15.06.2008, at 20:19, ha salem wrote:


dear users
I have encountered a problem with mpirun. I have 2 PCs (every PC has 1
Intel quad-core CPU). When I run mdrun on 1 machine with the "-np 4"
option the calculation runs on 4 cores and goes faster; the system
monitor shows all 4 cores of this CPU are working, every core has 90%
CPU usage, and everything is OK.
But now I connect the 2 computers to the LAN and I executed lamboot -v
lamhosts, then I run mpirun -np 8, but I see that all 8 cores of the 2
machines are working with 20% or 10% CPU usage and the speed is lower
than 4 cores of 1 CPU.


This could have a lot of reasons. What kind of interconnect do you use? If
it is gigabit Ethernet, you will need at least 8 particles to be faster on
two 4 CPU machines compared to one. With only fast ethernet, do not expect
any scaling at all on today's fast processors.

Try grompp -shuffle -sort, this will help increase the scaling a bit. 


Regards,
  Carsten



Can you help me? My molecule is part of HSA and is a macromolecule.
These are my commands:
/usr/local/share/gromacs_331/bin/grompp -np 8 -f prmd.mdp -c
finalprsp.gro -r finalprsp.gro -p n.top
mpirun -np 8 /usr/local/share/gromacs_331/bin/mdrun -np 8 -s
prmd.tpr -o prmd.trr -c finalprmd.gro -g prmd.log -e prmd.edr -n n.ndx








Re: [gmx-users] mpirun problem

2008-06-15 Thread Carsten Kutzner

On 15.06.2008, at 20:19, ha salem wrote:


dear users
I have encountered a problem with mpirun. I have 2 PCs (every PC has 1
Intel quad-core CPU). When I run mdrun on 1 machine with the "-np 4"
option the calculation runs on 4 cores and goes faster; the system
monitor shows all 4 cores of this CPU are working, every core has 90%
CPU usage, and everything is OK.
But now I connect the 2 computers to the LAN and I executed lamboot -v
lamhosts, then I run mpirun -np 8, but I see that all 8 cores of the 2
machines are working with 20% or 10% CPU usage and the speed is lower
than 4 cores of 1 CPU.
This could have a lot of reasons. What kind of interconnect do you use? If
it is gigabit Ethernet, you will need at least 8 particles to be faster on
two 4 CPU machines compared to one. With only fast ethernet, do not expect
any scaling at all on today's fast processors.


Try grompp -shuffle -sort, this will help increase the scaling a bit.
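Applied to the commands quoted below, that suggestion might look like this
(a sketch for GROMACS 3.3.x; -o is added here only so the .tpr name matches
the -s of the mdrun line):

/usr/local/share/gromacs_331/bin/grompp -np 8 -shuffle -sort -f prmd.mdp \
  -c finalprsp.gro -r finalprsp.gro -p n.top -o prmd.tpr
mpirun -np 8 /usr/local/share/gromacs_331/bin/mdrun -np 8 -s prmd.tpr \
  -o prmd.trr -c finalprmd.gro -g prmd.log -e prmd.edr -n n.ndx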

Regards,
  Carsten



Can you help me? My molecule is part of HSA and is a macromolecule.
These are my commands:
/usr/local/share/gromacs_331/bin/grompp -np 8 -f prmd.mdp -c
finalprsp.gro -r finalprsp.gro -p n.top
mpirun -np 8 /usr/local/share/gromacs_331/bin/mdrun -np 8 -s
prmd.tpr -o prmd.trr -c finalprmd.gro -g prmd.log -e prmd.edr -n n.ndx





___
gmx-users mailing listgmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to [EMAIL PROTECTED]
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

[gmx-users] mpirun problem

2008-06-15 Thread ha salem
dear users
I have encountered a problem with mpirun. I have 2 PCs (each with one Intel quad-core CPU). When I run mdrun on one machine with the "-np 4" option, the calculation runs on 4 cores and goes faster; the system monitor shows all 4 cores of that CPU working at about 90% usage, and everything is fine.
But now I have connected the 2 computers over a LAN and executed lamboot -v lamhosts, and when I run mpirun -np 8 I see all 8 cores of the 2 machines working at only 10-20% CPU usage, and the speed is lower than with the 4 cores of one CPU.
Can you help me? My molecule is part of HSA and is a macromolecule.
These are my commands:
/usr/local/share/gromacs_331/bin/grompp -np 8 -f prmd.mdp -c finalprsp.gro -r finalprsp.gro -p n.top
mpirun -np 8 /usr/local/share/gromacs_331/bin/mdrun -np 8 -s prmd.tpr -o prmd.trr -c finalprmd.gro -g prmd.log -e prmd.edr -n n.ndx





Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-10-01 Thread liu xin
Yes, I run my work in a queueing system:

qsub -l nodes=*,walltime=***:**:** mdrun.sh

and I get two files, one for the standard output and one for the standard error output, as soon as my script starts running.
But in my case there is nothing in the error file, and everything in the output file seemed OK.
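
For reference, a minimal wrapper script of that kind, with standard error merged into the standard output file, could look roughly like this (resource requests, paths, and file names are only placeholders):

#!/bin/sh
#PBS -l nodes=2:ppn=4,walltime=24:00:00
#PBS -j oe                          # merge standard error into the standard output file
cd $PBS_O_WORKDIR                   # run from the directory the job was submitted from
mpirun -nolocal -machinefile $PBS_NODEFILE mdrun -np 8 -v -s topol.tpr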

On 10/1/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
>
> liu xin wrote:
> > Thanks Mark
> >
> > But there's no standard error output at all for my problem,
>
> Are you running a batch job in a queueing system and explicitly or
> implicitly asking that standard error not be returned?
>
> Mark

Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-10-01 Thread Florian Haberl
Hi,
On Monday, 1. October 2007 13:42, liu xin wrote:
> Thanks Florian
> this is how I tried to invoke the mdrun with mpirun:
>
> mpirun -nolocal -machinefile $PBS_NODESFILE mdrun -np * -v -s ...

mpirun -v

so that mpirun itself is verbose for the MPI processes.
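
Applied to the command quoted above, that would be something like (the node count is a placeholder):

mpirun -v -nolocal -machinefile $PBS_NODESFILE mdrun -np <N> -v -s ...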

>
> the administrator said the three options for mpirun must be added
> (-nolocal, -machinefile, $PBS_NODESFILE)
> now I'm trying to follow the steps in the webpage you gave to me
>
> On 10/1/07, Florian Haberl <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > On Sunday, 30. September 2007 19:12, liu xin wrote:
> > > Thanks Mark
> > >
> > > But there's no standard error output at all for my problem, it seems
> >
> > mdrun
> >
> > > stagnated at this point, I dont know if anybody had met this situation
> > > before...and now I'm compiling LAMMPI as you suggested, hope this works
> >
> > for
> >
> > > me.
> >
> > does your calculation run with PBS or any queuing system?
> > You can try to run mpirun or others like mpiexec with a verbose option.
> >
> > In your previous mail you wrote something about running jobs with mpd, is
> > it
> > your queuing system
> > (http://www-unix.mcs.anl.gov/mpi/mpich1/docs/mpichntman/node39.htm), i
> > don`t
> > know if its outdated (webpage seems so).
> >
> >
> > Here gromacs run without problems with different mpi implementations also
> > with
> > intel-mpi, mvapich.
> >
> > > On 10/1/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
> > > > liu xin wrote:
> > > > > Dear GMXers
> > > > >
> > > > > my mdrun stops when I try to do it with 8 nodes, but there's no
> >
> > error
> >
> > > > > message, here's the end of the md0.log:
> > > >
> > > > The log file won't be helpful if the problem is outside of GROMACS,
> >
> > and
> >
> > > > the fact that it isn't helpful is strongly diagnostic of that. You
> >
> > need
> >
> > > > the standard error to diagnose what your system problem is.
> > > >
> > > > > "B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M.
> >
> > Fraaije
> >
> > > > > LINCS: A Linear Constraint Solver for molecular simulations
> > > > > J. Comp. Chem. 18 (1997) pp. 1463-1472
> > > > >   --- Thank You ---  
> > > > >
> > > > >
> > > > > Initializing LINear Constraint Solver
> > > > >   number of constraints is 3632
> > > > >   average number of constraints coupled to one constraint is 2.9"
> > > > >
> > > > >
> > > > > I also tried 6 nodes or 10 nodes, but mdrun always stops here,
> >
> > there's
> >
> > > > > no problem if I ran it by -np 4.
> > > > > I searched the list, I found some people said that this probably
> > > > > because the MPI version, currently, we used the 1.2.7
> > > >
> > > > MPICH  for GROMACS is not supported at all. Try LAM if you suspect
> > > > the MPI install, and I would suspect it.
> > > >
> > > > Mark
> >
> > Greetings,
> >
> > Florian
> >
> > --
> > ---
> >  Florian Haberl
> >  Computer-Chemie-Centrum
> >  Universitaet Erlangen/ Nuernberg
> >  Naegelsbachstr 25
> >  D-91052 Erlangen
> >  Telephone: +49(0) - 9131 - 85 26581
> >  Mailto: florian.haberl AT chemie.uni-erlangen.de
> > ---


Greetings,

Florian

-- 
---
 Florian Haberl
 Computer-Chemie-Centrum   
 Universitaet Erlangen/ Nuernberg
 Naegelsbachstr 25
 D-91052 Erlangen
 Telephone: +49(0) − 9131 − 85 26581
 Mailto: florian.haberl AT chemie.uni-erlangen.de
---


Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-10-01 Thread liu xin
Thanks Florian,
this is how I tried to invoke mdrun with mpirun:

mpirun -nolocal -machinefile $PBS_NODESFILE mdrun -np * -v -s ...

The administrator said these three options must be added to mpirun (-nolocal, -machinefile, $PBS_NODESFILE).
Now I'm trying to follow the steps on the webpage you gave me.



On 10/1/07, Florian Haberl <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> On Sunday, 30. September 2007 19:12, liu xin wrote:
> > Thanks Mark
> >
> > But there's no standard error output at all for my problem, it seems
> mdrun
> > stagnated at this point, I dont know if anybody had met this situation
> > before...and now I'm compiling LAMMPI as you suggested, hope this works
> for
> > me.
>
> does your calculation run with PBS or any queuing system?
> You can try to run mpirun or others like mpiexec with a verbose option.
>
> In your previous mail you wrote something about running jobs with mpd, is
> it
> your queuing system
> (http://www-unix.mcs.anl.gov/mpi/mpich1/docs/mpichntman/node39.htm), i
> don`t
> know if its outdated (webpage seems so).
>
>
> Here gromacs run without problems with different mpi implementations also
> with
> intel-mpi, mvapich.
>
> >
> > On 10/1/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
> > > liu xin wrote:
> > > > Dear GMXers
> > > >
> > > > my mdrun stops when I try to do it with 8 nodes, but there's no
> error
> > > > message, here's the end of the md0.log:
> > >
> > > The log file won't be helpful if the problem is outside of GROMACS,
> and
> > > the fact that it isn't helpful is strongly diagnostic of that. You
> need
> > > the standard error to diagnose what your system problem is.
> > >
> > > > "B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M.
> Fraaije
> > > > LINCS: A Linear Constraint Solver for molecular simulations
> > > > J. Comp. Chem. 18 (1997) pp. 1463-1472
> > > >   --- Thank You ---  
> > > >
> > > >
> > > > Initializing LINear Constraint Solver
> > > >   number of constraints is 3632
> > > >   average number of constraints coupled to one constraint is 2.9"
> > > >
> > > >
> > > > I also tried 6 nodes or 10 nodes, but mdrun always stops here,
> there's
> > > > no problem if I ran it by -np 4.
> > > > I searched the list, I found some people said that this probably
> > > > because the MPI version, currently, we used the 1.2.7
> > >
> > > MPICH  for GROMACS is not supported at all. Try LAM if you suspect the
> > > MPI install, and I would suspect it.
> > >
> > > Mark
>
>
> Greetings,
>
> Florian
>
> --
>
> ---
> Florian Haberl
> Computer-Chemie-Centrum
> Universitaet Erlangen/ Nuernberg
> Naegelsbachstr 25
> D-91052 Erlangen
> Telephone: +49(0) − 9131 − 85 26581
> Mailto: florian.haberl AT chemie.uni-erlangen.de
>
> ---

Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-10-01 Thread Mark Abraham

liu xin wrote:

Thanks Mark
 
But there's no standard error output at all for my problem,


Are you running a batch job in a queueing system and explicitly or 
implicitly asking that standard error not be returned?


Mark


Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-10-01 Thread Florian Haberl
Hi,

On Sunday, 30. September 2007 19:12, liu xin wrote:
> Thanks Mark
>
> But there's no standard error output at all for my problem, it seems mdrun
> stagnated at this point, I dont know if anybody had met this situation
> before...and now I'm compiling LAMMPI as you suggested, hope this works for
> me.

Does your calculation run under PBS or another queuing system?
You can try running mpirun (or others, like mpiexec) with a verbose option.

In your previous mail you wrote something about running jobs with mpd. Is that your queuing system (http://www-unix.mcs.anl.gov/mpi/mpich1/docs/mpichntman/node39.htm)? I don't know whether it is outdated (the webpage seems so).


Here GROMACS runs without problems with different MPI implementations, including Intel MPI and MVAPICH.

>
> On 10/1/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
> > liu xin wrote:
> > > Dear GMXers
> > >
> > > my mdrun stops when I try to do it with 8 nodes, but there's no error
> > > message, here's the end of the md0.log:
> >
> > The log file won't be helpful if the problem is outside of GROMACS, and
> > the fact that it isn't helpful is strongly diagnostic of that. You need
> > the standard error to diagnose what your system problem is.
> >
> > > "B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
> > > LINCS: A Linear Constraint Solver for molecular simulations
> > > J. Comp. Chem. 18 (1997) pp. 1463-1472
> > >   --- Thank You ---  
> > >
> > >
> > > Initializing LINear Constraint Solver
> > >   number of constraints is 3632
> > >   average number of constraints coupled to one constraint is 2.9"
> > >
> > >
> > > I also tried 6 nodes or 10 nodes, but mdrun always stops here, there's
> > > no problem if I ran it by -np 4.
> > > I searched the list, I found some people said that this probably
> > > because the MPI version, currently, we used the 1.2.7
> >
> > MPICH  for GROMACS is not supported at all. Try LAM if you suspect the
> > MPI install, and I would suspect it.
> >
> > Mark


Greetings,

Florian

-- 
---
 Florian Haberl
 Computer-Chemie-Centrum   
 Universitaet Erlangen/ Nuernberg
 Naegelsbachstr 25
 D-91052 Erlangen
 Telephone: +49(0) − 9131 − 85 26581
 Mailto: florian.haberl AT chemie.uni-erlangen.de
---


Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-09-30 Thread liu xin
Thanks Mark

But there is no standard error output at all for my problem; it seems mdrun stagnated at this point. I don't know if anybody has met this situation before... and now I'm compiling LAM/MPI as you suggested, hoping this works for me.


On 10/1/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
>
> liu xin wrote:
> > Dear GMXers
> >
> > my mdrun stops when I try to do it with 8 nodes, but there's no error
> > message, here's the end of the md0.log:
>
> The log file won't be helpful if the problem is outside of GROMACS, and
> the fact that it isn't helpful is strongly diagnostic of that. You need
> the standard error to diagnose what your system problem is.
>
> > "B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
> > LINCS: A Linear Constraint Solver for molecular simulations
> > J. Comp. Chem. 18 (1997) pp. 1463-1472
> >   --- Thank You ---  
> >
> >
> > Initializing LINear Constraint Solver
> >   number of constraints is 3632
> >   average number of constraints coupled to one constraint is 2.9"
> >
> >
> > I also tried 6 nodes or 10 nodes, but mdrun always stops here, there's
> > no problem if I ran it by -np 4.
> > I searched the list, I found some people said that this probably because
> > the MPI version, currently, we used the 1.2.7
>
> MPICH  for GROMACS is not supported at all. Try LAM if you suspect the
> MPI install, and I would suspect it.
>
> Mark

Re: [gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-09-30 Thread Mark Abraham

liu xin wrote:

Dear GMXers

my mdrun stops when I try to do it with 8 nodes, but there's no error 
message, here's the end of the md0.log:


The log file won't be helpful if the problem is outside of GROMACS, and 
the fact that it isn't helpful is strongly diagnostic of that. You need 
the standard error to diagnose what your system problem is.



"B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
  --- Thank You ---  


Initializing LINear Constraint Solver
  number of constraints is 3632
  average number of constraints coupled to one constraint is 2.9"


I also tried 6 nodes or 10 nodes, but mdrun always stops here, there's 
no problem if I ran it by -np 4.
I searched the list, I found some people said that this probably because 
the MPI version, currently, we used the 1.2.7


MPICH for GROMACS is not supported at all. Try LAM if you suspect the MPI installation, and I would suspect it.
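
With LAM the basic sequence would be roughly the following (host file, node count, and .tpr name are placeholders):

lamboot -v lamhosts          # start the LAM daemons on the hosts listed in 'lamhosts'
mpirun -np 8 mdrun -np 8 -s topol.tpr -v
lamhalt                      # shut the daemons down again afterwards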


Mark


[gmx-users] MPIRUN problem when switched to 8 np (searched the list)

2007-09-30 Thread liu xin
Dear GMXers

my mdrun stops when I try to run it on 8 nodes, but there's no error message; here's the end of the md0.log:

"B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
  --- Thank You ---  


Initializing LINear Constraint Solver
  number of constraints is 3632
  average number of constraints coupled to one constraint is 2.9"


I also tried 6 nodes or 10 nodes, but mdrun always stops here; there is no problem if I run it with -np 4.
I searched the list and found some people saying that this is probably because of the MPI version; currently we use 1.2.7.
It is strange that another cluster with MPICH 1.2.7 installed runs mdrun fine.
I also installed MPICH2 in my personal directory (I have no administrator privileges on the cluster...), but it keeps complaining about the "mpd" setup, even though I created a .mpd.conf file in my home directory as it told me.
Should MPICH2 be installed by root? Is there any conflict if I have both MPICH1 and MPICH2 installed?
I've searched the list but can't find anything useful to solve my problem.
Any suggestions will be REALLY appreciated!
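
For what it's worth, the mpd setup described in the MPICH2 documentation amounts to roughly the following (the secret word and host file are placeholders):

echo "secretword=changeme" > ~/.mpd.conf
chmod 600 ~/.mpd.conf           # mpd refuses to start if this file is readable by others
mpdboot -n 2 -f mpd.hosts       # mpd.hosts lists one hostname per line
mpdtrace                        # should print the nodes on which an mpd is running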

Xin Liu

Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-26 Thread Ragothaman Yennamalli
Hi Tsjerk,
I completely agree with you; I am treating symptoms rather than the problem. I read your previous comment on the LINCS warning to Shangwa Han. I don't have any unnatural amino acids in the protein, and the EM steps converged to machine precision. I am attaching the potential-energy .xvg file after EM. I will look into those atoms and see if I can resolve this problem.
Regards,
Raghu
--- Tsjerk Wassenaar <[EMAIL PROTECTED]> wrote:

> Hi Ragothaman,
> 
> You would do good to try and find out what caused
> the error. You may
> be treating symptoms rather than problems now, and
> simply covering up
> some more severe wrong in your system. Maybe try to
> start a simulation
> after some while, using the same parameters as
> before. This might
> allow your system to relax sufficiently.
> 
> Cheers,
> 
> Tsjerk
> 
> 




energy.xvg
Description: 1113252116-energy.xvg

Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-25 Thread Tsjerk Wassenaar

Hi Ragothaman,

You would do well to try to find out what caused the error. You may be treating symptoms rather than problems now, and simply covering up something more severely wrong in your system. Maybe try to restart the simulation after a while, using the same parameters as before. This might allow your system to relax sufficiently.

Cheers,

Tsjerk

On 1/25/07, Ragothaman Yennamalli <[EMAIL PROTECTED]> wrote:

Hi,
I increased the tau_p to 2.0 and lincs-iter to 4. Now
the system is running smoothly.
Regards,
Ragothaman

--- Mark Abraham <[EMAIL PROTECTED]> wrote:

> Ragothaman Yennamalli wrote:
> > HI,
> > Since the log files and crashed .pdb files had
> filled
> > the whole disk space I had to delete them and
> start
> > again.
> > I am simulating a homodimer protein in a water
> box. I
> > have mutated three residues and want to look the
> > behaviour of the protein. I have four setups for
> the
> > same protein without mutation and with mutation
> and
> > respective controls. Among the four only one is
> > crashing at the position restraint stage. The
> other
> > three didnt show me this error (except for the one
> > line LINCS warning).
> > I have run the position restrained dynamics again.
> Yes
> > as you are saying it starts with LINCS warning.
> > This is what it says after the LINCS warning.
>
> You should be doing some energy minimization before
> attempting MD, else
> some bad contacts will send badness around the
> system, maybe eventually
> causing such crashes. Make sure you do EM after
> solvating (and before if
> you need to!)
>
> Mark








--
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-25 Thread Ragothaman Yennamalli
Hi,
I increased the tau_p to 2.0 and lincs-iter to 4. Now
the system is running smoothly.
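
In .mdp terms that change amounts to roughly the following (comments added for context):

tau_p      = 2.0    ; pressure coupling time constant (ps)
lincs-iter = 4      ; number of LINCS correction iterations
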
Regards,
Ragothaman

--- Mark Abraham <[EMAIL PROTECTED]> wrote:

> Ragothaman Yennamalli wrote:
> > HI,
> > Since the log files and crashed .pdb files had
> filled
> > the whole disk space I had to delete them and
> start
> > again. 
> > I am simulating a homodimer protein in a water
> box. I
> > have mutated three residues and want to look the
> > behaviour of the protein. I have four setups for
> the
> > same protein without mutation and with mutation
> and
> > respective controls. Among the four only one is
> > crashing at the position restraint stage. The
> other
> > three didnt show me this error (except for the one
> > line LINCS warning). 
> > I have run the position restrained dynamics again.
> Yes
> > as you are saying it starts with LINCS warning. 
> > This is what it says after the LINCS warning.
> 
> You should be doing some energy minimization before
> attempting MD, else 
> some bad contacts will send badness around the
> system, maybe eventually 
> causing such crashes. Make sure you do EM after
> solvating (and before if 
> you need to!)
> 
> Mark






Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Ragothaman Yennamalli
Hi Mark,
Thanks for the mail. Yes, I solvate the protein, then do an EM with steepest descent, and then proceed to position-restrained MD, first restraining the protein and then the backbone. It is at the backbone-restraint step that I got this error. I also assumed that any bad contacts would get resolved in the minimization step, but it looks like they haven't. Please tell me how to solve this problem.
Raghu
--- Mark Abraham <[EMAIL PROTECTED]> wrote:

> Ragothaman Yennamalli wrote:
> > HI,
> > Since the log files and crashed .pdb files had
> filled
> > the whole disk space I had to delete them and
> start
> > again. 
> > I am simulating a homodimer protein in a water
> box. I
> > have mutated three residues and want to look the
> > behaviour of the protein. I have four setups for
> the
> > same protein without mutation and with mutation
> and
> > respective controls. Among the four only one is
> > crashing at the position restraint stage. The
> other
> > three didnt show me this error (except for the one
> > line LINCS warning). 
> > I have run the position restrained dynamics again.
> Yes
> > as you are saying it starts with LINCS warning. 
> > This is what it says after the LINCS warning.
> 
> You should be doing some energy minimization before
> attempting MD, else 
> some bad contacts will send badness around the
> system, maybe eventually 
> causing such crashes. Make sure you do EM after
> solvating (and before if 
> you need to!)
> 
> Mark






Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Mark Abraham

Ragothaman Yennamalli wrote:

HI,
Since the log files and crashed .pdb files had filled
the whole disk space I had to delete them and start
again. 
I am simulating a homodimer protein in a water box. I

have mutated three residues and want to look the
behaviour of the protein. I have four setups for the
same protein without mutation and with mutation and
respective controls. Among the four only one is
crashing at the position restraint stage. The other
three didnt show me this error (except for the one
line LINCS warning). 
I have run the position restrained dynamics again. Yes
as you are saying it starts with LINCS warning. 
This is what it says after the LINCS warning.


You should be doing some energy minimization before attempting MD, else 
some bad contacts will send badness around the system, maybe eventually 
causing such crashes. Make sure you do EM after solvating (and before if 
you need to!)
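
In outline, with placeholder file names and an em.mdp that uses integrator = steep, that means something like:

grompp -f em.mdp -c solvated.gro -p topol.top -o em.tpr
mdrun -s em.tpr -c em_done.gro -g em.log -e em.edr

and then using em_done.gro as the starting structure for the position-restrained run.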


Mark


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Ragothaman Yennamalli
Hi,
Since the log files and crashed .pdb files had filled the whole disk space, I had to delete them and start again.
I am simulating a homodimeric protein in a water box. I have mutated three residues and want to look at the behaviour of the protein. I have four setups for the same protein, without and with the mutation, plus the respective controls. Among the four, only one is crashing at the position-restraint stage; the other three didn't show this error (except for the one-line LINCS warning).
I have run the position-restrained dynamics again, and yes, as you say, it starts with a LINCS warning.
This is what it says after the LINCS warning.
*
Back Off! I just backed up step20672.pdb to
./#step20672.pdb.1#
Sorry couldn't backup step20672.pdb to
./#step20672.pdb.1#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673, time 91.346 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
max 348017441898496.00 (between atoms 7000 and
7002) rms nan
bonds that rotated more than 30 degrees:
**
I am attaching the .mdp file along with this email. 
Thanks in advance.
Raghu

--- Tsjerk Wassenaar <[EMAIL PROTECTED]> wrote:

> Hi Ragu,
> 
> The tail of the .log file is not very informative
> here. Please try to
> find in the log where it first went wrong. It may
> well start out with
> a LINCS warning.
> Besides, please be more specific in what you're
> trying to simulate,
> and what protocol you used.
> 
> Cheers,
> 
> Tsjerk
> 
> On 1/19/07, Ragothaman Yennamalli
> <[EMAIL PROTECTED]> wrote:
> > Hi,
> > This is the tail of the .log file
> > new box (3x3):
> >new box[0]={-4.13207e+15,  0.0e+00,
> > -0.0e+00}
> >new box[1]={ 0.0e+00, -5.17576e+15,
> > -0.0e+00}
> >new box[2]={ 0.0e+00,  1.51116e+23,
> > -1.14219e+16}
> > Correcting invalid box:
> > old box (3x3):
> >old box[0]={-4.13207e+15,  0.0e+00,
> > -0.0e+00}
> >old box[1]={ 0.0e+00, -5.17576e+15,
> > -0.0e+00}
> >old box[2]={ 0.0e+00,  1.51116e+23,
> > -1.14219e+16}
> > THe log files have generated as huge files (approx
> > 20GB) which have used all the disk space.
> > Raghu
> > --- Mark Abraham <[EMAIL PROTECTED]> wrote:
> >
> > > Ragothaman Yennamalli wrote:
> > > > Dear all,
> > > > I am running gromacs3.2 version. When I am
> running
> > > the
> > > > position restraint md for the protein, the
> process
> > > > stops within 100 steps with the following
> error:
> > > >
> > >
> >
>
-
> > > > One of the processes started by mpirun has
> exited
> > > with
> > > >   nonzero exit
> > > > code.  This typically indicates that the
> process
> > > > finished in error.
> > > > If your process did not finish in error, be
> sure
> > > to
> > > > include a "return
> > > > 0" or "exit(0)" in your C code before exiting
> the
> > > > application.
> > > >
> > > > PID 16200 failed on node n0 (10.10.0.8) due to
> > > signal
> > > > 9.
> > > >
> > >
> >
>
-
> > > >
> > > > I searched the mailing list and google and
> > > understood
> > > > that the pressure coupling parameter "tau_p"
> value
> > > in
> > > > the .mdp file has to be more than 1.0 and I
> did
> > > the
> > > > same.
> > >
> > > This is likely irrelevant. What do the ends of
> the
> > > .log files say?
> > >
> > > Mark
> > >
> >
> >
> >
> >
> >
>
> 
> 
> -- 
> Tsjerk A. Wassenaar, Ph.D.
> Junior UD (post-doc)
> Biomolecular NMR, Bijvoet Center
> Utrecht University
> Padualaan 8
> 3584 CH Utrecht
> The Netherlands
> P: +31-30-2539931
> F: +31-30-

Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Tsjerk Wassenaar

Hi Ragu,

The tail of the .log file is not very informative here. Please try to find the place in the log where things first went wrong; it may well start with a LINCS warning.
Besides, please be more specific about what you're trying to simulate and what protocol you used.

Cheers,

Tsjerk

On 1/19/07, Ragothaman Yennamalli <[EMAIL PROTECTED]> wrote:

Hi,
This is the tail of the .log file
new box (3x3):
   new box[0]={-4.13207e+15,  0.0e+00, -0.0e+00}
   new box[1]={ 0.0e+00, -5.17576e+15, -0.0e+00}
   new box[2]={ 0.0e+00,  1.51116e+23, -1.14219e+16}
Correcting invalid box:
old box (3x3):
   old box[0]={-4.13207e+15,  0.0e+00, -0.0e+00}
   old box[1]={ 0.0e+00, -5.17576e+15, -0.0e+00}
   old box[2]={ 0.0e+00,  1.51116e+23, -1.14219e+16}
The log files have grown into huge files (approx. 20 GB), which have used up all the disk space.
Raghu
--- Mark Abraham <[EMAIL PROTECTED]> wrote:

> Ragothaman Yennamalli wrote:
> > Dear all,
> > I am running gromacs3.2 version. When I am running
> the
> > position restraint md for the protein, the process
> > stops within 100 steps with the following error:
> >
>
-
> > One of the processes started by mpirun has exited
> with
> >   nonzero exit
> > code.  This typically indicates that the process
> > finished in error.
> > If your process did not finish in error, be sure
> to
> > include a "return
> > 0" or "exit(0)" in your C code before exiting the
> > application.
> >
> > PID 16200 failed on node n0 (10.10.0.8) due to
> signal
> > 9.
> >
>
-
> >
> > I searched the mailing list and google and
> understood
> > that the pressure coupling parameter "tau_p" value
> in
> > the .mdp file has to be more than 1.0 and I did
> the
> > same.
>
> This is likely irrelevant. What do the ends of the
> .log files say?
>
> Mark








--
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Ragothaman Yennamalli
Hi,
This is the tail of the .log file
new box (3x3):
   new box[0]={-4.13207e+15,  0.0e+00, -0.0e+00}
   new box[1]={ 0.0e+00, -5.17576e+15, -0.0e+00}
   new box[2]={ 0.0e+00,  1.51116e+23, -1.14219e+16}
Correcting invalid box:
old box (3x3):
   old box[0]={-4.13207e+15,  0.0e+00, -0.0e+00}
   old box[1]={ 0.0e+00, -5.17576e+15, -0.0e+00}
   old box[2]={ 0.0e+00,  1.51116e+23, -1.14219e+16}
The log files have grown into huge files (approx. 20 GB), which have used up all the disk space.
Raghu
--- Mark Abraham <[EMAIL PROTECTED]> wrote:

> Ragothaman Yennamalli wrote:
> > Dear all,
> > I am running gromacs3.2 version. When I am running
> the
> > position restraint md for the protein, the process
> > stops within 100 steps with the following error:
> >
>
-
> > One of the processes started by mpirun has exited
> with
> >   nonzero exit
> > code.  This typically indicates that the process
> > finished in error.
> > If your process did not finish in error, be sure
> to
> > include a "return
> > 0" or "exit(0)" in your C code before exiting the
> > application.
> > 
> > PID 16200 failed on node n0 (10.10.0.8) due to
> signal
> > 9.
> >
>
-
> > 
> > I searched the mailing list and google and
> understood
> > that the pressure coupling parameter "tau_p" value
> in
> > the .mdp file has to be more than 1.0 and I did
> the
> > same. 
> 
> This is likely irrelevant. What do the ends of the
> .log files say?
> 
> Mark






Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-18 Thread Mark Abraham

Ragothaman Yennamalli wrote:

Dear all,
I am running gromacs3.2 version. When I am running the
position restraint md for the protein, the process
stops within 100 steps with the following error:
-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code.  This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.

PID 16200 failed on node n0 (10.10.0.8) due to signal 9.
-----------------------------------------------------------------------------

I searched the mailing list and google and understood
that the pressure coupling parameter "tau_p" value in
the .mdp file has to be more than 1.0 and I did the
same. 


This is likely irrelevant. What do the ends of the .log files say?

Mark


[gmx-users] MPIRUN error while running position restrained MD

2007-01-18 Thread Ragothaman Yennamalli
Dear all,
I am running GROMACS version 3.2. When I run the position-restraint MD for the protein, the process stops within 100 steps with the following error:
-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code.  This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.

PID 16200 failed on node n0 (10.10.0.8) due to signal 9.
-----------------------------------------------------------------------------

I searched the mailing list and Google and understood that the pressure-coupling parameter "tau_p" in the .mdp file has to be more than 1.0, and I did that. Even so, the process gets killed with the same error.
Please tell me what I am overlooking or doing wrong.
Thanks in advance.

Regards,
Raghu

**
Y. M. Ragothaman,
Research Scholar,
Centre for Computational Biology and Bioinformatics,
School of Information Technology,
Jawaharlal Nehru University,
New Delhi - 110067.

Telephone: 91-11-26717568, 26717585
Facsimile: 91-11-26717586
**


