Re: [gmx-users] mpirun error

2012-11-20 Thread Justin Lemkul



On 11/20/12 3:55 PM, Parisa Rahmani wrote:

Yes, I ran two test simulations, one with 5 CPUs and another with 1.

5 CPU simulation:
step 50380, will finish at Wed Nov 21 01:46:25 2012
step 50020, will finish at Wed Nov 21 01:48:36 2012
step 50320, will finish at Wed Nov 21 01:46:49 2012
step 50270, will finish at Wed Nov 21 01:47:07 2012
Time command:
real 153m1.968s
user 0m0.472s
sys 0m2.072s

1 CPU simulation (started almost 5 minutes later):
step 56000, will finish at Wed Nov 21 02:02:07 2012
Time command:
real 177m25.541s
user 177m23.041s
sys 0m0.352s



It appears to me that mdrun is functioning correctly, but overall performance depends
on how large the system is and how good the hardware is.  It just seems to
me that you're not getting particularly great scaling.


-Justin
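
As a back-of-the-envelope check of that scaling, using only the wall-clock times
quoted above:

  # 1-core run:  177m25.541s ~ 10645.5 s
  # 5-core run:  153m 1.968s ~  9182.0 s
  # speedup on 5 cores ~ 10645.5 / 9182.0 ~ 1.16x
  echo "scale=3; (177*60 + 25.541) / (153*60 + 1.968)" | bc

In other words, five cores buy only about a 14% reduction in wall time, which is
consistent with the comment above about poor scaling.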



Re: [gmx-users] mpirun error

2012-11-20 Thread Parisa Rahmani
Yes, I ran two test simulations, one with 5 CPUs and another with 1.

5 CPU simulation:
step 50380, will finish at Wed Nov 21 01:46:25 2012
step 50020, will finish at Wed Nov 21 01:48:36 2012
step 50320, will finish at Wed Nov 21 01:46:49 2012
step 50270, will finish at Wed Nov 21 01:47:07 2012
Time command:
real 153m1.968s
user 0m0.472s
sys 0m2.072s

1 CPU simulation (started almost 5 minutes later):
step 56000, will finish at Wed Nov 21 02:02:07 2012
Time command:
real 177m25.541s
user 177m23.041s
sys 0m0.352s


Re: [gmx-users] mpirun error

2012-11-20 Thread Justin Lemkul



On 11/20/12 8:43 AM, Parisa Rahmani wrote:

Thanks for your reply.
I have also tried installing with the _mpi suffix.
Here is the output of ldd:

gromacs3.3/bin$ ldd mdrun_mpi
linux-vdso.so.1 =>  (0x7fff4658c000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f7d7afe9000)
libfftw3f.so.3 => /usr/lib/libfftw3f.so.3 (0x7f7d7ac76000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f7d7a9f3000)
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7f7d7a603000)
libopa.so.1 => /usr/lib/libopa.so.1 (0x7f7d7a402000)
libmpl.so.1 => /usr/lib/libmpl.so.1 (0x7f7d7a1fd000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f7d79ff5000)
libcr.so.0 => /usr/lib/libcr.so.0 (0x7f7d79deb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f7d79bce000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f7d79847000)
/lib64/ld-linux-x86-64.so.2 (0x7f7d7b216000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f7d79642000)


gromacs3.3/bin$ ldd mdrun_mpi | grep mpi
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7fc78fb5c000)
It seems that GROMACS has been compiled with MPICH.



Does this executable still give the error listed below?  Performance is one 
thing, errors are another.  You may not necessarily obtain great scaling, 
depending on the contents of the system.


-Justin







--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
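
The ldd output above shows mdrun_mpi linked against libmpich, while the system
was described as having Open MPI installed. A quick way to check whether the
binary and the launcher belong to the same MPI implementation (illustrative
commands, assuming a standard Debian layout) is:

  ldd $(which mdrun_mpi) | grep -i mpi   # MPI library the binary was built against
  mpirun --version                       # which MPI implementation provides mpirun
  which mpirun mpiexec                   # launchers available on the PATH

If the binary links against MPICH but mpirun comes from Open MPI, each launched
process can end up running as an independent single-CPU mdrun, which would match
both the "made for 6 nodes ... expected it to be for 1" error and the
observation that six busy processes take as long as one.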




Re: [gmx-users] mpirun error

2012-11-20 Thread Parisa Rahmani
Thanks for your reply.
I have also tried installing with the _mpi suffix.
Here is the output of ldd:

gromacs3.3/bin$ ldd mdrun_mpi
linux-vdso.so.1 =>  (0x7fff4658c000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x7f7d7afe9000)
libfftw3f.so.3 => /usr/lib/libfftw3f.so.3 (0x7f7d7ac76000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f7d7a9f3000)
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7f7d7a603000)
libopa.so.1 => /usr/lib/libopa.so.1 (0x7f7d7a402000)
libmpl.so.1 => /usr/lib/libmpl.so.1 (0x7f7d7a1fd000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f7d79ff5000)
libcr.so.0 => /usr/lib/libcr.so.0 (0x7f7d79deb000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f7d79bce000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f7d79847000)
/lib64/ld-linux-x86-64.so.2 (0x7f7d7b216000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f7d79642000)


gromacs3.3/bin$ ldd mdrun_mpi | grep mpi
libmpich.so.3 => /usr/lib/libmpich.so.3 (0x7fc78fb5c000)
It seems that GROMACS has been compiled with MPICH.





Re: [gmx-users] mpirun error

2012-11-19 Thread Justin Lemkul



On 11/19/12 12:09 PM, Parisa Rahmani wrote:

Dear gmx users,

I have a problem running parallel jobs on my Debian system (Open MPI
installed on it):
Linux debian 3.2.0-1-amd64 #1 SMP, UTC 2012 x86_64 GNU/Linux
I am using gmx 3.3.3 because of the *lambda dynamics* method that is
implemented in it.

As far as I know, in gmx 3.x the number of processors supplied to the
subsequent mdrun needs to match the input file. But when I use
"grompp -np 6" and "mpirun -np 6 mdrun", the following error appears:

ERROR : run input file md.tpr was made for 6 nodes,
while mdrun expected it to be for 1 nodes.

Searching the mailing list I found similar problems, but none of
the solutions worked for my case.

Without the -np option in grompp the error disappears, and then with each
of these commands:

1) mpirun -np 6 mdrun -deffnm md

2) mpirun -np 6 mdrun -deffnm md -N 6

3) mpirun -np 6 mdrun -np 6 -deffnm md

4) mdrun -np 6 -s md -N 6

it uses 6 processors (each one at nearly 100%), but the simulation
time is the same as for 1 processor.

I have no problem with parallel jobs on our cluster (gmx 3.3.3 &
Open MPI), with the following commands:

grompp -np 6 -f ...
mpiexec mdrun ...   (the number of processors is specified in the bash file)

but I can't run it on my 6-core system.

Also, I have no problem with the newer version of gmx (4.5.x), but I
have to use this version. I hope someone can help me.



Have you properly compiled an MPI-enabled mdrun?  The default executable name 
should be mdrun_mpi.  It should be linked against libmpi, so running ldd on the 
mdrun executable should tell you.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
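
For reference, the GROMACS 3.x pattern described in the question above looks
roughly like this when the MPI-enabled build and the launcher match (a sketch
with placeholder file names, not a verified recipe from this thread):

  grompp -np 6 -f md.mdp -c conf.gro -p topol.top -o md.tpr   # pre-split the run input for 6 nodes
  mpirun -np 6 mdrun_mpi -deffnm md                           # start exactly 6 MPI processes of the MPI build

In 3.x the -np given to grompp has to equal the number of MPI processes started
by mpirun; a mismatch produces exactly the "made for 6 nodes, while mdrun
expected it to be for 1 nodes" error quoted above.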




Re: [gmx-users] mpirun error?

2011-02-16 Thread Justin A. Lemkul



Justin Kat wrote:

Dear Gromacs,

My colleague has attempted to issue this command:


mpirun -np 8 (or 7) mdrun_mpi .. (etc)


According to him, he gets the following error message:


MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD  
with errorcode -1.


NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on   
exactly when Open MPI kills them.

--


--- 
Program mdrun_mpi, VERSION 4.0.7

Source code file: domdec.c, line: 5888

Fatal error:
There is no domain decomposition for 7 nodes that is compatible with the 
given box and a minimum cell size of 0.955625 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS 
settings



However, when he uses say, -np 6, he seems to get no error. Any insight 
on why this might be happening?




When any error comes up, the first port of call should be the Gromacs site, 
followed by a mailing list search.  In this case, the website works quite nicely:


http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
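
As a concrete illustration of why 7 fails here while other counts can work: 7 is
prime, so the only decomposition is a 7x1x1 stack of slabs, each of which must
still be at least the minimum cell size reported in the error, whereas a
composite count factors into a more compact grid (hypothetical command, reusing
the executable name from above):

  mpirun -np 8 mdrun_mpi -deffnm md   # 8 ranks can decompose as 2x2x2 instead of thin 7x1x1 slabs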


Also, when he saves the output to a file, sometimes he sees the following:


NOTE: Turning on dynamic load balancing


Is this another flag that might be causing the crash? What does that 
line mean?


See the manual and/or Gromacs 4 paper for an explanation of dynamic load 
balancing.  This is a normal message.


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] mpirun error?

2011-02-16 Thread Justin Kat
Dear Gromacs,

My colleague has attempted to issue this command:


mpirun -np 8 (or 7) mdrun_mpi .. (etc)


According to him, he gets the following error message:


MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--


---
Program mdrun_mpi, VERSION 4.0.7
Source code file: domdec.c, line: 5888

Fatal error:
There is no domain decomposition for 7 nodes that is compatible with the
given box and a minimum cell size of 0.955625 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS
settings


However, when he uses say, -np 6, he seems to get no error. Any insight on
why this might be happening?

Also, when he saves the output to a file, sometimes he sees the following:


NOTE: Turning on dynamic load balancing


Is this another flag that might be causing the crash? What does that line
mean?

Thanks!
Justin

Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-26 Thread Ragothaman Yennamalli
Hi Tsjerk,
I completely agree with you. I am treating symptoms
rather than the problem. I read your previous comment
on the LINCS warning to Shangwa Han. I don't have any
unnatural amino acids in the protein, and the EM steps
converged to machine precision. I am attaching the
potential energy .xvg file after EM. I will look into
those atoms and see if I can resolve this problem.
Regards,
Raghu

energy.xvg
Description: 1113252116-energy.xvg
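
The attached file is a plain Grace-format plot produced by the GROMACS analysis
tools and can be viewed with, for example (assuming xmgrace is installed):

  xmgrace energy.xvg   # plot the potential energy from the EM run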

Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-25 Thread Tsjerk Wassenaar

Hi Ragothaman,

You would do well to try to find out what caused the error. You may
be treating symptoms rather than the problem now, and simply covering up
something more seriously wrong in your system. Maybe try to start a simulation
after some time, using the same parameters as before. This might
allow your system to relax sufficiently.

Cheers,

Tsjerk





--
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-25 Thread Ragothaman Yennamalli
Hi,
I increased the tau_p to 2.0 and lincs-iter to 4. Now
the system is running smoothly.
Regards,
Ragothaman
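
For reference, the two settings mentioned correspond to .mdp entries along these
lines (an illustrative snippet reflecting the values reported above, not the
poster's actual file):

  tau_p       = 2.0   ; slower pressure-coupling relaxation time (ps)
  lincs_iter  = 4     ; extra LINCS iterations for a more accurate constraint solution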



Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Ragothaman Yennamalli
Hi Mark,
Thanks for the mail. Yes, I solvate the protein and
then do an EM with steepest descent, and then I proceed
to position-restrained MD, first restraining the
protein and then the backbone. It is at the backbone
restraint step that I got this error. I also assumed that
any bad contacts would get resolved in the minimization
step, but it looks like they haven't. Please
tell me how to solve this problem.
Raghu


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Mark Abraham


You should be doing some energy minimization before attempting MD, else 
some bad contacts will send badness around the system, maybe eventually 
causing such crashes. Make sure you do EM after solvating (and before if 
you need to!)


Mark
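
In GROMACS 3.x terms that ordering is roughly as follows (a sketch with
placeholder file names, not commands taken from this thread):

  grompp -f em.mdp -c solvated.gro -p topol.top -o em.tpr   # energy minimisation of the solvated system
  mdrun -s em.tpr -c after_em.gro
  grompp -f pr.mdp -c after_em.gro -p topol.top -o pr.tpr   # position-restrained MD from the minimised coordinates
  mdrun -s pr.tpr -c after_pr.gro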


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Ragothaman Yennamalli
Hi,
Since the log files and crashed .pdb files had filled
the whole disk space, I had to delete them and start
again.
I am simulating a homodimer protein in a water box. I
have mutated three residues and want to look at the
behaviour of the protein. I have four setups for the
same protein, without and with the mutation, and the
respective controls. Among the four, only one is
crashing at the position restraint stage. The other
three didn't show me this error (except for the one-
line LINCS warning).
I have run the position-restrained dynamics again. Yes,
as you are saying, it starts with a LINCS warning.
This is what it says after the LINCS warning:
*
Back Off! I just backed up step20672.pdb to
./#step20672.pdb.1#
Sorry couldn't backup step20672.pdb to
./#step20672.pdb.1#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673  Warning: pressure scaling more than 1%,
mu: 8.9983e+20 8.9983e+20 8.9983e+20

Step 20673, time 91.346 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
max 348017441898496.00 (between atoms 7000 and
7002) rms nan
bonds that rotated more than 30 degrees:
**
I am attaching the .mdp file along with this email. 
Thanks in advance.
Raghu


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Tsjerk Wassenaar

Hi Ragu,

The tail of the .log file is not very informative here. Please try to
find in the log where it first went wrong. It may well start out with
a LINCS warning.
Besides, please be more specific in what you're trying to simulate,
and what protocol you used.

Cheers,

Tsjerk





--
Tsjerk A. Wassenaar, Ph.D.
Junior UD (post-doc)
Biomolecular NMR, Bijvoet Center
Utrecht University
Padualaan 8
3584 CH Utrecht
The Netherlands
P: +31-30-2539931
F: +31-30-2537623


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-22 Thread Ragothaman Yennamalli
Hi,
This is the tail of the .log file:
new box (3x3):
   new box[0]={-4.13207e+15,  0.0e+00, -0.0e+00}
   new box[1]={ 0.0e+00, -5.17576e+15, -0.0e+00}
   new box[2]={ 0.0e+00,  1.51116e+23, -1.14219e+16}
Correcting invalid box:
old box (3x3):
   old box[0]={-4.13207e+15,  0.0e+00, -0.0e+00}
   old box[1]={ 0.0e+00, -5.17576e+15, -0.0e+00}
   old box[2]={ 0.0e+00,  1.51116e+23, -1.14219e+16}
The log files have grown into huge files (approx. 20 GB),
which have used up all the disk space.
Raghu


Re: [gmx-users] MPIRUN error while running position restrained MD

2007-01-18 Thread Mark Abraham

Ragothaman Yennamalli wrote:

Dear all,
I am running the gromacs3.2 version. When I run
position-restrained MD for the protein, the process
stops within 100 steps with the following error:
-
One of the processes started by mpirun has exited with
nonzero exit code.  This typically indicates that the
process finished in error. If your process did not
finish in error, be sure to include a "return 0" or
"exit(0)" in your C code before exiting the application.

PID 16200 failed on node n0 (10.10.0.8) due to signal 9.
-

I searched the mailing list and Google and understood
that the pressure coupling parameter "tau_p" value in
the .mdp file has to be more than 1.0, and I did the
same.


This is likely irrelevant. What do the ends of the .log files say?

Mark


[gmx-users] MPIRUN error while running position restrained MD

2007-01-18 Thread Ragothaman Yennamalli
Dear all,
I am running the gromacs3.2 version. When I run
position-restrained MD for the protein, the process
stops within 100 steps with the following error:
-
One of the processes started by mpirun has exited with
nonzero exit code.  This typically indicates that the
process finished in error. If your process did not
finish in error, be sure to include a "return 0" or
"exit(0)" in your C code before exiting the application.

PID 16200 failed on node n0 (10.10.0.8) due to signal 9.
-

I searched the mailing list and Google and understood
that the pressure coupling parameter "tau_p" value in
the .mdp file has to be more than 1.0, and I did the
same. Even so, the process gets killed with the
same error.
Please tell me what I am overlooking or doing wrong.
Thanks in advance.

Regards,
Raghu

**
Y. M. Ragothaman,
Research Scholar,
Centre for Computational Biology and Bioinformatics,
School of Information Technology,
Jawaharlal Nehru University,
New Delhi - 110067.

Telephone: 91-11-26717568, 26717585
Facsimile: 91-11-26717586
**


