Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-27 Thread bharat v. adkar

On Sun, 27 Dec 2009, Mark Abraham wrote:


bharat v. adkar wrote:


 Dear all,
   I am trying to perform replica exchange MD (REMD) on a 'protein in
 water' system. I am following the instructions given on the wiki (How-Tos -
 REMD). I have to perform the REMD simulation with 35 different
 temperatures. As per the advice on the wiki, I equilibrated the system at
 the respective temperatures (a total of 35 equilibration simulations). After
 this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the
 equilibrated structures.

 Now when I submit the final job for REMD with the following command line, it
 gives an error:

 command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

 error msg:
 ---
 Program mdrun_mpi, VERSION 4.0.7
 Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

 Fatal error:
 Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
 nlist->jjnr=0x9a400030
 (called from file ../../../SRC/src/mdlib/ns.c, line 503)
 ---

 Thanx for Using GROMACS - Have a Nice Day
:  Cannot allocate memory
 Error on node 19, will try to stop all the nodes
 Halting parallel program mdrun_mpi on CPU 19 out of 70
 ***


 Each individual node on the cluster has 8 GB of physical memory and 16 GB of
 swap memory. Moreover, when logged onto the individual nodes, they show
 more than 1 GB of free memory, so there should be no problem with cluster
 memory. Also, the equilibration jobs for the same system ran on the
 same cluster without any problem.

 What I have observed by submitting different test jobs with varying numbers
 of processors (and numbers of replicas, where necessary) is that any job with
 a total number of processors <= 64 runs faithfully without any problem. As
 soon as the total number of processors exceeds 64, it gives the above
 error. I have tested this with 65 processors/65 replicas as well.
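
 (For completeness, the full setup amounts to roughly the following; the .mdp
 and .gro names are just placeholders for my actual files, while the chk_*.tpr
 names and the mdrun options are the ones used above.)

 # one grompp per replica, each .mdp carrying that replica's temperature
 for i in $(seq 0 34); do
     grompp -f remd_${i}.mdp -c equil_${i}.gro -p topol.top -o chk_${i}.tpr
 done

 # 70 MPI processes for 35 replicas = 2 processes per replica;
 # -multi numbers the -s argument per replica, picking up chk_0.tpr ... chk_34.tpr
 mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v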


This sounds like you might be running on fewer physical CPUs than you have 
available. If so, running multiple MPI processes per physical CPU can lead to 
memory shortage conditions.


I don't understand what you mean. Do you mean there might be more than 8
processes running per node (each node has 8 processors)? But that also
does not seem to be the case, as the SGE (Sun Grid Engine) output shows only
eight processes per node.




I don't know what you mean by swap memory.


Sorry, I meant cache memory..

bharat



Mark


 System: Protein + water + Na ions (total 46878 atoms)
 Gromacs version: tested with both v4.0.5 and v4.0.7
 compiled with: --enable-float --with-fft=fftw3 --enable-mpi
 compiler: gcc_3.4.6 -O3
 machine details: uname -mpio: x86_64 x86_64 x86_64 GNU/Linux


 I tried searching the mailing list without any luck. I am not sure if I
 am doing anything wrong in giving the commands. Please correct me if it is
 wrong.

 Kindly let me know the solution.


 bharat









Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-27 Thread Mark Abraham

bharat v. adkar wrote:

On Sun, 27 Dec 2009, Mark Abraham wrote:


bharat v. adkar wrote:


 Dear all,
   I am trying to perform replica exchange MD (REMD) on a 'protein in
 water' system. I am following the instructions given on the wiki (How-Tos -
 REMD). I have to perform the REMD simulation with 35 different
 temperatures. As per the advice on the wiki, I equilibrated the system at
 the respective temperatures (a total of 35 equilibration simulations). After
 this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the
 equilibrated structures.

 Now when I submit the final job for REMD with the following command line, it
 gives an error:

 command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

 error msg:
 ---
 Program mdrun_mpi, VERSION 4.0.7
 Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

 Fatal error:
 Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
 nlist->jjnr=0x9a400030
 (called from file ../../../SRC/src/mdlib/ns.c, line 503)
 ---

 Thanx for Using GROMACS - Have a Nice Day
 : Cannot allocate memory
 Error on node 19, will try to stop all the nodes
 Halting parallel program mdrun_mpi on CPU 19 out of 70
 ***

 Each individual node on the cluster has 8 GB of physical memory and 16 GB of
 swap memory. Moreover, when logged onto the individual nodes, they show
 more than 1 GB of free memory, so there should be no problem with cluster
 memory. Also, the equilibration jobs for the same system ran on the
 same cluster without any problem.

 What I have observed by submitting different test jobs with varying numbers
 of processors (and numbers of replicas, where necessary) is that any job with
 a total number of processors <= 64 runs faithfully without any problem. As
 soon as the total number of processors exceeds 64, it gives the above
 error. I have tested this with 65 processors/65 replicas as well.


This sounds like you might be running on fewer physical CPUs than you 
have available. If so, running multiple MPI processes per physical CPU 
can lead to memory shortage conditions.


I don't understand what you mean. Do you mean there might be more than 8
processes running per node (each node has 8 processors)? But that also
does not seem to be the case, as the SGE (Sun Grid Engine) output shows only
eight processes per node.


65 processes can't have 8 processes per node.

Mark


I don't know what you mean by swap memory.


Sorry, I meant cache memory..

bharat



Mark


 System: Protein + water + Na ions (total 46878 atoms)
 Gromacs version: tested with both v4.0.5 and v4.0.7
 compiled with: --enable-float --with-fft=fftw3 --enable-mpi
 compiler: gcc_3.4.6 -O3
 machine details: uname -mpio: x86_64 x86_64 x86_64 GNU/Linux


 I tried searching the mailing list without any luck. I am not sure if I
 am doing anything wrong in giving the commands. Please correct me if it is
 wrong.

 Kindly let me know the solution.


 bharat









Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-27 Thread bharat v. adkar

On Sun, 27 Dec 2009, Mark Abraham wrote:


bharat v. adkar wrote:

 On Sun, 27 Dec 2009, Mark Abraham wrote:

  bharat v. adkar wrote:
  
   Dear all,
     I am trying to perform replica exchange MD (REMD) on a 'protein in
   water' system. I am following the instructions given on the wiki (How-Tos -
   REMD). I have to perform the REMD simulation with 35 different
   temperatures. As per the advice on the wiki, I equilibrated the system at
   the respective temperatures (a total of 35 equilibration simulations). After
   this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the
   equilibrated structures.

   Now when I submit the final job for REMD with the following command line, it
   gives an error:

   command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

   error msg:
   ---
   Program mdrun_mpi, VERSION 4.0.7
   Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

   Fatal error:
   Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
   nlist->jjnr=0x9a400030
   (called from file ../../../SRC/src/mdlib/ns.c, line 503)
   ---

   Thanx for Using GROMACS - Have a Nice Day
   : Cannot allocate memory
   Error on node 19, will try to stop all the nodes
   Halting parallel program mdrun_mpi on CPU 19 out of 70
   ***

   Each individual node on the cluster has 8 GB of physical memory and 16 GB of
   swap memory. Moreover, when logged onto the individual nodes, they show
   more than 1 GB of free memory, so there should be no problem with cluster
   memory. Also, the equilibration jobs for the same system ran on the
   same cluster without any problem.

   What I have observed by submitting different test jobs with varying numbers
   of processors (and numbers of replicas, where necessary) is that any job with
   a total number of processors <= 64 runs faithfully without any problem. As
   soon as the total number of processors exceeds 64, it gives the above
   error. I have tested this with 65 processors/65 replicas as well.

  This sounds like you might be running on fewer physical CPUs than you
  have available. If so, running multiple MPI processes per physical CPU
  can lead to memory shortage conditions.


 I don't understand what you mean. Do you mean there might be more than 8
 processes running per node (each node has 8 processors)? But that also
 does not seem to be the case, as the SGE (Sun Grid Engine) output shows only
 eight processes per node.


65 processes can't have 8 processes per node.

Why can't it? As I said, there are 8 processors per node; what I had not
mentioned is how many nodes the job is using. The jobs got distributed over
9 nodes: 8 of them account for 64 processors, plus 1 processor from the 9th
node.
As far as I can tell, the job distribution looks fine to me. It is 1 job per
processor.


bharat



Mark


  I don't know what you mean by swap memory.

 Sorry, I meant cache memory..

 bharat

 
  Mark
 
System: Protein + water + Na ions (total 46878 atoms)

Gromacs version: tested with both v4.0.5 and v4.0.7
compiled with: --enable-float --with-fft=fftw3 --enable-mpi
compiler: gcc_3.4.6 -O3
machine details: uname -mpio: x86_64 x86_64 x86_64 GNU/Linux
  
  
   I tried searching the mailing list without any luck. I am not sure if I
   am doing anything wrong in giving the commands. Please correct me if it is
   wrong.
  
Kindly let me know the solution.
  
  
bharat
  
  










Re: [gmx-users] tpr older version message

2009-12-27 Thread Jack Shultz
If I prepped the tpr using the AMBER force fields, could that be the reason?
The mdrun I am using does not have any force field libraries in its
directory.

On Sat, Dec 26, 2009 at 11:57 PM, Mark Abraham mark.abra...@anu.edu.au wrote:
 Jack Shultz wrote:

 I prepped this ligand using acpypi, followed by grompp:
 grompp -f em.mdp -c ligand_GMX.gro -p ligand_GMX.top
 I tested this .tpr file on my server. When I had another computer run it,
 I got the following message. However, we are using the same version of
 gromacs.

 Back Off! I just backed up md.log to ./#md.log.2#

 ---
 Program mdrun, VERSION 4.0.5
 Source code file: tpxio.c, line: 1643

 Fatal error:
 Can not read file topol.tpr,
             this file is from a Gromacs version which is older than 2.0
             Make a new one with grompp or use a gro or pdb file, if
 possible
 ---

 I'd say it's evident that if the file is not corrupted (use gmxcheck), the
 GROMACS installations weren't the same (unmodified) version. Reproduce the
 conditions and run grompp -h to inspect the version.

 Perhaps you are having a problem with a shared-library mismatch.

 If you have such an old version of GROMACS around, either uninstall it and
 retire the sysadmin, or send the computer to a museum :-)

 Mark





-- 
Jack

http://drugdiscoveryathome.com
http://hydrogenathome.org


[gmx-users] fix a group in truncated octahedron

2009-12-27 Thread lammps lammps
Hi GMX users,

I want to fix a group in a truncated octahedron box. How can I deal with the
questions below?

1. I would like the box to correspond to the inscribed sphere of a cube of size
40*40*40; how do I calculate the box vectors?

2. A spherical rigid body consisting of face-centred cubic lattice sites is
fixed in the center of the box. I do not want to calculate the force and energy
between the particles of this rigid body, so that no matter how large the force
between them is, it should not blow up the rigid body. How can I do this?

Thanks in advance.
-- 
wende

[gmx-users] fix a group in truncated octahedron

2009-12-27 Thread chris . neale
Hi Wende, please do not double post. If you are unsure if your post  
got through, you can easily see the list at  
http://lists.gromacs.org/pipermail/gmx-users/2009-December/date.html.


You did not put units beside 40, so I suppose that you mean 40 A,  
whereas gromacs uses nm.


1. Make a box with one sodium ion and then editconf -c -d 4 -bt  
dodecahedron. This will give you your box, then you can put your  
lattice inside it. With a properly selected atom in an index file, you
could easily do this in one step based on the commands above (plus the  
index group with a single central atom).


2. This is clearly laid out in the manual under energygrp_excl. You  
should familiarize yourself with the online .mdp file options at  
http://manual.gromacs.org/current/online/mdp_opt.html which will help  
you find such things.
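
A minimal sketch of both, assuming the 40 really means 4 nm and that a
truncated octahedron is wanted (editconf's box type for that is octahedron);
the file and group names below are placeholders:

   # 1. centre a single reference atom and build the box around it
   editconf -f ion.gro -n central_atom.ndx -c -d 4 -bt octahedron -o box.gro

   # 2. in the .mdp, skip all non-bonded interactions within the rigid group
   #    (Rigid is an index group holding the lattice; SOL stands for the rest)
   energygrps      = Rigid SOL
   energygrp_excl  = Rigid Rigid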


Chris.

-- original message --

Hi GMX users,

I want to fix a group in a truncated octahedron box. How can I deal with the
questions below?

1. I would like the box to correspond to the inscribed sphere of a cube of size
40*40*40; how do I calculate the box vectors?

2. A spherical rigid body consisting of face-centred cubic lattice sites is
fixed in the center of the box. I do not want to calculate the force and energy
between the particles of this rigid body, so that no matter how large the force
between them is, it should not blow up the rigid body. How can I do this?

Thanks in advance.
--
wende



[gmx-users] Decoupling of Coul. and LJ separately in free energy calculation

2009-12-27 Thread Eudes Fileti
Dear GMX users,
The use of a two-step procedure is recommended for decoupling the solute from
the solvent in hydration free energy calculations: first decrease the charges
of the solute, without soft core, and after that apply the same procedure to
the LJ interactions.

In GROMACS version 4, how can I do this using the
couple-moltype (lambda0, lambda1, intramol) options?
I'm unsure about how to use these options for this purpose.
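
What I had in mind is something along these lines (SOLUTE is a placeholder for
the molecule type name, and the soft-core settings are only indicative):

; step 1: turn off the solute charges, no soft core
free_energy      = yes
couple-moltype   = SOLUTE
couple-lambda0   = vdw-q
couple-lambda1   = vdw
couple-intramol  = no
sc-alpha         = 0

; step 2: turn off the solute LJ interactions, with soft core
free_energy      = yes
couple-moltype   = SOLUTE
couple-lambda0   = vdw
couple-lambda1   = none
couple-intramol  = no
sc-alpha         = 0.5

Is that the correct way to use these options?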
Thanks
eef
___
Eudes Eterno Fileti
Centro de Ciências Naturais e Humanas
Universidade Federal do ABC — CCNH
Av. dos Estados, 5001
Santo André - SP - Brasil
CEP 09210-971
+55.11.4437-0196
http://fileti.ufabc.edu.br

Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-27 Thread Mark Abraham

bharat v. adkar wrote:

On Sun, 27 Dec 2009, Mark Abraham wrote:


bharat v. adkar wrote:

 On Sun, 27 Dec 2009, Mark Abraham wrote:

  bharat v. adkar wrote:

   Dear all,
     I am trying to perform replica exchange MD (REMD) on a 'protein in
   water' system. I am following the instructions given on the wiki (How-Tos -
   REMD). I have to perform the REMD simulation with 35 different
   temperatures. As per the advice on the wiki, I equilibrated the system at
   the respective temperatures (a total of 35 equilibration simulations). After
   this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the
   equilibrated structures.

   Now when I submit the final job for REMD with the following command line, it
   gives an error:

   command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

   error msg:
   ---
   Program mdrun_mpi, VERSION 4.0.7
   Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

   Fatal error:
   Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
   nlist->jjnr=0x9a400030
   (called from file ../../../SRC/src/mdlib/ns.c, line 503)
   ---

   Thanx for Using GROMACS - Have a Nice Day
   : Cannot allocate memory
   Error on node 19, will try to stop all the nodes
   Halting parallel program mdrun_mpi on CPU 19 out of 70
   ***

   Each individual node on the cluster has 8 GB of physical memory and 16 GB of
   swap memory. Moreover, when logged onto the individual nodes, they show
   more than 1 GB of free memory, so there should be no problem with cluster
   memory. Also, the equilibration jobs for the same system ran on the
   same cluster without any problem.

   What I have observed by submitting different test jobs with varying numbers
   of processors (and numbers of replicas, where necessary) is that any job with
   a total number of processors <= 64 runs faithfully without any problem. As
   soon as the total number of processors exceeds 64, it gives the above
   error. I have tested this with 65 processors/65 replicas as well.

  This sounds like you might be running on fewer physical CPUs than you
  have available. If so, running multiple MPI processes per physical CPU
  can lead to memory shortage conditions.

 I don't understand what you mean. Do you mean there might be more than 8
 processes running per node (each node has 8 processors)? But that also
 does not seem to be the case, as the SGE (Sun Grid Engine) output shows only
 eight processes per node.

 65 processes can't have 8 processes per node.

Why can't it? As I said, there are 8 processors per node; what I had not
mentioned is how many nodes the job is using. The jobs got distributed over
9 nodes: 8 of them account for 64 processors, plus 1 processor from the 9th
node.


OK, that's a full description. Your symptoms are indicative of someone 
making an error somewhere. Since GROMACS works over more than 64 
processors elsewhere, the presumption is that you are doing something 
wrong or the machine is not set up in the way you think it is or should 
be. To get the most effective help, you need to be sure you're providing 
full information - else we can't tell which error you're making or 
(potentially) eliminate you as a source of error.


As far as I can tell, the job distribution looks fine to me. It is 1 job
per processor.


Does non-REMD GROMACS run on more than 64 processors? Does your cluster 
support using more than 8 nodes in a run? Can you run an MPI Hello 
world application that prints the processor and node ID across more 
than 64 processors?
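
(A quick way to run that last check without writing any code, assuming the
same mpiexec/SGE environment as the mdrun job, is something like

    mpiexec -np 70 hostname | sort | uniq -c

which prints how many MPI processes landed on each host.)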


Mark



bharat



Mark


   I don't know what you mean by swap memory.

  Sorry, I meant cache memory..

  bharat

   Mark

   System: Protein + water + Na ions (total 46878 atoms)
   Gromacs version: tested with both v4.0.5 and v4.0.7
   compiled with: --enable-float --with-fft=fftw3 --enable-mpi
   compiler: gcc_3.4.6 -O3
   machine details: uname -mpio: x86_64 x86_64 x86_64 GNU/Linux

   I tried searching the mailing list without any luck. I am not sure if I
   am doing anything wrong in giving the commands. Please correct me if it is
   wrong.

   Kindly let me know the solution.

  bharat









Re: [gmx-users] tpr older version message

2009-12-27 Thread Mark Abraham

Jack Shultz wrote:

If I prepped the tpr using the AMBER force fields, could that be the reason?


No.


The mdrun I am using does not have any force field libraries in its
directory.


That's irrelevant. Only non-mdrun tools care about the contents of 
$GMXLIB or local force field files. The point of GROMPP is that it is 
the GROMacs Pre-Processor that does all such for mdrun.


When you get some advice, it's good politics to be seen to follow those 
up (or reject with reasons) before casting about wildly with other 
theories :-) You don't want the people giving free advice feeling like 
you're wasting their time!


Mark


On Sat, Dec 26, 2009 at 11:57 PM, Mark Abraham mark.abra...@anu.edu.au wrote:

Jack Shultz wrote:

I prepped this ligand using acpypi, followed by grompp:
grompp -f em.mdp -c ligand_GMX.gro -p ligand_GMX.top
I tested this .tpr file on my server. When I had another computer run it,
I got the following message. However, we are using the same version of
gromacs.

Back Off! I just backed up md.log to ./#md.log.2#

---
Program mdrun, VERSION 4.0.5
Source code file: tpxio.c, line: 1643

Fatal error:
Can not read file topol.tpr,
this file is from a Gromacs version which is older than 2.0
Make a new one with grompp or use a gro or pdb file, if
possible
---

I'd say it's evident that if the file is not corrupted (use gmxcheck), the
GROMACS installations weren't the same (unmodified) version. Reproduce the
conditions and run grompp -h to inspect the version.

Perhaps you are having a problem with a shared-library mismatch.

If you have such an old version of GROMACS around, either uninstall it and
retire the sysadmin, or send the computer to a museum :-)

Mark









Re: [gmx-users] tpr older version message

2009-12-27 Thread Jack Shultz
Hi Mark,

I figured it out. I tried your suggestion and tested with gmxcheck but
got the following errors:

gmxcheck.exe -s1 topol.tpr

Please give me TWO run input (.tpr/.tpa/.tpb) files
or specify the -m flag to generate a methods.tex file

gmxcheck.exe -s1 topol.tpr -m

---
Program gmxcheck, VERSION 4.0.5
Source code file: tpxio.c, line: 1643

Fatal error:
Can not read file topol.tpr,
 this file is from a Gromacs version which is older than 2.0
 Make a new one with grompp or use a gro or pdb file, if possible
---



gcq#332: Thanx for Using GROMACS - Have a Nice Day


But then I decided I need to run grompp on my clients along with the
pre-processing libraries generated by acpypi, and then tested mdrun on
the .tpr this generated. It was then missing aminoacids.dat; I downloaded
it and everything seems to work. I will now add some additional steps to
this workflow, so this should now work!

Thanks again for your help; I very much appreciate it.


On Sun, Dec 27, 2009 at 7:18 PM, Mark Abraham mark.abra...@anu.edu.au wrote:
 Jack Shultz wrote:

 If I prepped the tpr using the AMBER force fields, could that be the reason?

 No.

 The mdrun I am using does not have any force field libraries in its
 directory.

 That's irrelevant. Only non-mdrun tools care about the contents of $GMXLIB
 or local force field files. The point of GROMPP is that it is the GROMacs
 Pre-Processor that does all such for mdrun.

 When you get some advice, it's good politics to be seen to follow those up
 (or reject with reasons) before casting about wildly with other theories :-)
 You don't want the people giving free advice feeling like you're wasting
 their time!

 Mark

 On Sat, Dec 26, 2009 at 11:57 PM, Mark Abraham mark.abra...@anu.edu.au
 wrote:

 Jack Shultz wrote:

 I prepped this ligand using acpypi, followed by grompp:
 grompp -f em.mdp -c ligand_GMX.gro -p ligand_GMX.top
 I tested this .tpr file on my server. When I had another computer run it,
 I got the following message. However, we are using the same version of
 gromacs.

 Back Off! I just backed up md.log to ./#md.log.2#

 ---
 Program mdrun, VERSION 4.0.5
 Source code file: tpxio.c, line: 1643

 Fatal error:
 Can not read file topol.tpr,
            this file is from a Gromacs version which is older than 2.0
            Make a new one with grompp or use a gro or pdb file, if
 possible
 ---

 I'd say it's evident that if the file is not corrupted (use gmxcheck),
 the
 GROMACS installations weren't the same (unmodified) version. Reproduce
 the
 conditions and run grompp -h to inspect the version.

 Perhaps you are having a problem with a shared-library mismatch.

 If you have such an old version of GROMACS around, either uninstall it
 and
 retire the sysadmin, or send the computer to a museum :-)

 Mark









-- 
Jack

http://drugdiscoveryathome.com
http://hydrogenathome.org


Re: [gmx-users] tpr older version message

2009-12-27 Thread Justin A. Lemkul



Jack Shultz wrote:

Hi Mark,

I figured it out. I tried your suggestion and tested with gmxcheck but
got the following errors:

gmxcheck.exe -s1 topol.tpr

Please give me TWO run input (.tpr/.tpa/.tpb) files
or specify the -m flag to generate a methods.tex file

gmxcheck.exe -s1 topol.tpr -m

---
Program gmxcheck, VERSION 4.0.5
Source code file: tpxio.c, line: 1643

Fatal error:
Can not read file topol.tpr,
 this file is from a Gromacs version which is older than 2.0
 Make a new one with grompp or use a gro or pdb file, if possible
---




gmxcheck -c is the appropriate usage for checking the contents of a single .tpr 
file.  Using -s1 implies -s2, per the documentation.
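
In this case that would be:

gmxcheck -c topol.tpr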


-Justin



gcq#332: Thanx for Using GROMACS - Have a Nice Day


But then I decided I need to run grompp on my clients along with the
pre-processing libraries generated by acpypi, and then tested mdrun on
the .tpr this generated. It was then missing aminoacids.dat; I downloaded
it and everything seems to work. I will now add some additional steps to
this workflow, so this should now work!

Thanks again for your help; I very much appreciate it.


On Sun, Dec 27, 2009 at 7:18 PM, Mark Abraham mark.abra...@anu.edu.au wrote:

Jack Shultz wrote:

If I prepped the tpr using the AMBER force fields, could that be the reason?

No.


The mdrun I am using does not have any force field libraries in its
directory.

That's irrelevant. Only non-mdrun tools care about the contents of $GMXLIB
or local force field files. The point of GROMPP is that it is the GROMacs
Pre-Processor that does all such for mdrun.

When you get some advice, it's good politics to be seen to follow those up
(or reject with reasons) before casting about wildly with other theories :-)
You don't want the people giving free advice feeling like you're wasting
their time!

Mark


On Sat, Dec 26, 2009 at 11:57 PM, Mark Abraham mark.abra...@anu.edu.au
wrote:

Jack Shultz wrote:

I prepped this ligand using acpypi, followed by grompp:
grompp -f em.mdp -c ligand_GMX.gro -p ligand_GMX.top
I tested this .tpr file on my server. When I had another computer run it,
I got the following message. However, we are using the same version of
gromacs.

Back Off! I just backed up md.log to ./#md.log.2#

---
Program mdrun, VERSION 4.0.5
Source code file: tpxio.c, line: 1643

Fatal error:
Can not read file topol.tpr,
   this file is from a Gromacs version which is older than 2.0
   Make a new one with grompp or use a gro or pdb file, if
possible
---

I'd say it's evident that if the file is not corrupted (use gmxcheck),
the
GROMACS installations weren't the same (unmodified) version. Reproduce
the
conditions and run grompp -h to inspect the version.

Perhaps you are having a problem with a shared-library mismatch.

If you have such an old version of GROMACS around, either uninstall it
and
retire the sysadmin, or send the computer to a museum :-)

Mark













--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] convert B-factor

2009-12-27 Thread AntonioLeung
Dear all,
I want to convert the difference between two RMSF data sets into the B-factors
of a coordinate file (to illustrate their difference by coloring the structure
by B-factor). Can anyone tell me how to do it?

Thanks in advance!

Antonio

Re: [gmx-users] convert B-factor

2009-12-27 Thread Mark Abraham

AntonioLeung wrote:

Dear all,
I want to convert the difference between two RMSF data sets into the B-factors
of a coordinate file (to illustrate their difference by coloring the structure
by B-factor). Can anyone tell me how to do it?


g_rmsf -h

Mark


Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-27 Thread bharat v. adkar

On Mon, 28 Dec 2009, Mark Abraham wrote:


bharat v. adkar wrote:

 On Sun, 27 Dec 2009, Mark Abraham wrote:

  bharat v. adkar wrote:
On Sun, 27 Dec 2009, Mark Abraham wrote:
  
 bharat v. adkar wrote:

   Dear all,
     I am trying to perform replica exchange MD (REMD) on a 'protein in
   water' system. I am following the instructions given on the wiki (How-Tos -
   REMD). I have to perform the REMD simulation with 35 different
   temperatures. As per the advice on the wiki, I equilibrated the system at
   the respective temperatures (a total of 35 equilibration simulations). After
   this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the
   equilibrated structures.

   Now when I submit the final job for REMD with the following command line, it
   gives an error:

   command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

   error msg:
   ---
   Program mdrun_mpi, VERSION 4.0.7
   Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

   Fatal error:
   Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
   nlist->jjnr=0x9a400030
   (called from file ../../../SRC/src/mdlib/ns.c, line 503)
   ---

   Thanx for Using GROMACS - Have a Nice Day
   : Cannot allocate memory
   Error on node 19, will try to stop all the nodes
   Halting parallel program mdrun_mpi on CPU 19 out of 70
   ***

   Each individual node on the cluster has 8 GB of physical memory and 16 GB of
   swap memory. Moreover, when logged onto the individual nodes, they show
   more than 1 GB of free memory, so there should be no problem with cluster
   memory. Also, the equilibration jobs for the same system ran on the
   same cluster without any problem.

   What I have observed by submitting different test jobs with varying numbers
   of processors (and numbers of replicas, where necessary) is that any job with
   a total number of processors <= 64 runs faithfully without any problem. As
   soon as the total number of processors exceeds 64, it gives the above
   error. I have tested this with 65 processors/65 replicas as well.

  This sounds like you might be running on fewer physical CPUs than you
  have available. If so, running multiple MPI processes per physical CPU
  can lead to memory shortage conditions.

 I don't understand what you mean. Do you mean there might be more than 8
 processes running per node (each node has 8 processors)? But that also
 does not seem to be the case, as the SGE (Sun Grid Engine) output shows only
 eight processes per node.

 65 processes can't have 8 processes per node.

 Why can't it? As I said, there are 8 processors per node; what I had not
 mentioned is how many nodes the job is using. The jobs got distributed over
 9 nodes: 8 of them account for 64 processors, plus 1 processor from the 9th
 node.


OK, that's a full description. Your symptoms are indicative of someone making 
an error somewhere. Since GROMACS works over more than 64 processors 
elsewhere, the presumption is that you are doing something wrong or the 
machine is not set up in the way you think it is or should be. To get the 
most effective help, you need to be sure you're providing full information - 
else we can't tell which error you're making or (potentially) eliminate you 
as a source of error.



Sorry for not being clear in my statements.


 As far as I can tell, the job distribution looks fine to me. It is 1 job per
 processor.


Does non-REMD GROMACS run on more than 64 processors? Does your cluster 
support using more than 8 nodes in a run? Can you run an MPI Hello world 
application that prints the processor and node ID across more than 64 
processors?


Yes, the cluster supports runs with more than 8 nodes. I generated a
system with a 10 nm water box and submitted it on 80 processors. It ran
fine: it printed all 80 NODEIDs and also showed me when the job would
finish.


bharat




Mark



 bharat

 
  Mark

   I don't know what you mean by swap memory.

  Sorry, I meant cache memory..

  bharat

   Mark

   System: Protein + water + Na ions (total 46878 atoms)
   Gromacs version: tested with both v4.0.5 and v4.0.7
   compiled with: --enable-float --with-fft=fftw3 --enable-mpi
   compiler: gcc_3.4.6 -O3
   machine details: uname -mpio: x86_64 x86_64 x86_64 GNU/Linux

   I tried searching the mailing list without any luck. I am not sure if I
   am doing anything wrong in giving the commands. Please correct me if it is
   wrong.

   Kindly let me know the solution.

Re: [gmx-users] convert B-factor

2009-12-27 Thread AntonioLeung
I know how to calculate RMSF, and I have calculated the RMSF of two
trajectories (of the same molecule). I want to compare the two RMSFs and
convert their discrepancy into B-factors. Can you give me more details?
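The conversion I was planning to use is the standard relation between the mean
square fluctuation and the B-factor, B = (8 * pi^2 / 3) * RMSF^2, applied to
the difference of the squared RMSFs, with the resulting values written into
the B-factor column of a PDB (editconf -bf, if I understand that option
correctly). Is that the right approach?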
 
-- Original --
From: Mark Abraham mark.abra...@anu.edu.au
Date: Mon, Dec 28, 2009 11:04 AM
To: Discussion list for GROMACS users gmx-users@gromacs.org

Subject:  Re: [gmx-users] convert B-factor

 
 AntonioLeung wrote:
 Dear all,
 I want to convert the difference between two RMSF data sets into the B-factors
 of a coordinate file (to illustrate their difference by coloring the structure
 by B-factor). Can anyone tell me how to do it?

g_rmsf -h

Mark

[gmx-users] protein simulation

2009-12-27 Thread edmund lee

Dear all, 

I am trying to do a simulation of the protein OMPA. At the grompp step, it shows a
fatal error: "Fatal error: Atomtype 'HC' not found!"
I tried to figure out the error but failed, so I hope that someone can help me
with this.

Thanks.

  
_
New Windows 7: Simplify what you do everyday. Find the right PC for you.
http://windows.microsoft.com/shop-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php

Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-27 Thread David van der Spoel

bharat v. adkar wrote:

On Mon, 28 Dec 2009, Mark Abraham wrote:


bharat v. adkar wrote:

 On Sun, 27 Dec 2009, Mark Abraham wrote:

  bharat v. adkar wrote:
On Sun, 27 Dec 2009, Mark Abraham wrote:
   bharat v. adkar wrote:
     Dear all,
       I am trying to perform replica exchange MD (REMD) on a 'protein in
     water' system. I am following the instructions given on the wiki (How-Tos -
     REMD). I have to perform the REMD simulation with 35 different
     temperatures. As per the advice on the wiki, I equilibrated the system at
     the respective temperatures (a total of 35 equilibration simulations). After
     this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the
     equilibrated structures.

     Now when I submit the final job for REMD with the following command line, it
     gives an error:

     command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

     error msg:
     ---
     Program mdrun_mpi, VERSION 4.0.7
     Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

     Fatal error:
     Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
     nlist->jjnr=0x9a400030
     (called from file ../../../SRC/src/mdlib/ns.c, line 503)
     ---

     Thanx for Using GROMACS - Have a Nice Day
     : Cannot allocate memory
     Error on node 19, will try to stop all the nodes
     Halting parallel program mdrun_mpi on CPU 19 out of 70
     ***

     Each individual node on the cluster has 8 GB of physical memory and 16 GB of
     swap memory. Moreover, when logged onto the individual nodes, they show
     more than 1 GB of free memory, so there should be no problem with cluster
     memory. Also, the equilibration jobs for the same system ran on the
     same cluster without any problem.

     What I have observed by submitting different test jobs with varying numbers
     of processors (and numbers of replicas, where necessary) is that any job with
     a total number of processors <= 64 runs faithfully without any problem. As
     soon as the total number of processors exceeds 64, it gives the above
     error. I have tested this with 65 processors/65 replicas as well.

    This sounds like you might be running on fewer physical CPUs than you
    have available. If so, running multiple MPI processes per physical CPU
    can lead to memory shortage conditions.

   I don't understand what you mean. Do you mean there might be more than 8
   processes running per node (each node has 8 processors)? But that also
   does not seem to be the case, as the SGE (Sun Grid Engine) output shows only
   eight processes per node.

   65 processes can't have 8 processes per node.

  Why can't it? As I said, there are 8 processors per node; what I had not
  mentioned is how many nodes the job is using. The jobs got distributed over
  9 nodes: 8 of them account for 64 processors, plus 1 processor from the 9th
  node.

 OK, that's a full description. Your symptoms are indicative of someone
 making an error somewhere. Since GROMACS works over more than 64
 processors elsewhere, the presumption is that you are doing something
 wrong or the machine is not set up in the way you think it is or
 should be. To get the most effective help, you need to be sure you're
 providing full information - else we can't tell which error you're
 making or (potentially) eliminate you as a source of error.

Sorry for not being clear in my statements.

  As far as I can tell, the job distribution looks fine to me. It is 1 job
  per processor.

 Does non-REMD GROMACS run on more than 64 processors? Does your
 cluster support using more than 8 nodes in a run? Can you run an MPI
 Hello world application that prints the processor and node ID across
 more than 64 processors?

Yes, the cluster supports runs with more than 8 nodes. I generated a
system with a 10 nm water box and submitted it on 80 processors. It ran
fine: it printed all 80 NODEIDs and also showed me when the job would
finish.

bharat

 Mark

  bharat

   Mark

    I don't know what you mean by swap memory.

   Sorry, I meant cache memory..

   bharat

    Mark

    System: Protein + water + Na ions (total 46878 atoms)
    Gromacs version: tested with both v4.0.5 and v4.0.7
    compiled with: --enable-float --with-fft=fftw3 --enable-mpi
    compiler: gcc_3.4.6 -O3
    machine details: uname -mpio: x86_64 x86_64 x86_64 GNU/Linux

    I tried searching the mailing list without any luck. I am not sure if I
    am doing anything wrong in giving the commands. Please correct me if it is
    wrong.

    Kindly let me know the solution.