RE: [gmx-users] problem with the size of freeze groups

2010-03-03 Thread Berk Hess

Hi,

I don't know exactly what you have done, so currently I can't say more than I
have already.

Could you please file a bugzilla at bugzilla.gromacs.org and attach all the 
files required to run grompp?

Thanks,

Berk

Date: Tue, 2 Mar 2010 21:11:06 -0500
Subject: Re: [gmx-users] problem with the size of freeze groups
From: jampa...@gmail.com
To: gmx-users@gromacs.org

Dear Berk,

Thanks for your previous responses. Can you please let me know if you have any
solution for the problem with the size of the freeze groups? I am still not able
to run if I have larger freeze groups.

Thanks again for your kind help.

srinivas.


On Fri, Feb 26, 2010 at 5:37 PM, jampani srinivas jampa...@gmail.com wrote:

Dear Berk,
I have checked my inputs and the Tcl scripts that I used for the selection, and I
could see that my selection doesn't have any problem. I submitted it again and I am
still getting the same log file with nrdf 0 for the non-freezing group.
Please let me know if you want to see any of my input files, and help me if you
have a solution for this problem.


Thanks and Regards,
Srinivas.

On Fri, Feb 26, 2010 at 12:39 PM, jampani srinivas jampa...@gmail.com wrote:


Dear Berk,
I am using VERSION 4.0.5. As you said, if there is no problem I should get it
correctly; I don't know where it is going wrong. I have written a small script
in Tcl to use in VMD to get my selections. I will check the script and the
selection again and will let you know my results.



Thanks for your valuable time and kind help.
Srinivas.
On Fri, Feb 26, 2010 at 12:29 PM, Berk Hess g...@hotmail.com wrote:








Hi,

Which version of Gromacs are you using?
I can't see any issues in the 4.0 code, but some older versions might have
problems.

Berk

Date: Fri, 26 Feb 2010 12:05:56 -0500



Subject: Re: [gmx-users] problem with the size of freeze groups
From: jampa...@gmail.com
To: gmx-users@gromacs.org




Dear Berk,
They are the same; freeze and Tmp2 are exactly the same group. I just named them
like that for my convenience. To avoid confusion, I made the names uniform in my
second email.




Thanks,
Srinivas.

On Fri, Feb 26, 2010 at 11:59 AM, Berk Hess g...@hotmail.com wrote:









That is what I suspected, but I don't know why this is.

Are you really sure you made a temperature coupling group
that is exactly the freeze group?
The first mdp file you mailed had different group names for the freeze group
and the tcoupl groups.

Berk
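
For reference, a minimal sketch of what the relevant mdp lines should look like when the temperature-coupling group is exactly the freeze group (the group names here are example index groups, not the ones from this thread):

freezegrps  = Frozen
freezedim   = Y Y Y
tc_grps     = Rest_of_System  Frozen   ; second tcoupl group is exactly the freeze group
tau_t       = 0.1             0.1
ref_t       = 300             0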

Date: Fri, 26 Feb 2010 11:53:49 -0500
Subject: Re: [gmx-users] problem with the size of freeze groups
From: jampa...@gmail.com




To: gmx-users@gromacs.org

Dear Berk, 
It looks to me like something is wrong when I change the radius from 35 to 25 A;
herewith I am giving the grpopts for both systems:





+
grpopts: (system with 35 A)
   nrdf: 33141.4 0
   ref_t: 300   0
   tau_t: 0.1 0.1
+
grpopts: (system with 25 A)
   nrdf:   0  0
   ref_t: 300   0
   tau_t: 0.1 0.1

I think something is going wrong when the size of the freeze group is
increased. I don't know whether my understanding is correct or not.





 Thanks Srinivas.


On Fri, Feb 26, 2010 at 11:01 AM, Berk Hess g...@hotmail.com wrote:






Ah, but that does not correspond to the mdp options you mailed.
Here there is only one group with 0 degrees of freedom and reference 
temperature 0.

Berk

Date: Fri, 26 Feb 2010 10:50:13 -0500






Subject: Re: [gmx-users] problem with the size of freeze groups
From: jampa...@gmail.com
To: gmx-users@gromacs.org







HI
Thanks, my log file shows me nrdf: 0
###
   grpopts:
      nrdf:  0
      ref_t: 0
      tau_t: 0
###
Thanks,
Srinivas.
On Fri, Feb 26, 2010 at 10:25 AM, Berk Hess g...@hotmail.com wrote:












Hi,

Then I have no clue what might be wrong.
Have you checked nrdf in the log file?

Berk

Date: Fri, 26 Feb 2010 09:54:22 -0500
Subject: Re: [gmx-users] problem with the size of freeze groups







From: jampa...@gmail.com
To: gmx-users@gromacs.org

Dear Berk,

Thanks for your response. As you mentioned, I have separated the t-coupling groups
for the frozen and non-frozen groups, but the result is still the same. Herewith I
am giving my md.mdp file; can you suggest whether I am missing any options in my
md.mdp file?








Thanks again,
Srinivas.

md.mdp file
+++
title   = AB2130








cpp = /usr/bin/cpp
constraints = all-bonds
integrator  = md
dt  = 0.002 ; ps !
nsteps  = 1500000 ; total 3.0 ns.
nstcomm = 1
nstxout = 1000 ; collect data every 2.0 ps








nstvout = 1000 ; collect velocity every 2.0 ps
nstfout = 0
nstlog  = 0
nstenergy   = 1000 ; collect energy   every 2.0 ps
nstlist = 10
ns_type = grid
rlist   = 1.0








coulombtype = PME
rcoulomb 

Re: [gmx-users] gromacs memory usage

2010-03-03 Thread Amit Choubey
Hi Roland,

It says

gromacs/4.0.5/bin/mdrun_mpi: ELF 64-bit LSB executable, AMD x86-64, version
1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for
GNU/Linux 2.6.9, not stripped

On Tue, Mar 2, 2010 at 10:34 PM, Roland Schulz rol...@utk.edu wrote:

 Amit,

 try the full line (with the file)

 Roland

 On Wed, Mar 3, 2010 at 1:22 AM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland

 I tried 'which mdrun' but it only gives the path name of the installation. Is
 there any other way to know if the installation is 64-bit or not?

 Thank you,
 Amit


 On Tue, Mar 2, 2010 at 10:03 PM, Roland Schulz rol...@utk.edu wrote:

 Hi,

 do:
 file `which mdrun`
 and it should give:
 /usr/bin/mdrun: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
 dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped

 If it is not 64 you need to compile with 64 and have a 64bit kernel.
 Since you asked before about 2GB large files this might indeed be your
 problem.

 Roland
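
As a side note, a quick sketch for checking both the binary and the kernel (use whichever mdrun binary you actually launch):

file `which mdrun_mpi`   # 32- vs 64-bit build
uname -m                 # kernel architecture; should report x86_64 (or another 64-bit arch)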

 On Wed, Mar 3, 2010 at 12:48 AM, Amit Choubey kgp.a...@gmail.comwrote:

 Hi Tsjerk,

 I tried to do a test run based on the presentation. But there was a
 memory related error (I had given a leverage of more than 2 GB).

 I did not understand the 64-bit issue; could you let me know where the
 documentation is? I need to look into that.

 Thank you,
 amit


 On Tue, Mar 2, 2010 at 9:14 PM, Tsjerk Wassenaar tsje...@gmail.comwrote:

 Hi Amit,

 I think the presentation gives right what you want: a rough estimate.
 Now as Berk pointed out, to allocate more than 2GB of memory, you need
 to compile in 64bit. Then, if you want to have a real feel for the
 memory usage, there's no other way than trying. But fortunately, the
 memory requirements of a (very) long simulation are equal to that of a
 very short one, so it doesn't need to cost much time.

 Cheers,

 Tsjerk

 On Wed, Mar 3, 2010 at 5:31 AM, Amit Choubey kgp.a...@gmail.com
 wrote:
  Hi Mark,
 
  Yes thats one way to go about it. But it would have been great if i
 could
  get a rough estimation.
 
  Thank you.
 
  amit
 
 
  On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  wrote:
 
  On 3/03/2010 12:53 PM, Amit Choubey wrote:
 
 Hi Mark,
 
 I quoted the memory usage requirements from a presentation by
 Berk
 Hess, Following is the link to it
 
 
 
 
 http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf
 
 l. In that presentation on pg 27,28 Berk does talk about memory
 usage but then I am not sure if he referred to any other
 specific
  thing.
 
 My system only contains SPC water. I want Berendsen T coupling
 and
 Coulomb interaction with Reaction Field.
 
 I just want a rough estimate of how big of a system of water can
 be
 simulated on our super computers.
 
  Try increasingly large systems until it runs out of memory. There's
 your
  answer.
 
  Mark
 
  On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  mailto:mark.abra...@anu.edu.au wrote:
 
 - Original Message -
 From: Amit Choubey kgp.a...@gmail.com mailto:
 kgp.a...@gmail.com
 Date: Saturday, February 27, 2010 10:17
 Subject: Re: [gmx-users] gromacs memory usage
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 mailto:gmx-users@gromacs.org
 
   Hi Mark,
   We have few nodes with 64 GB memory and many other with 16 GB
 of
 memory. I am attempting a simulation of around 100 M atoms.
 
 Well, try some smaller systems and work upwards to see if you
 have a
 limit in practice. 50K atoms can be run in less than 32GB over
 64
 processors. You didn't say whether your simulation system can
 run on
 1 processor... if it does, then you can be sure the problem
 really
 is related to parallelism.
 
   I did find some document which says one need (50bytes)*NATOMS
 on
 master node, also one needs
(100+4*(no. of atoms in cutoff)*(NATOMS/nprocs) for compute
 nodes. Is this true?
 
 In general, no. It will vary with the simulation algorithm
 you're
 using. Quoting such without attributing the source or describing
 the
 context is next to useless. You also dropped a parenthesis.
 
 Mark
 --
 gmx-users mailing list gmx-users@gromacs.org
 mailto:gmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 Please search the archive at http://www.gromacs.org/searchbefore
 posting!
 Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org
 mailto:gmx-users-requ...@gromacs.org.
 Can't post? Read http://www.gromacs.org/mailing_lists/users.php
 
 
  --
  gmx-users mailing listgmx-users@gromacs.org
  http://lists.gromacs.org/mailman/listinfo/gmx-users
  Please search the archive at http://www.gromacs.org/search before
 posting!
  Please don't post (un)subscribe requests to the list. Use the www
  

[gmx-users] parallel simulations

2010-03-03 Thread Gavin Melaugh
Hi all

My apologies for the lack of detail in my previous e-mail. I am trying
to run gromacs-4.0.7 for a system that I am studying. I have run several
simulations in serial on my own computer that have to date worked fine.
I am now, however, trying to run the simulations on our local cluster in
parallel using mpich-1.2.7 and am experiencing some difficulty. Please note
that the version of gromacs mentioned above is compiled for parallel runs.
When I run a short simulation of 500 steps on one, two or three nodes, the
simulation runs fine (takes about 10 seconds) and all the data is written to the
log file. However, when I increase the number of nodes to 4, no stepwise
info is written and the simulation does not progress. For clarity I have
attached the log file that I am getting for the 4-node simulation. I realise that
this may be a cluster problem, but if anyone has experienced similar
issues I would be grateful for some feedback.

Here is the script I use:

#!/bin/bash
#PBS -N hex
#PBS -r n
#PBS -q longterm
#PBS -l walltime=00:30:00
#PBS -l nodes=4

cd $PBS_O_WORKDIR
export P4_GLOBMEMSIZE=1

/usr/local/bin/mpiexec mdrun -s

Also here is my path:
# Gromacs
export GMXLIB=/k/gavin/gromacs-4.0.7-parallel/share/gromacs/top
export PATH=$PATH:/k/gavin/gromacs-4.0.7-parallel/bin
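
For reference, if the queue system does not hand the process count to mpiexec automatically, an explicit invocation along these lines can be tried (mdrun_mpi and topol.tpr are placeholders for the MPI-enabled binary and run input actually used here):

/usr/local/bin/mpiexec -n 4 mdrun_mpi -s topol.tpr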


Cheers

Gavin

Log file opened on Wed Mar  3 14:46:51 2010
Host: kari57  pid: 32586  nodeid: 0  nnodes:  4
The Gromacs distribution was built Wed Jan 20 10:02:46 GMT 2010 by
ga...@kari (Linux 2.6.17asc64 x86_64)


 :-)  G  R  O  M  A  C  S  (-:

   GROningen MAchine for Chemical Simulation

:-)  VERSION 4.0.7  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

:-)  mdrun  (-:


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  

parameters of the run:
   integrator   = md
   nsteps   = 500
   init_step= 0
   ns_type  = Grid
   nstlist  = 10
   ndelta   = 2
   nstcomm  = 1
   comm_mode= Linear
   nstlog   = 25
   nstxout  = 25
   nstvout  = 25
   nstfout  = 25
   nstenergy= 25
   nstxtcout= 0
   init_t   = 0
   delta_t  = 0.002
   xtcprec  = 1000
   nkx  = 35
   nky  = 35
   nkz  = 35
   pme_order= 4
   ewald_rtol   = 1e-05
   ewald_geometry   = 0
   epsilon_surface  = 0
   optimize_fft = FALSE
   ePBC = xyz
   bPeriodicMols= FALSE
   bContinuation= FALSE
   bShakeSOR= FALSE
   etc  = Nose-Hoover
   epc  = Parrinello-Rahman
   epctype  = Isotropic
   tau_p= 1
   ref_p (3x3):
  ref_p[0]={ 1.01325e+00,  0.0e+00,  0.0e+00}
  ref_p[1]={ 0.0e+00,  1.01325e+00,  0.0e+00}
  ref_p[2]={ 0.0e+00,  0.0e+00,  1.01325e+00}
   compress (3x3):
  compress[0]={ 4.5e-05,  0.0e+00,  0.0e+00}
  compress[1]={ 0.0e+00,  4.5e-05,  0.0e+00}
  compress[2]={ 0.0e+00,  0.0e+00,  4.5e-05}
   refcoord_scaling = No
   posres_com (3):
  posres_com[0]= 0.0e+00
  posres_com[1]= 0.0e+00
  posres_com[2]= 

Re: [gmx-users] gromacs memory usage

2010-03-03 Thread Roland Schulz
Hi,

ok then it is compiled in 64bit.

You didn't say how many cores each node has and on how many nodes you want
to run.

Roland

On Wed, Mar 3, 2010 at 4:32 AM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland,

 It says

 gromacs/4.0.5/bin/mdrun_mpi: ELF 64-bit LSB executable, AMD x86-64, version
 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for
 GNU/Linux 2.6.9, not stripped


 On Tue, Mar 2, 2010 at 10:34 PM, Roland Schulz rol...@utk.edu wrote:

 Amit,

 try the full line (with the file)

 Roland

 On Wed, Mar 3, 2010 at 1:22 AM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland

 I tried 'which mdrun' but it only gives the path name of installation. Is
 there any other way to know if the installation is 64 bit ot not?

 Thank you,
 Amit


 On Tue, Mar 2, 2010 at 10:03 PM, Roland Schulz rol...@utk.edu wrote:

 Hi,

 do:
 file `which mdrun`
 and it should give:
 /usr/bin/mdrun: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
 dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped

 If it is not 64 you need to compile with 64 and have a 64bit kernel.
 Since you asked before about 2GB large files this might indeed be your
 problem.

 Roland

 On Wed, Mar 3, 2010 at 12:48 AM, Amit Choubey kgp.a...@gmail.comwrote:

 Hi Tsjerk,

 I tried to do a test run based on the presentation. But there was a
 memory related error (I had given a leverage of more than 2 GB).

 I did not understand the 64 bit issue, could you let me know wheres the
 documentation? I need to look into that.

 Thank you,
 amit


 On Tue, Mar 2, 2010 at 9:14 PM, Tsjerk Wassenaar tsje...@gmail.comwrote:

 Hi Amit,

 I think the presentation gives right what you want: a rough estimate.
 Now as Berk pointed out, to allocate more than 2GB of memory, you need
 to compile in 64bit. Then, if you want to have a real feel for the
 memory usage, there's no other way than trying. But fortunately, the
 memory requirements of a (very) long simulation are equal to that of a
 very short one, so it doesn't need to cost much time.

 Cheers,

 Tsjerk

 On Wed, Mar 3, 2010 at 5:31 AM, Amit Choubey kgp.a...@gmail.com
 wrote:
  Hi Mark,
 
  Yes thats one way to go about it. But it would have been great if i
 could
  get a rough estimation.
 
  Thank you.
 
  amit
 
 
  On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  wrote:
 
  On 3/03/2010 12:53 PM, Amit Choubey wrote:
 
 Hi Mark,
 
 I quoted the memory usage requirements from a presentation by
 Berk
 Hess, Following is the link to it
 
 
 
 
 http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf
 
 l. In that presentation on pg 27,28 Berk does talk about memory
 usage but then I am not sure if he referred to any other
 specific
  thing.
 
 My system only contains SPC water. I want Berendsen T coupling
 and
 Coulomb interaction with Reaction Field.
 
 I just want a rough estimate of how big of a system of water
 can be
 simulated on our super computers.
 
  Try increasingly large systems until it runs out of memory. There's
 your
  answer.
 
  Mark
 
  On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  mailto:mark.abra...@anu.edu.au wrote:
 
 - Original Message -
 From: Amit Choubey kgp.a...@gmail.com mailto:
 kgp.a...@gmail.com
 Date: Saturday, February 27, 2010 10:17
 Subject: Re: [gmx-users] gromacs memory usage
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 mailto:gmx-users@gromacs.org
 
   Hi Mark,
   We have few nodes with 64 GB memory and many other with 16
 GB of
 memory. I am attempting a simulation of around 100 M atoms.
 
 Well, try some smaller systems and work upwards to see if you
 have a
 limit in practice. 50K atoms can be run in less than 32GB over
 64
 processors. You didn't say whether your simulation system can
 run on
 1 processor... if it does, then you can be sure the problem
 really
 is related to parallelism.
 
   I did find some document which says one need
 (50bytes)*NATOMS on
 master node, also one needs
(100+4*(no. of atoms in cutoff)*(NATOMS/nprocs) for compute
 nodes. Is this true?
 
 In general, no. It will vary with the simulation algorithm
 you're
 using. Quoting such without attributing the source or
 describing the
 context is next to useless. You also dropped a parenthesis.
 
 Mark
 --
 gmx-users mailing list gmx-users@gromacs.org
 mailto:gmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 Please search the archive at http://www.gromacs.org/searchbefore
 posting!
 Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org
 mailto:gmx-users-requ...@gromacs.org.
 Can't post? Read
 http://www.gromacs.org/mailing_lists/users.php
 
 
  --
  gmx-users mailing list

[gmx-users] GPU GROMACS

2010-03-03 Thread Jack Shultz
Is anyone working on building GROMACS with CUDA other than the OpenMM
project? I would like to build this with explicit solvents and, as I
understand it, OpenMM is implicit-solvent only.

-- 
Jack

http://drugdiscoveryathome.com
http://hydrogenathome.org
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php



[gmx-users] NVT simulation and mdp file

2010-03-03 Thread teklebrh

Dear Gromacs Users,

I have encountered the following issues while I was running my MD
simulation. Can anybody comment on what these notes mean? Is there
anything I could do to avoid them?


NOTE 2 [file PAP.top, line unknown]:

  The largest charge group contains 12 atoms.

  Since atoms only see each other when the centers of geometry of the charge

  groups they belong to are within the cut-off distance, too large charge

  groups can lead to serious cut-off artifacts.

  For efficiency and accuracy, charge group should consist of a few atoms.

  For all-atom force fields use: CH3, CH2, CH, NH2, NH, OH, CO2, CO, etc.

My SOLVENT IS TOLUENE --- PRODRG gave me a topology file with only
one charge group.


NOTE 1 [file nvt.mdp, line unknown]:

  The Berendsen thermostat does not generate the correct kinetic energy

  distribution. You might want to consider using the V-rescale thermostat.


NOTE 3 [file aminoacids.dat, line 1]:

  The optimal PME mesh load for parallel simulations is below 0.5

  and for highly parallel simulations between 0.25 and 0.33,

  for higher performance, increase the cut-off and the PME grid spacing


In addition to the above notes I have also some questions about  the  
NVT and NPT simulation.


1) I am using toluene as a solvent to simulate my polymer. Do I need to
use the compressibility of toluene, which is 9.2e-5 1/bar, or the default
value of 4.5e-5 1/bar?
2) What about the dielectric constant? The dielectric constant for
toluene is 2-2.4, but the default value is 80 (I assume this is for
water - am I right?).
3) Is rvdw always 1.4 nm for GROMOS96? As a result, do I have to
increase the box size of the solute at the beginning to a minimum of 2*1.4
= 2.8 nm (minimum image convention)? Is this the right way to do it?
4) I ran an NVT simulation to equilibrate my system for 100 ps. When I
checked my simulation at the end (successfully completed) I noticed
that the shape of my simulation box looks CIRCULAR! Somehow the
rectangular shape looks distorted. What does this tell me? Do you
think something is wrong in my simulation?
5) I included the polar and aromatic hydrogens in my simulation
(ffG43a1.itp - GROMOS96.1 in PRODRG). Do these hydrogens influence my
results, given that the force field is a united-atom force field? And how can I
get rid of them if I want to? With or without the aromatic hydrogens I got
good results (besides the lower computational cost). Does GROMOS96 model
aromatic-aromatic interactions correctly?


For more information I am posting my full NVT.mdp file below.

I really appreciate your feedback and help in advance.

thank you

Rob

#include 
;

;   File 'mdout.mdp' was generated

;
;

; LINES STARTING WITH ';' ARE COMMENTS



title   = NVT equlibration  ; Title of run

cpp = /usr/bin/cpp ; location of cpp on linux

; The following lines tell the program the standard locations where to  
find certain files




; VARIOUS PREPROCESSING OPTIONS

; Preprocessor information: use cpp syntax.

; e.g.: -I/home/joe/doe -I/home/mary/hoe

include  =

; e.g.: -DI_Want_Cookies -DMe_Too

define   = -DPOSRES



; RUN CONTROL PARAMETERS

integrator   = md

; Start time and timestep in ps

tinit= 0

dt   = 0.002

nsteps   = 5

; For exact run continuation or redoing part of a run

; Part index is updated automatically on checkpointing (keeps files separate)

simulation_part  = 1

init_step= 0

; mode for center of mass motion removal

comm-mode= Linear

; number of steps for center of mass motion removal

nstcomm  = 1

; group(s) for center of mass motion removal

comm-grps=



; LANGEVIN DYNAMICS OPTIONS

; Friction coefficient (amu/ps) and random seed

bd-fric  = 0

ld-seed  = 1993



; ENERGY MINIMIZATION OPTIONS

; Force tolerance and initial step-size

emtol= 100

emstep   = 0.01

; Max number of iterations in relax_shells

niter= 20

; Step size (ps^2) for minimization of flexible constraints

fcstep   = 0

; Frequency of steepest descents steps when doing CG

nstcgsteep   = 1000

nbfgscorr= 10



; TEST PARTICLE INSERTION OPTIONS

rtpi = 0.05



; OUTPUT CONTROL OPTIONS

; Output frequency for coords (x), velocities (v) and forces (f)

nstxout  = 100

nstvout  = 100

nstfout  = 100

; Output frequency for energies to log file and energy file

nstlog   = 100

nstenergy= 100

; Output frequency and precision for xtc file

nstxtcout= 100

xtc-precision= 1000

; This selects the subset of atoms for the xtc file. You can

; select multiple groups. By default all atoms will be written.

xtc-grps

[gmx-users] Problems with parallel run

2010-03-03 Thread Gavin Melaugh
Hi all

My apologies for the lack of  detail in my previous e-mail. I am trying
to run gromacs-4.0.7 for a system that I am studying. I have ran several
simulations on serial on my own computer that have to date worked fine.
I am now however trying to run the simulations on our local cluster in
parallel using mpich-1.2.7 and experiencing some difficulty. Please note
that the version of gromacs mentioned above is installed in parallel.
Right when I run a short simulation of 500 steps in one two or three
nodes the simulations
runs fine (takes about 10 seconds) and all the data is written to the
log file. However when I increase the nodes to 4 there is no stepwise
info written and the simulation does not progress. I
realise that
this maybe a cluster problem, but if anyone has experienced similar
issues I would be grateful of some feedback.

Here is the script I use:

#!/bin/bash
#PBS -N hex
#PBS -r n
#PBS -q longterm
#PBS -l walltime=00:30:00
#PBS -l nodes=4

cd $PBS_O_WORKDIR
export P4_GLOBMEMSIZE=1

/usr/local/bin/mpiexec mdrun -s

Also here is my path:
# Gromacs
export GMXLIB=/k/gavin/gromacs-4.0.7-parallel/share/gromacs/top
export PATH=$PATH:/k/gavin/gromacs-4.0.7-parallel/bin


Cheers

Gavin

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] problem with the size of freeze groups

2010-03-03 Thread jampani srinivas
Dear Berk,

Thanks for your reply, I have just submitted all my files to
bugzilla.gromacs.org. I got the bug # 400.

Thanks very much
Srinivas.

On Wed, Mar 3, 2010 at 4:15 AM, Berk Hess g...@hotmail.com wrote:

  Hi,

 I don't know exactly what you have done, so currently I can't say more than I
 have already.

 Could you please file a bugzilla at bugzilla.gromacs.org and attach all
 the files required to run grompp?

 Thanks,

 Berk

 --
 Date: Tue, 2 Mar 2010 21:11:06 -0500

 Subject: Re: [gmx-users] problem with the size of freeze groups
 From: jampa...@gmail.com
 To: gmx-users@gromacs.org

 Dear Berk,

 Thanks for your previous responses, Can you please let me know if you have
 any solution for the size of freezing groups? I am still not able to do if i
 have larger freezing groups.

 Thanks again for your kind help.
 srinivas.


 On Fri, Feb 26, 2010 at 5:37 PM, jampani srinivas jampa...@gmail.comwrote:

 Dear Berk,

 I have checked my inputs and tcl scripts that i have used for
 the selection, i could see that my selection doesn't have any problem.
 I submitted it again still i am getting the same log   file with nrdf 0 for
 non-freezing group. Please let me know if you want see any of my input
 files, and help me if you have solution for this problem.

 Thanks and Regards
 Srinivas.


 On Fri, Feb 26, 2010 at 12:39 PM, jampani srinivas jampa...@gmail.comwrote:

 Dear Berk,

 I am using VERSION 4.0.5. As you said if there is no problem i should get
 it correctly, i don't know where it is going wrong. I have written a small
 script in tcl to use in vmd to get my selections. i will check the script
 and the selection again. I will let you know my results again.

 Thanks for your valuable time and kind help.
 Srinivas.

 On Fri, Feb 26, 2010 at 12:29 PM, Berk Hess g...@hotmail.com wrote:

  Hi,

 Which version of Gromacs are you using?
 I can't see any issues in the 4.0 code, but some older version might have
 problems.

 Berk

 --
 Date: Fri, 26 Feb 2010 12:05:56 -0500

 Subject: Re: [gmx-users] problem with the size of freeze groups
 From: jampa...@gmail.com
 To: gmx-users@gromacs.org

 Dear Berk,

 They are same, freeze and Tmp2 are exactly the same groups. I just put them
 like that for my convenience, just to avoid confusion in my second email i
 made it uniform.

  Thanks
 Srinivas.

 On Fri, Feb 26, 2010 at 11:59 AM, Berk Hess g...@hotmail.com wrote:

  That is what I suspected, but I don't know why this is.

 Are you really sure you made a temperature coupling group
 that is exactly the freeze group?
  The first mdp file you mailed had different group names for the freeze group
  and the tcoupl groups.

 Berk

 --
 Date: Fri, 26 Feb 2010 11:53:49 -0500

 Subject: Re: [gmx-users] problem with the size of freeze groups
 From: jampa...@gmail.com
 To: gmx-users@gromacs.org

 Dear Berk,

 It looks to me some thing is wrong when i change the radius from 35 to 25,
  herewith i am giving grpopts for both systems


 +
 grpopts: (system with 35 A)
nrdf: 33141.4 0
ref_t: 300   0
tau_t: 0.1 0.1
 +

 grpopts: (system with 25A)
nrdf:   0  0
ref_t: 300   0
 tau_t: 0.1 0.1
 

    I think something is going wrong when the size of the freeze group is
  increased. I don't know whether my understanding is correct or not.



 Thanks
 Srinivas.


 On Fri, Feb 26, 2010 at 11:01 AM, Berk Hess g...@hotmail.com wrote:

  Ah, but that does not correspond to the mdp options you mailed.
 Here there is only one group with 0 degrees of freedom and reference
 temperature 0.

 Berk

 --
 Date: Fri, 26 Feb 2010 10:50:13 -0500

 Subject: Re: [gmx-users] problem with the size of freeze groups
 From: jampa...@gmail.com
 To: gmx-users@gromacs.org

 HI

 Thanks, My log file shows me nrdf: 0

 ###

grpopts:
nrdf: 0
ref_t:0
tau_t:   0

 ###

 Thanks
 Srinivas.

 On Fri, Feb 26, 2010 at 10:25 AM, Berk Hess g...@hotmail.com wrote:

  Hi,

 Then I have no clue what might be wrong.
 Have you checked nrdf in the log file?

 Berk

 --
 Date: Fri, 26 Feb 2010 09:54:22 -0500
 Subject: Re: [gmx-users] problem with the size of freeze groups

 From: jampa...@gmail.com
 To: gmx-users@gromacs.org

 Dear Berk,

 Thanks for your response, As you mentioned i have separated t-coupling
 group for frozen and non-frozen groups, still the result is same.
 Herewith i am giving my md.mdp file, Can you suggest me if i am missing any
 options in  my md.mdp file?

 Thanks again
 Srinivas.

 md.mdp file

 +++
 title   = AB2130
 cpp = /usr/bin/cpp
 constraints = all-bonds
 integrator  = md
 dt 

Re: [gmx-users] NVT simulation and mdp file

2010-03-03 Thread Justin A. Lemkul



tekle...@ualberta.ca wrote:

Dear Gromacs Users,

I have encountered the following issues while I was running my MD 
simulation. Can anybody comment on what the meaning of these notes are. 
Is there anything I could do to avoid them.


NOTE 2 [file PAP.top, line unknown]:

  The largest charge group contains 12 atoms.

  Since atoms only see each other when the centers of geometry of the 
charge


  groups they belong to are within the cut-off distance, too large charge

  groups can lead to serious cut-off artifacts.

  For efficiency and accuracy, charge group should consist of a few atoms.

  For all-atom force fields use: CH3, CH2, CH, NH2, NH, OH, CO2, CO, etc.

My SOLVENT IS TOLUENE --- the PRODRG gave me a topology file with only 
one group charge only.




That's almost certainly wrong.  See, for instance, the PHE side chain in the 
relevant .rtp entry for a more reasonable charge group setup.  If you're using 
PRODRG defaults, then the charges are probably unsatisfactory, as well.


The rationale for the charge group size is summed up here:

http://lists.gromacs.org/pipermail/gmx-users/2008-November/038153.html


NOTE 1 [file nvt.mdp, line unknown]:

  The Berendsen thermostat does not generate the correct kinetic energy

  distribution. You might want to consider using the V-rescale thermostat.




See the literature about this one, as well as the numerous list archive 
discussions.  For initial equilibration, a weak coupling scheme is probably 
fine, but you can also use V-rescale.  Also of interest:


http://www.gromacs.org/Documentation/Terminology/Thermostats


NOTE 3 [file aminoacids.dat, line 1]:

  The optimal PME mesh load for parallel simulations is below 0.5

  and for highly parallel simulations between 0.25 and 0.33,

  for higher performance, increase the cut-off and the PME grid spacing



This all depends on the size of your system and how much of the work is distributed 
between the real-space Coulombic interactions and PME.




In addition to the above notes I have also some questions about  the NVT 
and NPT simulation.


1)I am using toluene as a solvent to simulate my polymer, do I need to 
use the compressibility of toluene which is  9.2e-5 or the default 
value  4.5e-5 1/bar.


Well, 4.5e-5 corresponds to water, which you aren't using...

For NVT, this won't matter since the box is fixed, but for NPT, the 
compressibility will affect the response of your system to pressure.  The 
differences may be minimal, but if you know the right value, why accept a wrong one?
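
For instance, a minimal sketch of the pressure-coupling block with the toluene value quoted above (the barostat choice and tau_p are only illustrative):

pcoupl           = berendsen
pcoupltype       = isotropic
tau_p            = 1.0
ref_p            = 1.0
compressibility  = 9.2e-5    ; 1/bar for toluene; 4.5e-5 is the water value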


2)What about the dielectric constant (the dielectric constant for 
toluene is 2-2.4), but the default value is 80 ( I assume this is for 
water- am I right).


Yes, the default again assumes water as the solvent.

3)Is  always rvdw = 1.4 nm for GROMOS96. As a result I have to increase 
my box size of the solute at the beginning to a min of 2*1.4 =2.8 ( min 
image convection). Is this the right way to do!


At an absolute minimum.  Keep in mind that the box vectors will fluctuate under 
NPT, so if the box decreases even a little bit below 2.8, you will be violating 
the minimum image convention.


4)I run an NVT simulation to equilibrate my system  for 100 ps. When I 
checked my simulation at the end (successfully completed) I noticed that 
the shape of my simulation box looks CIRCULAR! some how the rectangular 
shape looks distorted. What does this tell! Do you guys think something 
is wrong in my simulation.


This could be some visualization artifact, or the components of your system have 
condensed within the box.  Without actually seeing it, it's hard to tell.  If 
you post an image online (Photobucket, etc) then we might get a better sense of 
what's going on.


5)I included the polar and aromatic hydrogens in my simulation (  
ffG43a1.itp – GROMOS96.1 in PRODRG). Does these hydrogen influence my 
result as the force field is a united atom force field. Or How can I get 
rid of them if I want. With or without the aromatic hydrogen gave good 
results ( besides lower computational cost). Does Gromos96 model 
correctly aromatic-Aromatic interaction.




Well, correct is a relative term for all force fields, but you need to follow 
the prescribed setup of the force field itself, otherwise you can throw it all 
away.  If you lump the hydrogens into the ring carbons and have an uncharged 
ring, the result will be different than if you have the hydrogens there with a 
small charge on each C and H.  Again, refer to the force field .rtp file for 
examples.  You can also create a better toluene topology by renaming the residue 
in your coordinate file PHE and trick pdb2gmx:


pdb2gmx -f toluene.pdb -missing

Then change the mass of the CH2 group (which pdb2gmx thinks is a CB for PHE) to 
reflect a CH3 group.  Make an .itp file out of the resulting .top by removing 
the unnecessary #includes, [system], and [molecule] directives.  Then you don't 
have to worry about messing with PRODRG.  I should note, as well, that this 

Re: [gmx-users] Problems with parallel run

2010-03-03 Thread Oliver Stueker
This email (as well as the two others) has found its way to the
list. No need to post several times!


On Wed, Mar 3, 2010 at 12:47, Gavin Melaugh gmelaug...@qub.ac.uk wrote:
 Hi all

 My apologies for the lack of  detail in my previous e-mail. I am trying
 to run gromacs-4.0.7 for a system that I am studying. I have ran several
 simulations on serial on my own computer that have to date worked fine.
 I am now however trying to run the simulations on our local cluster in
 parallel using mpich-1.2.7 and experiencing some difficulty. Please note
[...]

If you had searched the mailing list for posts with terms like
"parallel problem",
you probably would have found several posts reporting people having
serious problems with mpich-1.2.x and the advice to use MPICH2,
OpenMPI or LAM/MPI instead.

Check with the admins of your cluster which alternative MPI library is
available, or have them install one of the suggested ones.
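
As a quick sketch for seeing which MPI implementation is actually picked up (the exact commands depend on what is installed):

which mpiexec mpirun
mpichversion 2>/dev/null || mpirun --version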



Oliver
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] ethanol bond types

2010-03-03 Thread nishap . patel

Hello,

I am trying to simulate ethanol in water using OPLSAA. I already
tried it with all atoms, but now I want to try it with united atoms
for ethanol, i.e. CH3, CH2, OH, and HO in my topology file. I created
the topology but I got an error saying 'No bond types', so I checked
the ffoplsaabon.itp file and, as the error indicated, I could not find any
matching bond types. Is there a way for me to determine the bonds? This is my
topology file for ethanol:


[ moleculetype ]
; Namenrexcl
Ethanol 3

[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass   
typeBchargeB  massB

 1   opls_068  1EOH CB  1  0 15.035   ; qtot 0
 2   opls_081  1EOH CA  2  0.265 14.027
; qtot 0.265
 3   opls_078  1EOH OH  2   -0.715.9994
; qtot -0.435

 4   opls_079  1EOH HO  2  0.435  1.008   ; qtot 0

[ bonds ]
;  aiaj functc0c1c2c3
1 2 1
2 3 1
3 4 1

[ pairs ]
;  aiaj functc0c1c2c3
1 4 1

[ angles ]
;  aiajak functc0c1c2   
  c3

1 2 3 1
2 3 4 1

[ dihedrals ]
;  aiajakal functc0c1 
c2c3c4c5

1 2 3 4 3

I checked one of the paper that has been published using the same  
parameters as follows:


   Table 1. Potential Parameters and Molecular
   Geometries of OPLS-UA and SPC/E (OPLS part)

   atom or group   sigma (Å)   epsilon (kJ/mol)     q
   R1              3.905       0.7322               0.000
   R2              3.905       0.4937               0.265
   O               3.070       0.7113              -0.700
   H               0.000       0.0                  0.435

I would really appreciate some suggestions on how I should tackle the error.

Thanks

Nisha P


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] Problems with parallel run

2010-03-03 Thread Gavin Melaugh
Hi Oliver

Thanks very much. Sorry about the other two e-mails; that was a mistake.
I had checked the list after I sent the first one and it wasn't there, so I
thought it hadn't been received. I was reading many of those posts today and
did realise that there were problems with mpich-1.2.x, but they seemed to
be for earlier versions of gromacs, and it was on this point that I was
unsure.
Thanks anyway for providing some insight on the matter.

Cheers

Gavin

Oliver Stueker wrote:
 This email (as well as the two others) have found their way to the
 list. No need to post several times!


 On Wed, Mar 3, 2010 at 12:47, Gavin Melaugh gmelaug...@qub.ac.uk wrote:
   
 Hi all

 My apologies for the lack of  detail in my previous e-mail. I am trying
 to run gromacs-4.0.7 for a system that I am studying. I have ran several
 simulations on serial on my own computer that have to date worked fine.
 I am now however trying to run the simulations on our local cluster in
 parallel using mpich-1.2.7 and experiencing some difficulty. Please note
 
 [...]

 If you would have searched the mailing list for posts with terms like
 parallel problem,
 you probably would have found several posts that report people having
 serious problems with mpich-1.2.x and the advise to use mpich2,
 openMPI or LAM/MPI instead.

 Check with the admins of your Cluster which alternative MPI lib is
 available or have them install one of the suggested.



 Oliver
   

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] ethanol bond types

2010-03-03 Thread David van der Spoel

On 3/3/10 8:14 PM, nishap.pa...@utoronto.ca wrote:

Hello,

I am trying to simulate Ethanol in water using OPLSAA. I already 
tried it with all atoms, but now I want to try it with United atoms 
for ethanol, i.e. CH3,CH2,OH, and HO in my topology file. I created 
the topology but I got an error saying 'No bond types', so I checked 
ffoplsaabon.itp file, and as the error indicated I could not find any 
bond types. Is there a way for me to determine the bonds? This is my 
topology file for ethanol:


[ moleculetype ]
; Namenrexcl
Ethanol 3

[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass  
typeBchargeB  massB
 1   opls_068  1EOH CB  1  0 15.035   
; qtot 0
 2   opls_081  1EOH CA  2  0.265 14.027   
; qtot 0.265
 3   opls_078  1EOH OH  2   -0.715.9994   
; qtot -0.435
 4   opls_079  1EOH HO  2  0.435  1.008   
; qtot 0


[ bonds ]
;  aiaj functc0c1c2c3
1 2 1
2 3 1
3 4 1

[ pairs ]
;  aiaj functc0c1c2c3
1 4 1

[ angles ]
;  aiajak functc0c1c2  
  c3

1 2 3 1
2 3 4 1

[ dihedrals ]
;  aiajakal functc0c1
c2c3c4c5

1 2 3 4 3

I checked one of the paper that has been published using the same 
parameters as follows:


   Table 1. Potential Parameters and Molecular
 Geometries of OPLS-UA and SPC/E
 qa
atom or group   ?, Å  , kJ/mol
 OPLS
 R13.905  0.73220.000
 R23.905  0.49370.265
  -0.700
 O 3.070  0.7113
 H 0.000  0.0.435

I would really appreciate some suggestions, on how I should tackle the 
error.


Thanks

Nisha P


The united atom parameters are really leftovers from long past. 
Jorgensen published his first all-atom alcohol simulations in 1988 IIRC. 
There is a methanol paper from 1983. Just search the literature for 
Jorgensen, methanol, ethanol and it will show up. Then you will have to type 
in the parameters yourself; see chapter 5 of the manual.
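
In practice, once you have the numbers you can put them directly into the topology instead of relying on bondtypes in ffoplsaabon.itp; grompp then skips the type lookup. A sketch with placeholder values (these are NOT the real OPLS-UA parameters, only the format):

[ bonds ]
;  ai   aj  funct    b0 (nm)    kb (kJ mol-1 nm-2)
   1    2   1        0.153      265000.0   ; CB-CA, placeholder values
   2    3   1        0.143      267000.0   ; CA-OH, placeholder values
   3    4   1        0.095      460000.0   ; OH-HO, placeholder values

The same explicit-parameter approach works for the [ angles ] and [ dihedrals ] entries.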


--
David.

David van der Spoel, PhD, Professor of Biology
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
sp...@xray.bmc.uu.sesp...@gromacs.org   http://xray.bmc.uu.se/~spoel


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] Re: GROMACS PME on BLUEGENE

2010-03-03 Thread Mark Abraham

On 4/03/2010 3:54 AM, Xevi Biarnes Fontal (SISSA) wrote:

Dear Mark,

I found a topic in the gromacs-users mailing list regarding your
troubles in running gromacs with PME on BlueGene/P
(http://lists.gromacs.org/pipermail/gmx-users/2009-June/042421.html)

did you solve the problem?


Please keep such discussions on the mailing list. I might not have the 
expertise or time to help, and if someone can help, the answer should be 
archived for all to find. Imagine if you'd not been able to find 
anything with searches :-)



I am now facing exactly the same problem. If I turn on PME, the
execution gets stuck and a core file is written out. If I don't request
PME (i.e. Cut-Off), the execution runs without problems.


I wasn't having any problem with BlueGene, if you read that email 
exchange carefully. Jakob was. He apparently never shared his solution, 
if any.


Try running on one CPU (mdrun -np 1 -exe blah -args blahblah) to see if 
the problem is generic, or parallel-related.


Read in your BlueGene documentation how to use addr2line to probe the 
stack dump. You'll need to compile a version of mdrun with debugging 
enabled (add -g to any CFLAGS=... on your configure command line) 
for this detective work to succeed. That will tell you the function and 
line number where the problem occurs, which will likely lead to a 
solution. Complain to IBM that this is a 1960s solution :) Obviously, 
once it's fixed, get rid of the debugging version.
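
A rough sketch of that workflow (configure options abbreviated; the address is a made-up example, taken in practice from the stack dump):

CFLAGS="-O2 -g" ./configure --enable-mpi [your usual BlueGene options]
make mdrun
addr2line -e src/kernel/mdrun 0x0040a2f0   # path of the freshly built binary; hypothetical address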



If you solved the problem, can you tell me which steps did you follow.

I compiled gromacs-4.0.7 with FFTW3.1.2 for the backend and with a
modified version of the same libraries for the frontend. (I am using the
configure options explained in
http://www.gromacs.org/index.php?title=Download_%26_Installation/Installation_Instructions/GROMACS_on_BlueGene)


If you're using BG/P, note Puetz's comments there about fixing MPI 
libraries and such.


Mark
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


[gmx-users] External forces

2010-03-03 Thread ROHIT MALSHE
Hi all, 

Can anyone tell me how I can impart external forces on a system? Say I have a 
water droplet placed on a tilted polymer surface; with the application of the 
force, the droplet should flow. 

- Rohit
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] External forces

2010-03-03 Thread Justin A. Lemkul



ROHIT MALSHE wrote:
Hi all, 

Can anyone tell me how I can impart external forces on a system - say I have a water droplet placed on a tilted polymer surface. With the application of the force, the droplet should flow. 



That's what the pull code is for.

-Justin
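
For example, in the 4.0.x mdp language a constant external force on the droplet could look roughly like this (the index group name is made up, the force constant is arbitrary, and the option spellings should be checked against the manual's pull section):

pull            = constant_force
pull_geometry   = direction
pull_ngroups    = 1
pull_group0     =             ; empty: absolute reference
pull_group1     = Droplet     ; your droplet index group
pull_vec1       = 1 0 0       ; direction of the applied force
pull_k1         = 10          ; kJ mol-1 nm-1; magnitude (and sign) of the force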


- Rohit


--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] NVT simulation and mdp file

2010-03-03 Thread Justin A. Lemkul



tekle...@ualberta.ca wrote:

Dear Justin,

I really appreciate your help and feedback. Once again, thank you for 
your time.


 This could be some visualization artifact, or the components of 
your

 system have condensed within the box.  Without actually seeing it,
 it's hard to tell.  If you post an image online (Photobucket, etc)
 then we might get a better sense of what's going on.
 Here is my .gro file, which looks condensed.

Attached please find the NVT.gro file

or the images in .jpeg file format



Please note that this is not what I suggested you do.  Posting images online 
(not to the list) is a far better method.  Users who subscribe to the list in 
digest format will not get these attachments, and thus you alienate others who 
might be able to help you.  Also, for the security-conscious, downloading 
attachments is not always preferable.


It seems pretty clear from what's going on that your system is simply 
condensing, so your initial configuration placed all the molecules too far apart 
(at too low a density).  Display the box vectors in VMD and you will see 
exactly what's going on:


(Tk console)
package require pbctools
pbc box

-Justin



Best,

Rob


Quoting Justin A. Lemkul jalem...@vt.edu:




tekle...@ualberta.ca wrote:

Dear Gromacs Users,

I have encountered the following issues while I was running my MD 
simulation. Can anybody comment on what the meaning of these notes 
are. Is there anything I could do to avoid them.


NOTE 2 [file PAP.top, line unknown]:

 The largest charge group contains 12 atoms.

 Since atoms only see each other when the centers of geometry of the 
charge


 groups they belong to are within the cut-off distance, too large charge

 groups can lead to serious cut-off artifacts.

 For efficiency and accuracy, charge group should consist of a few 
atoms.


 For all-atom force fields use: CH3, CH2, CH, NH2, NH, OH, CO2, CO, etc.

My SOLVENT IS TOLUENE --- the PRODRG gave me a topology file with 
only one group charge only.




That's almost certainly wrong.  See, for instance, the PHE side chain 
in the relevant .rtp entry for a more reasonable charge group setup.  
If you're using PRODRG defaults, then the charges are probably 
unsatisfactory, as well.


The rationale for the charge group size is summed up here:

http://lists.gromacs.org/pipermail/gmx-users/2008-November/038153.html


NOTE 1 [file nvt.mdp, line unknown]:

 The Berendsen thermostat does not generate the correct kinetic energy

 distribution. You might want to consider using the V-rescale 
thermostat.





See the literature about this one, as well as the numerous list 
archive discussions.  For initial equilibration, a weak coupling 
scheme is probably fine, but you can also use V-rescale.  Also of 
interest:


http://www.gromacs.org/Documentation/Terminology/Thermostats


NOTE 3 [file aminoacids.dat, line 1]:

 The optimal PME mesh load for parallel simulations is below 0.5

 and for highly parallel simulations between 0.25 and 0.33,

 for higher performance, increase the cut-off and the PME grid spacing



This all depends on the size of your system, how much of the work is 
distributed between the real-space Coulombic interaction and PME.




In addition to the above notes I have also some questions about  the 
NVT and NPT simulation.


1)I am using toluene as a solvent to simulate my polymer, do I need 
to use the compressibility of toluene which is  9.2e-5 or the default 
value  4.5e-5 1/bar.


Well, 4.5e-5 corresponds to water, which you aren't using...

For NVT, this won't matter since the box is fixed, but for NPT, the 
compressibility will affect the response of your system to pressure. 
 The differences may be minimal, but if you know the right value, why 
accept a wrong one?


2)What about the dielectric constant (the dielectric constant for 
toluene is 2-2.4), but the default value is 80 ( I assume this is for 
water- am I right).


Yes, the default again assumes water as the solvent.

3)Is  always rvdw = 1.4 nm for GROMOS96. As a result I have to 
increase my box size of the solute at the beginning to a min of 2*1.4 
=2.8 ( min image convection). Is this the right way to do!


At an absolute minimum.  Keep in mind that the box vectors will 
fluctuate under NPT, so if the box decreases even a little bit below 
2.8, you will be violating the minimum image convention.


4)I run an NVT simulation to equilibrate my system  for 100 ps. When 
I checked my simulation at the end (successfully completed) I noticed 
that the shape of my simulation box looks CIRCULAR! some how the 
rectangular shape looks distorted. What does this tell! Do you guys 
think something is wrong in my simulation.


This could be some visualization artifact, or the components of your 
system have condensed within the box.  Without actually seeing it, 
it's hard to tell.  If you post an image online (Photobucket, etc) 
then we might get a better sense of what's going on.


5)I included the 

Re: [gmx-users] gromacs memory usage

2010-03-03 Thread Amit Choubey
Hi Roland,

I was using 32 nodes with 8 cores each, and 16 GB of memory per node. The system was
about 154 M particles. This should be feasible according to the numbers.
Assuming that it takes 50 bytes per atom on the master node and 1.76 kB per
locally held atom (i.e. per atom per core share), then:

Master node: 50 bytes * 154 M + 8 * 1.06 GB ~ 16 GB (there is no leverage here)
All other nodes: 8 * 1.06 GB ~ 8.5 GB

I am planning to try the same run on 64 nodes with 8 cores each again, but
not until I am a little more confident. The problem is that if gromacs crashes
due to memory, it makes the nodes hang and people have to cycle the
power supply.
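
As a quick sanity check of that arithmetic (same assumed per-atom figures, nothing GROMACS-specific in the script):

# 50 bytes/atom on the master, 1.76 kB per locally held atom,
# 154 M atoms spread over 32 nodes x 8 cores
natoms=154000000; nodes=32; cores=8
echo "compute node (GB): $(echo "$cores*$natoms*1760/($nodes*$cores)/10^9" | bc -l)"
echo "master node  (GB): $(echo "$natoms*50/10^9 + $cores*$natoms*1760/($nodes*$cores)/10^9" | bc -l)"

which gives roughly 8.5 GB and 16.2 GB, matching the estimate above.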


Thank you,

amit

On Wed, Mar 3, 2010 at 7:34 AM, Roland Schulz rol...@utk.edu wrote:

 Hi,

 ok then it is compiled in 64bit.

 You didn't say how many cores each node has and on how many nodes you want
 to run.

 Roland


 On Wed, Mar 3, 2010 at 4:32 AM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland,

 It says

 gromacs/4.0.5/bin/mdrun_mpi: ELF 64-bit LSB executable, AMD x86-64,
 version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared
 libs), for GNU/Linux 2.6.9, not stripped


 On Tue, Mar 2, 2010 at 10:34 PM, Roland Schulz rol...@utk.edu wrote:

 Amit,

 try the full line (with the file)

 Roland

 On Wed, Mar 3, 2010 at 1:22 AM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland

 I tried 'which mdrun' but it only gives the path of the installation.
 Is there any other way to know if the installation is 64-bit or not?

 Thank you,
 Amit


 On Tue, Mar 2, 2010 at 10:03 PM, Roland Schulz rol...@utk.edu wrote:

 Hi,

 do:
 file `which mdrun`
 and it should give:
 /usr/bin/mdrun: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
 dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped

 If it is not 64-bit, you need to compile in 64-bit and have a 64-bit kernel.
 Since you asked before about files larger than 2 GB, this might indeed be
 your problem.

 Roland

 On Wed, Mar 3, 2010 at 12:48 AM, Amit Choubey kgp.a...@gmail.comwrote:

 Hi Tsjerk,

  I tried to do a test run based on the presentation, but there was a
  memory-related error (I had allowed a margin of more than 2 GB).

  I did not understand the 64-bit issue; could you let me know where
  the documentation is? I need to look into that.

 Thank you,
 amit


 On Tue, Mar 2, 2010 at 9:14 PM, Tsjerk Wassenaar 
 tsje...@gmail.comwrote:

 Hi Amit,

  I think the presentation gives just what you want: a rough estimate.
  Now, as Berk pointed out, to allocate more than 2 GB of memory, you
  need to compile in 64-bit. Then, if you want to get a real feel for
  the memory usage, there is no other way than trying. But fortunately,
  the memory requirements of a (very) long simulation are equal to those
  of a very short one, so it doesn't need to cost much time.

 Cheers,

 Tsjerk

 On Wed, Mar 3, 2010 at 5:31 AM, Amit Choubey kgp.a...@gmail.com
 wrote:
  Hi Mark,
 
   Yes, that's one way to go about it. But it would have been great if
   I could get a rough estimate.
 
  Thank you.
 
  amit
 
 
  On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  wrote:
 
  On 3/03/2010 12:53 PM, Amit Choubey wrote:
 
 Hi Mark,
 
  I quoted the memory usage requirements from a presentation by Berk
  Hess. Following is the link to it:

  http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf

  In that presentation, on pages 27-28, Berk does talk about memory
  usage, but I am not sure if he referred to any other specific thing.
 
 My system only contains SPC water. I want Berendsen T coupling
 and
 Coulomb interaction with Reaction Field.
 
 I just want a rough estimate of how big of a system of water
 can be
 simulated on our super computers.
 
  Try increasingly large systems until it runs out of memory.
 There's your
  answer.
 
  Mark
 
  On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  mailto:mark.abra...@anu.edu.au wrote:
 
 - Original Message -
 From: Amit Choubey kgp.a...@gmail.com mailto:
 kgp.a...@gmail.com
 Date: Saturday, February 27, 2010 10:17
 Subject: Re: [gmx-users] gromacs memory usage
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 mailto:gmx-users@gromacs.org
 
   Hi Mark,
   We have few nodes with 64 GB memory and many other with 16
 GB of
 memory. I am attempting a simulation of around 100 M atoms.
 
 Well, try some smaller systems and work upwards to see if you
 have a
 limit in practice. 50K atoms can be run in less than 32GB over
 64
 processors. You didn't say whether your simulation system can
 run on
 1 processor... if it does, then you can be sure the problem
 really
 is related to parallelism.
 
    I did find some document which says one needs (50 bytes)*NATOMS on
  the master node, and (100 + 4*(no. of atoms in cutoff))*(NATOMS/nprocs)
  bytes on the compute nodes. Is this true?
 
 In general, no. It will vary with the simulation algorithm
 you're
 

[gmx-users] g_mindist periodic boundary condition

2010-03-03 Thread Dian Jiao
Hi Gromacs users,

I was trying to compute the minimum distance between groups in a cubic water
box with g_mindist, using periodic boundary conditions. In order to test this,
I added one more atom which is far away from any of the other atoms in the
pdb file. The mindist between that atom and all the waters was computed.
The output of g_mindist is 3.089281e+00 (the unit is nm, right?).

The manual shows that pbc is one of the options of g_mindist, but isn't the
default yes? I even tried with -pbc on the command line, and it still did not
work. Can anyone tell me how to turn on PBC in g_mindist?

Thanks

Dian

Re: [gmx-users] gromacs memory usage

2010-03-03 Thread Roland Schulz
Hi,

a couple of points:

1) you will need some additional memory for the system, MPI, the binary,
etc.; how much this is does not depend on GROMACS (ask e.g. your sysadmin)
2) you might want to try to run only 1 rank on the first node (how to do
this depends on your MPI implementation and should be asked on the specific
MPI list)
3) by setting limits (e.g. ulimit with bash) you can prevent the system from
freezing (again, ask your sysadmin how to use limits; see the small
illustration after this list)
4) you can compile GROMACS with CFLAGS=-DPRINT_ALLOC_KB and it will print
debug information about the memory usage. This can be used to verify my/Berk's
numbers
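As an illustration of point 3: with an address-space limit in place, an
oversized allocation fails inside the process instead of dragging the whole
node into swap; in the shell you would typically set this with ulimit -v
before launching mdrun. A minimal Python sketch of the idea (Linux; the
1 GiB / 2 GiB numbers are arbitrary):

    # Sketch: cap the address space, then watch a deliberately oversized
    # allocation fail cleanly instead of hanging the machine.
    import resource

    gib = 1024 ** 3
    resource.setrlimit(resource.RLIMIT_AS, (1 * gib, 1 * gib))  # ~1 GiB cap
    try:
        buf = bytearray(2 * gib)   # over the cap on purpose
    except MemoryError:
        print("allocation refused by the limit; the process keeps running")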

Roland

On Wed, Mar 3, 2010 at 7:15 PM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland,

 I was using 32 nodes with 8 cores, each with 16 Gb memory. The system was
 about 154 M particles. This should be feasible according to the numbers.
 Assuming that it takes 50bytes per atoms and 1.76 KB per atom per core then

 Masternode - (50*154 M + 8*1.06)bytes ~ 16GB (There is no leverage here)
 All other nodes 8*1.06 ~ 8.5 GB

 I am planning to try the same run on 64 nodes with 8 cores each again but
 not until i am a little more confident. The problem is if gromacs crashes
 due to memory it makes the nodes to hang and people have to recycle the
 power supply.


 Thank you,

 amit

 On Wed, Mar 3, 2010 at 7:34 AM, Roland Schulz rol...@utk.edu wrote:

 Hi,

 ok then it is compiled in 64bit.

 You didn't say how many cores each node has and on how many nodes you want
 to run.

 Roland


 On Wed, Mar 3, 2010 at 4:32 AM, Amit Choubey kgp.a...@gmail.com wrote:

 Hi Roland,

 It says

 gromacs/4.0.5/bin/mdrun_mpi: ELF 64-bit LSB executable, AMD x86-64,
 version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared
 libs), for GNU/Linux 2.6.9, not stripped


 On Tue, Mar 2, 2010 at 10:34 PM, Roland Schulz rol...@utk.edu wrote:

 Amit,

 try the full line (with the file)

 Roland

 On Wed, Mar 3, 2010 at 1:22 AM, Amit Choubey kgp.a...@gmail.comwrote:

 Hi Roland

 I tried 'which mdrun' but it only gives the path name of installation.
 Is there any other way to know if the installation is 64 bit ot not?

 Thank you,
 Amit


 On Tue, Mar 2, 2010 at 10:03 PM, Roland Schulz rol...@utk.edu wrote:

 Hi,

 do:
 file `which mdrun`
 and it should give:
 /usr/bin/mdrun: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
 dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped

 If it is not 64 you need to compile with 64 and have a 64bit kernel.
 Since you asked before about 2GB large files this might indeed be your
 problem.

 Roland

 On Wed, Mar 3, 2010 at 12:48 AM, Amit Choubey kgp.a...@gmail.comwrote:

 Hi Tsjerk,

 I tried to do a test run based on the presentation. But there was a
 memory related error (I had given a leverage of more than 2 GB).

 I did not understand the 64 bit issue, could you let me know wheres
 the documentation? I need to look into that.

 Thank you,
 amit


 On Tue, Mar 2, 2010 at 9:14 PM, Tsjerk Wassenaar 
 tsje...@gmail.comwrote:

 Hi Amit,

 I think the presentation gives right what you want: a rough
 estimate.
 Now as Berk pointed out, to allocate more than 2GB of memory, you
 need
 to compile in 64bit. Then, if you want to have a real feel for the
 memory usage, there's no other way than trying. But fortunately, the
 memory requirements of a (very) long simulation are equal to that of
 a
 very short one, so it doesn't need to cost much time.

 Cheers,

 Tsjerk

 On Wed, Mar 3, 2010 at 5:31 AM, Amit Choubey kgp.a...@gmail.com
 wrote:
  Hi Mark,
 
  Yes thats one way to go about it. But it would have been great if
 i could
  get a rough estimation.
 
  Thank you.
 
  amit
 
 
  On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  wrote:
 
  On 3/03/2010 12:53 PM, Amit Choubey wrote:
 
 Hi Mark,
 
 I quoted the memory usage requirements from a presentation by
 Berk
 Hess, Following is the link to it
 
 
 
 
 http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf
 
 l. In that presentation on pg 27,28 Berk does talk about
 memory
 usage but then I am not sure if he referred to any other
 specific
  thing.
 
 My system only contains SPC water. I want Berendsen T
 coupling and
 Coulomb interaction with Reaction Field.
 
 I just want a rough estimate of how big of a system of water
 can be
 simulated on our super computers.
 
  Try increasingly large systems until it runs out of memory.
 There's your
  answer.
 
  Mark
 
  On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham 
 mark.abra...@anu.edu.au
  mailto:mark.abra...@anu.edu.au wrote:
 
 - Original Message -
 From: Amit Choubey kgp.a...@gmail.com mailto:
 kgp.a...@gmail.com
 Date: Saturday, February 27, 2010 10:17
 Subject: Re: [gmx-users] gromacs memory usage
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 mailto:gmx-users@gromacs.org
 
   Hi Mark,
   We have few nodes with 

Re: [gmx-users] g_mindist periodic boundary condition

2010-03-03 Thread Mark Abraham

On 4/03/2010 11:30 AM, Dian Jiao wrote:

Hi Gromacs users,

I was trying to compute minimum distance between groups in a cubic water
box with g_mindist using periodic boundary condition. In order to test
this, I added one more atom which is far away from any of the other
atoms in the pdb file. The mindist between that atom and all the waters
were computed. The output of g_mindist is 3.089281e+00. (the unit is nm,
right?)


You haven't said how big your box is, or how far "far away" is, so we 
can't tell whether you think 3 nm is too big, too small, etc.



The manual shows that pbc is one of the option of g_mindist, but isn't
the default yes? I even tried with -pbc in the command, still did
not work. Can anyone tell me how to turn on PBC in g_mindist?


See g_mindist -h. The -pbc flag turns PBC on, -nopbc turns it off.
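For example, an invocation along these lines (file and group names are just 
placeholders) computes PBC-aware minimum distances between two index groups 
and writes them to an .xvg file:

    g_mindist -f traj.xtc -s topol.tpr -n index.ndx -od mindist.xvg -pbc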

Mark


Re: [gmx-users] gromacs memory usage

2010-03-03 Thread Mark Abraham

On 4/03/2010 11:15 AM, Amit Choubey wrote:

Hi Roland,

I was using 32 nodes with 8 cores, each with 16 Gb memory. The system
was about 154 M particles. This should be feasible according to the
numbers. Assuming that it takes 50bytes per atoms and 1.76 KB per atom
per core then

Masternode - (50*154 M + 8*1.06)bytes ~ 16GB (There is no leverage here)
All other nodes 8*1.06 ~ 8.5 GB

I am planning to try the same run on 64 nodes with 8 cores each again
but not until i am a little more confident. The problem is if gromacs
crashes due to memory it makes the nodes to hang and people have to
recycle the power supply.


Your sysadmins should be able to come up with a better solution than 
that. Not doing so just makes work for them detecting and rebooting 
things by hand for the next 5 years.


The traditional approach is to use virtual memory on disk. Now the 
calculation will run, but will be slow when accessing less-frequently 
used memory. In GROMACS, that will only be those all-atom arrays on the 
master process, which probably only get accessed during input, DD and 
output phases, which are already slow.


At the very least, they need a malloc library that doesn't lead to a 
system hang.


Mark


On Wed, Mar 3, 2010 at 7:34 AM, Roland Schulz rol...@utk.edu
mailto:rol...@utk.edu wrote:

Hi,

ok then it is compiled in 64bit.

You didn't say how many cores each node has and on how many nodes
you want to run.

Roland


On Wed, Mar 3, 2010 at 4:32 AM, Amit Choubey kgp.a...@gmail.com
mailto:kgp.a...@gmail.com wrote:

Hi Roland,

It says

gromacs/4.0.5/bin/mdrun_mpi: ELF 64-bit LSB executable, AMD
x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically
linked (uses shared libs), for GNU/Linux 2.6.9, not stripped


On Tue, Mar 2, 2010 at 10:34 PM, Roland Schulz rol...@utk.edu
mailto:rol...@utk.edu wrote:

Amit,

try the full line (with the file)

Roland

On Wed, Mar 3, 2010 at 1:22 AM, Amit Choubey
kgp.a...@gmail.com mailto:kgp.a...@gmail.com wrote:

Hi Roland

I tried 'which mdrun' but it only gives the path name of
installation. Is there any other way to know if the
installation is 64 bit ot not?

Thank you,
Amit


On Tue, Mar 2, 2010 at 10:03 PM, Roland Schulz
rol...@utk.edu mailto:rol...@utk.edu wrote:

Hi,

do:
file `which mdrun`
and it should give:
/usr/bin/mdrun: ELF 64-bit LSB executable, x86-64,
version 1 (SYSV), dynamically linked (uses shared
libs), for GNU/Linux 2.6.15, stripped

If it is not 64 you need to compile with 64 and have
a 64bit kernel. Since you asked before about 2GB
large files this might indeed be your problem.

Roland

On Wed, Mar 3, 2010 at 12:48 AM, Amit Choubey
kgp.a...@gmail.com mailto:kgp.a...@gmail.com wrote:

Hi Tsjerk,

I tried to do a test run based on the
presentation. But there was a memory related
error (I had given a leverage of more than 2 GB).

I did not understand the 64 bit issue, could you
let me know wheres the documentation? I need to
look into that.

Thank you,
amit


On Tue, Mar 2, 2010 at 9:14 PM, Tsjerk Wassenaar
tsje...@gmail.com mailto:tsje...@gmail.com
wrote:

Hi Amit,

I think the presentation gives right what
you want: a rough estimate.
Now as Berk pointed out, to allocate more
than 2GB of memory, you need
to compile in 64bit. Then, if you want to
have a real feel for the
memory usage, there's no other way than
trying. But fortunately, the
memory requirements of a (very) long
simulation are equal to that of a
very short one, so it doesn't need to cost
much time.

Cheers,

Tsjerk

On Wed, Mar 3, 2010 at 5:31 AM, Amit Choubey
kgp.a...@gmail.com
mailto:kgp.a...@gmail.com wrote:
  Hi Mark,
 
 

Re: [gmx-users] g_mindist periodic boundary condition

2010-03-03 Thread Dian Jiao
The box is 24X24X24 (Angstrom). The dummy atom I added at the end is about
31 A away from the closest water in the box. But if it is periodic,
shouldn't there be waters near the dummy too?

On Wed, Mar 3, 2010 at 10:29 PM, Mark Abraham mark.abra...@anu.edu.auwrote:

 On 4/03/2010 11:30 AM, Dian Jiao wrote:

 Hi Gromacs users,

 I was trying to compute minimum distance between groups in a cubic water
 box with g_mindist using periodic boundary condition. In order to test
 this, I added one more atom which is far away from any of the other
 atoms in the pdb file. The mindist between that atom and all the waters
 were computed. The output of g_mindist is 3.089281e+00. (the unit is nm,
 right?)


 You haven't said how big your box is, or how far far away is, so we can't
 tell whether you think 3nm is too big, too small, etc.


  The manual shows that pbc is one of the option of g_mindist, but isn't
 the default yes? I even tried with -pbc in the command, still did
 not work. Can anyone tell me how to turn on PBC in g_mindist?


 See g_mindist -h. The -pbc flag turns PBC on, -nopbc turns it off.


 Mark

Re: [gmx-users] g_mindist periodic boundary condition

2010-03-03 Thread Tsjerk Wassenaar
Hi Dian,

Which version of gromacs are you using? Can you verify that the pdb
file has the correct box? It should have a line starting with CRYST1
(grep ^CRYST1 file.pdb). Some versions of gromacs (3.3.2, I think)
didn't write the CRYST1 record, and thus disallow PBC-related
operations.

Cheers,

Tsjerk

On Thu, Mar 4, 2010 at 8:08 AM, Dian Jiao oscarj...@gmail.com wrote:
 The box is 24X24X24 (Angstrom). The dummy atom I added at the end is about
 31 A away from the closest water in the box. But if it is periodic,
 shouldn't there be waters near the dummy too?

 On Wed, Mar 3, 2010 at 10:29 PM, Mark Abraham mark.abra...@anu.edu.au
 wrote:

 On 4/03/2010 11:30 AM, Dian Jiao wrote:

 Hi Gromacs users,

 I was trying to compute minimum distance between groups in a cubic water
 box with g_mindist using periodic boundary condition. In order to test
 this, I added one more atom which is far away from any of the other
 atoms in the pdb file. The mindist between that atom and all the waters
 were computed. The output of g_mindist is 3.089281e+00. (the unit is nm,
 right?)

 You haven't said how big your box is, or how far far away is, so we
 can't tell whether you think 3nm is too big, too small, etc.

 The manual shows that pbc is one of the option of g_mindist, but isn't
 the default yes? I even tried with -pbc in the command, still did
 not work. Can anyone tell me how to turn on PBC in g_mindist?

 See g_mindist -h. The -pbc flag turns PBC on, -nopbc turns it off.

 Mark




-- 
Tsjerk A. Wassenaar, Ph.D.

Computational Chemist
Medicinal Chemist
Neuropharmacologist


Re: [gmx-users] gromacs memory usage

2010-03-03 Thread Alexey Shvetsov
Hi

Looks like your system simply runs out of memory, so power-cycling the nodes
isn't needed. If your cluster runs Linux, then it already has the OOM killer,
which will kill processes that run out of memory. Also, having swap on the
nodes is a good idea even with a huge amount of memory.
Memory usage for MPI processes will depend strongly on the MPI implementation,
because some of them cache slave-process memory (as mvapich2 usually does).

So can you provide info about your cluster setup?
OS version (including kernel version): uname -a
MPI version: mpirun --version or mpiexec --version
Also the compiler version that was used to compile GROMACS.

On Thursday, 4 March 2010 at 03:15:53, Amit Choubey wrote:
 Hi Roland,
 
 I was using 32 nodes with 8 cores, each with 16 Gb memory. The system was
 about 154 M particles. This should be feasible according to the numbers.
 Assuming that it takes 50bytes per atoms and 1.76 KB per atom per core then
 
 Masternode - (50*154 M + 8*1.06)bytes ~ 16GB (There is no leverage here)
 All other nodes 8*1.06 ~ 8.5 GB
 
 I am planning to try the same run on 64 nodes with 8 cores each again but
 not until i am a little more confident. The problem is if gromacs crashes
 due to memory it makes the nodes to hang and people have to recycle the
 power supply.
 
 
 Thank you,
 
-- 
Best Regards,
Alexey 'Alexxy' Shvetsov
Petersburg Nuclear Physics Institute, Russia
Department of Molecular and Radiation Biophysics
Gentoo Team Ru
Gentoo Linux Dev
mailto:alexx...@gmail.com
mailto:ale...@gentoo.org
mailto:ale...@omrb.pnpi.spb.ru

