Re: [gmx-users] Replica Exchange MD on more than 64 processors

2009-12-28 Thread bharat v. adkar

bharat v. adkar wrote:

Dear all,

I am trying to perform replica exchange MD (REMD) on a 'protein in water' system. I am following the instructions given on the wiki (How-Tos -> REMD). I have to perform the REMD simulation with 35 different temperatures. As per the advice on the wiki, I equilibrated the system at the respective temperatures (a total of 35 equilibration simulations). After this I generated chk_0.tpr, chk_1.tpr, ..., chk_34.tpr files from the equilibrated structures.

Now when I submit the final job for REMD with the following command line, it gives some error:

command line: mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v

error msg:
---
Program mdrun_mpi, VERSION 4.0.7
Source code file: ../../../SRC/src/gmxlib/smalloc.c, line: 179

Fatal error:
Not enough memory. Failed to realloc 790760 bytes for nlist->jjnr,
nlist->jjnr=0x9a400030
(called from file ../../../SRC/src/mdlib/ns.c, line 503)
---
Thanx for Using GROMACS - Have a Nice Day
: Cannot allocate memory
Error on node 19, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 19 out of 70
***

The individual nodes on the cluster have 8 GB of physical memory and 16 GB of swap memory. Moreover, when logged onto the individual nodes, they show more than 1 GB of free memory, so there should be no problem with cluster memory. Also, the equilibration jobs for the same system run on the same cluster without any problem.

What I have observed by submitting different test jobs with varying numbers of processors (and numbers of replicas, wherever necessary) is that any job with a total number of processors <= 64 runs faithfully without any problem. As soon as the total number of processors is more than 64, it gives the above error. I have tested this with 65 processors/65 replicas also.
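
For reference, the setup described above looks roughly like this as a sketch (the temperature values, .mdp and .gro names below are made-up placeholders, not taken from the actual run):

# Build one .tpr per replica from the per-temperature equilibrated structures
# (chk_0.tpr ... chk_34.tpr), then launch all replicas with mdrun -multi.
temps=(300 305 310)                      # ...35 entries in the real setup
for i in "${!temps[@]}"; do
    grompp -f remd_${temps[$i]}.mdp -c equil_${temps[$i]}.gro \
           -p topol.top -o chk_${i}.tpr
done
# -multi appends the replica index to the -s name, so chk_.tpr picks up
# chk_0.tpr ... chk_34.tpr, matching the files generated above; -np 70
# gives two MPI processes per replica.
mpiexec -np 70 mdrun -multi 35 -replex 1000 -s chk_.tpr -v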
On Sun, 27 Dec 2009, Mark Abraham wrote:

This sounds like you might be running on fewer physical CPUs than you have available. If so, running multiple MPI processes per physical CPU can lead to memory shortage conditions.

bharat v. adkar wrote:

I don't understand what you mean. Do you mean there might be more than 8 processes running per node (each node has 8 processors)? But that also does not seem to be the case, as the SGE (Sun Grid Engine) output shows only eight processes per node.

On Sun, 27 Dec 2009, Mark Abraham wrote:

65 processes can't have 8 processes per node.

bharat v. adkar wrote:

Why can't it? As I said, there are 8 processors per node. What I have not mentioned is how many nodes it is using. The jobs got distributed over 9 nodes: 8 of them account for the 64 processors, plus 1 processor from the 9th node.

On Mon, 28 Dec 2009, Mark Abraham wrote:

OK, that's a full description. Your symptoms are indicative of someone making an error somewhere. Since GROMACS works over more than 64 processors elsewhere, the presumption is that you are doing something wrong or the machine is not set up in the way you think it is or should be. To get the most effective help, you need to be sure you're providing full information - else we can't tell which error you're making or (potentially) eliminate you as a source of error.

bharat v. adkar wrote:

Sorry for not being clear in my statements.

As far as I can tell, the job distribution seems okay to me. It is 1 job per processor.

Mark Abraham wrote:

Does non-REMD GROMACS run on more than 64 processors? Does your cluster support using more than 8 nodes in a run? Can you run an MPI "Hello world" application that prints the processor and node ID across more than 64 processors?

bharat v. adkar wrote:

Yes, the cluster supports runs with more than 8 nodes. I generated a system with a 10 nm water box and submitted it on 80 processors. It was running fine. It printed all 80 NODEIDs and also showed me when the job will get over.

bharat
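
A quick way to check how the MPI ranks actually land on the nodes (just a rough sketch; the exact invocation depends on the MPI library and the SGE integration) is something like:

# Launch one trivial process per requested slot and count how many end up
# on each host; any host showing more than 8 would indicate oversubscription.
mpiexec -np 70 hostname | sort | uniq -c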


 
On Mon, 28 Dec 2009, David van der Spoel wrote:

I don't know what you mean by swap memory.

bharat v. adkar wrote:

Sorry, I meant cache memory.

bharat

System: Protein + water + Na ions (total 46878 atoms)
Gromacs version: tested with both v4.0.5 and v4.0.7
compiled with: 

Re: [gmx-users] convert B-factor

2009-12-28 Thread Tsjerk Wassenaar
Hi Antonio,

You can do something like:

grep -v "^[#@]" rmsf1.xvg > rmsf1.dat
grep -v "^[#@]" rmsf2.xvg > rmsf2.dat
paste rmsf1.dat rmsf2.dat | awk '{print $4-$2}' > difference.dat

That will give you a file with the rmsf difference. You can use
editconf to read such data into the b-factor field, although you may
need to modify the file a bit. Check editconf -h for that.
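
For example, a rough sketch of that last step (file names are placeholders; check editconf -h for the exact format -bf expects):

# editconf -bf reads the number of entries on the first line, then
# "index  B-factor" pairs (one per atom or residue).
nvals=$(wc -l < difference.dat)
{ echo "$nvals"; awk '{print NR, $1}' difference.dat; } > bfac.dat
editconf -f protein.gro -bf bfac.dat -o colored.pdb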

Hope it helps,

Tsjerk



On Mon, Dec 28, 2009 at 7:52 AM, AntonioLeung royaltr...@live.cn wrote:
 I know how to calculate the RMSF, and have calculated it for two trajectories
 (of the same molecule). I want to compare the two RMSFs and convert their
 difference into B-factors. Can you tell me in more detail?



 -- Original --
 From: Mark Abraham mark.abra...@anu.edu.au
 Date: Mon, Dec 28, 2009 11:04 AM
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Subject:  Re: [gmx-users] convert B-factor

 AntonioLeung wrote:
 Dear all,
 I want to convert the difference of two RMSF data sets into the B-factor field
 of a coordinate file (to illustrate their difference by coloring the structure
 by B-factor); can anyone tell me how to do it?

 g_rmsf -h

 Mark




-- 
Tsjerk A. Wassenaar, Ph.D.

Computational Chemist
Medicinal Chemist
Neuropharmacologist


Re: [gmx-users] protein simulation

2009-12-28 Thread Mark Abraham

edmund lee wrote:

Dear all,

I am trying to do a simulation of the protein OMPA. At the grompp step, it 
shows a fatal error: "Fatal error: Atomtype 'HC' not found!"
I tried to fix the error but failed, so I hope that someone can help me with this.


There are various underlying causes. Doing some (more) GROMACS tutorial 
material is probably a good idea.


Mark



[gmx-users] conversion between harmonic bonds/angles and GROMOS96 bonds/angles

2009-12-28 Thread Michael Feig
I am interested in converting between GROMOS96 bond/angle potentials and standard harmonic potentials. The GROMACS manual (version 4) suggests formulae for doing so (Eq. 4.38 and Eq. 4.53). However, I am having trouble understanding where these come from, and I find that they seem to give rather poor approximations for the conversions.

It seems that much more accurate (but somewhat more complex) conversion expressions can be derived, but I am wondering whether I do not fully understand the GROMOS96 bond potentials vs. standard harmonic potentials.
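
For what it's worth, a leading-order expansion around the minimum (assuming the standard forms GROMACS uses for the G96 quartic bond and cosine-based angle terms) gives relations of the kind quoted; they only hold close to b_0 and theta_0, which may be part of why they look poor further out:

V_b^{\mathrm{G96}}(b) = \tfrac{1}{4}\,k_b^{\mathrm{G96}}\left(b^2 - b_0^2\right)^2
  \;\approx\; k_b^{\mathrm{G96}}\,b_0^2\,(b - b_0)^2
  \;\Rightarrow\; k_b^{\mathrm{harm}} \approx 2\,k_b^{\mathrm{G96}}\,b_0^2 ,

V_\theta^{\mathrm{G96}}(\theta) = \tfrac{1}{2}\,k_\theta^{\mathrm{G96}}\left(\cos\theta - \cos\theta_0\right)^2
  \;\approx\; \tfrac{1}{2}\,k_\theta^{\mathrm{G96}}\,\sin^2\!\theta_0\,(\theta - \theta_0)^2
  \;\Rightarrow\; k_\theta^{\mathrm{harm}} \approx k_\theta^{\mathrm{G96}}\,\sin^2\!\theta_0 .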

 

Any insights? 

 

Thanks, Michael.

 


Re: [gmx-users] trjconv -pbc: how to keep all parts of the system clustered together in PDB?

2009-12-28 Thread Visvaldas K.
I thought I was sure -pbc cluster would work, but it doesn't :( trjconv gets 
stuck in an infinite loop while calculating the center of mass.

In an index.ndx I created a new group which I called CLUSTER, as Mark suggested 
(I used make_ndx), then I ran trjconv:

trjconv -f 1Y2Elig_em.trr -s 1Y2Elig_em.tpr -o tmp.pdb -b 120 -e 125 -pbc 
cluster -n index.ndx

What I get is an infinite loop:

COM:2.784 1.968 3.409  iter = 1  Isq = 1840514.500
COM:4.175 5.905 3.409  iter = 2  Isq = 230100.828 
COM:2.784 1.968 3.409  iter = 3  Isq = 1840514.500
COM:4.175 5.905 3.409  iter = 4  Isq = 230100.828 
COM:2.784 1.968 3.409  iter = 5  Isq = 1840514.500
COM:4.175 5.905 3.409  iter = 6  Isq = 230100.828 
COM:2.784 1.968 3.409  iter = 7  Isq = 1840514.500
COM:4.175 5.905 3.409  iter = 8  Isq = 230100.828 
COM:2.784 1.968 3.409  iter = 9  Isq = 1840514.500
COM:4.175 5.905 3.409  iter = 10  Isq = 230100.828
...
which goes on forever...

What am I doing wrong? (Should I attach the files?)
Thank you for your time!
Vis


 Dear GROMACS users and gurus,
 
 I am sorry if it's a stupid question... I'm fairly new to GROMACS, and something 
 has been driving me crazy. I have a protein, two metal ions, and an inhibitor in 
 my system. Somehow, in some of the frames I can't keep all those pieces 
 clustered compactly for some postprocessing, using trjconv for conversion 
 of trr/xtc into PDB format:
 
 -pbc mol option of trjconv: metal ions are far from the rest of the protein.
 -pbc nojump or -pbc whole: inhibitor far from the protein, metals are fine.
 -pbc atom or -pbc res: a couple of residues are detached from the protein; 
 metals and inhibitor are fine.
 -pbc cluster: doesn't work (irrelevant?)

-pbc cluster should work with a suitable index group of 
protein+metal+inhibitor - that's what it is for.

Once that's done, you may want to re-run trjconv to apply other effects. 
Two-pass processing is often necessary.
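
For example, a two-pass sketch along those lines (file and group names are placeholders):

# Pass 1: put protein + metals + inhibitor into one compact cluster
trjconv -f traj.xtc -s topol.tpr -n index.ndx -pbc cluster -o clustered.xtc
# Pass 2: make molecules whole and centre the result in a compact box
trjconv -f clustered.xtc -s topol.tpr -n index.ndx -pbc mol -ur compact -center -o final.pdb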

 Also -center and -boxcenter don't seem to help...
 Can anybody suggest some tricks? I used an octahedral box for my runs.

"Don't seem to help" also doesn't help. We can't guess what it was about 
your inputs and outputs that was contrary to your hopes :-)

Mark



  


[gmx-users] Problems with umbrella sampling

2009-12-28 Thread Amir Marcovitz
Hi,

My system consists of 2 parallel plates in a box with solvent. Each plate is
made of 36 atoms; the atoms are positively charged on one plate (SRP) and
negatively charged on the other plate (SRN).

I want to perform a simulation with umbrella sampling between the two, so I
defined the pull-section parameters in the .mdp parameter file as follows:

; COM PULLING
; Pull type: no, umbrella, constraint or constant_force
pull = umbrella
; Pull geometry: distance, direction, cylinder or position
pull_geometry= distance
; Select components for the pull vector. default: Y Y Y
pull_dim = Y Y Y
; Cylinder radius for dynamic reaction force groups (nm)
pull_r1  = 1
; Switch from r1 to r0 in case of dynamic reaction force
pull_r0  = 1.5
pull_constr_tol  = 1e-06
pull_start   = no
pull_nstxout = 10
pull_nstfout = 1
; Number of pull groups
pull_ngroups = 2
; Group name, weight (default all 1), vector, init, rate (nm/ps),
kJ/(mol*nm^2)
pull_group0  = SRP
pull_weights0= 1
pull_pbcatom0= 0
pull_group1  = SRN
pull_weights1= 1
pull_pbcatom1= 0
pull_vec1= 0.0 1.0 0.0
pull_init1   = 1.5
pull_rate1   = 0
pull_k1  = 1
pull_kB1 = 0

When processing the file with grompp I get the following error:

Fatal error:
Number of weights (1) for pull group 0 'SRP' does not match the number of
atoms (36)

Does someone recognize my mistake?
Does someone have experience with umbrella sampling in GROMACS?

Thanks,
amir

Re: [gmx-users] Problems with umbrella sampling

2009-12-28 Thread Justin A. Lemkul



Amir Marcovitz wrote:

Hi,
 
my system consists of 2 parallel plates in a box with solvent. 
Each plate is made of 36 atoms; the atoms are positively charged on one 
plate (SRP) and negatively charged on the other plate (SRN).
 
I want to perform a simulation with umbrella sampling between the two, so I 
defined the pull-section parameters in the .mdp parameter file as 
follows:
 
; COM PULLING 
; Pull type: no, umbrella, constraint or constant_force

pull = umbrella
; Pull geometry: distance, direction, cylinder or position
pull_geometry= distance
; Select components for the pull vector. default: Y Y Y
pull_dim = Y Y Y
; Cylinder radius for dynamic reaction force groups (nm)
pull_r1  = 1
; Switch from r1 to r0 in case of dynamic reaction force
pull_r0  = 1.5
pull_constr_tol  = 1e-06
pull_start   = no
pull_nstxout = 10
pull_nstfout = 1
; Number of pull groups
pull_ngroups = 2
; Group name, weight (default all 1), vector, init, rate (nm/ps), 
kJ/(mol*nm^2)

pull_group0  = SRP
pull_weights0= 1
pull_pbcatom0= 0
pull_group1  = SRN
pull_weights1= 1
pull_pbcatom1= 0
pull_vec1= 0.0 1.0 0.0
pull_init1   = 1.5
pull_rate1   = 0
pull_k1  = 1
pull_kB1 = 0
 
When processing the file with grompp I get the following error:
 
Fatal error:
Number of weights (1) for pull group 0 'SRP' does not match the number 
of atoms (36)
 
Does someone recognize my mistake?


Please refer to the manual (manual.gromacs.org is quite handy); you will find:

Optional relative weights which are multiplied with the masses of the atoms to 
give the total weight for the COM. The number should be 0, meaning all 1, or the 
number of atoms in the pull group.


I also think your value for pull_ngroup is wrong.  It appears you are pulling 
SRN with respect to SRP, so you only have one pull group, not two.
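
Something along these lines might be closer to what is intended (a sketch only; in the 4.0 pull code, pull_group0 is the reference group and is not counted in pull_ngroups, and leaving the weights out makes them all default to 1):

; COM PULLING (sketch: SRP as the reference group, SRN as the one pulled group)
pull             = umbrella
pull_geometry    = distance
pull_dim         = Y Y Y
pull_r1          = 1
pull_r0          = 1.5
pull_constr_tol  = 1e-06
pull_start       = no
pull_nstxout     = 10
pull_nstfout     = 1
; group 0 is the reference and is not counted in pull_ngroups
pull_ngroups     = 1
pull_group0      = SRP
pull_pbcatom0    = 0
pull_group1      = SRN
pull_pbcatom1    = 0
pull_vec1        = 0.0 1.0 0.0
pull_init1       = 1.5
pull_rate1       = 0
pull_k1          = 1
pull_kB1         = 0
; pull_weights0 / pull_weights1 omitted: all atom weights then default to 1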


-Justin


Does someone have experience with umbrella sampling in GROMACS?
 
Thanks,
amir



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] trjconv -pbc: how to keep all parts of the system clustered together in PDB?

2009-12-28 Thread chris . neale

Search:
trjconv pbc cluster
on the gromacs mailing list and take a look at the first hit.  
Basically, you need to find a frame that *does* work with -pbc cluster  
and then make a new .tpr based on the clustered .gro and then run  
trjconv -pbc mol. Just ensure that this frame is as close to the start  
of your run as possible.
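
Roughly, that workflow might look like the following (file names and the dumped frame time are placeholders):

# 1. Extract one frame that -pbc cluster handles cleanly and cluster it
trjconv -f traj.xtc -s topol.tpr -n index.ndx -pbc cluster -dump 0 -o clustered.gro
# 2. Build a new .tpr from the clustered coordinates
grompp -f md.mdp -c clustered.gro -p topol.top -o clustered.tpr
# 3. Use that .tpr as the reference for -pbc mol over the whole trajectory
trjconv -f traj.xtc -s clustered.tpr -n index.ndx -pbc mol -o fixed.xtc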


Chris.





[gmx-users] the code speed between Gromacs 3.0 and Gromacs 4.0

2009-12-28 Thread Dechang Li
Dear gmx-users, 

Are there any comparisons of the code speed between Gromacs 3.0 and Gromacs 4.0? 
In my calculation, I got a speed of about 3.5 ns/day for a system of about 
50,000 atoms, using 8 CPUs with Gromacs 3.3.1. In contrast, the speed can reach 
up to 7.3 ns/day when switching to Gromacs 4.0.7 while the other conditions 
are kept the same. Is this result normal? 



Best regards,






Re: [gmx-users] the code speed between Gromacs 3.0 and Gromacs 4.0

2009-12-28 Thread Justin A. Lemkul



Dechang Li wrote:

Dear gmx-users,

Are there any comparisons of the code speed between Gromacs 3.0 and Gromacs
4.0? In my calculation, I got a speed of about 3.5 ns/day for a system of about
50,000 atoms, using 8 CPUs with Gromacs 3.3.1. In contrast, the speed can
reach up to 7.3 ns/day when switching to Gromacs 4.0.7 while the other conditions
are kept the same. Is this result normal?




I would suggest reading the Gromacs 4 paper.  There are lots of benchmarks in 
there.  Sounds to me like everything is normal.


-Justin



Best regards,






--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

