[gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Thamu

 Hi,

 I recently installed the GROMACS 4.0.5 MPI version successfully.
 I can run on 8 CPUs, but the speed is very slow.
 The total number of atoms in the system is 78424.
 While running, all 8 CPUs show 95-100% CPU usage.

 How can I speed up the calculation?

 Thanks



Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Jussi Lehtola
On Thu, 2009-06-11 at 14:35 +0800, Thamu wrote:
> Hi,
>
> I recently installed the GROMACS 4.0.5 MPI version successfully.
> I can run on 8 CPUs, but the speed is very slow.
> The total number of atoms in the system is 78424.
> While running, all 8 CPUs show 95-100% CPU usage.

That's normal for a system with that atoms/CPU ratio.
What's your system and what mdp file are you using?
-- 
--
Mr. Jussi Lehtola, M. Sc., Doctoral Student
Department of Physics, University of Helsinki, Finland
jussi.leht...@helsinki.fi, tel. 191 50632
--




Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Mark Abraham

Thamu wrote:

> Hi,
>
> I recently installed the GROMACS 4.0.5 MPI version successfully.

Possibly.

> I can run on 8 CPUs, but the speed is very slow.
> The total number of atoms in the system is 78424.
> While running, all 8 CPUs show 95-100% CPU usage.
>
> How can I speed up the calculation?

You haven't given us any diagnostic information. The problem could be
that you're not actually running an MPI build of GROMACS (show us your
configure line, your mdrun command line, and the top 50 lines of your
.log file).
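
For example, a minimal way to check and collect that (a sketch only; the
install prefix, the _mpi suffix, and the configure options are assumptions
based on your command line, not taken from your actual build):

  # GROMACS 4.0.x is built with autoconf; an MPI build is usually
  # configured along the lines of:
  #   ./configure --enable-mpi --program-suffix=_mpi --prefix=$HOME/software
  # Check whether the binary is actually linked against an MPI library:
  ldd ~/software/bin/mdrun_mpi | grep -i mpi

  # Grab the diagnostics requested above:
  head -n 50 md.log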


Mark


[gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Thamu
Hi Mark,

The top of md.log is below. The mdrun command was mpirun -np 8
~/software/bin/mdrun_mpi -deffnm md


 :-)  G  R  O  M  A  C  S  (-:

  GROup of MAchos and Cynical Suckers

:-)  VERSION 4.0.5  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  /home/thamu/software/bin/mdrun_mpi  (-:


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  

Input Parameters:
   integrator   = md
   nsteps   = 1000
   init_step= 0
   ns_type  = Grid
   nstlist  = 10
   ndelta   = 2
   nstcomm  = 1
   comm_mode= Linear
   nstlog   = 100
   nstxout  = 1000
   nstvout  = 0
   nstfout  = 0
   nstenergy= 100
   nstxtcout= 0
   init_t   = 0
   delta_t  = 0.002
   xtcprec  = 1000
   nkx  = 70
   nky  = 70
   nkz  = 70
   pme_order= 4
   ewald_rtol   = 1e-05
   ewald_geometry   = 0
   epsilon_surface  = 0
   optimize_fft = TRUE
   ePBC = xyz
   bPeriodicMols= FALSE
   bContinuation= FALSE
   bShakeSOR= FALSE
   etc  = V-rescale
   epc  = Parrinello-Rahman
   epctype  = Isotropic
   tau_p= 0.5
   ref_p (3x3):
  ref_p[0]={ 1.0e+00,  0.0e+00,  0.0e+00}
  ref_p[1]={ 0.0e+00,  1.0e+00,  0.0e+00}
  ref_p[2]={ 0.0e+00,  0.0e+00,  1.0e+00}
   compress (3x3):
  compress[0]={ 4.5e-05,  0.0e+00,  0.0e+00}
  compress[1]={ 0.0e+00,  4.5e-05,  0.0e+00}
  compress[2]={ 0.0e+00,  0.0e+00,  4.5e-05}
   refcoord_scaling = No
   posres_com (3):
  posres_com[0]= 0.0e+00
  posres_com[1]= 0.0e+00
  posres_com[2]= 0.0e+00
   posres_comB (3):
  posres_comB[0]= 0.0e+00
  posres_comB[1]= 0.0e+00
  posres_comB[2]= 0.0e+00
   andersen_seed= 815131
   rlist= 1
   rtpi = 0.05
   coulombtype  = PME
   rcoulomb_switch  = 0
   rcoulomb = 1
   vdwtype  = Cut-off
   rvdw_switch  = 0
   rvdw = 1.4
   epsilon_r= 1
   epsilon_rf   = 1
   tabext   = 1
   implicit_solvent = No
   gb_algorithm = Still
   gb_epsilon_solvent   = 80
   nstgbradii   = 1
   rgbradii = 2
   gb_saltconc  = 0
   gb_obc_alpha = 1
   gb_obc_beta  = 0.8
   gb_obc_gamma = 4.85
   sa_surface_tension   = 2.092
   DispCorr = No
   free_energy  = no
   init_lambda  = 0
   sc_alpha = 0
   sc_power = 0
   sc_sigma = 0.3
   delta_lambda = 0
   nwall= 0
   wall_type= 9-3
   wall_atomtype[0] = -1
   wall_atomtype[1] = -1
   wall_density[0]  = 0
   wall_density[1]  = 0
   wall_ewald_zfac  = 3
   pull = no
   disre= No
   disre_weighting  = Conservative
   disre_mixed  = FALSE
   dr_fc= 1000
   dr_tau   = 0
   nstdisreout  = 

Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Mark Abraham
On 06/11/09, Thamu <asth...@gmail.com> wrote:
>
> Hi Mark,
>
> The top of md.log is below. The mdrun command was mpirun -np 8
> ~/software/bin/mdrun_mpi -deffnm md
In my experience, a correctly configured MPI GROMACS build running in parallel
reports the number of nodes and the identity of the node writing the .log file
near the top of the log. That information is missing here, so something is
wrong with your setup.

I've assumed that you've compared this 8-processor runtime with a 
single-processor runtime and found them comparable...
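
A minimal way to make that comparison (a sketch only; -g just renames the
log so the two runs don't overwrite each other, and the exact wording of
the timing summary may differ):

  # single-process reference run
  mpirun -np 1 ~/software/bin/mdrun_mpi -deffnm md -g md_1cpu.log

  # eight-process run, as before
  mpirun -np 8 ~/software/bin/mdrun_mpi -deffnm md

  # compare the timing summaries near the end of the two logs
  grep -A 2 "Performance:" md_1cpu.log md.log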

Mark

 
 
 
[quoted md.log snipped; identical to the log in the previous message]

RE: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread jimkress_58
Mark is correct.  You should see node information at the top of the md log
file if you are truly running in parallel.

Apparently the default host (or machines) file, which contains the list of
available nodes on your cluster, is not being populated correctly.

You can build your own hostfile and then rerun the job with a command line
like:

mpirun -np 8 -hostfile hostfile ~/software/bin/mdrun_mpi -deffnm md

The content and structure of the hostfile will depend on what version of MPI
you are using.  Hopefully, you are not using MPICH 1 but instead are using
OpenMPI, MPICH2, or Intel MPI.
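
For example, with OpenMPI the hostfile is just one node name per line with
an optional slot count (the node names here are placeholders); MPICH2 uses
a similar one-host-per-line machinefile, typically written hostname:ncpus:

  # hostfile (OpenMPI syntax; replace with your actual node names)
  node01 slots=4
  node02 slots=4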

Jim


-----Original Message-----
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org]
On Behalf Of Thamu
Sent: Thursday, June 11, 2009 9:13 AM
To: gmx-users@gromacs.org
Subject: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

Hi Mark,

The top of md.log is below. The mdrun command was mpirun -np 8
~/software/bin/mdrun_mpi -deffnm md


[quoted md.log snipped; identical to the log in the earlier message]