RE: [gmx-users] Gromacs in Parallel

2009-08-11 Thread Jim Kress
Yes, many people have encountered this problem with MPICH 1.2.7. Use MPICH2,
or another modern MPI implementation such as OpenMPI 1.3.

Jim 
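
For anyone hitting the same p4_error, a minimal sketch of rebuilding mdrun against a
newer MPI such as OpenMPI might look like this (the suffix, core count and run name
are only illustrative, and it assumes OpenMPI's mpicc wrapper is already on your PATH):

  export CC=mpicc                        # let configure pick up the MPI compiler wrapper
  ./configure --enable-mpi --program-suffix=_mpi
  make mdrun -j 4
  make install-mdrun
  mpirun -np 4 mdrun_mpi -v -deffnm md   # relaunch the run with the rebuilt binary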

 -Original Message-
 From: gmx-users-boun...@gromacs.org 
 [mailto:gmx-users-boun...@gromacs.org] On Behalf Of Andrew Paluch
 Sent: Monday, August 10, 2009 10:53 AM
 To: gmx-users@gromacs.org
 Subject: [gmx-users] Gromacs in Parallel
 
 To whom this may concern,
 
 I am receiving the following errors when attempting to run 
 Gromacs in parallel:
 
 Making 1D domain decomposition 4 x 1 x 1
 p2_21562:  p4_error: Timeout in establishing connection to 
 remote process: 0
 p2_21562: (302.964844) net_send: could not write to fd=5, errno = 32
 
 
 where I am using mpich 1.2.7 for 64 bit processors.  From 
 what I can find, it seems as if this is a mpich issue and not 
 an issue of Gromacs.  Has anyone else encountered such a 
 problem?  Also, does anyone have any suggestions for a solution?
 
 Thank you,
 
 Andrew 
 



Re: [gmx-users] Gromacs in Parallel

2009-08-10 Thread Mark Abraham

Andrew Paluch wrote:

To whom this may concern,

I am receiving the following errors when attempting to run Gromacs in
parallel:

Making 1D domain decomposition 4 x 1 x 1
p2_21562:  p4_error: Timeout in establishing connection to remote process: 0
p2_21562: (302.964844) net_send: could not write to fd=5, errno = 32


where I am using mpich 1.2.7 for 64 bit processors.  From what I can find,
it seems as if this is a mpich issue and not an issue of Gromacs.  Has
anyone else encountered such a problem?  Also, does anyone have any
suggestions for a solution?


Indeed, this is not a problem intrinsic to GROMACS. I'm not aware of 
problems with particular MPI libraries, but you might try compiling 
GROMACS with another such library. Whoever configured this machine 
should probably look into the problem.


Mark


Re: [gmx-users] Gromacs in Parallel

2009-08-10 Thread TJ Piggot

I agree. MPICH 1 is very old; you should try MPICH2, LAM/MPI, or OpenMPI.

Tom
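
As a quick check of which MPI library an existing mdrun_mpi binary was actually
linked against (this assumes the binary is on your PATH and dynamically linked):

  ldd $(which mdrun_mpi) | grep -i mpi   # libmpich -> MPICH, libmpi/libopen-* -> OpenMPI, liblam -> LAM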

--On Tuesday, August 11, 2009 09:16:54 +1000 Mark Abraham 
mark.abra...@anu.edu.au wrote:



Andrew Paluch wrote:

To whom this may concern,

I am receiving the following errors when attempting to run Gromacs in
parallel:

Making 1D domain decomposition 4 x 1 x 1
p2_21562:  p4_error: Timeout in establishing connection to remote
process: 0 p2_21562: (302.964844) net_send: could not write to fd=5,
errno = 32


where I am using mpich 1.2.7 for 64 bit processors.  From what I can
find, it seems as if this is a mpich issue and not an issue of Gromacs.
Has anyone else encountered such a problem?  Also, does anyone have any
suggestions for a solution?


Indeed, this is not a problem intrinsic to GROMACS. I'm not aware of
problems with particular MPI libraries, but you might try compiling
GROMACS with another such library. Whoever configured this machine should
probably look into the problem.

Mark




--
TJ Piggot
t.pig...@bristol.ac.uk
University of Bristol, UK.



[gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Thamu

 Hi

 Recently I successfully installed the GROMACS 4.0.5 MPI version.
 I can run on 8 CPUs, but the speed is very slow.
 The total number of atoms in the system is 78424.
 While running, all 8 CPUs show 95-100% CPU usage.

 How can I speed up the calculation?

 Thanks



Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Jussi Lehtola
On Thu, 2009-06-11 at 14:35 +0800, Thamu wrote:
 Hi
 
 Recently I successfully installed the gromacs-4.0.5 mpi
 version.
 I could run in 8 cpu. but the speed is very slow. 
 Total number of atoms in the system is 78424.
 while running all 8 cpu showing 95-100% CPU.

That's normal for a system with that atoms/CPU ratio.
What's your system, and what mdp file are you using?
-- 
--
Mr. Jussi Lehtola, M.Sc., Doctoral Student
Department of Physics, University of Helsinki, Finland
jussi.leht...@helsinki.fi, tel. 191 50632
--




Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Mark Abraham

Thamu wrote:

Hi

Recently I successfully installed the gromacs-4.0.5 mpi version.


Possibly.


I could run in 8 cpu. but the speed is very slow.
Total number of atoms in the system is 78424.
while running all 8 cpu showing 95-100% CPU.

How to speed up the calculation.


You haven't given us any diagnostic information. The problem could be 
that you're not running an MPI GROMACS (show us your configure line, 
your mdrun command line and the top 50 lines of your .log file).


Mark
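
For reference, the pieces of information Mark asks for can usually be collected
along these lines (run the grep from the GROMACS build directory; paths are
placeholders):

  head -n 50 md.log                     # the top of the mdrun log
  grep '\$ ./configure' config.log      # config.log records the exact configure invocation
  # the mdrun command line is whatever was passed to mpirun (check the job script or shell history)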


[gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Thamu
Hi Mark,

The top md.log is below. The mdrun command was mpirun -np 8
~/software/bin/mdrun_mpi -deffnm md


 :-)  G  R  O  M  A  C  S  (-:

  GROup of MAchos and Cynical Suckers

:-)  VERSION 4.0.5  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  /home/thamu/software/bin/mdrun_mpi  (-:


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  

Input Parameters:
   integrator   = md
   nsteps   = 1000
   init_step= 0
   ns_type  = Grid
   nstlist  = 10
   ndelta   = 2
   nstcomm  = 1
   comm_mode= Linear
   nstlog   = 100
   nstxout  = 1000
   nstvout  = 0
   nstfout  = 0
   nstenergy= 100
   nstxtcout= 0
   init_t   = 0
   delta_t  = 0.002
   xtcprec  = 1000
   nkx  = 70
   nky  = 70
   nkz  = 70
   pme_order= 4
   ewald_rtol   = 1e-05
   ewald_geometry   = 0
   epsilon_surface  = 0
   optimize_fft = TRUE
   ePBC = xyz
   bPeriodicMols= FALSE
   bContinuation= FALSE
   bShakeSOR= FALSE
   etc  = V-rescale
   epc  = Parrinello-Rahman
   epctype  = Isotropic
   tau_p= 0.5
   ref_p (3x3):
  ref_p[0]={ 1.0e+00,  0.0e+00,  0.0e+00}
  ref_p[1]={ 0.0e+00,  1.0e+00,  0.0e+00}
  ref_p[2]={ 0.0e+00,  0.0e+00,  1.0e+00}
   compress (3x3):
  compress[0]={ 4.5e-05,  0.0e+00,  0.0e+00}
  compress[1]={ 0.0e+00,  4.5e-05,  0.0e+00}
  compress[2]={ 0.0e+00,  0.0e+00,  4.5e-05}
   refcoord_scaling = No
   posres_com (3):
  posres_com[0]= 0.0e+00
  posres_com[1]= 0.0e+00
  posres_com[2]= 0.0e+00
   posres_comB (3):
  posres_comB[0]= 0.0e+00
  posres_comB[1]= 0.0e+00
  posres_comB[2]= 0.0e+00
   andersen_seed= 815131
   rlist= 1
   rtpi = 0.05
   coulombtype  = PME
   rcoulomb_switch  = 0
   rcoulomb = 1
   vdwtype  = Cut-off
   rvdw_switch  = 0
   rvdw = 1.4
   epsilon_r= 1
   epsilon_rf   = 1
   tabext   = 1
   implicit_solvent = No
   gb_algorithm = Still
   gb_epsilon_solvent   = 80
   nstgbradii   = 1
   rgbradii = 2
   gb_saltconc  = 0
   gb_obc_alpha = 1
   gb_obc_beta  = 0.8
   gb_obc_gamma = 4.85
   sa_surface_tension   = 2.092
   DispCorr = No
   free_energy  = no
   init_lambda  = 0
   sc_alpha = 0
   sc_power = 0
   sc_sigma = 0.3
   delta_lambda = 0
   nwall= 0
   wall_type= 9-3
   wall_atomtype[0] = -1
   wall_atomtype[1] = -1
   wall_density[0]  = 0
   wall_density[1]  = 0
   wall_ewald_zfac  = 3
   pull = no
   disre= No
   disre_weighting  = Conservative
   disre_mixed  = FALSE
   dr_fc= 1000
   dr_tau   = 0
   nstdisreout  = 

Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Mark Abraham
On 06/11/09, Thamu  asth...@gmail.com wrote:
 
 Hi Mark,
 
 The top md.log is below. The mdrun command was mpirun -np 8 
 ~/software/bin/mdrun_mpi -deffnm md
In my experience, correctly-configured MPI gromacs running in parallel reports 
information about the number of nodes and the identity of the node writing the 
.log file. This is missing, so something is wrong with your setup.

I've assumed that you've compared this 8-processor runtime with a 
single-processor runtime and found them comparable...

Mark
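
As a concrete check: the header of a 4.0.x log written by a genuine MPI run normally
identifies the writing rank and the total number of nodes, roughly as in the comment
below (exact wording varies between versions). If nothing of the sort appears, the
binary was probably built without MPI:

  head -n 5 md.log
  # expect something along the lines of:
  #   Log file opened on ...  Host: node01  pid: 12345  nodeid: 0  nnodes: 8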

 
 
 

RE: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread jimkress_58
Mark is correct.  You should see node information at the top of the md log
file if you are truly running in parallel.

Apparently the default host (or machines) file (which contains the list of
available nodes on your cluster) has not been, or is not being, populated
correctly.

You can build your own hostfile and then rerun the job using the command
line:

mpirun -np 8 -hostfile hostfile ~/software/bin/mdrun_mpi -deffnm md

The content and structure of the hostfile will depend on what version of MPI
you are using.  Hopefully, you are not using MPICH 1 but instead are using
OpenMPI, MPICH2, or Intel MPI.

Jim
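
For illustration, a hostfile to go with the command above could look like one of
the following; the hostnames and slot counts are placeholders, and the exact syntax
depends on the MPI implementation:

  # OpenMPI-style hostfile
  node01 slots=4
  node02 slots=4

  # MPICH2-style machinefile (one host:processes entry per line)
  node01:4
  node02:4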


-Original Message-
From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org]
On Behalf Of Thamu
Sent: Thursday, June 11, 2009 9:13 AM
To: gmx-users@gromacs.org
Subject: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

Hi Mark,

The top md.log is below. The mdrun command was mpirun -np 8
~/software/bin/mdrun_mpi -deffnm md



Re: [gmx-users] gromacs in parallel version

2009-03-09 Thread Diego Enry Gomes

Looks like you are using MPICH2 as your MPI software.
Try including mpirun before mdrun_mpi:

mpirun -n 4 mdrun_mpi -v -s topol.tpr


If that doesn't work, you should start the MPI daemon (MPD) before  
mpirun:


mpdboot
mpirun -n 2 mdrun_mpi -v -s topol.tpr

After your job finishes you might want to stop the MPI daemon by running:
mpdallexit

Diego.

--
=
Diego Enry B Gomes | PhD Student @ UFRJ - Brazil
/tmp/home/@ Pacific Northwest National Laboratory
Richland, WA.  +1 (509) 372.6363
diegoenry.go...@pnl.gov
=
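
Putting the MPD steps above together, a minimal session might look like the
following sketch; the secretword, host file and process counts are placeholders
(MPD generally requires a ~/.mpd.conf that is readable only by you):

  echo "MPD_SECRETWORD=changeme" > ~/.mpd.conf
  chmod 600 ~/.mpd.conf
  mpdboot -n 2 -f mpd.hosts              # start MPD daemons on the 2 hosts listed in mpd.hosts
  mpdtrace                               # verify the MPD ring is up
  mpirun -n 4 mdrun_mpi -v -s topol.tpr
  mpdallexit                             # shut the ring down once the job is done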


On Mar 6, 2009, at 5:13 AM, ANINDITA GAYEN wrote:


Dear all,
Sorry for the former post without any subject.
I want to install gromacs in parallel version. I already  
have the normal version of gromacs and i want an MPI version of  
mdrun. The commands i have used are as followed.

make distclean
./configure --enable-float --enable-mpi --prefix=/home/x --program-suffix=_mpi

make mdrun -j 4
make install-mdrun
[ i have installed fftw with --prefix=/home/x/fftw-3.2.1 and  
in .bashrc include ...

   export CPPFLAGS=-I/home/x/fftw-3.2.1/include
   export LDFFLAGS=-L/home/x/fftw-3.2.1/lib]
grmmacs installation run successfully.
But when i run
mdrun_mpi ..i got the message  Can't read  
MPIRUN_MPD and the mdrun_mpi program does not run.

Any suggestion regarding this problem will be highly acceptable.
thanks in advance,

Ms. Anindita Gayen
C/O Dr. Chaitali Mukhopadhyay
Senior Research Fellow
Department of Chemistry
University of Calcutta
92, A. P. C. Road
Kolkata-700 009
India




[gmx-users] gromacs in parallel version

2009-03-06 Thread ANINDITA GAYEN
Dear all,
    Sorry for the former post without any subject.
    I want to install GROMACS in a parallel version. I already have the 
normal version of GROMACS and I want an MPI version of mdrun. The commands I 
have used are as follows.
make distclean
./configure --enable-float --enable-mpi --prefix=/home/x --program-suffix=_mpi
make mdrun -j 4
make install-mdrun
[ I have installed FFTW with --prefix=/home/x/fftw-3.2.1 and in .bashrc include 
...
   export CPPFLAGS=-I/home/x/fftw-3.2.1/include
   export LDFFLAGS=-L/home/x/fftw-3.2.1/lib]
The GROMACS installation ran successfully.
But when I run 
mdrun_mpi ... I got the message "Can't read MPIRUN_MPD" and the 
mdrun_mpi program does not run.
Any suggestion regarding this problem will be highly appreciated.
Thanks in advance,

Ms. Anindita Gayen
C/O Dr. Chaitali Mukhopadhyay
Senior Research Fellow
Department of Chemistry
University of Calcutta
92, A. P. C. Road
Kolkata-700 009
India
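
A note on the environment variables above: the linker search path is normally
passed through LDFLAGS (LDFFLAGS is not read by configure), so the FFTW setup
would usually look like the sketch below, reusing the prefixes from the post.
The "Can't read MPIRUN_MPD" message itself, though, comes from the MPI side
rather than from the build; see the MPD discussion in Diego's reply above.

  export CPPFLAGS=-I/home/x/fftw-3.2.1/include
  export LDFLAGS=-L/home/x/fftw-3.2.1/lib      # LDFLAGS, not LDFFLAGS
  ./configure --enable-float --enable-mpi --prefix=/home/x --program-suffix=_mpi
  make mdrun -j 4
  make install-mdrun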



Re: [gmx-users] gromacs in parallel version

2009-03-06 Thread Justin A. Lemkul



ANINDITA GAYEN wrote:

Dear all,

Sorry for the former post without any subject.
I want to install gromacs in parallel version. I already 
have the normal version of gromacs and i want an MPI version of mdrun. 
The commands i have used are as followed.

*make distclean*
*./configure --enable-float --enable-mpi --prefix=/home/x 
--program-suffix=_mpi*

*make mdrun -j 4*
*make install-mdrun*
[ i have installed fftw with --prefix=/home/x/fftw-3.2.1 and in .bashrc 
include ...

   export CPPFLAGS=-I/home/x/fftw-3.2.1/include
   export LDFFLAGS=-L/home/x/fftw-3.2.1/lib]
grmmacs installation run successfully.
But when i run
mdrun_mpi ..i got the message  Can't read MPIRUN_MPD 
and the mdrun_mpi program does not run.


What command are you actually issuing to run mdrun_mpi?  This sounds like more 
of an MPI environment problem, not a Gromacs problem.


-Justin


Any suggestion regarding this problem will be highly acceptable.
thanks in advance,

Ms. Anindita Gayen
C/O Dr. Chaitali Mukhopadhyay
Senior Research Fellow
Department of Chemistry
University of Calcutta
92, A. P. C. Road
Kolkata-700 009
India





--


Justin A. Lemkul
Graduate Research Assistant
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] gromacs-4.0.2, parallel performance in two quadcore xeon machines

2009-03-03 Thread Antoine FORTUNE
Hi Nikos,

I experienced the same kind of thing with a Core i7 on one node and a
quad-core on a second node (GROMACS 4.0.3).
Running on 8 threads (i7) or 4 cores in a single node is 30% faster than
8 or 12 cores on 2 nodes. I noticed that my gigabit switch is not
limiting bandwidth using OpenMPI with rsh (~30 Gbps/70 max).
Running on a single node, the CPU is 100% used by user (mdruns), while using 2
nodes each CPU is only 50% used by user, the remaining 50% being used by
system. The top command shows 4 mdrun jobs using 100% CPU.
I guess the system usage is for network transfers ... Using ssh, system
usage is about the same and bandwidth is doubled.

Any ideas about that system activity and how to reduce it?

Thanks



Berk Hess wrote:
 Hi,

 Oops, I meant 72000, which is only a factor of 10.
 I guess it might be faster one two nodes then, but probably not 2 times.
 If you use PME you can also experiment with putting all the PME nodes
 on one machine and the non-PME nodes on the other,
 probably with mdrun -ddorder pp_pme

 Gromacs supports near to maxint atoms.
 The question is much more what kind of system size you are
 scientifically interested in.

 Ethernet will never scale very well for small numbers of atoms per core.
 Infiniband will scale very well.

 Berk


 
 Date: Wed, 18 Feb 2009 12:56:16 -0800
 From: lastexile...@yahoo.de
 Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in two
 quad core xeon machines
 To: gmx-users@gromacs.org

 Hello,

 thank you for your answer. I am just wondering, though: how am I supposed
 to have a system with more than 99999 atoms, while the gro file has a
 fixed format giving up to 5 digits in the number of atoms?


 What else should I change in order to get better performance from
 my hardware if I can manage a much bigger system? You say that
 ethernet has reached its limits.

 I was considering using a supercomputing center in Europe, and as far
 as I know they are using nodes with the 9-core Cell processor
 technology. How can someone there achieve better performance with
 GROMACS 4 using more nodes? What might be the limit on such machines?

 Thank you once again,
 Nikos

 --- Berk Hess g...@hotmail.com wrote on Wed, 18.2.2009:

 From: Berk Hess g...@hotmail.com
 Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in
 two quad core xeon machines
 To: lastexile...@yahoo.de
 Date: Wednesday, 18 February 2009, 19:16

 Hi,

 You can not scale a system of just 7200 atoms
 to 16 cores which are connected by ethernet.
 400 atoms per core is already the scaling limit of Gromacs
 on current hardware with the fastest available network.

 On ethernet a system 100 times as large might scale well to two nodes.

 Berk



 Date: Wed, 18 Feb 2009 09:40:28 -0800
 From: lastexile...@yahoo.de
 To: gmx-users@gromacs.org
 Subject: [gmx-users] gromacs-4.0.2, parallel performance in two
 quad core xeon machines

 Hello,

 we have built a cluster with nodes that are comprised by the
 following: dual core Intel(R) Xeon(R) CPU E3110 @ 3.00GHz. The
 memory of each node has 16Gb of memory. The switch that we use is
 a dell power connect model. Each node has a Gigabyte ethernet card..

 I tested the performance for a system of 7200 atoms in 4cores of
 one node, in 8 cores of one node and in 16 cores of two nodes. In
 one node the performance is getting better.
 The problem I get is that moving from one node to two, the
 performance decreases dramatically (almost two days for a run that
 finishes in less than 3 hours!).

 I have compiled gromacs with --enable-mpi option. I also have read
 previous archives from Mr Kurtzner, yet from what I saw is that
 they are focused on errors in gromacs 4 or on problems that
 previous versions of gromacs had. I get no errors, just low
 performance.

 Is there any option that I must enable in order to succeed better
 performance in more than one nodes?  Or do you think according to
 your experience that the switch we use might be the problem? Or
 maybe should we have to activate anything from the nodes?

 Thank you in advance,
 Nikos


[gmx-users] gromacs-4.0.2, parallel performance in two quad core xeon machines

2009-02-18 Thread Claus Valka
Hello,

we have built a cluster whose nodes are comprised of the following: dual-core 
Intel(R) Xeon(R) CPU E3110 @ 3.00GHz. Each node has 16 GB of memory. The 
switch that we use is a Dell PowerConnect model. Each node has a Gigabit 
Ethernet card.

I tested the performance for a system of 7200 atoms on 4 cores of one node, on 8 
cores of one node and on 16 cores of two nodes. Within one node the performance 
keeps improving.
The problem I get is that moving from one node to two, the performance 
decreases dramatically (almost two days for a run that otherwise finishes in less 
than 3 hours!).

I have compiled GROMACS with the --enable-mpi option. I have also read previous 
archives from Mr Kutzner, yet from what I saw they are focused on 
errors in GROMACS 4 or on problems that previous versions of GROMACS had. I get 
no errors, just low performance.

Is there any option that I must enable in order to get better performance 
on more than one node? Or do you think, according to your experience, that the 
switch we use might be the problem? Or maybe do we have to activate 
anything on the nodes?

Thank you in advance,
Nikos





RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core xeon machines

2009-02-18 Thread Claus Valka
Hello,

thank you for your answer. I am just wondering, though: how am I supposed to 
have a system with more than 99999 atoms, while the gro file has a fixed format 
giving up to 5 digits in the number of atoms?

What else should I change in order to get better performance from my 
hardware if I can manage a much bigger system? You say that ethernet 
has reached its limits. I was considering using a supercomputing center in 
Europe, and as far as I know they are using nodes with the 9-core Cell 
processor technology. How can someone there achieve better performance with 
GROMACS 4 using more nodes? What might be the limit on such machines?

Thank you once again,
Nikos
--- Berk Hess g...@hotmail.com wrote on Wed, 18.2.2009:
From: Berk Hess g...@hotmail.com
Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core 
xeon machines
To: lastexile...@yahoo.de
Date: Wednesday, 18 February 2009, 19:16






 
Hi,

You can not scale a system of just 7200 atoms
to 16 cores which are connected by ethernet.
400 atoms per core is already the scaling limit of Gromacs
on current hardware with the fastest available network.

On ethernet a system 100 times as large might scale well to two nodes.

Berk


Date: Wed, 18 Feb 2009 09:40:28 -0800
From: lastexile...@yahoo.de
To: gmx-users@gromacs.org
Subject: [gmx-users] gromacs-4.0.2, parallel performance in two quad core 
xeon machines 

Hello,

we have built a cluster with nodes that are comprised by the following: dual 
core Intel(R) Xeon(R) CPU E3110 @ 3.00GHz. The memory of each node has 16Gb of 
memory. The switch that we use is a dell power connect model. Each node has a 
Gigabyte ethernet card..

I tested the performance for a system of 7200 atoms in 4cores of one node, in 8 
cores of one node and in 16 cores of two nodes. In one node the performance is 
getting better.
The problem I get is that moving from one node to two, the performance 
decreases dramatically (almost two days for a run that finishes in less than 3 
hours!).

I have compiled gromacs with --enable-mpi option. I also have read previous 
archives from Mr Kurtzner, yet from what I saw is that they are focused on 
errors in gromacs 4 or on problems that previous versions of gromacs had. I get 
no errors, just low
 performance.

Is there any option that I must enable in order to succeed better performance 
in more than one nodes?  Or do you think according to your experience that the 
switch we use might be the problem? Or maybe should we have to activate 
anything from the nodes?

Thank you in advance,
Nikos




RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core xeon machines

2009-02-18 Thread Berk Hess

Hi,

Oops, I meant 72000, which is only a factor of 10.
I guess it might be faster on two nodes then, but probably not 2 times faster.
If you use PME you can also experiment with putting all the PME nodes
on one machine and the non-PME nodes on the other,
probably with mdrun -ddorder pp_pme

Gromacs supports near to maxint atoms.
The question is much more what kind of system size you are scientifically 
interested in.

Ethernet will never scale very well for small numbers of atoms per core.
Infiniband will scale very well.

Berk
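
As a concrete example of the suggestion above, a 16-core run over the two nodes
could dedicate a block of ranks to PME and keep them on one machine (the -npme
value is only illustrative and needs tuning per system):

  mpirun -np 16 mdrun_mpi -deffnm md -npme 8 -ddorder pp_pme

For scale: 7200 atoms over 16 cores is only 450 atoms per core, right at the
~400 atoms per core that Berk gives as the scaling limit, whereas 72000 atoms
over 16 cores gives 4500 atoms per core and therefore far more room to scale.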


Date: Wed, 18 Feb 2009 12:56:16 -0800
From: lastexile...@yahoo.de
Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core 
xeon machines
To: gmx-users@gromacs.org

Hello,

thank you for your answer. I just wondering though. How am I supposed to have a 
system with more than 99999 atoms, while the gro file has a fixed format giving 
up to 5 digits in the number of atoms? 


What else should I change in order to succeed better performance from my 
hardware if I can succeed having a much bigger system? You say so that ethernet 
has reached its limits.. 

I was concidering using a supercomputing center in Europe and as far as I know 
they are using nodes which are using the Cell 9 core processors technology in 
each node. How someone there can accomplish a better performance using gromacs 
4 using more nodes? Which might be the limit there in such machines.  

Thank you once again,
Nikos

--- Berk Hess g...@hotmail.com wrote on Wed,
 18.2.2009:
From: Berk Hess g...@hotmail.com
Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core 
xeon machines
To: lastexile...@yahoo.de
Date: Wednesday, 18 February 2009, 19:16





 
Hi,

You can not scale a system of just 7200 atoms
to 16 cores which are connected by ethernet.
400 atoms per core is already the scaling limit of Gromacs
on current hardware with the fastest available network.

On ethernet a system 100 times as large might scale well to two nodes.

Berk


Date: Wed, 18 Feb 2009 09:40:28 -0800
From: lastexile...@yahoo.de
To: gmx-users@gromacs.org
Subject: [gmx-users] gromacs-4.0.2, parallel performance in two quad core 
xeon machines 

Hello,

we have built a cluster with nodes that are comprised by the following: dual 
core Intel(R) Xeon(R) CPU E3110 @ 3.00GHz. The memory of each node has 16Gb of
 memory. The switch that we use is a dell power connect model. Each node has a 
Gigabyte ethernet card..

I tested the performance for a system of 7200 atoms in 4cores of one node, in 8 
cores of one node and in 16 cores of two nodes. In one node the performance is 
getting better.
The problem I get is that moving from one node to two, the performance 
decreases dramatically (almost two days for a run that finishes in less than 3 
hours!).

I have compiled gromacs with --enable-mpi option. I also have read previous 
archives from Mr Kurtzner, yet from what I saw is that they are focused on 
errors in gromacs 4 or on problems that previous versions of gromacs had. I get 
no errors, just low
 performance.

Is there any option that I must enable in order to succeed better performance 
in more than one nodes?  Or do you think according to your experience that the 
switch we use might be the problem? Or maybe should we have to activate 
anything from the nodes?

Thank you in advance,
Nikos




[gmx-users] GROMACS in parallel on a multicore PC?

2007-11-21 Thread Vasilii Artyukhov
Hi everybody,

Sorry for a somewhat technical question, but I'd like to know which is the
best way to run GROMACS on a SMP machine (in particular, a multicore PC).
The (known) points of interest are:

- Does GROMACS support multithreaded execution, and how efficient is it?

- Should I rather use some kind of MPI, and which (LAM/Open/MPICH) is better, and 
why?

Surely, there's always the option to run two serial jobs instead with a
greater efficiency, but having some means to boost the single job
performance by something like 1.9x would be very useful...

Thanks in advance,

Vasilii

Re: [gmx-users] GROMACS in parallel on a multicore PC?

2007-11-21 Thread Carsten Kutzner
Hi Vasilii,


Vasilii Artyukhov wrote:
 Hi everybody,
 
 Sorry for a somewhat technical question, but I'd like to know which is
 the best way to run GROMACS on a SMP machine (in particular, a multicore
 PC). The (known) points of interest are:
 
 - Does GROMACS support multithreaded execution and how efficient is it?
As far as I know this is planned, but not supported yet.

 
 - Should I rather use some kind of MPI and which (LAM/Open/MPICH) is
 better and why?
Yes. This is the way to go. On an SMP box you will probably not see
large differences between the different MPI implementations, but why not
try a few? My own experience is that LAM tends to be the fastest, partly
due to the memory manager it uses.

Carsten

 Surely, there's always the option to run two serial jobs instead with a
 greater efficiency, but having some means to boost the single job
 performance by something like 1.9x would be very useful...
 
 Thanks in advance,
 
 Vasilii
 
 
 
 

-- 
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne
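
A minimal single-box MPI run along these lines, shown with LAM/MPI since that is
what Carsten found fastest (the core count is illustrative; with OpenMPI or MPICH2
only the boot/teardown steps differ):

  lamboot                                # start LAM; on a typical install this boots only the local host
  mpirun -np 4 mdrun_mpi -v -deffnm md
  lamhalt                                # shut the LAM daemons down afterwards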


Re: [gmx-users] GROMACS in parallel on a multicore PC?

2007-11-21 Thread Vasilii Artyukhov
Thanks for the quick response :)
2007/11/21, Carsten Kutzner [EMAIL PROTECTED] :

 Hi Vasilii,


  - Does GROMACS support multithreaded execution and how efficient is it?
 As far as I know this is planned, but not supported yet.

Ok, but what about the underlying math libs? Would using threaded libs boost
performance to any noticeable extent?

Re: [gmx-users] GROMACS in parallel on a multicore PC?

2007-11-21 Thread David van der Spoel

Vasilii Artyukhov wrote:

Thanks for the quick response :)
2007/11/21, Carsten Kutzner [EMAIL PROTECTED]:

Hi Vasilii,


  - Does GROMACS support multithreaded execution and how efficient is it?
As far as I know this is planned, but not supported yet.

Ok, but what about the underlying math libs? Would using threaded libs 
boost performance to any noticeable extent?

No. We're not using any.









--
David van der Spoel, Ph.D.
Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205. Fax: +4618511755.
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se


Re: [gmx-users] GROMACS in parallel on a multicore PC?

2007-11-21 Thread Yang Ye

On 11/21/2007 7:33 PM, Vasilii Artyukhov wrote:


Hi everybody,

Sorry for a somewhat technical question, but I'd like to know which is 
the best way to run GROMACS on a SMP machine (in particular, a 
multicore PC). The (known) points of interest are:


- Does GROMACS support multithreaded execution and how efficient is it?


No.


- Should I rather use some kind of MPI and which (LAM/Open/MPICH) is 
better and why?



Yes. This is the only way.


Surely, there's always the option to run two serial jobs instead with 
a greater efficiency, but having some means to boost the single job 
performance by something like 1.9x would be very useful...


Thanks in advance,

Vasilii







[gmx-users] GRomacs 3.3.1 parallel run

2007-07-31 Thread fabio tombolato
Good morning, I started using GROMACS only a few months ago, doing MD on
proteins in membranes.
I'm using GROMACS 3.3.1 in parallel but I have some problems. I'm
doing my simulations on a 70-node cluster, using 4 nodes (16
processors). The system uses PBS Torque and Scali MPI libraries.
When I run my jobs, the jobs start with no problems, even though the
following error message appears:
 Jul 30 15:56:05: ([EMAIL PROTECTED])(2688)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(2690)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(4367)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(4370)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(4369)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(4368)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(2687)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(2689)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(698)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(32587)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(696)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(32584)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(32585)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(32586)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(695)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
Jul 30 15:56:05: ([EMAIL PROTECTED])(697)
Mutable error: Opening configfile: /opt/scali/etc/iba_params.conf
failed: No such file or directory
NNODES=16, MYRANK=9, HOSTNAME=avogadro-61.n16.chimica.unipd.it
NNODES=16, MYRANK=11, HOSTNAME=avogadro-61.n16.chimica.unipd.it
NNODES=16, MYRANK=8, HOSTNAME=avogadro-61.n16.chimica.unipd.it
NNODES=16, MYRANK=10, HOSTNAME=avogadro-61.n16.chimica.unipd.it
NNODES=16, MYRANK=7, HOSTNAME=avogadro-60.n16.chimica.unipd.it
NNODES=16, MYRANK=5, HOSTNAME=avogadro-60.n16.chimica.unipd.it
NNODES=16, MYRANK=0, HOSTNAME=avogadro-22.n16.chimica.unipd.it
NNODES=16, MYRANK=4, HOSTNAME=avogadro-60.n16.chimica.unipd.it
NNODES=16, MYRANK=2, HOSTNAME=avogadro-22.n16.chimica.unipd.it
NNODES=16, MYRANK=15, HOSTNAME=avogadro-62.n16.chimica.unipd.it
NNODES=16, MYRANK=13, HOSTNAME=avogadro-62.n16.chimica.unipd.it
NNODES=16, MYRANK=6, HOSTNAME=avogadro-60.n16.chimica.unipd.it
NNODES=16, MYRANK=3, HOSTNAME=avogadro-22.n16.chimica.unipd.it
NNODES=16, MYRANK=1, HOSTNAME=avogadro-22.n16.chimica.unipd.it
NODEID=2 argc=12
NODEID=1 argc=12
NODEID=0 argc=12
NODEID=3 argc=12
 :-)  G  R  O  M  A  C  S  (-:

   Groningen Machine for Chemical Simulation

:-)  VERSION 3.3.1  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2006, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  mdrun-0(mpi:[EMAIL PROTECTED])  (-:

Option Filename  Type Description
..

When I want to delete a job, I use qdel; at this point the job
disappears from the queue, but apparently the job is still running on the
processors. The output files are not updated and I have to log into the
nodes and force-kill the processes with kill -9.
This also happens if MD starts but crashes within a few seconds due, for
example, to two overlapping atoms. The job disappears from the queue but is
apparently still running on the nodes.
The error message is the following:

--- mpimon --- Aborting run after interrupt ---
Jul 30 15:57:07: 
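
Until the queue/MPI integration is sorted out, the orphaned processes described
above can be cleaned up by hand from the job's node list; a rough sketch for
Torque/PBS, assuming passwordless ssh to the nodes and that the job's
$PBS_NODEFILE was saved before qdel (adjust the process name to your binary):

  for h in $(sort -u saved_nodefile); do
      ssh "$h" 'pkill -9 mdrun'
  done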

[gmx-users] GROMACS 3.3.1 Parallel Run

2007-04-04 Thread Sunny

Hi,

I am trying to run my simulation using parallel GROMACS 3.3.1 on a cluster with 
Linux 2.6.9-42.0.3.ELsmp, gcc 3.4.6 and Scali MPI. My simulation aborts 
with the following error message:


--- mpimon --- Aborting run after process-3 terminated abnormally 
Childprocess 26151 exited with exitcode 0 ---



Each time it reports a different process being terminated such as process-1 
or process-10 in addition to the process-3 as above.


The Gmx examples such as tutor/water can be run on this system.

Also, my simulation has successfully run on another system under Gmx 3.3.1 
parallel run.


I'd like to know what causes the abort in my simulation on this system. 
Is it caused by Scali MPI?


Thanks,

Sunny





Re: [gmx-users] GROMACS 3.3.1 Parallel Run

2007-04-04 Thread Mark Abraham

Sunny wrote:

Hi,

I am trying to run my simulation using parallel Gmx 3.3.1 on a cluster 
with Linux 2.6.9-42.0.3.ELsmp, gcc 3.4.6 and ScaliMPI. My simulation 
causes abortion with the following error message:


--- mpimon --- Aborting run after process-3 terminated abnormally 
Childprocess 26151 exited with exitcode 0 ---



Each time it reports a different process being terminated such as 
process-1 or process-10 in addition to the process-3 as above.


The Gmx examples such as tutor/water can be run on this system.


So what's different between them and your attempted simulation?

Also, my simulation has successfully run on another system under Gmx 
3.3.1 parallel run.


I'd like to know what causes the abortion in my simulation on this 
system. Is it caused by ScaliMPI?


We can't tell yet.

Mark


[gmx-users] GROMACS in parallel

2006-06-14 Thread Akshay Patny








Dear Sir



I am trying to install GROMACS. I have tried to install
the program and it goes through okay. However, when I try to compile the program in
parallel using the --enable-mpi option, it gives me an error:
"Cannot compile and link MPI code with cc"



See below for the command and the error.



Can you suggest what I can do to fix the same?

_

redwood r0914/gromacs-3.3.1 ./configure
--prefix=/ptmp/r0914/gromacsp --enable-mpi



checking build system type... ia64-unknown-linux-gnu

checking host system type... ia64-unknown-linux-gnu

checking for a BSD-compatible install... /usr/bin/install -c

checking whether build environment is sane... yes

checking for gawk... gawk

checking whether make sets $(MAKE)... yes

checking how to create a ustar tar archive... cpio

checking for cc... cc

checking for C compiler default output file name... a.out

checking whether the C compiler works... yes

checking whether we are cross compiling... no

checking for suffix of executables... 

checking for suffix of object files... o

checking whether we are using the GNU C compiler... yes

checking whether cc accepts -g... yes

checking for cc option to accept ANSI C... none needed

checking for style of include used by make... GNU

checking dependency style of cc... gcc3

checking for mpxlc... no

checking for mpicc... no

checking for mpcc... no

checking for hcc... no

checking whether the MPI cc command works... configure:
error: Cannot compile and link MPI code with cc

_



Regards

Akshay





Akshay Patny

Graduate Research Assistant
Faser Hall 417, Department of Medicinal Chemistry

Research Institute of Pharmaceutical Sciences
University of Mississippi
University, MS 38677
E-mail: [EMAIL PROTECTED]
Tel: 662-915-1286 (office); Web: www.olemiss.edu 









Re: [gmx-users] GROMACS in parallel

2006-06-14 Thread Mark Abraham

Akshay Patny wrote:

Dear Sir

 

I am trying to install GROMACS. I have tried to install the program and it 
goes through okay. However, when I try to compile the program in 
parallel using the --enable-mpi option, it gives me an error: "Cannot 
compile and link MPI code with cc"


 


See below for the command and the error.

 


Can you suggest what I can do to fix the same?


Use an MPI compiler. Find out what it is called on your system and take 
steps to use it - see ./configure --help for some options that can help 
here.


Mark
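
In practice that usually amounts to something like the following, assuming the
cluster provides an mpicc wrapper (module or path details depend on the site):

  which mpicc                   # confirm an MPI compiler wrapper is available
  export CC=mpicc
  ./configure --prefix=/ptmp/r0914/gromacsp --enable-mpi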