[gmx-users] parallel simulations
Hi all,

My apologies for the lack of detail in my previous e-mail. I am trying to run gromacs-4.0.7 for a system that I am studying. I have run several simulations in serial on my own computer, and to date they have worked fine. I am now, however, trying to run the simulations in parallel on our local cluster using mpich-1.2.7, and I am experiencing some difficulty. Please note that the version of GROMACS mentioned above was compiled with MPI support for parallel runs.

When I run a short simulation of 500 steps on one, two, or three nodes, the simulation runs fine (it takes about 10 seconds) and all the data is written to the log file. However, when I increase the node count to 4, no stepwise information is written and the simulation does not progress. For clarity, I have attached the log file that I get for the 4-node run. I realise that this may be a cluster problem, but if anyone has experienced similar issues I would be grateful for some feedback.

Here is the script I use:

#!/bin/bash
#PBS -N hex
#PBS -r n
#PBS -q longterm
#PBS -l walltime=00:30:00
#PBS -l nodes=4
cd $PBS_O_WORKDIR
export P4_GLOBMEMSIZE=1
/usr/local/bin/mpiexec mdrun -s

Also, here is my path setup:

# Gromacs
export GMXLIB=/k/gavin/gromacs-4.0.7-parallel/share/gromacs/top
export PATH="$PATH:/k/gavin/gromacs-4.0.7-parallel/bin"

Cheers,
Gavin

Log file opened on Wed Mar 3 14:46:51 2010
Host: kari57  pid: 32586  nodeid: 0  nnodes: 4
The Gromacs distribution was built Wed Jan 20 10:02:46 GMT 2010 by
ga...@kari (Linux 2.6.17asc64 x86_64)

                 :-)  G  R  O  M  A  C  S  (-:

           GROningen MAchine for Chemical Simulation

                      VERSION 4.0.7

Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or (at
your option) any later version.

                        :-)  mdrun  (-:

[standard GROMACS citation notices omitted]
parameters of the run:
   integrator           = md
   nsteps               = 500
   init_step            = 0
   ns_type              = Grid
   nstlist              = 10
   ndelta               = 2
   nstcomm              = 1
   comm_mode            = Linear
   nstlog               = 25
   nstxout              = 25
   nstvout              = 25
   nstfout              = 25
   nstenergy            = 25
   nstxtcout            = 0
   init_t               = 0
   delta_t              = 0.002
   xtcprec              = 1000
   nkx                  = 35
   nky                  = 35
   nkz                  = 35
   pme_order            = 4
   ewald_rtol           = 1e-05
   ewald_geometry       = 0
   epsilon_surface      = 0
   optimize_fft         = FALSE
   ePBC                 = xyz
   bPeriodicMols        = FALSE
   bContinuation        = FALSE
   bShakeSOR            = FALSE
   etc                  = Nose-Hoover
   epc                  = Parrinello-Rahman
   epctype              = Isotropic
   tau_p                = 1
   ref_p (3x3):
      ref_p[0]={ 1.01325e+00, 0.0e+00, 0.0e+00}
      ref_p[1]={ 0.0e+00, 1.01325e+00, 0.0e+00}
      ref_p[2]={ 0.0e+00, 0.0e+00, 1.01325e+00}
   compress (3x3):
      compress[0]={ 4.5e-05, 0.0e+00, 0.0e+00}
      compress[1]={ 0.0e+00, 4.5e-05, 0.0e+00}
      compress[2]={ 0.0e+00, 0.0e+00, 4.5e-05}
   refcoord_scaling     = No
   posres_com (3):
      posres_com[0]= 0.0e+00
      posres_com[1]= 0.0e+00
      posres_com[2]= 0.0e+
[log truncated here]
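In case it helps anyone spot the problem, here is the variant of the submission script I was planning to try next. It rests on two guesses: that our mpiexec needs to be told the process count explicitly (the -n flag is an assumption for our local launcher; an MPICH mpirun would take -np $NPROCS -machinefile $PBS_NODEFILE instead), and that P4_GLOBMEMSIZE=1 is far too small, since as I understand it that variable is a byte count for MPICH's p4 shared-memory segment. The 32 MB value and the explicit topol.tpr name (the mdrun -s default) are also unconfirmed guesses, not a tested fix:

#!/bin/bash
#PBS -N hex
#PBS -r n
#PBS -q longterm
#PBS -l walltime=00:30:00
#PBS -l nodes=4

cd $PBS_O_WORKDIR

# P4_GLOBMEMSIZE is (as I understand it) a byte count for MPICH's p4
# shared-memory segment; 1 byte looks far too small, so try ~32 MB.
export P4_GLOBMEMSIZE=33554432

# Count the processors PBS actually allocated (one line per processor
# in the node file) and pass that to the launcher explicitly, rather
# than hoping it infers the right number on its own.
NPROCS=$(wc -l < $PBS_NODEFILE)

# Name the .tpr file explicitly instead of relying on the topol.tpr
# default. The -n flag is a guess for our local mpiexec.
/usr/local/bin/mpiexec -n $NPROCS mdrun -s topol.tpr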