Hi all

My apologies for the lack of detail in my previous e-mail. I am trying
to run GROMACS 4.0.7 for a system that I am studying. I have run several
simulations in serial on my own computer, and to date they have worked fine.
I am now trying to run the simulations in parallel on our local cluster
using mpich-1.2.7 and am experiencing some difficulty. Please note
that the version of GROMACS mentioned above was built with parallel support.
When I run a short simulation of 500 steps on one, two or three nodes, the
simulation runs fine (it takes about 10 seconds) and all the data is written
to the log file. However, when I increase the node count to 4, no stepwise
info is written and the simulation does not progress. For clarity I have
attached the log file I am getting for the 4-node simulation. I realise that
this may be a cluster problem, but if anyone has experienced similar
issues I would be grateful for some feedback.

Here is the script I use:

#!/bin/bash
#PBS -N hex
#PBS -r n
#PBS -q longterm
#PBS -l walltime=00:30:00
#PBS -l nodes=4

cd $PBS_O_WORKDIR
export P4_GLOBMEMSIZE=100000000

/usr/local/bin/mpiexec mdrun -s
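
(One thing worth checking, as a sketch only: the script above never tells
mpiexec how many processes to start, so the launched rank count may not match
the 4 nodes PBS allocated. A variant that derives the count from
$PBS_NODEFILE and passes it explicitly is below; the -n flag is an assumption
about this particular mpiexec build, so adjust to -np or a machinefile if
yours differs.)

```shell
#!/bin/bash
#PBS -N hex
#PBS -r n
#PBS -q longterm
#PBS -l walltime=00:30:00
#PBS -l nodes=4

cd $PBS_O_WORKDIR
export P4_GLOBMEMSIZE=100000000

# Count the processors PBS actually granted instead of hard-coding 4,
# so the launch count always matches the reservation.
NP=$(wc -l < "$PBS_NODEFILE")

# Pass the process count explicitly; with MPICH1's ch_p4 device, relying
# on defaults can start a different number of ranks than expected.
/usr/local/bin/mpiexec -n "$NP" mdrun -s
```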

Also, here is my environment setup:
# Gromacs
export GMXLIB=/k/gavin/gromacs-4.0.7-parallel/share/gromacs/top
export PATH="$PATH:/k/gavin/gromacs-4.0.7-parallel/bin"


Cheers

Gavin

Log file opened on Wed Mar  3 14:46:51 2010
Host: kari57  pid: 32586  nodeid: 0  nnodes:  4
The Gromacs distribution was built Wed Jan 20 10:02:46 GMT 2010 by
ga...@kari (Linux 2.6.17asc64 x86_64)


                         :-)  G  R  O  M  A  C  S  (-:

                   GROningen MAchine for Chemical Simulation

                            :-)  VERSION 4.0.7  (-:


      Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
       Copyright (c) 1991-2000, University of Groningen, The Netherlands.
             Copyright (c) 2001-2008, The GROMACS development team,
            check out http://www.gromacs.org for more information.

         This program is free software; you can redistribute it and/or
          modify it under the terms of the GNU General Public License
         as published by the Free Software Foundation; either version 2
             of the License, or (at your option) any later version.

                                :-)  mdrun  (-:


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

parameters of the run:
   integrator           = md
   nsteps               = 500
   init_step            = 0
   ns_type              = Grid
   nstlist              = 10
   ndelta               = 2
   nstcomm              = 1
   comm_mode            = Linear
   nstlog               = 25
   nstxout              = 25
   nstvout              = 25
   nstfout              = 25
   nstenergy            = 25
   nstxtcout            = 0
   init_t               = 0
   delta_t              = 0.002
   xtcprec              = 1000
   nkx                  = 35
   nky                  = 35
   nkz                  = 35
   pme_order            = 4
   ewald_rtol           = 1e-05
   ewald_geometry       = 0
   epsilon_surface      = 0
   optimize_fft         = FALSE
   ePBC                 = xyz
   bPeriodicMols        = FALSE
   bContinuation        = FALSE
   bShakeSOR            = FALSE
   etc                  = Nose-Hoover
   epc                  = Parrinello-Rahman
   epctype              = Isotropic
   tau_p                = 1
   ref_p (3x3):
      ref_p[    0]={ 1.01325e+00,  0.00000e+00,  0.00000e+00}
      ref_p[    1]={ 0.00000e+00,  1.01325e+00,  0.00000e+00}
      ref_p[    2]={ 0.00000e+00,  0.00000e+00,  1.01325e+00}
   compress (3x3):
      compress[    0]={ 4.50000e-05,  0.00000e+00,  0.00000e+00}
      compress[    1]={ 0.00000e+00,  4.50000e-05,  0.00000e+00}
      compress[    2]={ 0.00000e+00,  0.00000e+00,  4.50000e-05}
   refcoord_scaling     = No
   posres_com (3):
      posres_com[0]= 0.00000e+00
      posres_com[1]= 0.00000e+00
      posres_com[2]= 0.00000e+00
   posres_comB (3):
      posres_comB[0]= 0.00000e+00
      posres_comB[1]= 0.00000e+00
      posres_comB[2]= 0.00000e+00
   andersen_seed        = 815131
   rlist                = 1.5
   rtpi                 = 0.05
   coulombtype          = PME
   rcoulomb_switch      = 0
   rcoulomb             = 1.5
   vdwtype              = Switch
   rvdw_switch          = 1.2
   rvdw                 = 1.4
   epsilon_r            = 1
   epsilon_rf           = 1
   tabext               = 1
   implicit_solvent     = No
   gb_algorithm         = Still
   gb_epsilon_solvent   = 80
   nstgbradii           = 1
   rgbradii             = 2
   gb_saltconc          = 0
   gb_obc_alpha         = 1
   gb_obc_beta          = 0.8
   gb_obc_gamma         = 4.85
   sa_surface_tension   = 2.092
   DispCorr             = No
   free_energy          = no
   init_lambda          = 0
   sc_alpha             = 0
   sc_power             = 0
   sc_sigma             = 0.3
   delta_lambda         = 0
   nwall                = 0
   wall_type            = 9-3
   wall_atomtype[0]     = -1
   wall_atomtype[1]     = -1
   wall_density[0]      = 0
   wall_density[1]      = 0
   wall_ewald_zfac      = 3
   pull                 = no
   disre                = No
   disre_weighting      = Conservative
   disre_mixed          = FALSE
   dr_fc                = 1000
   dr_tau               = 0
   nstdisreout          = 100
   orires_fc            = 0
   orires_tau           = 0
   nstorireout          = 100
   dihre-fc             = 1000
   em_stepsize          = 0.01
   em_tol               = 10
   niter                = 20
   fc_stepsize          = 0
   nstcgsteep           = 1000
   nbfgscorr            = 10
   ConstAlg             = Lincs
   shake_tol            = 1e-04
   lincs_order          = 4
   lincs_warnangle      = 30
   lincs_iter           = 1
   bd_fric              = 0
   ld_seed              = 1993
   cos_accel            = 0
   deform (3x3):
      deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   userint1             = 0
   userint2             = 0
   userint3             = 0
   userint4             = 0
   userreal1            = 0
   userreal2            = 0
   userreal3            = 0
   userreal4            = 0
grpopts:
   nrdf:       13821
   ref_t:         300
   tau_t:         0.1
anneal:          No
ann_npoints:           0
   acc:            0           0           0
   nfreeze:           N           N           N
   energygrp_flags[  0]: 0
   efield-x:
      n = 0
   efield-xt:
      n = 0
   efield-y:
      n = 0
   efield-yt:
      n = 0
   efield-z:
      n = 0
   efield-zt:
      n = 0
   bQMMM                = FALSE
   QMconstraints        = 0
   QMMMscheme           = 0
   scalefactor          = 1
qm_opts:
   ngQM                 = 0

Initializing Domain Decomposition on 4 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 0.867 nm, Bond, atoms 535 536
  multi-body bonded interactions: 0.867 nm, Fourier Dih., atoms 508 515
Minimum cell size due to bonded interactions: 0.953 nm
Using 0 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 4 cells with a minimum initial size of 1.191 nm
The maximum allowed number of cells is: X 8 Y 8 Z 8
Domain decomposition grid 4 x 1 x 1, separate PME nodes 0
Domain decomposition nodeid 0, coordinates 0 0 0

Using two step summing over 2 groups of on average 2.0 processes

Table routines are used for coulomb: TRUE
Table routines are used for vdw:     TRUE
Will do PME sum in reciprocal space.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essman, L. Perela, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen 
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Using a Gaussian width (1/beta) of 0.480244 nm for Ewald
Using shifted Lennard-Jones, switch between 1.2 and 1.4 nm
Cut-off's:   NS: 1.5   Coulomb: 1.5   LJ: 1.4
System total charge: -0.000
Generated table with 1250 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1250 data points for LJ6Switch.
Tabscale = 500 points/nm
Generated table with 1250 data points for LJ12Switch.
Tabscale = 500 points/nm
Configuring nonbonded kernels...
Testing x86_64 SSE support... present.


Removing pbc first time

Linking all bonded interactions to atoms
There are 11072 inter charge-group exclusions,
will use an extra communication step for exclusion forces for PME

The initial number of communication pulses is: X 1
The initial domain decomposition cell size is: X 2.50 nm

The maximum allowed distance for charge groups involved in interactions is:
                 non-bonded interactions           1.500 nm
(the following are initial values, they could change due to box deformation)
            two-body bonded interactions  (-rdd)   1.500 nm
          multi-body bonded interactions  (-rdd)   1.500 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1
The minimum size for domain decomposition cells is 1.500 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.60
The maximum allowed distance for charge groups involved in interactions is:
                 non-bonded interactions           1.500 nm
            two-body bonded interactions  (-rdd)   1.500 nm
          multi-body bonded interactions  (-rdd)   1.500 nm


Making 1D domain decomposition grid 4 x 1 x 1, home cell index 0 0 0

Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest
There are: 4608 Atoms
Charge group distribution at step 0: 35 734 751 16
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php