Hi,
the compiler could be the problem:
on the local machine it was gcc 4.3.3 (Ubuntu 4.3.3-5ubuntu4);
on the cluster it was some icc compiler (sorry, I don't know which version, but we also used the Intel MKL 10.0.011 libraries). The parallel and serial versions were, however, compiled with the same settings:

./configure CC="icc" CPPFLAGS="-I/share/apps/intel/mkl/10.0.011/include" \
  LDFLAGS="-L/share/apps/intel/mkl/10.0.011/lib/em64t \
  -lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64 \
  -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread" --with-fft=mkl \
  --prefix="/share/apps/gromacs/4.0.5"
make
make install
make clean
./configure CC="icc" CPPFLAGS="-I/share/apps/intel/mkl/10.0.011/include" \
  LDFLAGS="-L/share/apps/intel/mkl/10.0.011/lib/em64t \
  -lmkl_solver_lp64_sequential -Wl,--start-group -lmkl_intel_lp64 \
  -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread" --with-fft=mkl \
  --prefix="/share/apps/gromacs/4.0.5" --enable-mpi --disable-nice \
  --program-suffix=_mpi
make mdrun
make install-mdrun
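
The parallel runs then use the MPI binary, started roughly like this (only a sketch; the launcher, process count and -deffnm name are examples, not the exact command we used):

# parallel run with the _mpi binary built above; the serial runs use plain mdrun
mpirun -np 4 mdrun_mpi -deffnm nvt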

I will now try to compile GROMACS 4.0.5 on the cluster with a gcc compiler. It is probably also best to try 4.0.7 with the icc compiler.
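
Roughly, the gcc build would be configured like this (only a sketch; falling back to the default FFTW library instead of MKL and the separate install prefix are my assumptions, nothing we have run yet):

# hypothetical gcc comparison build; uses the default FFT library instead of MKL
./configure CC="gcc" --prefix="/share/apps/gromacs/4.0.5-gcc"
make
make install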

Below are the mdp and log files (the input, 2 steps of actual output, and the averages) from one run (SPC water, with the normal spc.itp and ffG53a5):

MDP-file:
title           = OPLS Lysozyme NVT equilibration
; Run parameters
integrator      = md
nsteps          = 250000
dt              = 0.002
; Output control
nstxout         = 2500
nstvout         = 0
nstenergy       = 2500
nstlog          = 2500
; Bond parameters
continuation    = no
constraint_algorithm = lincs
constraints     = all-bonds
lincs_iter      = 1
lincs_order     = 4
; Neighborsearching
ns_type         = grid
nstlist         = 5
rlist           = 1.4
rcoulomb        = 1.4
rvdw            = 1.4
; Electrostatics
coulombtype     = PME
pme_order       = 4
fourierspacing  = 0.16
; Temperature coupling is on
tcoupl          = V-rescale
ld-seed         = -1
tc-grps         = system
tau_t           = 0.1
ref_t           = 300
; Pressure coupling is off
pcoupl          = no
; Periodic boundary conditions
pbc             = xyz
; Dispersion correction
DispCorr        = EnerPres
; Velocity generation
gen_vel         = yes
gen_temp        = 300
gen_seed        = -1

LOG-file
Input Parameters:
   integrator           = md
   nsteps               = 250000
   init_step            = 0
   ns_type              = Grid
   nstlist              = 5
   ndelta               = 2
   nstcomm              = 1
   comm_mode            = Linear
   nstlog               = 2500
   nstxout              = 2500
   nstvout              = 0
   nstfout              = 0
   nstenergy            = 2500
   nstxtcout            = 0
   init_t               = 0
   delta_t              = 0.002
   xtcprec              = 1000
   nkx                  = 25
   nky                  = 20
   nkz                  = 20
   pme_order            = 4
   ewald_rtol           = 1e-05
   ewald_geometry       = 0
   epsilon_surface      = 0
   optimize_fft         = FALSE
   ePBC                 = xyz
   bPeriodicMols        = FALSE
   bContinuation        = FALSE
   bShakeSOR            = FALSE
   etc                  = V-rescale
   epc                  = No
   epctype              = Isotropic
   tau_p                = 1
   ref_p (3x3):
      ref_p[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      ref_p[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      ref_p[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   compress (3x3):
      compress[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      compress[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      compress[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   refcoord_scaling     = No
   posres_com (3):
      posres_com[0]= 0.00000e+00
      posres_com[1]= 0.00000e+00
      posres_com[2]= 0.00000e+00
   posres_comB (3):
      posres_comB[0]= 0.00000e+00
      posres_comB[1]= 0.00000e+00
      posres_comB[2]= 0.00000e+00
   andersen_seed        = 815131
   rlist                = 1.4
   rtpi                 = 0.05
   coulombtype          = PME
   rcoulomb_switch      = 0
   rcoulomb             = 1.4
   vdwtype              = Cut-off
   rvdw_switch          = 0
   rvdw                 = 1.4
   epsilon_r            = 1
   epsilon_rf           = 1
   tabext               = 1
   implicit_solvent     = No
   gb_algorithm         = Still
   gb_epsilon_solvent   = 80
   nstgbradii           = 1
   rgbradii             = 2
   gb_saltconc          = 0
   gb_obc_alpha         = 1
   gb_obc_beta          = 0.8
   gb_obc_gamma         = 4.85
   sa_surface_tension   = 2.092
   DispCorr             = EnerPres
   free_energy          = no
   init_lambda          = 0
   sc_alpha             = 0
   sc_power             = 0
   sc_sigma             = 0.3
   delta_lambda         = 0
   nwall                = 0
   wall_type            = 9-3
   wall_atomtype[0]     = -1
   wall_atomtype[1]     = -1
   wall_density[0]      = 0
   wall_density[1]      = 0
   wall_ewald_zfac      = 3
   pull                 = no
   disre                = No
   disre_weighting      = Conservative
   disre_mixed          = FALSE
   dr_fc                = 1000
   dr_tau               = 0
   nstdisreout          = 100
   orires_fc            = 0
   orires_tau           = 0
   nstorireout          = 100
   dihre-fc             = 1000
   em_stepsize          = 0.01
   em_tol               = 10
   niter                = 20
   fc_stepsize          = 0
   nstcgsteep           = 1000
   nbfgscorr            = 10
   ConstAlg             = Lincs
   shake_tol            = 0.0001
   lincs_order          = 4
   lincs_warnangle      = 30
   lincs_iter           = 1
   bd_fric              = 0
   ld_seed              = 703303
   cos_accel            = 0
   deform (3x3):
      deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   userint1             = 0
   userint2             = 0
   userint3             = 0
   userint4             = 0
   userreal1            = 0
   userreal2            = 0
   userreal3            = 0
   userreal4            = 0
grpopts:
   nrdf:        7101
   ref_t:         300
   tau_t:         0.1
anneal:          No
ann_npoints:           0
   acc:            0           0           0
   nfreeze:           N           N           N
   energygrp_flags[  0]: 0
   efield-x:
      n = 0
   efield-xt:
      n = 0
   efield-y:
      n = 0
   efield-yt:
      n = 0
   efield-z:
      n = 0
   efield-zt:
      n = 0
   bQMMM                = FALSE
   QMconstraints        = 0
   QMMMscheme           = 0
   scalefactor          = 1
qm_opts:
   ngQM                 = 0
Table routines are used for coulomb: TRUE
Table routines are used for vdw:     FALSE
Will do PME sum in reciprocal space.

*snip*

Using a Gaussian width (1/beta) of 0.448228 nm for Ewald
Cut-off's:   NS: 1.4   Coulomb: 1.4   LJ: 1.4
System total charge: 0.000
Generated table with 1200 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ12.
Tabscale = 500 points/nm

Enabling SPC water optimization for 1184 molecules.

Configuring nonbonded kernels...
Testing x86_64 SSE support... present.


Removing pbc first time

*snip*

There are: 3552 Atoms
Max number of connections per atom is 2
Total number of connections is 4736
Max number of graph edges per atom is 2
Total number of graph edges is 4736

Constraining the starting coordinates (step 0)

Constraining the coordinates at t0-dt (step 0)
RMS relative constraint deviation after constraining: 0.00e+00
Initial temperature: 303.041 K

Started mdrun on node 0 Thu Mar  4 11:20:10 2010

           Step           Time         Lambda
              0        0.00000        0.00000

Grid: 6 x 4 x 4 cells
Long Range LJ corr.: <C6> 2.9082e-04
Long Range LJ corr.: Epot   -77.7923, Pres:   -71.7651, Vir:    77.7923
   Energies (kJ/mol)
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.      Potential
    1.13442e+04   -7.77923e+01   -7.05664e+04   -1.53951e+03   -6.08396e+04
    Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
    8.93685e+03   -5.19027e+04   -5.18779e+04    3.02732e+02   -2.59307e+03

           Step           Time         Lambda
           2500        5.00000        0.00000

   Energies (kJ/mol)
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.      Potential
    6.18686e+03   -7.77923e+01   -4.50870e+04   -1.36173e+03   -4.03397e+04
    Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
    1.25168e+04   -2.78229e+04    1.22037e+05    4.24002e+02    2.25670e+03

*snip*

        <======  ###############  ==>
        <====  A V E R A G E S  ====>
        <==  ###############  ======>

   Energies (kJ/mol)
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.      Potential
    6.15659e+03   -7.77923e+01   -4.59217e+04   -1.32129e+03   -4.11642e+04
    Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
    1.26205e+04   -2.85438e+04    9.37069e+06    4.27512e+02    1.82506e+03


Greetings
Thomas





Hi,

Can you also post your .mdp?

Ran

Berk Hess wrote:
Hi,

I have never heard about problems like this before.
It seems highly unlikely to me that the innerloops are causing this.

Are you running exactly the same tpr file on your local machine
and on the cluster?

You probably want to update to version 4.0.7 to be sure you have
all the latest bugfixes.

Please keep us updated on this issue, since things like this should
never happen (unless there is a compiler bug).

Berk

Date: Fri, 5 Mar 2010 14:41:02 +0100
From: schl...@uni-mainz.de
To: gmx-users@gromacs.org
Subject: Re: [gmx-users] Turn-off water optimisation

Hi,
I have the following problem (GROMACS 4.0.5):

When I simulate water in serial on our cluster with the Berendsen or
v-rescale thermostat, I get too high temperatures (300 K goes up to
around 425 K in a very short time). If I simulate in parallel, or on my
local machine, there are no problems. Also, if I change water to another
molecule there are no such problems. (I use the same mdp file for all
the simulations.)

Because the problem appears with water (SPC and TIP4P) but not with
mesitylene, I thought the special treatment of water (SETTLE, and
so on) could be the problem, so I wanted to simulate water without that
fancy stuff.
Thanks for the info about the environment variable, but where can I
set it?
For the other problem (why it works on the cluster in parallel but not
in serial, yet works on the local PC in serial) I have so far no idea
where to look. But first I would be happy to know whether the problem
comes from the special water loops.

Thomas



Hi,

You don't want to mess with the topology; you will be simulating a
quite different system when you turn off constraints. Also, GROMACS
does not optimize based on names, since the name might not say anything
about the molecule.
I don't know what effect of which optimizations you want to test,
but setting the environment variable GMX_NO_SOLV_OPT will turn off
the special inner-loops for water.
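
For example, in a bash shell something like this before starting the run should do it (only a sketch; the -deffnm run name is an example, not from the original post):

export GMX_NO_SOLV_OPT=1   # just being set should be enough to disable the special water loops
mdrun -deffnm nvt          # then run the serial mdrun as usual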

Berk
Date: Fri, 5 Mar 2010 11:31:50 +0100
From: schlesi at uni-mainz.de
To: gmx-users at gromacs.org
Subject: [gmx-users] Turn-off water optimisation

Dear all,
I simulated water (SPC) with the ffG53a5 force field. For testing
purposes I want to turn off the water optimisation. How do I do this?
So far I have tried:
* constraints = none
* define = -DFLEXIBLE
* taking the spc.itp file, deleting all the entries for SETTLE and the
other force fields, and changing the resname from SOL to WAT (also
spc.itp -> wat.itp)

But every time, the log file still contains this line:
Enabling SPC water optimization for 1184 molecules.

Especially with the last option (changing spc.itp) I don't see how
GROMACS recognises that I am simulating SPC water, because it now has a
different name.

Thanks for your help in advance.
Greetings
Thomas