Sikandar Mashayak wrote:
It happens immediately at step 0, and the log file looks like:
That suggests to me that the system is inherently unstable, which can occur for
a variety of reasons.
http://www.gromacs.org/Documentation/Terminology/Blowing_Up
A few more comments below.
Input Parameters:
integrator = md
nsteps = 20000
init_step = 0
ns_type = Grid
nstlist = 10
ndelta = 2
nstcomm = 0
comm_mode = Linear
nstlog = 1000
nstxout = 600
nstvout = 600
nstfout = 600
nstenergy = 1000
nstxtcout = 1000
init_t = 0
delta_t = 0.001
xtcprec = 10000
nkx = 44
nky = 42
nkz = 120
pme_order = 4
ewald_rtol = 1e-05
ewald_geometry = 1
epsilon_surface = -1
optimize_fft = FALSE
ePBC = xyz
bPeriodicMols = FALSE
bContinuation = FALSE
bShakeSOR = FALSE
etc = Nose-Hoover
epc = No
epctype = Isotropic
tau_p = 1
ref_p (3x3):
ref_p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref_p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref_p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress (3x3):
compress[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord_scaling = No
posres_com (3):
posres_com[0]= 0.00000e+00
posres_com[1]= 0.00000e+00
posres_com[2]= 0.00000e+00
posres_comB (3):
posres_comB[0]= 0.00000e+00
posres_comB[1]= 0.00000e+00
posres_comB[2]= 0.00000e+00
andersen_seed = 815131
rlist = 1.1
rtpi = 0.05
coulombtype = PME
rcoulomb_switch = 0
rcoulomb = 1.1
vdwtype = Cut-off
rvdw_switch = 0
rvdw = 1.1
epsilon_r = 1
epsilon_rf = 1
tabext = 1
implicit_solvent = No
gb_algorithm = Still
gb_epsilon_solvent = 80
nstgbradii = 1
rgbradii = 2
gb_saltconc = 0
gb_obc_alpha = 1
gb_obc_beta = 0.8
gb_obc_gamma = 4.85
sa_surface_tension = 2.092
DispCorr = EnerPres
free_energy = no
init_lambda = 0
sc_alpha = 0
sc_power = 0
sc_sigma = 0.3
delta_lambda = 0
nwall = 0
wall_type = 9-3
wall_atomtype[0] = -1
wall_atomtype[1] = -1
wall_density[0] = 0
wall_density[1] = 0
wall_ewald_zfac = 3
pull = no
disre = No
disre_weighting = Conservative
disre_mixed = FALSE
dr_fc = 1000
dr_tau = 0
nstdisreout = 100
orires_fc = 0
orires_tau = 0
nstorireout = 100
dihre-fc = 1000
em_stepsize = 0.01
em_tol = 100
niter = 20
fc_stepsize = 0
nstcgsteep = 1000
nbfgscorr = 10
ConstAlg = Lincs
shake_tol = 0.0001
lincs_order = 4
lincs_warnangle = 30
lincs_iter = 1
bd_fric = 0
ld_seed = 1993
cos_accel = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 0 12822
ref_t: 0 300
tau_t: 0 0.2
anneal: No No
ann_npoints: 0 0
acc: 0 0 0
nfreeze: Y Y Y N N N
energygrp_flags[ 0]: 1 0
energygrp_flags[ 1]: 0 0
efield-x:
n = 0
efield-xt:
n = 0
efield-y:
n = 0
efield-yt:
n = 0
efield-z:
n = 0
efield-zt:
n = 0
bQMMM = FALSE
QMconstraints = 0
QMMMscheme = 0
scalefactor = 1
qm_opts:
ngQM = 0
Table routines are used for coulomb: TRUE
Table routines are used for vdw: FALSE
Will do PME sum in reciprocal space.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------
Using the Ewald3DC correction for systems with a slab geometry.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
I.-C. Yeh and M. L. Berkowitz
Ewald summation for systems with slab geometry
J. Chem. Phys. 111 (1999) pp. 3155-3162
-------- -------- --- Thank You --- -------- --------
Using a Gaussian width (1/beta) of 0.352179 nm for Ewald
Cut-off's: NS: 1.1 Coulomb: 1.1 LJ: 1.1
System total charge: 0.000
Generated table with 4200 data points for Ewald.
Tabscale = 2000 points/nm
Generated table with 4200 data points for LJ6.
Tabscale = 2000 points/nm
Generated table with 4200 data points for LJ12.
Tabscale = 2000 points/nm
Enabling SPC water optimization for 2137 molecules.
Configuring nonbonded kernels...
Testing ia32 SSE2 support... present.
Removing pbc first time
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- --- Thank You --- -------- --------
There are: 7699 Atoms
Max number of connections per atom is 2
Total number of connections is 8548
Max number of graph edges per atom is 2
Total number of graph edges is 8548
Constraining the starting coordinates (step 0)
Constraining the coordinates at t0-dt (step 0)
RMS relative constraint deviation after constraining: 0.00e+00
Initial temperature: 304.27 K
Started mdrun on node 0 Thu May 20 16:33:38 2010
Step Time Lambda
0 0.00000 0.00000
Grid: 5 x 5 x 14 cells
Long Range LJ corr.: <C6> 1.1930e-03
Long Range LJ corr.: Epot -415.301, Pres: -51.4754, Vir: 415.301
Energies (kJ/mol)
LJ (SR) Disper. corr. Coulomb (SR) Coul. recip. Potential
6.44979e+04 -4.15301e+02 -1.00185e+05 -4.94669e+03 -4.10490e+04
Kinetic En. Total Energy Conserved En. Temperature Pressure (bar)
1.72141e+04 -2.38349e+04 -2.38347e+04 3.22940e+02 1.36187e+04
Here's where I see the biggest problems. Your actual temperature is far greater
than the desired reference temperature, and the pressure is astronomical. These
two facts suggest that your system is shearing apart. Is this the first MD run
for this system? Have you done prior equilibration? If you haven't stably
minimized and equilibrated the system, using the Nose-Hoover thermostat is a bad
idea. The temperature of a system that is far from equilibrium will fluctuate
unpredictably under Nose-Hoover. It is better to use a weak-coupling scheme
(e.g. Berendsen or V-rescale) to equilibrate the system, then switch to
Nose-Hoover for data collection.
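As a rough sketch, the switch amounts to changing a few .mdp lines for the
equilibration run (the group name and coupling constants below are only
placeholder values; use whatever matches your topology):

; Equilibration sketch: weak coupling instead of Nose-Hoover.
; tc-grps and the time constants here are illustrative, not prescriptive.
integrator  = md
dt          = 0.001
tcoupl      = v-rescale      ; or berendsen
tc-grps     = System
tau-t       = 0.1
ref-t       = 300

; Once the system is stable, switch back for production:
; tcoupl    = nose-hoover
; tau-t     = 0.2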
Other than that, see the diagnostic tips at the link above.
-Justin
On Thu, May 20, 2010 at 4:30 PM, Justin A. Lemkul <jalem...@vt.edu> wrote:
Sikandar Mashayak wrote:
Hi
I have GROMACS input files for an MD simulation. With these setup
files (*.mdp, *.top, *.itp, *.gro), I can successfully run grompp
and mdrun on one machine, but when I move them to another machine, I
get a segmentation fault when I run mdrun. Both machines have
exactly the same installation of GROMACS 4.0.7. Also, I
can run the water tutorials successfully on both machines.
So what could be the source of the segmentation fault?
MD is chaotic, so you may not get the same result every time you run
a simulation. Since you've not said how quickly the seg fault
occurs, it is exceptionally hard to diagnose. Generally, seg faults
with mdrun occur because the system crashes from an instability.
Without substantially more information (system contents, .mdp
settings, relevant log file output, etc.) there is not much more to
suggest.
-Justin
thanks
sikandar
--
========================================
Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
========================================
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the www
interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php