[gmx-users] segmentation fault with mdrun

2012-08-21 Thread Deepak Ojha
Dear All,

I am trying to perform a simulation of an azide ion in water with
Gromacs. I generated the topology file with the PRODRG server for the
azide ion and ran the calculations. I got one warning at the grompp
level, which was:

  327 non-matching atom names
  atom names from azide.top will be used
  atom names from azide.gro will be ignored

I continued with -maxwarn and performed energy minimization, which
went smoothly. However, as soon as I started equilibration in an NVT
run using mdrun, it crashed with a segmentation fault. Please help me
to locate the error. I went through the previous mails on the mailing
list but could not sort it out.


The topology file is:

; Include forcefield parameters
#include "ffG43a1.itp"

; Include azide topology
#include "azide.itp"

; Include water topology
#include "spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;   i  funct  fcx   fcy   fcz
    1      1  1000  1000  1000
#endif

; Include generic topology for ions
#include "ions.itp"

[ system ]
; Name
azide in water

[ molecules ]
; Compound      #mols
SOL   108
AZI   1


and the itp file for azide, which I made with PRODRG, is:

[ moleculetype ]
; Name nrexcl
AZI  3

[ atoms ]
;  nr  type  resnr  resid  atom  cgnr  charge     mass
    1     N      1    AZI    N1     1  -1.000  14.0067
    2     N      1    AZI    N2     1   2.000  14.0067
    3     N      1    AZI    N3     1  -1.000  14.0067

[ bonds ]
; ai  aj  fu  c0, c1, ...
   2   1   2  0.112  4527362.4  0.112  4527362.4  ; N2  N1
   2   3   2  0.112  4527362.4  0.112  4527362.4  ; N2  N3

[ pairs ]
; ai  aj  fuc0, c1, ...

[ angles ]
; ai  aj  ak  fu  c0, c1, ...
   1   2   3   2  180.0  41840001.2  180.0  41840001.2  ; N1  N2  N3

[ dihedrals ]
; ai  aj  ak  al  fu  c0, c1, m, ...

--

Deepak Ojha
School Of Chemistry

Selfishness is not living as one wishes to live, it is asking others
to live as one wishes to live

Re: [gmx-users] segmentation fault with mdrun

2012-08-21 Thread Justin Lemkul



On 8/21/12 6:00 AM, Deepak Ojha wrote:

snip

Don't use -maxwarn unless you know exactly why you're doing it.  The fact that 
you have 327 non-matching names and 327 atoms in the system (108*3 + 3) suggests 
that the contents of your coordinate file do not match those of the topology in 
terms of the order of the [molecules] section.  Likely your azide should be 
listed first, presumably because you took the coordinate file for this molecule 
and then solvated it.
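
If the azide does indeed come first in azide.gro, the fix is simply to swap the 
two entries in the topology (illustrative; check the actual order in your .gro 
file):

[ molecules ]
; Compound      #mols
AZI             1
SOL             108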


Also beware that PRODRG topologies are notoriously unreliable and that linear 
molecules should not be constructed in this way (180 degree angles are not 
stable).  See, for instance, the following tutorial for a more robust method:


http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/vsites/index.html
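
For a linear triatomic like azide, the construction used in that tutorial 
amounts to something like the sketch below.  The numbers are purely 
illustrative (the mass split just moves the central nitrogen's mass to the 
terminals, and the constraint length is twice the 0.112 nm N-N bond from the 
posted topology); a real topology would need proper parameterization:

[ atoms ]
;  nr  type  resnr  resid  atom  cgnr  charge     mass
    1     N      1    AZI    N1     1  -1.000  21.0100  ; terminal N, carries half of N2's mass
    2     N      1    AZI    N2     1   2.000   0.0000  ; central N, massless virtual site
    3     N      1    AZI    N3     1  -1.000  21.0100  ; terminal N, carries half of N2's mass

[ constraints ]
; ai  aj  funct  length (nm)
   1   3      1  0.224         ; N1-N3 fixed at twice the N-N bond length

[ virtual_sites2 ]
; site  ai  aj  funct  a
     2   1   3      1  0.5     ; N2 reconstructed at the midpoint of N1-N3

Because the central atom is rebuilt from the terminal positions at every step, 
the unstable 180-degree angle term disappears from the topology altogether.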

-Justin



--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread rainy908
Dear gmx-users:

Thanks, Justin, for your help.  But now I am experiencing a segmentation fault 
when executing mdrun.  I've perused the archives but found none of the threads 
on segmentation faults similar to my case here.  I believe the segmentation 
fault is caused by the awkward positioning of atoms 8443 and 8446 with respect 
to one another, but am not 100% sure.  Any advice would be especially welcome.

My files are as follows:

md.mdp

title   = 1JFF MD
cpp = /lib/cpp ; location of cpp on SGI
constraints = all-bonds
integrator  = md
dt  = 0.0001 ; ps
nsteps  = 25000 ;
nstcomm = 1
nstxout = 500 ; output coordinates every 1.0 ps
nstvout = 0
nstfout = 0
nstlist = 10
ns_type = grid
rlist   = 0.9
coulombtype = PME
rcoulomb= 0.9
rvdw= 1.0
fourierspacing  = 0.12
fourier_nx= 0
fourier_ny= 0
fourier_nz= 0
pme_order = 6
ewald_rtol= 1e-5
optimize_fft  = yes
; Berendsen temperature coupling is on in four groups
Tcoupl= berendsen
tau_t = 0.1
tc-grps   = system
ref_t = 310
; Pressure coupling is on
Pcoupl  = berendsen
pcoupltype  = isotropic
tau_p   = 0.5
compressibility = 4.5e-5
ref_p   = 1.0
; Generate velocities is on at 310 K.
gen_vel = yes
gen_temp = 310.0
gen_seed = 173529




error output file:

..
..
Back Off! I just backed up md.log to ./#md.log.1#
Getting Loaded...
Reading file 1JFF_md.tpr, VERSION 4.5.3 (single precision)
Starting 8 threads
Loaded with Money
 
Making 3D domain decomposition 2 x 2 x 2
 
Back Off! I just backed up 1JFF_md.trr to ./#1JFF_md.trr.1#

Back Off! I just backed up 1JFF_md.edr to ./#1JFF_md.edr.1#

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.046849, max 1.014038 (between atoms 8541 and 8539)

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.001453, max 0.034820 (between atoms 315 and 317)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.048739, max 1.100685 (between atoms 8422 and 8421)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
..
..
snip
..
..

starting mdrun 'TUBULIN ALPHA CHAIN'
25000 steps, 50.0 ps.
Warning: 1-4 interaction between 8443 and 8446 at distance 2.853 which is 
larger than the 1-4 table size 2.000 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size   
..
..
snip
..
..
step 0: Water molecule starting at atom 23781 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Back Off! I just backed up step0b_n0.pdb to ./#step0b_n0.pdb.1#

Back Off! I just backed up step0b_n1.pdb to ./#step0b_n1.pdb.1#

Back Off! I just backed up step0b_n5.pdb to ./#step0b_n5.pdb.2#

Back Off! I just backed up step0b_n3.pdb to ./#step0b_n3.pdb.2#

Back Off! I just backed up step0c_n0.pdb to ./#step0c_n0.pdb.1#

Back Off! I just backed up step0c_n1.pdb to ./#step0c_n1.pdb.1#

Back Off! I just backed up step0c_n5.pdb to ./#step0c_n5.pdb.2#

Back Off! I just backed up step0c_n3.pdb to ./#step0c_n3.pdb.2#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
step 0
/opt/sge/jacobson/spool/node-2-05/job_scripts/1097116: line 21:  1473 
Segmentation fault  (core dumped) $MDRUN -machinefile $TMPDIR/machines -np 
$NSLOTS $MDRUN -v -nice 0 -np $NSLOTS -s 1JFF_md.tpr -o 1JFF_md.trr -c 
1JFF_pmd.gro -x 1JFF_md.xtc -e 1JFF_md.edr



On 16 August 2011 10:58, Justin A. Lemkul jalem...@vt.edu wrote:



rainy908 wrote:

Hi,

I get the error "Atomtype CR1 not found" when I execute grompp.  After 
perusing the gmx archives, I understand this error has to do with CR1 not 
being specified in the force field.  However, I did include the appropriate 
.itp files in my .top file (shown below).  As you can see, obviously CR1 is 
specified in taxol.itp and gtp.itp.  Therefore, I'm not sure what exactly the 
problem is here.


You're mixing and matching force fields.  

Re: [gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread Justin A. Lemkul



rainy908 wrote:

snip

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.046849, max 1.014038 (between atoms 8541 and 8539)

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.001453, max 0.034820 (between atoms 315 and 317)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length



If mdrun is failing at step 0, it indicates that your system is physically 
unreasonable.  Either the starting configuration has atomic clashes that have 
not been resolved (and thus you need better EM and/or equilibration), or the 
parameters assigned to the molecules in your system are unreasonable.
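
As a generic illustration only (common defaults, not settings prescribed in 
this thread), a standalone steepest-descent stage before any equilibration 
might look like:

integrator  = steep
emtol       = 1000.0   ; kJ/mol/nm
emstep      = 0.01
nsteps      = 50000
constraints = none     ; minimize without constraints so clashes can relax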


-Justin



Re: [gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread rainy908
Hi Justin,

Thanks for the input.  So I traced back to my energy minimization steps, and I 
am getting the error message after I execute the following line:

$mdrun -s 1JFF_em.tpr -o 1JFF_em.trr -c 1JFF_b4pr.gro -e em.edr

output:
Back Off! I just backed up md.log to ./#md.log.2#
Reading file 1JFF_em.tpr, VERSION 4.5.3 (single precision)
Starting 24 threads

Will use 15 particle-particle and 9 PME only nodes
This is a guess, check the performance at the end of the log file

---
Program mdrun, VERSION 4.5.3
Source code file: domdec.c, line: 6428

Fatal error:
There is no domain decomposition for 15 nodes that is compatible with the given 
box and a minimum cell size of 2.92429 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

---

I figure the problem must lie within my em.mdp file:

title = 1JFF
cpp = /lib/cpp ; location of cpp on SGI
define = -DFLEX_SPC ; Use Ferguson’s Flexible water model [4]
constraints = none
integrator = steep
dt = 0.001 ; ps !
nsteps = 1
nstlist = 10
ns_type = grid
rlist = 0.9
coulombtype = PME ; Use particle-mesh ewald
rcoulomb = 0.9
rvdw = 1.0
fourierspacing = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 4
ewald_rtol = 1e-5
optimize_fft = yes
;
; Energy minimizing stuff
;
emtol = 1000.0
emstep = 0.01

I figure this is an issue related to PME and the Fourier spacing?

Thanks,

rainy908




Re: [gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread Mark Abraham

On 18/08/2011 2:41 PM, rainy908 wrote:

snip

Fatal error:
There is no domain decomposition for 15 nodes that is compatible with the given 
box and a minimum cell size of 2.92429 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

---

I figure the problem must lie within my em.mdp file:


It could, but if you follow the above advice you will learn about some 
other considerations.
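
For example, the error message's own suggestions can be tried directly; with 
the thread-MPI mdrun of the 4.5 series that would be something like (the 
thread count here is arbitrary):

mdrun -nt 8 -v -s 1JFF_em.tpr -o 1JFF_em.trr -c 1JFF_b4pr.gro -e em.edr

Fewer threads mean fewer, larger domain-decomposition cells, which can satisfy 
the 2.92 nm minimum cell size.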


Mark




Re: [gmx-users] segmentation fault after mdrun

2009-02-23 Thread Carsten Kutzner

On Feb 23, 2009, at 9:22 AM, Nikit sharan wrote:


Dear Justin,

As you suggested I have changed the coulomb type to PME and set
pme_order to 5, as said in the mailing list, as it will increase the

Hi Nikit,

I think pme_order should be an even number in gromacs. You might
want to try with order = 6 then.
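
In .mdp terms:

pme_order = 6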

Carsten



--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






Re: [gmx-users] segmentation fault after mdrun

2009-02-23 Thread Justin A. Lemkul



Nikit sharan wrote:

Dear Justin,

As you suggested I have changed the coulomb type to PME and set
pme_order to 5, as said in the mailing list, as it will increase the


Try an even number.


accuracy.  But then the simulation is not completing; it's taking hours.  
Normally, what's the running time for a lipid bilayer?  I


Timings will be dependent on your hardware, especially the communication between 
nodes in a cluster.


-Justin


didn't find anything wrong with the topology, as it was downloaded from
Tieleman's website; it is also well equilibrated.  Kindly give me your
suggestions.  My mdp file is below:


title= Yo
cpp  = cpp
include  =
define   =
integrator   = md
tinit= 0
dt   = 0.005
nsteps   = 5
init_step= 0
comm-mode= Linear
nstcomm  = 1
comm-grps= system
nstxout  = 200
nstvout  = 0
nstfout  = 0
nstcheckpoint= 1000
nstlog   = 100
nstenergy= 100
nstxtcout= 1000
xtc-precision= 1000
nstlist  = 10
ns_type  = grid
pbc  = xyz
rlist= 1.0
domain-decomposition = no
coulombtype  = PME
rcoulomb-switch  = 0
rcoulomb = 1.0
epsilon-r= 1
vdw-type = Cut-off
rvdw-switch  = 0
rvdw = 1.0
DispCorr = EnerPres
table-extension  = 1
fourierspacing   = 0.12
Pcoupl   = berendsen
Pcoupltype   = semiisotropic
tau_p= 0.5 0.5 0.5
compressibility  = 4.5e-5 4.5e-5 4.5e-5
ref_p= 1.0 1.0 1.0
andersen_seed= 815131



--


Justin A. Lemkul
Graduate Research Assistant
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault after mdrun

2009-02-21 Thread Nikit sharan
Hello sir,

Thanks for your suggestion.  As you suggested, I started the work from
scratch and did the equilibration again; there is nothing wrong with the
topology either, because it was taken from Tieleman's website.  The only
message it gives after the equilibration run is:

Steepest Descents converged to machine precision in 159 steps,
but did not reach the requested Fmax < 1e-05.
Potential Energy  = -3.3737231e+05
Maximum force =  2.6356035e+03 on atom 4665
Norm of force =  1.2290095e+04

Do I have to upgrade to double precision for simulating a DPPC bilayer?
But many people have worked fine with single precision alone.


When I proceeded with the production run, again either it does not complete
the run with 5 steps, or, when I reduced the steps from 5 to 50 to check
the output, it gives me an error saying:

Warning: pressure scaling more than 1%, mu: 1.12479 1.12479 1.00476

Back Off! I just backed up step7018.pdb to ./#step7018.pdb.1#
Wrote pdb files with previous and current coordinates
Segmentation fault 

Since it's giving an error with pressure scaling, is the parameter I need
to adjust ref_p?  By default I set that to 1.0.  What value do I have to
change it to?


I will be really thankful if you could have a look at the segmentation fault also.

title= Yo
cpp  = cpp
include  =
define   =
integrator   = md
tinit= 0
dt   = 0.005
nsteps   = 5
init_step= 0
comm-mode= Linear
nstcomm  = 1
comm-grps= system
nstxout  = 200
nstvout  = 0
nstfout  = 0
nstcheckpoint= 1000
nstlog   = 100
nstenergy= 100
nstxtcout= 1000
xtc-precision= 1000
nstlist  = 10
ns_type  = grid
pbc  = xyz
rlist= 1.0
domain-decomposition = no
coulombtype  = Cut-off
rcoulomb-switch  = 0
rcoulomb = 1.0
epsilon-r= 1
vdw-type = Cut-off
rvdw-switch  = 0
rvdw = 1.0
DispCorr = EnerPres
table-extension  = 1
fourierspacing   = 0.12
Pcoupl   = berendsen
Pcoupltype   = semiisotropic
tau_p= 0.5 0.5 0.5
compressibility  = 4.5e-5 4.5e-5 4.5e-5
ref_p= 1.0 1.0 1.0
andersen_seed= 815131


Re: [gmx-users] segmentation fault after mdrun

2009-02-21 Thread Mark Abraham

Nikit sharan wrote:

Hello sir,

Thanks for your suggestion.  As you suggested, I started the work from
scratch and did the equilibration again; there is nothing wrong with the
topology either, because it was taken from Tieleman's website.  The only
message it gives after the equilibration run is:

Steepest Descents converged to machine precision in 159 steps,
but did not reach the requested Fmax < 1e-05.
Potential Energy  = -3.3737231e+05
Maximum force =  2.6356035e+03 on atom 4665
Norm of force =  1.2290095e+04

Do I have to upgrade to double precision for simulating a DPPC bilayer?
But many people have worked fine with single precision alone.


I would guess not. 1e-5 is an extremely small Fmax. Why are you using 
that value?
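
(For scale, a common minimization target is several orders of magnitude 
larger, e.g.

emtol = 1000.0   ; kJ mol^-1 nm^-1

which steepest descents can actually reach; that value is a typical default, 
not one prescribed in this thread.)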



When I proceeded with the production run, again either it does not complete
the run with 5 steps, or, when I reduced the steps from 5 to 50 to check
the output, it gives me an error saying:

Warning: pressure scaling more than 1%, mu: 1.12479 1.12479 1.00476

Back Off! I just backed up step7018.pdb to ./#step7018.pdb.1#
Wrote pdb files with previous and current coordinates
Segmentation fault 

Since it's giving an error with pressure scaling, is the parameter I need
to adjust ref_p?  By default I set that to 1.0.  What value do I have to
change it to?


Probably this is a symptom of the problem, not the problem itself. See 
http://wiki.gromacs.org/index.php/Errors#Pressure_scaling_more_than_1.25



I will be really thankful if you could have a look at the segmentation fault also.


Segmentation fault just indicates some catastrophic failure.


title= Yo
cpp  = cpp
include  =
define   =
integrator   = md
tinit= 0
dt   = 0.005


See the final line of the above link.
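
(Presumably this refers to the 5 fs timestep quoted above; the conventional 
choice for an atomistic system with constrained bonds would be

dt = 0.002   ; ps

though that is general practice, not advice given explicitly in this thread.)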


snip


Mark


Re: [gmx-users] segmentation fault after mdrun

2009-02-21 Thread Justin A. Lemkul



Mark Abraham wrote:


coulombtype  = Cut-off


In addition to everything Mark said, you should also *never* run a bilayer 
simulation with a plain cutoff scheme.  You will get terrible artefacts.  That 
goes for most other simulations too, but it has been demonstrated very recently 
for bilayers that one should always use PME.
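
In .mdp terms the switch looks something like the following; the cutoff 
mirrors the rvdw/rcoulomb already in the posted file, while fourierspacing and 
pme_order are common choices rather than values prescribed in this thread:

coulombtype     = PME
rcoulomb        = 1.0
fourierspacing  = 0.12
pme_order       = 4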


-Justin








RE : RE : [gmx-users] segmentation fault in mdrun when using PME

2006-05-03 Thread Diane Fournier
It seems that in my case this is a bug (see Bugzilla, bug # 74) related to 
using the Intel Math Kernel Library (MKL) v. 8.0.1 for Fourier transforms.  The 
team managing the Altix are trying different FFT libraries.  Erik Lindahl says 
that using an FFT library that is not optimized for Itanium 2 shouldn't hamper 
performance very much, since FT doesn't represent a very big part of the 
computation.

I think the error I get with the version compiled without Fortran is bogus, 
because this tutorial (John Kerrigan's) has been completed successfully by many 
people, so there shouldn't be any mistakes in the .mdp file.  Also, the runs I 
do with that version have strange output, and anyway I don't get that error 
with the Fortran-enabled version.  A segmentation fault has been documented in 
the position-restrained dynamics stage of this tutorial with gromacs 3.3.0, 
though, and was solved by upgrading to 3.3.1.

An exploding system is often caused by extreme forces due to bad contacts, 
which can be relieved by a minimization step.  Have you run a steepest-descents 
minimization on your system before doing the position-restrained (PR) run?
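
A minimal steepest-descents pass before the PR stage would be something like 
this (3.3-era command forms; the file names are placeholders):

grompp -f em.mdp -c solvated.gro -p topol.top -o em.tpr
mdrun -v -s em.tpr -c minimized.gro -e em.edr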



From: [EMAIL PROTECTED] on behalf of Arneh Babakhani
Date: Tue 2006-05-02 23:21
To: Discussion list for GROMACS users
Subject: Re: RE: [gmx-users] segmentation fault in mdrun when using PME


Hello, I'm experiencing the exact same problem when trying to do some 
restrained molecular dynamics of a small peptide in a water box.  Have you had 
any luck troubleshooting this?  (I've pasted my mdp file below for your 
reference.)  Also running Gromacs 3.3.1.

Arneh

title = ResMD
warnings = 10
cpp = /usr/bin/cpp ; location of cpp on SGI
define = -DPOSRES
constraints = all-bonds
integrator = md
dt = 0.002 ; ps !
nsteps = 25000 ; total 50.0 ps.
nstcomm = 1
nstxout = 500 ; output coordinates every 1.0 ps
nstvout = 1000 ; output velocities every 2.0 ps
nstfout = 0
nstlog = 10
nstenergy = 10
nstlist = 10
ns_type = grid
rlist = 0.9
coulombtype = PME
rcoulomb = 0.9
rvdw = 1.0
fourierspacing = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 6
ewald_rtol = 1e-5
optimize_fft = yes
; Berendsen temperature coupling is on in four groups
Tcoupl = berendsen
tau_t = 0.1 0.1
tc_grps = protein sol
ref_t = 300 300
; Pressure coupling is on
Pcoupl = berendsen
pcoupltype = isotropic
tau_p = 0.5
compressibility = 4.5e-5
ref_p = 1.0
; Generate velocities is on at 300 K.
gen_vel = yes
gen_temp = 300.0
gen_seed = 173529

Diane Fournier wrote: 

 



From: [EMAIL PROTECTED] on behalf of David van der Spoel
Date: Mon 2006-05-01 13:33
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] segmentation fault in mdrun when using PME




Have you enabled fortran at the compilation stage?  In that case try it
without; otherwise please file a bugzilla, so that we can document
this problem (and try to fix it, of course).

  

Still doesn't work.  The log file ends in the usual way, except this time I 
get this output:



Reading file trp_em.tpr, VERSION 3.3.1 (single precision)

Steepest Descents:
Tolerance (Fmax) = 1.0e+03
Number of steps = 500

Warning: 1-4 interaction between 1 and 7 at distance 39513.957 which is 
larger than the 1-4 table size 1.000 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file

Wrote pdb files with previous and current coordinates

Back Off! I just backed up step0.pdb to ./#step0.pdb.1#

Wrote pdb files with previous and current coordinates

and then these files get written:

step0.pdb
#step0.pdb.1#
step-1.pdb
step1.pdb

Will file a bugzilla.
  





RE: [gmx-users] segmentation fault in mdrun when using PME

2006-05-01 Thread Diane Fournier



-Original Message-
From: [EMAIL PROTECTED] on behalf of David van der Spoel
Sent: Sat 4/29/2006 2:25 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] segmentation fault in mdrun when using PME
 
Diane Fournier wrote:
Hi, I'm new to Gromacs and I'm trying to run an enzyme-ligand complex 
molecular dynamics simulation.  I have tried doing John Kerrigan's Drug-Enzyme 
tutorial, and mdrun crashes with a segmentation fault and core dump at the 
steepest-descents minimization step.  However, mdrun works fine when using 
cutoff instead of PME.

I'm working with Gromacs v. 3.3.1 on an SGI Altix 3700 with 32 Intel Itanium 2 
processors (but I'm currently using a single node, so it's not an MPI problem) 
under Red Hat Enterprise Linux AS release 3, with Intel Math Kernel Library 
(MKL) v. 8.0.1 as the FFT library (which is optimized for Itanium 2).
  
 the em.mdp file looks like:
  
 title   =  drg_trp
 cpp =  /usr/bin/cpp
 define  =  -DFLEX_SPC
 constraints =  none
 integrator  =  steep
 dt  =  0.002; ps !
 nsteps  =  500
 nstlist =  10
 ns_type =  grid
 rlist   =  0.9
 coulombtype =  PME
 rcoulomb=  0.9
 rvdw=  0.9
 fourierspacing  =  0.12
 fourier_nx  =  0
 fourier_ny  =  0
 fourier_nz  =  0
 pme_order   =  4
 ewald_rtol  =  1e-5
 optimize_fft=  yes
 ;
 ;   Energy minimizing stuff
 ;
 emtol   =  1000.0
 emstep  =  0.01
Is it possible this could be related to insufficient memory allocation?  How 
demanding is this PME calculation?
Not likely a memory problem. It could be a compiler issue but we need 
more info! Where does it crash? Is it reproducible? DOes the same tpr 
file cause a crash on another architecture (e.g. your desktop)?

I installed gromacs 3.3.1 on my desktop (a Pentium 4 under Linux Fedora Core 4, 
using the fftw3 Fourier transform library).  I used the .tpr file generated on 
the altix in mdrun there and it worked fine.

When I run that same file on the altix, it crashes every time without any 
iteration in the .log file:

Removing pbc first time
Done rmpbc
Initiating Steepest Descents
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest, initial mass: 14580
Started Steepest Descents on node 0 Mon May  1 11:47:39 2006

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
  --- Thank You ---  


After that the file ends. There is no other error message than segmentation 
fault with core dump.

The compilers that are used on the altix are:
   C++ Version 9:  9.0-023 - 9.0-031
   C++ Version 8:  8.1-033 - 8.1-036
   Fortran 9:      9.0-021 - 9.0-032
   Fortran 8:      8.1-029 - 8.1-033
   IPP:            4.1 - 5.0


  
 
 
 
 


-- 
David.

David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se


Re: [gmx-users] segmentation fault in mdrun when using PME

2006-05-01 Thread David van der Spoel

Diane Fournier wrote:



snip


Have you enabled fortran at the compilation stage?  In that case try it 
without; otherwise please file a bugzilla, so that we can document this problem 
(and try to fix it, of course).














Re: [gmx-users] segmentation fault in mdrun when using PME

2006-04-29 Thread David van der Spoel

Diane Fournier wrote:
snip

Is it possible this could be related to insufficient memory allocation?  How 
demanding is this PME calculation?
Not likely a memory problem.  It could be a compiler issue, but we need more 
info!  Where does it crash?  Is it reproducible?  Does the same tpr file cause 
a crash on another architecture (e.g. your desktop)?


 







[gmx-users] segmentation fault in mdrun when using PME

2006-04-28 Thread Diane Fournier
Hi, I'm new to Gromacs and I'm trying to run an enzyme-ligand complex 
molecular dynamics simulation.  I have tried doing John Kerrigan's Drug-Enzyme 
tutorial, and mdrun crashes with a segmentation fault and core dump at the 
steepest-descents minimization step.  However, mdrun works fine when using 
cutoff instead of PME.

I'm working with Gromacs v. 3.3.1 on an SGI Altix 3700 with 32 Intel Itanium 2 
processors (but I'm currently using a single node, so it's not an MPI problem) 
under Red Hat Enterprise Linux AS release 3, with Intel Math Kernel Library 
(MKL) v. 8.0.1 as the FFT library (which is optimized for Itanium 2).

the em.mdp file looks like:

title           =  drg_trp
cpp             =  /usr/bin/cpp
define          =  -DFLEX_SPC
constraints     =  none
integrator      =  steep
dt              =  0.002  ; ps !
nsteps          =  500
nstlist         =  10
ns_type         =  grid
rlist           =  0.9
coulombtype     =  PME
rcoulomb        =  0.9
rvdw            =  0.9
fourierspacing  =  0.12
fourier_nx      =  0
fourier_ny      =  0
fourier_nz      =  0
pme_order       =  4
ewald_rtol      =  1e-5
optimize_fft    =  yes
;
; Energy minimizing stuff
;
emtol           =  1000.0
emstep          =  0.01

Is it possible this could be related to insufficient memory allocation?  How 
demanding is this PME calculation?