Re: [gmx-users] Segmentation fault after mdrun for MD simulation
On 18/08/2011 2:41 PM, rainy908 wrote:

> Hi Justin,
>
> Thanks for the input. So I traced back to my energy minimization
> steps, and am getting the error message after I execute the following
> line:
>
>   $ mdrun -s 1JFF_em.tpr -o 1JFF_em.trr -c 1JFF_b4pr.gro -e em.edr
>
> output:
>
>   Back Off! I just backed up md.log to ./#md.log.2#
>   Reading file 1JFF_em.tpr, VERSION 4.5.3 (single precision)
>   Starting 24 threads
>   Will use 15 particle-particle and 9 PME only nodes
>   This is a guess, check the performance at the end of the log file
>
>   -------------------------------------------------------
>   Program mdrun, VERSION 4.5.3
>   Source code file: domdec.c, line: 6428
>
>   Fatal error:
>   There is no domain decomposition for 15 nodes that is compatible
>   with the given box and a minimum cell size of 2.92429 nm
>   Change the number of nodes or mdrun option -rdd
>   Look in the log file for details on the domain decomposition
>
>   For more information and tips for troubleshooting, please check the
>   GROMACS website at http://www.gromacs.org/Documentation/Errors
>   -------------------------------------------------------
>
> I figure the problem must lie within my em.mdp file:

It could, but if you follow the above advice you will learn about some
other considerations.

Mark

> title           = 1JFF
> cpp             = /lib/cpp   ; location of cpp on SGI
> define          = -DFLEX_SPC ; use Ferguson's flexible water model [4]
> constraints     = none
> integrator      = steep
> dt              = 0.001      ; ps !
> nsteps          = 1
> nstlist         = 10
> ns_type         = grid
> rlist           = 0.9
> coulombtype     = PME        ; use particle-mesh Ewald
> rcoulomb        = 0.9
> rvdw            = 1.0
> fourierspacing  = 0.12
> fourier_nx      = 0
> fourier_ny      = 0
> fourier_nz      = 0
> pme_order       = 4
> ewald_rtol      = 1e-5
> optimize_fft    = yes
> ;
> ; Energy minimizing stuff
> ;
> emtol           = 1000.0
> emstep          = 0.01
>
> I figure this is an issue related to PME and the Fourier spacing?
>
> Thanks,
> rainy908
>
> On 17 August 2011 17:55, Justin A. Lemkul wrote:
>
>> rainy908 wrote:
>>
>>> Dear gmx-users:
>>>
>>> Thanks Justin for your help. But now I am experiencing a
>>> Segmentation fault error when executing mdrun. I've perused the
>>> archives but found none of the threads on segmentation faults
>>> similar to my case here. I believe the segmentation fault is caused
>>> by the awkward positioning of atoms 8443 and 8446 with respect to
>>> one another, but am not 100% sure. Any advice would be especially
>>> welcome. My files are as follows:
>>>
>>> md.mdp
>>>
>>> title            = 1JFF MD
>>> cpp              = /lib/cpp ; location of cpp on SGI
>>> constraints      = all-bonds
>>> integrator       = md
>>> dt               = 0.0001 ; ps
>>> nsteps           = 25000
>>> nstcomm          = 1
>>> nstxout          = 500 ; output coordinates every 1.0 ps
>>> nstvout          = 0
>>> nstfout          = 0
>>> nstlist          = 10
>>> ns_type          = grid
>>> rlist            = 0.9
>>> coulombtype      = PME
>>> rcoulomb         = 0.9
>>> rvdw             = 1.0
>>> fourierspacing   = 0.12
>>> fourier_nx       = 0
>>> fourier_ny       = 0
>>> fourier_nz       = 0
>>> pme_order        = 6
>>> ewald_rtol       = 1e-5
>>> optimize_fft     = yes
>>> ; Berendsen temperature coupling is on in four groups
>>> Tcoupl           = berendsen
>>> tau_t            = 0.1
>>> tc-grps          = system
>>> ref_t            = 310
>>> ; Pressure coupling is on
>>> Pcoupl           = berendsen
>>> pcoupltype       = isotropic
>>> tau_p            = 0.5
>>> compressibility  = 4.5e-5
>>> ref_p            = 1.0
>>> ; Generate velocities is on at 310 K.
>>> gen_vel          = yes
>>> gen_temp         = 310.0
>>> gen_seed         = 173529
>>>
>>> error output file:
>>>
>>> ..
>>> ..
>>> Back Off! I just backed up md.log to ./#md.log.1#
>>> Getting Loaded...
>>> Reading file 1JFF_md.tpr, VERSION 4.5.3 (single precision)
>>> Starting 8 threads
>>> Loaded with Money
>>> Making 3D domain decomposition 2 x 2 x 2
>>>
>>> Back Off! I just backed up 1JFF_md.trr to ./#1JFF_md.trr.1#
>>> Back Off! I just backed up 1JFF_md.edr to ./#1JFF_md.edr.1#
>>>
>>> Step 0, time 0 (ps)  LINCS WARNING
>>> relative constraint deviation after LINCS:
>>> rms 0.046849, max 1.014038 (between atoms 8541 and 8539)
>>>
>>> Step 0, time 0 (ps)  LINCS WARNING
>>> relative constraint deviation after LINCS:
>>> rms 0.001453, max 0.034820 (between atoms 315 and 317)
>>> bonds that rotated more than 30 degrees:
>>>  atom 1 atom 2  angle  previous, current, constraint length
>>> bonds that rotated more than 30 degrees:
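The fatal error Mark points back to can be reasoned about numerically: domain decomposition must split the box into one cell per particle-particle rank, and every cell must be at least the reported minimum cell size (2.92429 nm here, set by the cut-offs and constraints). Below is a deliberately simplified sketch of that feasibility check, not the actual algorithm in domdec.c, and it assumes an illustrative ~6 nm cubic box since the thread never states the real box dimensions:

```python
def feasible_grids(nranks, box, min_cell):
    """Return every (nx, ny, nz) factorization of nranks whose domain-
    decomposition cells are at least min_cell nm along each box vector.
    Simplified model: assumes a rectangular box and ignores PME ranks,
    dynamic load balancing, and staggered-cell corrections."""
    grids = []
    for nx in range(1, nranks + 1):
        if nranks % nx:
            continue
        for ny in range(1, nranks // nx + 1):
            if (nranks // nx) % ny:
                continue
            nz = nranks // (nx * ny)
            if (box[0] / nx >= min_cell and
                    box[1] / ny >= min_cell and
                    box[2] / nz >= min_cell):
                grids.append((nx, ny, nz))
    return grids

# With a hypothetical 6 nm cubic box, at most 2 cells fit per dimension,
# so no factorization of 15 PP ranks works:
print(feasible_grids(15, (6.0, 6.0, 6.0), 2.92429))  # → []
print(feasible_grids(8, (6.0, 6.0, 6.0), 2.92429))   # → [(2, 2, 2)]
```

Under these assumed dimensions at most 8 PP ranks fit, which is consistent with mdrun's automatic guess of 15 PP + 9 PME ranks failing on 24 threads, while the later 8-thread run in this thread gets the 2 x 2 x 2 decomposition shown in its log. Reducing the rank count, as the error message suggests, is usually the easiest fix.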
Re: [gmx-users] Segmentation fault after mdrun for MD simulation
Hi Justin,

Thanks for the input. So I traced back to my energy minimization steps,
and am getting the error message after I execute the following line:

  $ mdrun -s 1JFF_em.tpr -o 1JFF_em.trr -c 1JFF_b4pr.gro -e em.edr

output:

  [mdrun output snipped; identical to the copy quoted in the reply
  above, ending in the fatal error "There is no domain decomposition
  for 15 nodes that is compatible with the given box and a minimum cell
  size of 2.92429 nm"]

I figure the problem must lie within my em.mdp file:

  [em.mdp snipped; quoted in full in the reply above]

I figure this is an issue related to PME and the Fourier spacing?

Thanks,
rainy908

On 17 August 2011 17:55, Justin A. Lemkul wrote:

> rainy908 wrote:
>
>> Dear gmx-users:
>>
>> Thanks Justin for your help. But now I am experiencing a Segmentation
>> fault error when executing mdrun. I've perused the archives but found
>> none of the threads on segmentation faults similar to my case here.
>> I believe the segmentation fault is caused by the awkward positioning
>> of atoms 8443 and 8446 with respect to one another, but am not 100%
>> sure. Any advice would be especially welcome. My files are as
>> follows:
>>
>> md.mdp
>> [md.mdp snipped; quoted in full elsewhere in the thread]
>>
>> error output file:
>> [mdrun output with the LINCS warnings snipped; quoted in full
>> elsewhere in the thread]
>
> If mdrun is failing at step 0, it indicates that your system is
> physically unreasonable. Either the starting configuration has atomic
> clashes that have not been resolved (and thus you need better EM
> and/or equilibration) or that the parameters assigned to the molecules
> in your system are unreasonable.
>
> -Justin
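Justin's diagnosis points at unresolved atomic clashes in the starting configuration. A hypothetical pre-flight sanity check, not a GROMACS tool, is to brute-force scan coordinates (in nm) for atom pairs closer than some threshold before attempting minimization; a minimal sketch:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def close_contacts(coords, cutoff=0.08):
    """Return index pairs of atoms closer than `cutoff` nm.
    Brute-force O(n^2) sketch: a real checker for a solvated protein
    would need a cell list and periodic-boundary handling."""
    return [(i, j)
            for i, j in combinations(range(len(coords)), 2)
            if dist(coords[i], coords[j]) < cutoff]

# Three toy atoms: the first two overlap badly (0.05 nm apart).
atoms = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(close_contacts(atoms))  # → [(0, 1)]
```

Any pair flagged this way (here atoms 0 and 1) is a candidate for the kind of step-0 LINCS blow-up seen in the logs; in practice, visual inspection of the flagged atoms plus a more forgiving minimization protocol is the usual remedy.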
Re: [gmx-users] Segmentation fault after mdrun for MD simulation
rainy908 wrote:

> Dear gmx-users:
>
> Thanks Justin for your help. But now I am experiencing a Segmentation
> fault error when executing mdrun. I've perused the archives but found
> none of the threads on segmentation faults similar to my case here.
> I believe the segmentation fault is caused by the awkward positioning
> of atoms 8443 and 8446 with respect to one another, but am not 100%
> sure. Any advice would be especially welcome. My files are as follows:
>
> md.mdp
> [md.mdp snipped; quoted in full elsewhere in the thread]
>
> error output file:
>
> ..
> ..
> Back Off! I just backed up md.log to ./#md.log.1#
> Getting Loaded...
> Reading file 1JFF_md.tpr, VERSION 4.5.3 (single precision)
> Starting 8 threads
> Loaded with Money
> Making 3D domain decomposition 2 x 2 x 2
>
> Back Off! I just backed up 1JFF_md.trr to ./#1JFF_md.trr.1#
> Back Off! I just backed up 1JFF_md.edr to ./#1JFF_md.edr.1#
>
> Step 0, time 0 (ps)  LINCS WARNING
> relative constraint deviation after LINCS:
> rms 0.046849, max 1.014038 (between atoms 8541 and 8539)
>
> Step 0, time 0 (ps)  LINCS WARNING
> relative constraint deviation after LINCS:
> rms 0.001453, max 0.034820 (between atoms 315 and 317)
> bonds that rotated more than 30 degrees:
>  atom 1 atom 2  angle  previous, current, constraint length
> bonds that rotated more than 30 degrees:
>  atom 1 atom 2  angle  previous, current, constraint length

If mdrun is failing at step 0, it indicates that your system is
physically unreasonable. Either the starting configuration has atomic
clashes that have not been resolved (and thus you need better EM and/or
equilibration) or that the parameters assigned to the molecules in your
system are unreasonable.

-Justin

> Step 0, time 0 (ps)  LINCS WARNING
> relative constraint deviation after LINCS:
> rms 0.048739, max 1.100685 (between atoms 8422 and 8421)
> bonds that rotated more than 30 degrees:
>  atom 1 atom 2  angle  previous, current, constraint length
> ..
> ..
> ..
> ..
> starting mdrun 'TUBULIN ALPHA CHAIN'
> 25000 steps, 50.0 ps.
>
> Warning: 1-4 interaction between 8443 and 8446 at distance 2.853
> which is larger than the 1-4 table size 2.000 nm
> These are ignored for the rest of the simulation
> This usually means your system is exploding,
> if not, you should increase table-extension in your mdp file
> or with user tables increase the table size
> ..
> ..
> ..
> ..
> step 0: Water molecule starting at atom 23781 can not be settled.
> Check for bad contacts and/or reduce the timestep if appropriate.
>
> Back Off! I just backed up step0b_n0.pdb to ./#step0b_n0.pdb.1#
> Back Off! I just backed up step0b_n1.pdb to ./#step0b_n1.pdb.1#
> Back Off! I just backed up step0b_n5.pdb to ./#step0b_n5.pdb.2#
> Back Off! I just backed up step0b_n3.pdb to ./#step0b_n3.pdb.2#
> Back Off! I just backed up step0c_n0.pdb to ./#step0c_n0.pdb.1#
> Back Off! I just backed up step0c_n1.pdb to ./#step0c_n1.pdb.1#
> Back Off! I just backed up step0c_n5.pdb to ./#step0c_n5.pdb.2#
> Back Off! I just backed up step0c_n3.pdb to ./#step0c_n3.pdb.2#
> Wrote pdb files with previous and current coordinates
> Wrote pdb files with previous and current coordinates
> Wrote pdb files with previous and current coordinates
> Wrote pdb files with previous and current coordinates
> Wrote pdb files with previous and current coordinates
> step 0
> /opt/sge/jacobson/spool/node-2-05/job_scripts/1097116: line 21: 1473
> Segmentation fault (core dumped) $MDRUN -machinefile $TMPDIR/machines
> -np $NSLOTS $MDRUN -v -nice 0 -np $NSLOTS -s 1JFF_md.tpr -o
> 1JFF_md.trr -c 1JFF_pmd.gro -x 1JFF_md.xtc -e 1JFF_md.edr
>
> On 16 August 2011 10:58, Justin A. Lemkul wrote:
>
>> rainy908 wrote:
>>
>>> Hi, I get the error "Atomtype CR1 not found" when I execute grompp.
>>> After perusing the gmx archives, I understand this error has to do
>>> with the lack of "CR1" being specified in the force field.
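The 1-4 warning in the log above states its own caveat: lengthening the lookup table only matters if the system is not actually exploding. For reference, the reported table size of 2.000 nm is consistent with rvdw (1.0 nm) plus the default table-extension of 1 nm in GROMACS 4.5. A sketch of the mdp change the message refers to, which treats the symptom rather than the cause:

```
; Only meaningful if the system is otherwise stable; here the real fix
; is better minimization/equilibration, not a longer table.
table-extension  = 2.5   ; nm beyond the cut-off (default 1), giving a
                         ; 3.5 nm table that would cover the 2.853 nm
                         ; 1-4 distance reported in the log
```

In this thread the warning, the LINCS failures, and the unsettled water all point the same way: the system is blowing up at step 0, so the table setting is a red herring.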
[gmx-users] Segmentation fault after mdrun for MD simulation
Dear gmx-users:

Thanks Justin for your help. But now I am experiencing a Segmentation
fault error when executing mdrun. I've perused the archives but found
none of the threads on segmentation faults similar to my case here.
I believe the segmentation fault is caused by the awkward positioning
of atoms 8443 and 8446 with respect to one another, but am not 100%
sure. Any advice would be especially welcome. My files are as follows:

md.mdp

[md.mdp snipped; quoted in full elsewhere in the thread]

error output file:

[mdrun output snipped; the LINCS warnings, the 1-4 interaction warning,
the unsettled water molecule at atom 23781, and the segmentation fault
are quoted in full elsewhere in the thread]

On 16 August 2011 10:58, Justin A. Lemkul wrote:

> rainy908 wrote:
>
>> Hi, I get the error "Atomtype CR1 not found" when I execute grompp.
>> After perusing the gmx archives, I understand this error has to do
>> with the lack of "CR1" being specified in the force field. However,
>> I did include the appropriate .itp files in my .top file (shown
>> below). As you can see, obviously CR1 is specified in taxol.itp and
>> gtp.itp. Therefore, I'm not sure what exactly is the problem here.
>
> You're mixing and matching force fields. PRODRG produces Gro
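rainy908 twice asks whether the Fourier spacing is to blame. With fourier_nx/ny/nz left at 0, grompp derives the PME grid itself: fourierspacing sets the maximum distance between grid points, so each grid dimension is the smallest FFT-friendly count that keeps the spacing at or below 0.12 nm. A rough sketch of that derivation; the factor rule used here (2, 3, 5, 7 only) is an assumption, and GROMACS's actual calc_grid may restrict factors differently:

```python
from math import ceil

def fft_friendly(n):
    """True if n has only 2, 3, 5 and 7 as prime factors (a common
    FFT-grid rule; assumed, not GROMACS's exact criterion)."""
    for p in (2, 3, 5, 7):
        while n % p == 0:
            n //= p
    return n == 1

def pme_grid_size(box_len, spacing):
    """Smallest FFT-friendly point count so that box_len / n <= spacing,
    mimicking what grompp does when fourier_nx/ny/nz are 0."""
    n = ceil(box_len / spacing)
    while not fft_friendly(n):
        n += 1
    return n

# e.g. a hypothetical 6.0 nm box edge at 0.125 nm spacing → 48 points
print(pme_grid_size(6.0, 0.125))  # → 48
```

Because this derivation always succeeds for any sane box, the fourierspacing/fourier_n* settings are an unlikely culprit for either the domain-decomposition error or the segfault discussed in this thread; the replies instead point at rank count and unresolved clashes.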