Re: [gmx-users] segmentation fault on gromacs 4.5.5 after mdrun

2013-11-11 Thread Justin Lemkul



On 11/11/13 11:24 AM, Carlos Javier Almeciga Diaz wrote:

Hello everyone,

I am doing a simulation of a ligand-protein interaction with GROMACS 4.5.5.
Everything looks fine until I equilibrate the protein-ligand complex. I'm
running these commands:


grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr

mdrun -deffnm nvt

Nevertheless, I got this error:

Reading file nvt.tpr, VERSION 4.5.5 (double precision)
Segmentation fault

What should I do?



Instantaneous failure typically indicates that the forces are nonsensically high 
and the constraint algorithm immediately fails.  Likely the previous energy 
minimization did not adequately complete.
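As a rough check, here is a sketch of re-running and inspecting the minimization (file names are placeholders; an emtol of 1000 kJ/mol/nm is a common but not universal choice):

; em.mdp - minimal steepest-descent minimization (illustrative values)
integrator = steep
emtol      = 1000.0   ; stop when Fmax < 1000 kJ/mol/nm
emstep     = 0.01
nsteps     = 50000

grompp -f em.mdp -c system.gro -p topol.top -o em.tpr
mdrun -v -deffnm em
tail -n 20 em.log     ; the closing lines report Potential Energy and Maximum force

If Fmax is still enormous, look for atomic clashes or topology errors before attempting equilibration.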


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] segmentation fault on gromacs 4.5.5 after mdrun

2013-11-11 Thread Carlos Javier Almeciga Diaz
Hello everyone,

I am doing a simulation of a ligand-protein interaction with GROMACS 4.5.5.
Everything looks fine until I equilibrate the protein-ligand complex. I'm
running these commands:


grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr

mdrun -deffnm nvt

Nevertheless, I got this error:

Reading file nvt.tpr, VERSION 4.5.5 (double precision)
Segmentation fault

What should I do?



Carlos Javier Alméciga Díaz, QF., PhD.

Profesor Asistente

Pontificia Universidad Javeriana

Facultad de Ciencias

Instituto de Errores Innatos del Metabolismo

Tel: 57-1-3208320 Ext 4140-4099

Fax: 57-1-3208320 Ext 4099

Bogotá. D.C. - COLOMBIA

cjalmec...@javeriana.edu.co

http://www.javeriana.edu.co/ieim





[gmx-users] segmentation fault on g_protonate

2013-08-09 Thread Pedro Lacerda
Hi,

My heteromolecule structure is missing hydrogens. I made an aminoacids.hdb
entry which I believe is correct. When I run `g_protonate -s conf.pdb -o
prot.pdb` to add the hydrogens, a segmentation fault occurs. The traceback
for 4.6.4-dev-20130808-afc6131 follows. I could add the hydrogens in other
ways, but g_protonate seems like the right tool for the job. Can you help me
use g_protonate?

Program received signal SIGSEGV, Segmentation fault.
0x77b22450 in calc_all_pos (pdba=0x619d20, x=0x61c6a0,
nab=0x61c310, ab=0x61f9e0, bCheckMissing=0) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:392
392         if (ab[i][j].oname == NULL && ab[i][j].tp > 0)
(gdb) bt
#0  0x77b22450 in calc_all_pos (pdba=0x619d20, x=0x61c6a0,
nab=0x61c310, ab=0x61f9e0, bCheckMissing=0) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:392
#1  0x77b22cd7 in add_h_low (pdbaptr=0x7fffc1e8,
xptr=0x7fffced8, nah=50, ah=0x613370, nterpairs=1, ntdb=0x616440,
ctdb=0x619cc0, rN=0x619ce0, rC=0x619d00,
bCheckMissing=0, nabptr=0x7fffdf40, abptr=0x7fffdf48,
bUpdate_pdba=1, bKeep_old_pdba=1) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:540
#2  0x77b23b66 in add_h (pdbaptr=0x7fffc1e8,
xptr=0x7fffced8, nah=50, ah=0x613370, nterpairs=1, ntdb=0x616440,
ctdb=0x619cc0, rN=0x619ce0, rC=0x619d00,
bAllowMissing=1, nabptr=0x7fffdf40, abptr=0x7fffdf48,
bUpdate_pdba=1, bKeep_old_pdba=1) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:781
#3  0x77b24080 in protonate (atomsptr=0x7fffceb8,
xptr=0x7fffced8, protdata=0x7fffdf30) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:894
#4  0x004020ff in cmain (argc=1, argv=0x7fffe0d8) at
/home/peu/Downloads/gromacs/src/kernel/g_protonate.c:195
#5  0x0040224c in main (argc=5, argv=0x7fffe0d8) at
/home/peu/Downloads/gromacs/src/kernel/main.c:29


abraços,
Pedro Lacerda


Re: [gmx-users] segmentation fault on g_protonate

2013-08-09 Thread Justin Lemkul



On 8/9/13 2:35 PM, Pedro Lacerda wrote:

Hi,

My heteromolecule structure is missing hydrogens. I made an aminoacids.hdb
entry which I believe is correct. When I run `g_protonate -s conf.pdb -o
prot.pdb` to add the hydrogens, a segmentation fault occurs. The traceback
for 4.6.4-dev-20130808-afc6131 follows. I could add the hydrogens in other
ways, but g_protonate seems like the right tool for the job. Can you help me
use g_protonate?

Program received signal SIGSEGV, Segmentation fault.
0x77b22450 in calc_all_pos (pdba=0x619d20, x=0x61c6a0,
nab=0x61c310, ab=0x61f9e0, bCheckMissing=0) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:392
392         if (ab[i][j].oname == NULL && ab[i][j].tp > 0)
(gdb) bt
#0  0x77b22450 in calc_all_pos (pdba=0x619d20, x=0x61c6a0,
nab=0x61c310, ab=0x61f9e0, bCheckMissing=0) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:392
#1  0x77b22cd7 in add_h_low (pdbaptr=0x7fffc1e8,
xptr=0x7fffced8, nah=50, ah=0x613370, nterpairs=1, ntdb=0x616440,
ctdb=0x619cc0, rN=0x619ce0, rC=0x619d00,
 bCheckMissing=0, nabptr=0x7fffdf40, abptr=0x7fffdf48,
bUpdate_pdba=1, bKeep_old_pdba=1) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:540
#2  0x77b23b66 in add_h (pdbaptr=0x7fffc1e8,
xptr=0x7fffced8, nah=50, ah=0x613370, nterpairs=1, ntdb=0x616440,
ctdb=0x619cc0, rN=0x619ce0, rC=0x619d00,
 bAllowMissing=1, nabptr=0x7fffdf40, abptr=0x7fffdf48,
bUpdate_pdba=1, bKeep_old_pdba=1) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:781
#3  0x77b24080 in protonate (atomsptr=0x7fffceb8,
xptr=0x7fffced8, protdata=0x7fffdf30) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:894
#4  0x004020ff in cmain (argc=1, argv=0x7fffe0d8) at
/home/peu/Downloads/gromacs/src/kernel/g_protonate.c:195
#5  0x0040224c in main (argc=5, argv=0x7fffe0d8) at
/home/peu/Downloads/gromacs/src/kernel/main.c:29



Please file a bug report on redmine.gromacs.org.  g_protonate has been in 
varying states of disrepair for years.  I hacked a fix a long time ago, but 
apparently something has broken again.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Segmentation fault (core dumped)

2013-06-06 Thread Ishwor Poudyal

Dear all,

I want to study the diffusion coefficient of CO in water. I have done the
energy minimization step and got a segmentation fault during equilibration.
I am confused whether my input file has an error or whether the problem lies
with the machine. I have made the following .mdp file:

;

;PREPROCESSING parameters

cpp =  /lib/cpp

define  =  -DFLEX_SPCE

integrator  =  md

dt  =.002

nsteps  = 2500

nstcomm = 1


;OUTPUT CONTROL parameters.

nstxout =  250

nstvout =  1000

nstfout =  0

nstlog  =  100

nstenergy   =  100

energygrps  =  system

;NEIGHBOUR SEARCHING parameters.

nstlist =  10

ns_type =  grid

rlist   =  1.0

;ELECTROSTATIC and VdW parameters.

rcoulomb=  1.0

rvdw=  1.0

epsilon-r   =  1   

;BERENDSEN TEMPERATURE COUPLING is on in two groups

Tcoupl  =  berendsen

tc-grps =  system

tau_t   =  0.01 

ref_t   =  300  

;PRESSURE COUPLING is on

Pcoupl  =  berendsen

tau_p   =  0.1  

compressibility =  4.6e-5

ref_p   =  1.0

;SIMULATED ANNEALING parameters are not specified.

;GENERATE VELOCITIES is on at 300 K.

gen_vel =  yes   ; generate velocities initially

gen_temp=  300

gen_seed=  173259   ;give different values for different trials.

;BONDS parameters

pbc = xyz   ; 3-D PBC

constraints = all-bonds

constraint-algorithm = shake

unconstrained-start  = no

The box size is 2.1 nm.

I got no output other than "Segmentation fault"; it says nothing about the
input. I would be pleased if you could provide some suggestions.

Ishwor Poudyal

TU Nepal






Re: [gmx-users] Segmentation fault (core dumped)

2013-06-06 Thread Justin Lemkul



On 6/6/13 4:45 AM, Ishwor Poudyal wrote:


Dear all,

I want to study the diffusion coefficient of CO in water. I have done the
energy minimization step and got a segmentation fault during equilibration.
I am confused whether my input file has an error or whether the problem lies
with the machine. I have made the following .mdp file:

;

;PREPROCESSING parameters

cpp =  /lib/cpp

define  =  -DFLEX_SPCE



Here's the first suspect.  The water models in Gromacs were intended to be 
rigid.  Flexibility should only be used during EM, and only if necessary to 
improve the outcome.  Running MD with flexible water is not advised.
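For context, here is a schematic of how that define acts inside a typical spce.itp (structure only; the real parameters live in the force-field files):

#ifdef FLEX_SPCE
[ bonds ]      ; flexible O-H bonds (and an H-O-H angle term) are used
...
#else
[ settles ]    ; rigid geometry enforced by the SETTLE algorithm
...
#endif

Dropping -DFLEX_SPCE from the define line for the MD run restores the rigid model.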



integrator  =  md

dt  =.002

nsteps  = 2500

nstcomm = 1


;OUTPUT CONTROL parameters.

nstxout =  250

nstvout =  1000

nstfout =  0

nstlog  =  100

nstenergy   =  100

energygrps  =  system

;NEIGHBOUR SEARCHING parameters.

nstlist =  10

ns_type =  grid

rlist   =  1.0

;ELECTROSTATIC and VdW parameters.

rcoulomb=  1.0

rvdw=  1.0

epsilon-r   =  1

;BERENDSEN TEMPERATURE COUPLING is on in two groups

Tcoupl  =  berendsen

tc-grps =  system

tau_t   =  0.01 



This is a very restrictive value of tau_t.  Normally something like 0.1 or 0.5 
is more appropriate.



ref_t   =  300  

;PRESSURE COUPLING is on

Pcoupl  =  berendsen

tau_p   =  0.1



Again, very restrictive, especially for pressure.  Try 1.0 or 2.0 instead.
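Putting the two suggestions together, a more conventional coupling block might look like this (a sketch; values illustrative rather than prescriptive):

Tcoupl  =  berendsen
tc-grps =  system
tau_t   =  0.5
ref_t   =  300
Pcoupl  =  berendsen
tau_p   =  1.0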


compressibility =  4.6e-5

ref_p   =  1.0

;SIMULATED ANNEALING parameters are not specified.

;GENERATE VELOCITIES is on at 300 K.

gen_vel =  yes   ; generate velocities initially

gen_temp=  300

gen_seed=  173259   ;give different values for different trials.

;BONDS parameters

pbc = xyz   ; 3-D PBC

constraints = all-bonds

constraint-algorithm = shake

unconstrained-start  = no

  The box size is 2.1 nm.



You're playing with fire here - if the box deviates just a little bit due to 
pressure oscillations, your 1.0 nm cutoffs will begin to double-count 
interactions and violate the minimum image convention.  In that case, your 
trajectory is junk.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault

2012-12-19 Thread Shine A
Sir,

I am doing membrane protein dynamics in a lipid bilayer, using the OPLS-AA
force field. When I run minimization after genion, I get a message like this:

   Back Off! I just backed up ions_1.tpr.trr to ./#ions_1.tpr.trr.2#

   Back Off! I just backed up ions_1.tpr.edr to ./#ions_1.tpr.edr.2#

   Steepest Descents:
   Tolerance (Fmax)   =  1.0e+03
   Number of steps    =  5
   Segmentation fault

Why is this happening? I also tried mdrun -nt 1 -deffnm ions. Is it due to an
installation problem? I installed GROMACS simply from the Ubuntu Software
Center. I have also searched the mailing list.
Thanks in advance.


[gmx-users] segmentation fault

2012-11-29 Thread Shine A
Sir,

I have one more doubt. During NVT equilibration, mdrun gives a segmentation
fault; it does not generate any .gro file but writes two .pdb files. The
message is like this:

   Wrote pdb files with previous and current coordinates
   Warning: 1-4 interaction between 485 and 490 at distance 3.874 which is
larger than the 1-4 table size 2.200 nm
   These are ignored for the rest of the simulation
   This usually means your system is exploding,
   if not, you should increase table-extension in your mdp file
   or with user tables increase the table size
   Segmentation fault

Why this fault? Please suggest a solution to overcome it.


Re: [gmx-users] segmentation fault

2012-11-29 Thread Justin Lemkul



On 11/29/12 10:44 AM, Shine A wrote:

Sir,

I have one more doubt. During NVT equilibration, mdrun gives a segmentation
fault; it does not generate any .gro file but writes two .pdb files. The
message is like this:

   Wrote pdb files with previous and current coordinates
   Warning: 1-4 interaction between 485 and 490 at distance 3.874 which is
larger than the 1-4 table size 2.200 nm
   These are ignored for the rest of the simulation
   This usually means your system is exploding,
   if not, you should increase table-extension in your mdp file
   or with user tables increase the table size
   Segmentation fault

Why this fault? Please suggest a solution to overcome it.



http://www.gromacs.org/Documentation/Terminology/Blowing_Up

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault while calculating water mediated H-bond with g_hbond

2012-10-23 Thread bipin singh
Hello all,

I was trying to calculate solvent-mediated H-bonds between an amino acid
residue (Tyr) and the solvent molecules present within a cutoff of 0.5 nm
(after creating a separate index), with the help of g_hbond version 4.5.3.
But I am getting a segmentation fault while running g_hbond. Moreover, I get
the error only for this particular residue, whereas with other residues the
same calculation works fine.
I have used the following command for the calculation:

g_hbond -f traj.xtc -s md.tpr -num hbnum.xvg -hbn hbond.ndx -g hbond.log
-dist hbdist.xvg -hbm hbmap.xvg -n index.ndx

Please provide your suggestions to rectify the error.

-- 
---
*Thanks and Regards,*
Bipin Singh


Re: [gmx-users] Segmentation fault while calculating water mediated H-bond with g_hbond

2012-10-23 Thread Erik Marklund
There were a few bugfixes done to g_hbond since 4.5.3. Try a more recent 
version.

Erik


On 23 Oct 2012, at 17:41, bipin singh wrote:

 Hello all,
 
 I was trying to calculate solvent-mediated H-bonds between an amino acid
 residue (Tyr) and the solvent molecules present within a cutoff of 0.5 nm
 (after creating a separate index), with the help of g_hbond version 4.5.3.
 But I am getting a segmentation fault while running g_hbond. Moreover, I get
 the error only for this particular residue, whereas with other residues the
 same calculation works fine.
 I have used the following command for the calculation:
 
 g_hbond -f traj.xtc -s md.tpr -num hbnum.xvg -hbn hbond.ndx -g hbond.log
 -dist hbdist.xvg -hbm hbmap.xvg -n index.ndx
 
 Please provide your suggestions to rectify the error.
 
 -- 
 ---
 *Thanks and Regards,*
 Bipin Singh

---
Erik Marklund, PhD
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 6688    fax: +46 18 511 755
er...@xray.bmc.uu.se
http://www2.icm.uu.se/molbio/elflab/index.html



[gmx-users] Segmentation fault, mdrun_mpi

2012-10-04 Thread Ladasky
So I have spent the past few weeks debugging my equilibration protocols,
which were an odd hybrid of examples ranging from GROMACS 3.3 up to GROMACS
4.5.  I have cleaned out old code.  I added an in vacuo energy minimization
step for the protein without solvent, and a missing NVT step after solvent
is defined.  I have dimly grasped that, as long as you don't require
compatibility with an older simulation, the V-rescale thermostat is the
current recommended choice, and that switching thermostats (unlike
barostats) can cause instabilities.  I now know how to examine and graph
macroscopic system parameters to assess stability.  I think that everything
should be looking good right now -- except that it isn't, not quite.

When I finally start the production MD runs, I have received two
segmentation faults on two different test structures.  They take a LONG time
to appear -- over 1,070,000 iterations on one run, and over 2,360,000
iterations on another.  On top of that, I'm not getting my usual error
messages -- PME errors, or SETTLE errors.  I'm not getting a dump of the
last frame of my simulation.

I had enough trouble accepting that my simulation parameters were set up
incorrectly when I had failures 100,000 steps after starting the production
MD run.  Am I really supposed to believe that I still have instability
problems?

Here is the terminal output from one run (executing mdrun_mpi):

Reading file test-prep.tpr, VERSION 4.5.4 (single precision)
Making 1D domain decomposition 5 x 1 x 1
starting mdrun 'Protein t=   0.0 in water'
250 steps,   5000.0 ps.
[john-linux:09596] *** Process received signal ***
[john-linux:09596] Signal: Segmentation fault (11)
[john-linux:09596] Signal code: Address not mapped (1)
[john-linux:09596] Failing at address: 0x3e950840
[john-linux:09596] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10060)
[0x7f8a8ad5c060]
[john-linux:09596] [ 1] /usr/lib/libgmx_mpi.openmpi.so.6(+0x1f9670)
[0x7f8a8b413670]
[john-linux:09596] *** End of error message ***
--
mpirun noticed that process rank 2 with PID 9596 on node john-linux exited
on signal 11 (Segmentation fault).
--


The .log file does NOT contain any error messages indicating any
instability.  The last entry in the log file is a long chain of energy
status report blocks.  Here's the last one:

DD  step 1078799 load imb.: force  1.8%

           Step           Time         Lambda
        1078800         2157.6            0.0

   Energies (kJ/mol)
       G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
    2.07623e+03    8.42439e+02    6.03967e+02   -2.12322e+02    1.95589e+04
        LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.      Potential
    8.12766e+04   -9.12661e+02   -5.96406e+05   -4.40546e+04   -5.37227e+05
    Kinetic En.   Total Energy    Temperature Pres. DC (bar) Pressure (bar)
    9.76293e+04   -4.39598e+05    3.10969e+02   -7.72781e+01    3.95185e+01
   Constr. rmsd
    1.90839e-05


I'm not a low-level programmer, and so I don't have to deal with this much,
but... a segmentation fault generally indicates that a program is trying to
write outside of its allocated memory block.  The third line of the error
message sent to the shell would seem to indicate exactly that.  That doesn't
actually sound like it has anything to do with my simulation being unstable. 
(However, with applications written in C, I'm willing to believe anything.) 
I did check on my memory usage.  I have 8 GB of RAM on my system, running
Ubuntu Linux 11.10, AMD 64-bit.  At most, I'm using a bit more than half of
my RAM (I have other, undemanding applications open besides my GROMACS
terminal windows, and I also reserved one CPU core to run those apps).  I
think that I should be fine.

If they would help, I can repost my cleaned-up MDP files. I can post graphs
of potential, pressure, temperature, density, etc., from any phase in my
protocol.   Or you could just take my word for it that all of these
parameters converge nicely during my equilibration procedure, and then
remain stable throughout the production MD run.  My target temperature is
310 K (37 C), and I get very close to that value on average.  My average
pressure and density readings are both a bit lower than my targets (0.80 bar
and 988 kg/m^3, respectively), but they are consistent.  I have examined a
series of snapshots of my protein.  It isn't undergoing any radical
movements.

My systems are on the small side, under 50,000 atoms.  It's all amino acids
and water molecules.

Puzzled once again.  Thanks for your advice!




RE: [gmx-users] Segmentation fault (core dumped error)

2012-09-15 Thread Elie M

Thanks very much for your help. The carbon nanotube issue is solved. I still
have to figure out the polymers. Thanks for the info.
Regards
Elie

 Date: Fri, 14 Sep 2012 10:31:28 -0400
 From: jalem...@vt.edu
 To: gmx-users@gromacs.org
 Subject: Re: [gmx-users] Segmentation fault (core dumped error)
 
 
 
 On 9/14/12 12:19 AM, Elie M wrote:
 
  Dear all, I am trying to study the MD of a carbon nanotube interacting with
  some polymers, and I have some problems in forming the topology files. I
  have actually two questions and I hope you can help me with them.
  (1) In an attempt to form the topology files of CNTs and graphene (using
  x2top), I found scripts on the internet (by Andrea Minoia, I guess). These
  consist of adding .n2t, .rtp and .itp files to the
  /Gromacs/share/Gromacs/top directory (namely ffcntoplsaa.n2t, ffcntoplsaa.rtp
  and ffcntoplsaa.itp) and adding a line in the FF.dat file. I have done that
  and tried to execute x2top, and I got the error:

  ..
  Entries in elements.dat: 218
  Looking whether force field files exist
  Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.rtp
  Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.n2t
  Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.n2t
  There are 0 name to type translations
  Generating bonds from distances...
  Segmentation fault (core dumped)

  Can anyone please tell me the source of this error and how to fix it?
 
 x2top is telling you it found nothing in the .n2t file.  Either the contents 
 are 
 nonexistent, formatted incorrectly, or you have a line ending issue (common 
 with 
 Windows OS - use dos2unix if necessary).
 
  (2) I will definitely need a top file for the polymers I will also be
  solvating. But I also have problems because the pdb file contains a LIG
  residue unrecognizable by Gromacs. I have asked this question before and I
  was advised to change some files accordingly, but to be honest I am not
  really experienced in that; I asked someone who had a similar problem in
  the past, but he did not know all the details because he ended up not
  using the modified force fields after all. Can anyone explain in detail
  how to incorporate the residue LIG within the force field, or let me know
  whom I can consult? A part of the pdb file with the residue LIG is:

  COMPND    UNNAMED
  AUTHOR    GENERATED BY OPEN BABEL 2.3.1
  HETATM    1  C   LIG     1       1.481  -1.276  -0.621  1.00  0.00           C
  HETATM    2  C   LIG     1       2.216  -2.370  -1.040  1.00  0.00           C
  HETATM    3  S   LIG     1       3.770  -2.409  -0.306  1.00  0.00           S
  HETATM    4  C   LIG     1       3.456  -0.998   0.609  1.00  0.00           C
  HETATM    5  C   LIG     1       2.207  -0.479   0.313  1.00  0.00           C
  HETATM    6  C   LIG     1       5.156   0.676   1.386  1.00  0.00           C
  HETATM    7  C   LIG     1       4.423  -0.491   1.600  1.00  0.00           C
  HETATM    8  C   LIG     1       4.550  -1.119   2.847  1.00  0.00           C
  HETATM    9  C   LIG     1       5.256  -0.503   3.905  1.00  0.00           C
  HETATM   10  C   LIG     1       6.107   0.592   3.667  1.00  0.00           C
  HETATM   11  C   LIG     1       6.008   1.181   2.393  1.00  0.00           C
  HETATM   12  S   LIG     1       7.457   2.548   5.198  1.00  0.00           S
  HETATM   13  C   LIG     1       7.220   0.945   4.621  1.00  0.00           C
  ...
 
 You need to introduce some sensible set of parameters for it.  Using a 
 generic 
 LIG for a polymer is unlikely to work.  Consult the following:
 
 http://www.gromacs.org/Documentation/How-tos/Polymers
 http://www.gromacs.org/Documentation/How-tos/Adding_a_Residue_to_a_Force_Field
 
 -Justin
 
 -- 
 
 
 Justin A. Lemkul, Ph.D.
 Research Scientist
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
 


Re: [gmx-users] Segmentation fault (core dumped error)

2012-09-14 Thread Justin Lemkul



On 9/14/12 12:19 AM, Elie M wrote:


Dear all, I am trying to study the MD of a carbon nanotube interacting with
some polymers, and I have some problems in forming the topology files. I have
actually two questions and I hope you can help me with them.
(1) In an attempt to form the topology files of CNTs and graphene (using
x2top), I found scripts on the internet (by Andrea Minoia, I guess). These
consist of adding .n2t, .rtp and .itp files to the
/Gromacs/share/Gromacs/top directory (namely ffcntoplsaa.n2t, ffcntoplsaa.rtp
and ffcntoplsaa.itp) and adding a line in the FF.dat file. I have done that
and tried to execute x2top, and I got the error:
..
Entries in elements.dat: 218
Looking whether force field files exist
Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.rtp
Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.n2t
Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.n2t
There are 0 name to type translations
Generating bonds from distances...
Segmentation fault (core dumped)
Can anyone please tell me the source of this error and how to fix it?


x2top is telling you it found nothing in the .n2t file.  Either the contents are 
nonexistent, formatted incorrectly, or you have a line ending issue (common with 
Windows OS - use dos2unix if necessary).
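For what it's worth, here is a sketch of what a single .n2t line looks like (fields: atom name, type, charge, mass, number of bonds, then element/distance pairs; the OPLS type and bond lengths are illustrative assumptions, not vetted CNT parameters):

C    opls_147    0.0    12.011    3    C 0.142    C 0.142    C 0.142

If x2top reports 0 name-to-type translations, no line of this form was successfully parsed from the .n2t file.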



(2) I will definitely need a top file for the polymers I will also be
solvating. But I also have problems because the pdb file contains a LIG
residue unrecognizable by Gromacs. I have asked this question before and I
was advised to change some files accordingly, but to be honest I am not
really experienced in that; I asked someone who had a similar problem in the
past, but he did not know all the details because he ended up not using the
modified force fields after all. Can anyone explain in detail how to
incorporate the residue LIG within the force field, or let me know whom I
can consult? A part of the pdb file with the residue LIG is:
COMPND    UNNAMED
AUTHOR    GENERATED BY OPEN BABEL 2.3.1
HETATM    1  C   LIG     1       1.481  -1.276  -0.621  1.00  0.00           C
HETATM    2  C   LIG     1       2.216  -2.370  -1.040  1.00  0.00           C
HETATM    3  S   LIG     1       3.770  -2.409  -0.306  1.00  0.00           S
HETATM    4  C   LIG     1       3.456  -0.998   0.609  1.00  0.00           C
HETATM    5  C   LIG     1       2.207  -0.479   0.313  1.00  0.00           C
HETATM    6  C   LIG     1       5.156   0.676   1.386  1.00  0.00           C
HETATM    7  C   LIG     1       4.423  -0.491   1.600  1.00  0.00           C
HETATM    8  C   LIG     1       4.550  -1.119   2.847  1.00  0.00           C
HETATM    9  C   LIG     1       5.256  -0.503   3.905  1.00  0.00           C
HETATM   10  C   LIG     1       6.107   0.592   3.667  1.00  0.00           C
HETATM   11  C   LIG     1       6.008   1.181   2.393  1.00  0.00           C
HETATM   12  S   LIG     1       7.457   2.548   5.198  1.00  0.00           S
HETATM   13  C   LIG     1       7.220   0.945   4.621  1.00  0.00           C
...

You need to introduce some sensible set of parameters for it.  Using a generic 
LIG for a polymer is unlikely to work.  Consult the following:


http://www.gromacs.org/Documentation/How-tos/Polymers
http://www.gromacs.org/Documentation/How-tos/Adding_a_Residue_to_a_Force_Field

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault (core dumped error)

2012-09-13 Thread Elie M

Dear all, I am trying to study the MD of a carbon nanotube interacting with
some polymers, and I have some problems in forming the topology files. I have
actually two questions and I hope you can help me with them.
(1) In an attempt to form the topology files of CNTs and graphene (using
x2top), I found scripts on the internet (by Andrea Minoia, I guess). These
consist of adding .n2t, .rtp and .itp files to the
/Gromacs/share/Gromacs/top directory (namely ffcntoplsaa.n2t, ffcntoplsaa.rtp
and ffcntoplsaa.itp) and adding a line in the FF.dat file. I have done that
and tried to execute x2top, and I got the error:
..
Entries in elements.dat: 218
Looking whether force field files exist
Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.rtp
Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.n2t
Opening library file /cygdrive/c/Packages/gromacs/share/gromacs/top/ffcntoplsaa.n2t
There are 0 name to type translations
Generating bonds from distances...
Segmentation fault (core dumped)
Can anyone please tell me the source of this error and how to fix it?
(2) I will definitely need a top file for the polymers I will also be
solvating. But I also have problems because the pdb file contains a LIG
residue unrecognizable by Gromacs. I have asked this question before and I
was advised to change some files accordingly, but to be honest I am not
really experienced in that; I asked someone who had a similar problem in the
past, but he did not know all the details because he ended up not using the
modified force fields after all. Can anyone explain in detail how to
incorporate the residue LIG within the force field, or let me know whom I
can consult? A part of the pdb file with the residue LIG is:
COMPND    UNNAMED
AUTHOR    GENERATED BY OPEN BABEL 2.3.1
HETATM    1  C   LIG     1       1.481  -1.276  -0.621  1.00  0.00           C
HETATM    2  C   LIG     1       2.216  -2.370  -1.040  1.00  0.00           C
HETATM    3  S   LIG     1       3.770  -2.409  -0.306  1.00  0.00           S
HETATM    4  C   LIG     1       3.456  -0.998   0.609  1.00  0.00           C
HETATM    5  C   LIG     1       2.207  -0.479   0.313  1.00  0.00           C
HETATM    6  C   LIG     1       5.156   0.676   1.386  1.00  0.00           C
HETATM    7  C   LIG     1       4.423  -0.491   1.600  1.00  0.00           C
HETATM    8  C   LIG     1       4.550  -1.119   2.847  1.00  0.00           C
HETATM    9  C   LIG     1       5.256  -0.503   3.905  1.00  0.00           C
HETATM   10  C   LIG     1       6.107   0.592   3.667  1.00  0.00           C
HETATM   11  C   LIG     1       6.008   1.181   2.393  1.00  0.00           C
HETATM   12  S   LIG     1       7.457   2.548   5.198  1.00  0.00           S
HETATM   13  C   LIG     1       7.220   0.945   4.621  1.00  0.00           C
...
I am really thankful.
Elie


[gmx-users] segmentation fault with mdrun

2012-08-21 Thread Deepak Ojha
Dear All,

I am trying to simulate an azide ion in water with Gromacs. I generated the
topology file for the azide ion with the PRODRG server and ran the
calculations. I got one warning at the grompp level:

  327 non-matching atom names
  atom names from azide.top will be used
  atom names from azide.gro will be ignored

I continued with -maxwarn and performed energy minimization, which went
smoothly. However, as soon as I started the NVT equilibration with mdrun, it
crashed with a segmentation fault. Please help me locate the error. I went
through previous mails on the mailing list but could not sort it out.


The topology file is :

; Include forcefield parameters
#include "ffG43a1.itp"

;Include azide topology
#include "azide.itp"

; Include water topology
#include "spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct    fcx    fcy    fcz
   1    1    1000   1000   1000
#endif

; Include generic topology for ions
#include "ions.itp"

[ system ]
; Name
azide in water

[ molecules ]
; Compound#mols
SOL   108
AZI   1


and the itp file for azide which I made from PRODG is

[ moleculetype ]
; Name nrexcl
AZI  3

[ atoms ]
;   nr  type  resnr resid  atom  cgnr   charge mass
 1 N 1  AZI  N1 1   -1.000  14.0067
 2 N 1  AZI  N2 12.000  14.0067
 3 N 1  AZI  N3 1   -1.000  14.0067

[ bonds ]
; ai  aj  fu    c0, c1, ...
   2   1   2    0.112   4527362.4    0.112   4527362.4 ;   N2   N1
   2   3   2    0.112   4527362.4    0.112   4527362.4 ;   N2   N3

[ pairs ]
; ai  aj  fuc0, c1, ...

[ angles ]
; ai  aj  ak  fu    c0, c1, ...
   1   2   3   2    180.0   41840001.2    180.0   41840001.2 ;   N1   N2   N3

[ dihedrals ]
; ai  aj  ak  al  fuc0, c1, m, ...

--

DeepaK Ojha
School Of Chemistry

Selfishness is not living as one wishes to live, it is asking others
to live as one wishes to live


Re: [gmx-users] segmentation fault with mdrun

2012-08-21 Thread Justin Lemkul



On 8/21/12 6:00 AM, Deepak Ojha wrote:

Dear All,

I am trying to simulate an azide ion in water with Gromacs. I generated the
topology file for the azide ion with the PRODRG server and ran the
calculations. I got one warning at the grompp level:

  327 non-matching atom names
  atom names from azide.top will be used
  atom names from azide.gro will be ignored

I continued with -maxwarn and performed energy minimization, which went
smoothly. However, as soon as I started the NVT equilibration with mdrun, it
crashed with a segmentation fault. Please help me locate the error. I went
through previous mails on the mailing list but could not sort it out.



Don't use -maxwarn unless you know exactly why you're doing it.  The fact that 
you have 327 non-matching names and 327 atoms in the system (108*3 + 3) suggests 
the contents of your coordinate file do not match that of the topology in terms 
of the order of the [molecules] section.  Likely your azide should be listed 
first, presumably if you took the coordinate file for this molecule and solvated it.
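In other words, if the azide comes first in the coordinate file, the
[molecules] section should list it first as well (a sketch based on the
counts above):

[ molecules ]
; Compound        #mols
AZI                   1
SOL                 108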


Also beware that PRODRG topologies are notoriously unreliable and that linear 
molecules should not be constructed in this way (180 degree angles are not 
stable).  See, for instance, the following tutorial for a more robust method:


http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/vsites/index.html

-Justin



The topology file is :

; Include forcefield parameters
#include "ffG43a1.itp"

;Include azide topology
#include "azide.itp"

; Include water topology
#include "spc.itp"

#ifdef POSRES_WATER
; Position restraint for each water oxygen
[ position_restraints ]
;  i funct    fcx    fcy    fcz
   1    1    1000   1000   1000
#endif

; Include generic topology for ions
#include "ions.itp"

[ system ]
; Name
azide in water

[ molecules ]
; Compound#mols
SOL   108
AZI   1


and the itp file for azide which I made from PRODG is

[ moleculetype ]
; Name nrexcl
AZI  3

[ atoms ]
;   nr  type  resnr resid  atom  cgnr   charge mass
  1 N 1  AZI  N1 1   -1.000  14.0067
  2 N 1  AZI  N2 12.000  14.0067
  3 N 1  AZI  N3 1   -1.000  14.0067

[ bonds ]
; ai  aj  fu    c0, c1, ...
   2   1   2    0.112   4527362.4    0.112   4527362.4 ;   N2   N1
   2   3   2    0.112   4527362.4    0.112   4527362.4 ;   N2   N3

[ pairs ]
; ai  aj  fuc0, c1, ...

[ angles ]
; ai  aj  ak  fu    c0, c1, ...
   1   2   3   2    180.0   41840001.2    180.0   41840001.2 ;   N1   N2   N3

[ dihedrals ]
; ai  aj  ak  al  fuc0, c1, m, ...

--

DeepaK Ojha
School Of Chemistry

Selfishness is not living as one wishes to live, it is asking others
to live as one wishes to live



--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault-g_spatial

2012-06-28 Thread Christopher Neale
Dear D.M.

I wrote g_spatial and Justin is correct that you simply need to use a larger 
number for -nab.

It's not a bug per se, it is simply that I didn't write the program to do two 
loops through
the input trajectory (one to determine the required bins and then allocate 
memory and
then another loop to count the values). So it is just quick and dirty coding, 
but when -nab is 
sufficiently large it will not be wrong.

Here is what happens: let's say that you have a cubic box. You then use trjconv 
to do a
fitting in which you rotate the box differently in different frames. Now the 
length of the 
new x-axis in a frame could be the length of the diagonal of your box, which is 
substantially 
longer. This is why you need to ask the program to allocate memory for 
additional bins 
(which will be written out to your cube file, making it larger, so you don't 
want to set -nab 
too much larger than is necessary). Pressure coupling also influences this, but 
not as much.

Note that if you make your bins half as large (for higher resolution) then you 
will need to 
double the value that you provide to -nab.

I usually use -nab 50. You may need a larger value if you use smaller bins or a 
very rectangular 
system which you rotate before running through g_spatial. You may also have to 
use very large 
values of -nab if you do a fitting on a system  where your central group is not 
centered (use the 
suggested trjconv preparation listed in g_spatial -h).
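A sketch of that preparation and the final call (one plausible invocation,
not the exact recipe from g_spatial -h; choose your own groups and file
names):

trjconv -s md.tpr -f traj.xtc -o centered.xtc -center -pbc mol -ur compact
trjconv -s md.tpr -f centered.xtc -o fitted.xtc -fit rot+trans
g_spatial -s md.tpr -f fitted.xtc -n index.ndx -nab 50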

Finally, if you are using incredibly small bin sizes, then you might really be 
running into an 
out-of-memory condition, although that should not result in a segfault as the 
program should 
exit cleanly if it can not allocate the required memory.

Chris.


----- Forwarded Message -----
From: delara aghaie d_aghaie at yahoo.com
To: Discussion list for GROMACS users gmx-users at gromacs.org 
Sent: Tuesday, 19 June 2012, 12:04
Subject: [gmx-users] segmentation fault-g_spatial
 

Dear Gromacs users. 

I have a protein in a box of water. I want to calculate the SDF of water 
molecules around the protein. I have used the procedure described in this page:

http://www.gromacs.org/Documentation/Gromacs_Utilities/g_spatial

after using trjconv twice to put the protein in the center of the box and
remove its rotation and translation, I use the g_spatial command:

g_spatial -s  ~.tpr -f   ~.xtc   (this is the output .xtc after running
trjconv twice).

I get this message:

Reading frame   7 time   14.000
There was an item outside of the allocated memory. Increase the value given
with the -nab option.
Memory was allocated for [-0.374000,-0.301000,-0.217000] to
[7.676000,7.749000,7.833000]
Memory was required for [-0.375000,6.70,6.815001]

1) What exactly does the -nab option do?

2) I have changed the -nab value from 4 to 6, 8, 10, ... 40,
but again I get something like the message above, or the segmentation fault.

What should I do to fix it, and is there a limiting value for the -nab option?

3) Also, please let me know: is it possible to calculate the SDF of water
molecules around a specific residue by creating an index group which contains
that residue?



Thanks
Regards
D.M







Fw: [gmx-users] segmentation fault-g_spatial

2012-06-19 Thread delara aghaie



----- Forwarded Message -----
From: delara aghaie d_agh...@yahoo.com
To: Discussion list for GROMACS users gmx-users@gromacs.org 
Sent: Tuesday, 19 June 2012, 12:04
Subject: [gmx-users] segmentation fault-g_spatial
 

Dear Gromacs users. 

I have a protein in a box of water. I want to calculate the SDF of water 
molecules around the protein. I have used the procedure described in this page:

http://www.gromacs.org/Documentation/Gromacs_Utilities/g_spatial

after using trjconv twice to put the protein in the center of the box and
remove its rotation and translation, I use the g_spatial command:

g_spatial -s  ~.tpr -f   ~.xtc   (this is the output .xtc after running
trjconv twice).

I get this message:

Reading frame   7 time   14.000
There was an item outside of the allocated memory. Increase the value given
with the -nab option.
Memory was allocated for [-0.374000,-0.301000,-0.217000] to
[7.676000,7.749000,7.833000]
Memory was required for [-0.375000,6.70,6.815001]

1) What exactly does the -nab option do?

2) I have changed the -nab value from 4 to 6, 8, 10, ... 40,
but again I get something like the message above, or the segmentation fault.

What should I do to fix it, and is there a limiting value for the -nab option?

3) Also, please let me know: is it possible to calculate the SDF of water
molecules around a specific residue by creating an index group which contains
that residue?



Thanks
Regards
D.M









Re: Fw: [gmx-users] segmentation fault-g_spatial

2012-06-19 Thread Justin A. Lemkul



On 6/19/12 9:34 AM, delara aghaie wrote:


- Forwarded Message -
*From:* delara aghaie d_agh...@yahoo.com
*To:* Discussion list for GROMACS users gmx-users@gromacs.org
*Sent:* Tuesday, 19 June 2012, 12:04
*Subject:* [gmx-users] segmentation fault-g_spatial

Dear Gromacs users.
I have a protein in a box of water. I want to calculate the SDF of water
molecules around the protein. I have used the procedure described in this page:

http://www.gromacs.org/Documentation/Gromacs_Utilities/g_spatial

after using trjconv twice to put the protein in the center of the box and
remove its rotation and translation, I use the g_spatial command:

g_spatial -s  ~.tpr -f   ~.xtc   (this is the output .xtc after running
trjconv twice).

I get this message:

Reading frame   7 time   14.000
There was an item outside of the allocated memory. Increase the value given
with the -nab option.
Memory was allocated for [-0.374000,-0.301000,-0.217000] to
[7.676000,7.749000,7.833000]
Memory was required for [-0.375000,6.70,6.815001]

1) What exactly does the -nab option do?



According to g_spatial -h:

BUGS:
When the allocated memory is not large enough, a segmentation fault may
occur. This is usually detected and the program is halted prior to the fault
while displaying a warning message suggesting the use of the -nab (Number of
Additional Bins) option. However, the program does not detect all such
events. If you encounter a segmentation fault, run it again with an increased
-nab value.


2) I have changed the -nab value from 4 to 6, 8, 10, ... 40,
but again I get something like the message above, or the segmentation fault.

What should I do to fix it, and is there a limiting value for the -nab option?



Maybe try an even larger value.  But since it is a known bug, it may simply be 
that the program needs to be fixed or re-written to work more effectively.



3) Also, please let me know: is it possible to calculate the SDF of water
molecules around a specific residue by creating an index group which contains
that residue?



Yes, it should be.  Try it and see.

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin






[gmx-users] Segmentation fault - pdb2gmx specbond.dat

2012-06-06 Thread Steven Neumann
Dear Gmx Users,

I created a planar surface made of 4 different atom types (400 atoms in
total). Each atom corresponds to a different residue - I added them to the
aminoacids.rtp file. They are placed at different positions with an LJ radius
of 1.7 A, and their centers are 3.6 A away from each other (0.2 A between the
atoms' LJ surfaces). I want to create bonds between all of them, so I added a
specbond.dat to my working directory:

10
POS  SOD  4  POS  SOD  4  0.36  POS  POS
POS  SOD  4  NEG  CLA  4  0.36  POS  NEG
POS  SOD  4  POL  N    4  0.36  POS  POL
POS  SOD  4  NON  C    4  0.36  POS  NON
NEG  CLA  4  NEG  CLA  4  0.36  NEG  NEG
NEG  CLA  4  POL  N    4  0.36  NEG  POL
NEG  CLA  4  NON  C    4  0.36  NEG  NON
POL  N    4  POL  N    4  0.36  POL  POL
POL  N    4  NON  C    4  0.36  POL  NON
NON  C    4  NON  C    4  0.36  NON  NON

The idea is that all of them can form bonds with each other when within a
distance of 3.6 A. When I run pdb2gmx, the matrix is created and the bonds
are linked, but when the last residues are being linked:

.
Linking POL-397 N-397 and NEG-398 CLA-398...
Linking NEG-398 CLA-398 and NON-399 C-399...
Linking NON-399 C-399 and POL-400 N-400...
Segmentation fault (core dumped)

GROMACS version 4.5.5 is installed on the cluster. I also tried 4.5.4, and
the same thing happens.

Could you please advise?

Steven

[gmx-users] segmentation fault during equilibration

2012-05-01 Thread niaz poorgholami
Dear gmx users,
I am simulating a system containing a carbon nanotube (a finite
one) + water + surfactant, and so far I have done these things:
1. I generated the topology of the CNT (OPLS force field) with this command:
g_x2top -f cnt.pdb -o cnt.top -nopairs -nexcl 3 -name CNT
2. I used TopolGen to produce the topology of the surfactant, calculated the
charges of the atoms by the RESP method, and checked the atom types to make
sure that they make sense as well, so I changed some atom types according to
OPLS.
3. I built a topol.top file that contains the topology of CNT + surfactant.
4. I created the initial configuration for the CNT + surfactant by means
of packmol.
5. I used editconf and genbox with these commands:
editconf -f CNTSUR.pdb -o newbox.gro -d 0.7 -bt cubic
genbox -cp newbox.gro -cs spc216.gro -p topol.top -o solv.gro
6. I ran grompp and genion to add the ions:
grompp -f em.mdp -c solv.gro -p topol.top -o ions.tpr
genion -s ions.tpr -p topol.top -o solv_ions.gro -nname BR -nn 28
7. I ran grompp again for energy minimization:
grompp -f em_real.mdp -c solv_ions.gro -p topol.top -o em.tpr
mdrun -deffnm em
the em_real.mdp file contains:
; Parameters describing what to do, when to stop and what to save
integrator  = steep ; Algorithm (steep = steepest descent 
minimization)
emtol   = 1000.0; Stop minimization when the maximum force < 1000.0 kJ/mol
emstep  = 0.01  ; Energy step size
nsteps  = 7 ; Maximum number of (minimization) steps to 
perform
energygrps  = UNK LIG   ; Which energy group(s) to write to disk

; Parameters describing how to find the neighbors of each atom and how
to calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and 
long range forces
ns_type = grid  ; Method to determine neighbor list (simple, 
grid)
rlist   = 0.9   ; Cut-off for making neighbor list (short range 
forces)
coulombtype = PME   ; Treatment of long range electrostatic 
interactions
rcoulomb= 0.9   ; long range electrostatic cut-off
rvdw= 1.4   ; van der Waals cut-off
pbc = xyz   ; Periodic Boundary Conditions (yes/no)

8. Then I made an index file for the CNT and surfactant and used genrestr
to restrain the CNT and surfactant during equilibration.
9. I ran grompp and mdrun with these commands:
grompp -f nvt.mdp -c em.gro -p topol.top -o nvt.tpr -n index.ndx
mdrun -deffnm nvt
After passing 42900 of 50000 steps, a segmentation fault occurred. I checked
the log file, but it did not contain any errors, and I also used VMD to view
the trajectory, but I did not see anything wrong.
I would be pleased if anyone could help me fix this.


Re: [gmx-users] segmentation fault during equilibration

2012-05-01 Thread Justin A. Lemkul



On 5/1/12 7:19 AM, niaz poorgholami wrote:

Dear gmx users,
I am simulating a system containing a carbon nanotube (a finite
one) + water + surfactant, and so far I have done these things:
1. I generated the topology of the CNT (OPLS force field) with this command:
g_x2top -f cnt.pdb -o cnt.top -nopairs -nexcl 3 -name CNT
2. I used TopolGen to produce the topology of the surfactant, calculated the
charges of the atoms by the RESP method, and checked the atom types to make
sure that they make sense as well, so I changed some atom types according to
OPLS.
3. I built a topol.top file that contains the topology of CNT + surfactant.
4. I created the initial configuration for the CNT + surfactant by means
of packmol.
5. I used editconf and genbox with these commands:
editconf -f CNTSUR.pdb -o newbox.gro -d 0.7 -bt cubic


Unrelated to the crash, but if your longest cutoff is 1.4 nm, setting a 
solute-box distance of 0.7 nm will lead to trouble if you ever use NPT.  With 
just a small fluctuation in box dimension, you can easily violate the minimum 
image convention.
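For instance, something along these lines would be safer (a sketch; pick -d
so the final box comfortably exceeds twice your longest cutoff):

editconf -f CNTSUR.pdb -o newbox.gro -d 1.4 -bt cubic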



genbox -cp newbox.gro -cs spc216.gro -p topol.top -o solv.gro
6. I ran grompp and genion to add the ions:
grompp -f em.mdp -c solv.gro -p topol.top -o ions.tpr
genion -s ions.tpr -p topol.top -o solv_ions.gro -nname BR -nn 28
7. I ran grompp again for energy minimization:
grompp -f em_real.mdp -c solv_ions.gro -p topol.top -o em.tpr
mdrun -deffnm em
the em_real.mdp file contains:
; Parameters describing what to do, when to stop and what to save
integrator  = steep ; Algorithm (steep = steepest descent 
minimization)
emtol   = 1000.0; Stop minimization when the maximum force < 1000.0 kJ/mol
emstep  = 0.01  ; Energy step size
nsteps  = 7 ; Maximum number of (minimization) steps to 
perform
energygrps  = UNK LIG   ; Which energy group(s) to write to disk

; Parameters describing how to find the neighbors of each atom and how
to calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and 
long range forces
ns_type = grid  ; Method to determine neighbor list (simple, 
grid)
rlist   = 0.9   ; Cut-off for making neighbor list (short range 
forces)
coulombtype = PME   ; Treatment of long range electrostatic 
interactions
rcoulomb= 0.9   ; long range electrostatic cut-off
rvdw= 1.4   ; short-range van der Waals cutoff (in nm)
pbc = xyz   ; Periodic Boundary Conditions (yes/no)



What was the outcome of EM?  What were your values for Fmax and Epot?
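
(Both values are printed at the end of the minimization log; something like

tail -n 20 em.log

should show the "Steepest Descents converged..." summary with Fmax and the
potential energy.)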


8.then I made an index file for CNT and surfactant and used genrestr
to restrain the CNT and surfactant during equilibration.
9.I run grompp and mdrun with these commands: grompp -f nvt.mdp -c
em.gro -p topol.top -o nvt.tpr -n index.ndx
 mdrun -deffnm nvt
and after passing 42900 of 50000 steps, a segmentation fault occurred. I
checked the log file but it did not contain any errors, and I also used
VMD to see the trajectory but I did not see anything wrong.
I would be pleased if anyone could help me fix this.


We need to see your complete .mdp file for the NVT run.  As of now, there is no 
indication of what is wrong.


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault during equilibration

2012-05-01 Thread niaz poorgholami
Thank you, Sir, for your reply. Below I provide my .mdp file:
title   = UNK-ligand complex NVT equilibration
define = -DPOSRES -DPOSRES_LIG  ; position restrain the UNK and LIG
; Run parameters
integrator  = md; leap-frog integrator
nsteps  = 50000; 2 * 50000 = 100 ps
dt  = 0.002 ; 2 fs
; Output control
nstxout = 1   ; save coordinates every 0.2 ps
nstvout = 1   ; save velocities every 0.2 ps
nstenergy   = 100   ; save energies every 0.2 ps
nstlog  = 100   ; update log file every 0.2 ps
energygrps  = UNK LIG
; Bond parameters
continuation= no; first dynamics run
constraint_algorithm = lincs; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds)
constrained
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid  ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist   = 0.9   ; short-range neighborlist cutoff (in nm)
rcoulomb= 0.9   ; short-range electrostatic cutoff (in nm)
rvdw  = 1.4
; Electrostatics
coulombtype = PME   ; Particle Mesh Ewald for long-range electrostatics
pme_order   = 4 ; cubic interpolation
fourierspacing  = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl= V-rescale ; modified Berendsen thermostat
tc-grps = UNK_LIG Water_and_ions; two coupling groups -
more accurate
tau_t   = 0.1   0.1 ; time constant, in ps
ref_t   = 300   300
 ; Pressure coupling is off
pcoupl  = no; no pressure coupling in NVT
; Periodic boundary conditions
pbc= xyz   ; 3-D PBC
; Dispersion correction
DispCorr= EnerPres  ; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes   ; assign velocities from Maxwell distribution
gen_temp= 300   ; temperature for Maxwell distribution
gen_seed= -1; generate a random seed


Re: [gmx-users] segmentation fault during equilibration

2012-05-01 Thread Justin A. Lemkul



On 5/1/12 1:13 PM, niaz poorgholami wrote:

Thank you, Sir, for your reply. Below I provide my .mdp file:
title   = UNK-ligand complex NVT equilibration
define = -DPOSRES -DPOSRES_LIG  ; position restrain the UNK and LIG
; Run parameters
integrator  = md; leap-frog integrator
nsteps  = 50000; 2 * 50000 = 100 ps
dt  = 0.002 ; 2 fs
; Output control
nstxout = 1   ; save coordinates every 0.2 ps
nstvout = 1   ; save velocities every 0.2 ps


Your .trr files will be huge with these settings.  I would only recommend 
getting such output every step if your system were collapsing within just a few 
steps, but not in this case.  Though, if you have such output, watching the 
trajectory (provided your workstation doesn't run out of memory) should point to 
the problem.
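
For a 100 ps run at dt = 0.002, output settings along these lines (the
values here are only illustrative, not a recommendation from the thread)
would keep the .trr manageable:

nstxout = 500   ; save coordinates every 1.0 ps
nstvout = 500   ; save velocities every 1.0 ps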



nstenergy   = 100   ; save energies every 0.2 ps
nstlog  = 100   ; update log file every 0.2 ps
energygrps  = UNK LIG
; Bond parameters
continuation= no; first dynamics run
constraint_algorithm = lincs; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds)
constrained
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid  ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist   = 0.9   ; short-range neighborlist cutoff (in nm)
rcoulomb= 0.9   ; short-range electrostatic cutoff (in nm)
rvdw  = 1.4
; Electrostatics
coulombtype = PME   ; Particle Mesh Ewald for long-range electrostatics
pme_order   = 4 ; cubic interpolation
fourierspacing  = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl= V-rescale ; modified Berendsen thermostat
tc-grps = UNK_LIG Water_and_ions; two coupling groups -
more accurate
tau_t   = 0.1   0.1 ; time constant, in ps
ref_t   = 300   300
  ; Pressure coupling is off
pcoupl  = no; no pressure coupling in NVT
; Periodic boundary conditions
pbc= xyz   ; 3-D PBC
; Dispersion correction
DispCorr= EnerPres  ; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes   ; assign velocities from Maxwell distribution
gen_temp= 300   ; temperature for Maxwell distribution
gen_seed= -1; generate a random seed


I see nothing particularly glaring with the .mdp file that would be causing a 
problem.  You also did not answer my question about EM, so it is still a 
possibility that the system is insufficiently minimized, but that's just a guess 
since I don't know.


Otherwise: 
http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System
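
As a concrete example of the diagnosis described there, the frame just
before the reported crash (42900 steps * 2 fs = 85.8 ps) could be pulled
out for inspection with something like:

trjconv -s nvt.tpr -f nvt.trr -dump 85 -o precrash.pdb

assuming the .trr was written frequently enough around that time.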


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault during equilibration

2012-05-01 Thread niaz poorgholami
Dear Justin
the results of EM was:
Steepest Descents converged to Fmax < 1000 in 77 steps
Potential Energy  = -3.6802272e+05
Maximum force =  8.9652832e+02 on atom 1556
Norm of force =  1.1512711e+02
 Thank you for your concern.


Re: [gmx-users] segmentation fault during equilibration

2012-05-01 Thread Justin A. Lemkul



On 5/1/12 2:50 PM, niaz poorgholami wrote:

Dear Justin
the results of EM was:
Steepest Descents converged to Fmax < 1000 in 77 steps
Potential Energy  = -3.6802272e+05
Maximum force =  8.9652832e+02 on atom 1556
Norm of force =  1.1512711e+02
  Thank you for your concern.


All of that seems fine.  You'll have to investigate using the tips I linked 
before.  It is also possible that your topology is unstable.  The procedure you 
described before seems reasonable, but thorough parameterization can be very 
tricky and may require refinement.


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] segmentation fault

2012-04-12 Thread Justin A. Lemkul



priya thiyagarajan wrote:

Hello sir,

Thanks for your kind reply.

In another folder I resubmitted my mdrun from the starting time.

I reduced my time step;
I kept my time step = 0.001.



this is my md.mdp file

title= Gromacs43a1 lipopeptide MD
; Run parameters
integrator= md; leap-frog integrator
nsteps= 5000000; 1 * 5000000 = 5000 ps, 5 ns
dt= 0.001; 1 fs
; Output control
nstxout= 1000; save coordinates every 2 ps
nstvout= 1000; save velocities every 2 ps
nstxtcout= 1000; xtc compressed trajectory output every 2 ps
nstenergy= 1000; save energies every 2 ps
nstlog= 1000; update log file every 2 ps
; Bond parameters
continuation= yes; Restarting after NPT
constraint_algorithm = lincs; holonomic constraints
constraints= all-bonds; all bonds (even heavy atom-H bonds) 
constrained

lincs_iter= 1; accuracy of LINCS
lincs_order= 4; also related to accuracy
; Neighborsearching
ns_type= grid; search neighboring grid cells
nstlist= 5; 10 fs
rlist= 1.4; short-range neighborlist cutoff (in nm)
rcoulomb= 1.4; short-range electrostatic cutoff (in nm)
rvdw= 1.4; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype= PME; Particle Mesh Ewald for long-range 
electrostatics

pme_order= 4; cubic interpolation
fourierspacing= 0.16; grid spacing for FFT
; Temperature coupling is on
tcoupl= V-rescale; modified Berendsen thermostat
tc-grps= DRG   SOL; two coupling groups - more accurate
tau_t= 0.10.1; time constant, in ps
ref_t= 300 300; reference temperature, one for each 
group, in K

; Pressure coupling is on
pcoupl= Parrinello-Rahman; Pressure coupling on in NPT
pcoupltype= isotropic; uniform scaling of box vectors
tau_p= 2.0; time constant, in ps
ref_p= 1.0; reference pressure, in bar
compressibility = 4.5e-5; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc= xyz; 3-D PBC
; Dispersion correction
DispCorr= EnerPres; account for cut-off vdW scheme
; Velocity generation
gen_vel= no; Velocity generation is off


I resubmitted my mdrun and, when the run ended, I analysed my output files.

It stopped at 1.286 ns out of 5 ns.

The log file didn't give any information about this error.

The error file shows a segmentation fault.


How do I solve this problem?



Mark already provided this link, but I'll post it again anyway:

http://www.gromacs.org/Documentation/Terminology/Blowing_Up

On that page you will find not only several possibilities as to the source of 
the problem, but also the means to diagnose what might be going wrong.  Please 
pay careful attention to this page, as it summarizes this information very well.
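
One quick check in the same spirit is to see how much of the trajectory is
intact, e.g. (assuming the trajectory file is named md.trr):

gmxcheck -f md.trr

which reports the frames it can read and should complain at the point of
corruption, if any.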



Is it only because of some problem in my system?



Yes, quite likely.  The questions that come to mind for me - what is DRG?  What 
is its topology?  Did you do proper energy minimization and equilibration?


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Segmentation Fault using g_cluster

2012-03-28 Thread Mark Abraham

On 28/03/2012 2:14 PM, Davide Mercadante wrote:
Thank you for the prompt reply! Indeed, I am using GROMACS version 
4.5.5 compiled in double precision and I am running the analysis on a 
MacBook Pro.


I tried to open an issue at http://redmine.gromacs.org but it asks me 
for a login name and password, which I don't think I have, as I never 
subscribed as a developer. I may be wrong though... do I need to register?


Yes, just sign up. We need to be able to contact you to let you know the 
solution or get more information, so anonymous bug submission is not 
very useful.


I suppose that if this is not an explainable issue at the moment there 
is no solution to it?


The problem with plain segfaulting is that there's no way to tell what 
caused the problem. GROMACS tries hard not to do this, but clearly it's 
not perfect. There may be a code bug. There may be a way for you to use 
the code better. We just don't know yet. There's certainly room to 
improve the code.


Mark



Thank you again for the reply. It has been much appreciated.

Davide

From: Mark Abraham mark.abra...@anu.edu.au
Reply-To: Discussion list for GROMACS users gmx-users@gromacs.org

Date: Wed, 28 Mar 2012 13:29:40 +1100
To: Discussion list for GROMACS users gmx-users@gromacs.org

Subject: Re: [gmx-users] Segmentation Fault using g_cluster

On 28/03/2012 1:00 PM, Davide Mercadante wrote:

Dear Gromacs Users,

I am trying to run g_cluster to find an average structure for my 
system and after giving the following command line:


g_cluster_d -f allnj10_XM10.xtc -s EB_XM.gro -cl pdb_ligplot_XM.pdb 
-n -g


g_cluster started without problems and continued calculating the 
matrix etc...until I got this:


Last frame   5000 time 5.004
Allocated 645448320 bytes for frames
Read 5001 frames from trajectory allnj10_XM10.xtc
Computing 5001x5001 RMS deviation matrix
# RMSD calculations left: 0

The RMSD ranges from 0.0816802 to 0.301369 nm
Average RMSD is 0.208468
Number of structures for matrix 5001
Energy of the matrix is 364.532 nm
WARNING: rmsd minimum 0 is below lowest rmsd value 0.0816802
Linking structures **
Sorting and renumbering clusters

Found 1425 clusters

Writing middle structure for each cluster to pdb_ligplot_XM.pdb
Segmentation fault: 11

Can you please help me to understand where the problem comes from and 
how I can solve it?

Any help is greatly appreciated.


I don't think this should happen. You haven't stated your GROMACS 
version. If you can reproduce this with 4.5.5., please open an issue 
here http://redmine.gromacs.org/ and upload your files and 
instructions on how to reproduce the problem.


Mark

[gmx-users] Segmentation Fault using g_cluster

2012-03-27 Thread Davide Mercadante
Dear Gromacs Users,

I am trying to run g_cluster to find an average structure for my system and
after giving the following command line:

g_cluster_d -f allnj10_XM10.xtc -s EB_XM.gro -cl pdb_ligplot_XM.pdb -n -g

g_cluster started without problems and continued calculating the matrix
etc. until I got this:

Last frame   5000 time 5.004
Allocated 645448320 bytes for frames
Read 5001 frames from trajectory allnj10_XM10.xtc
Computing 5001x5001 RMS deviation matrix
# RMSD calculations left: 0

The RMSD ranges from 0.0816802 to 0.301369 nm
Average RMSD is 0.208468
Number of structures for matrix 5001
Energy of the matrix is 364.532 nm
WARNING: rmsd minimum 0 is below lowest rmsd value 0.0816802
Linking structures **
Sorting and renumbering clusters

Found 1425 clusters

Writing middle structure for each cluster to pdb_ligplot_XM.pdb
Segmentation fault: 11

Can you please help me to understand where the problem comes from and how I
can solve it? 
Any help is greatly appreciated.

Thank you.
Davide



Re: [gmx-users] Segmentation Fault using g_cluster

2012-03-27 Thread Mark Abraham

On 28/03/2012 1:00 PM, Davide Mercadante wrote:

Dear Gromacs Users,

I am trying to run g_cluster to find an average structure for my 
system and after giving the following command line:


g_cluster_d -f allnj10_XM10.xtc -s EB_XM.gro -cl pdb_ligplot_XM.pdb -n -g

g_cluster started without problems and continued calculating the 
matrix etc...until I got this:


Last frame   5000 time 5.004
Allocated 645448320 bytes for frames
Read 5001 frames from trajectory allnj10_XM10.xtc
Computing 5001x5001 RMS deviation matrix
# RMSD calculations left: 0

The RMSD ranges from 0.0816802 to 0.301369 nm
Average RMSD is 0.208468
Number of structures for matrix 5001
Energy of the matrix is 364.532 nm
WARNING: rmsd minimum 0 is below lowest rmsd value 0.0816802
Linking structures **
Sorting and renumbering clusters

Found 1425 clusters

Writing middle structure for each cluster to pdb_ligplot_XM.pdb
Segmentation fault: 11

Can you please help me to understand where the problem comes from and 
how I can solve it?

Any help is greatly appreciated.


I don't think this should happen. You haven't stated your GROMACS 
version. If you can reproduce this with 4.5.5., please open an issue 
here http://redmine.gromacs.org/ and upload your files and instructions 
on how to reproduce the problem.
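
When preparing such a reproducer, it can also help to check whether a
trimmed trajectory still triggers the crash, e.g. keeping only the first
1000 ps (an arbitrary choice here):

trjconv -f allnj10_XM10.xtc -s EB_XM.gro -e 1000 -o small.xtc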


Mark

Re: [gmx-users] Segmentation Fault using g_cluster

2012-03-27 Thread Davide Mercadante
Thank you for the prompt reply! Indeed, I am using GROMACS version 4.5.5
compiled in double precision and I am running the analysis on a MacBook Pro.

I tried to open an issue at http://redmine.gromacs.org but it asks me for a
login name and password, which I don't think I have, as I never subscribed as
a developer. I may be wrong though... do I need to register?
I suppose that if this is not an explainable issue at the moment there is no
solution to it?

Thank you again for the reply. It has been much appreciated.

Davide

From:  Mark Abraham mark.abra...@anu.edu.au
Reply-To:  Discussion list for GROMACS users gmx-users@gromacs.org
Date:  Wed, 28 Mar 2012 13:29:40 +1100
To:  Discussion list for GROMACS users gmx-users@gromacs.org
Subject:  Re: [gmx-users] Segmentation Fault using g_cluster


 On 28/03/2012 1:00 PM, Davide Mercadante wrote:
  
 Dear Gromacs Users,
  
 
  
  
 I am trying to run g_cluster to find an average structure for my system and
 after giving the following command line:
  
 
  
  
 g_cluster_d -f allnj10_XM10.xtc -s EB_XM.gro -cl pdb_ligplot_XM.pdb -n ­g
  
 
  
  
 g_cluster started without problems and continued calculating the matrix
 etcŠuntil I got this:
  
 
  
  
  
 Last frame   5000 time 5.004
  
 Allocated 645448320 bytes for frames
  
 Read 5001 frames from trajectory allnj10_XM10.xtc
  
 Computing 5001x5001 RMS deviation matrix
  
 # RMSD calculations left: 0
  
 
  
  
 The RMSD ranges from 0.0816802 to 0.301369 nm
  
 Average RMSD is 0.208468
  
 Number of structures for matrix 5001
  
 Energy of the matrix is 364.532 nm
  
 WARNING: rmsd minimum 0 is below lowest rmsd value 0.0816802
  
 Linking structures **
  
 Sorting and renumbering clusters
  
 
  
  
 Found 1425 clusters
  
 
  
  
 Writing middle structure for each cluster to pdb_ligplot_XM.pdb
  
 Segmentation fault: 11
  
  
 
  
  
 Can you please help me to understand where the problem comes from and how I
 can solve it? 
  
 Any help is greatly appreciated.
  
  
 
 I don't think this should happen. You haven't stated your GROMACS version.
If you can reproduce this with 4.5.5., please open an issue here
http://redmine.gromacs.org/ and upload your files and instructions on how to
reproduce the problem.
 
 Mark
 


[gmx-users] Segmentation fault

2012-03-11 Thread saly jackson
Hi all

Would you please let me know how I can remove the following error when I
want to run 'mdrun -v -deffnm H'?


Back Off! I just backed up H.log to ./#H.log.1#
Getting Loaded...
Reading file H.tpr, VERSION 4.5.4 (single precision)
Starting 24 threads
Segmentation fault

Thanks

Regards

Saly

Re: [gmx-users] Segmentation fault

2012-03-11 Thread Justin A. Lemkul



saly jackson wrote:

Hi all

Would you please let me know how I can remove the following error when I 
want to run 'mdrun -v -deffnm H'?



Back Off! I just backed up H.log to ./#H.log.1#
Getting Loaded...
Reading file H.tpr, VERSION 4.5.4 (single precision)
Starting 24 threads
Segmentation fault



Your simulation crashed.  The fact that it did so immediately suggests 
inadequate energy minimization and/or equilibration.


http://www.gromacs.org/Documentation/Terminology/Blowing_Up
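
If minimization was indeed skipped or cut short, a minimal pass looks
something like this (file names assumed):

grompp -f em.mdp -c system.gro -p topol.top -o em.tpr
mdrun -v -deffnm em

with em.mdp using integrator = steep and emtol = 1000.0, as in the other
posts in this digest.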

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault

2012-03-08 Thread rama david
Hi,
  Thank you for the help.
I solved my problem with the LINCS error,
but now I have another problem.
After the mdrun command, the
GROMACS output is:

 Making 1D domain decomposition 4 x 1 x 1
starting mdrun 'Martini system from nap.pdb'
5000 steps, 100.0 ps.
step 0
Segmentation fault

Please give your valuable suggestions.
Thank you in advance.

Re: [gmx-users] Segmentation fault

2012-03-08 Thread Mark Abraham

On 9/03/2012 6:24 PM, rama david wrote:

Hi,
  Thank you for the help.
I solved my problem with the LINCS error,
but now I have another problem.
After the mdrun command, the
GROMACS output is:

 Making 1D domain decomposition 4 x 1 x 1
starting mdrun 'Martini system from nap.pdb'
5000 steps, 100.0 ps.
step 0
Segmentation fault


No idea. Look at your stdout and/or log files and then consult the 
errors page of the GROMACS webpage.
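
For example, something like

grep -n -i -E "warning|lincs|error" md.log | tail

(with md.log replaced by whatever log file mdrun wrote) should surface any
warnings recorded shortly before the crash.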


Mark


[gmx-users] segmentation fault err

2012-03-05 Thread shilpa yadahalli
Dear gmx-users,

I'm facing a segmentation fault error during mdrun.
When I checked my md.log file, after the 440th step the kinetic energy 
increases by tenfold and hence so does the temperature (from 6.80141e+01 K to 
2.21829e+02 K). All other values, potential energy etc., are not changing much, so 
I guess this is the cause of the fatal error. But I can't figure out the reason 
behind the sudden increase in temperature. 


I'm using the following options in my mdp file:

;Temperature coupling
tc-grps = system
tau_t = 1.0 ; Temperature coupling time constant. 
ref_t = 50.0 ; In reduced units
;Pressure coupling
Pcoupl = no
;Velocity generation
gen_vel = yes
gen_temp = 50.0


Can anybody suggest if I'm missing something (to take care of)? 


Regards,
Shilpa

Re: [gmx-users] segmentation fault err

2012-03-05 Thread Mark Abraham

On 6/03/2012 2:58 AM, shilpa yadahalli wrote:

Dear gmx-users,

I'm facing a segmentation fault error during mdrun.
When I checked my md.log file, after the 440th step the kinetic energy 
increases by tenfold and hence so does the temperature (from 
6.80141e+01 K to 2.21829e+02 K). All other values, potential energy etc., 
are not changing much, so I guess this is the cause of the fatal error. 
But I can't figure out the reason behind the sudden increase in temperature.


I'm using following options in my mdp file:
;Temperature coupling
tc-grps = system
tau_t = 1.0 ; Temperature coupling time constant.
ref_t = 50.0 ; In reduced units
;Pressure coupling
Pcoupl = no
;Velocity generation
gen_vel = yes
gen_temp = 50.0

Can anybody suggest if I'm missing something (to take care of)?



You will need to look at the end of the .log and/or stdout file to know, 
but probably this is what is happening 
http://www.gromacs.org/Documentation/Terminology/Blowing_Up
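
To see exactly when the kinetic energy takes off, the temperature can be
extracted over time with, e.g.,

g_energy -f ener.edr -o temperature.xvg

selecting the Temperature term at the prompt (ener.edr is the default
energy file name; adjust it if -deffnm was used).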


Mark

[gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
Dear Gmx Users,

I am trying to run an NVT simulation (equilibration) of the protein in
implicit solvent. My mdp:


integrator = md ; leap-frog integrator

nsteps = 1000000 ; 0.0005 * 1000000 = 0.5 ns

dt = 0.0005 ; 0.5 fs

; Output control

nstxout = 1

nstxtcout = 1 ; xtc compressed trajectory output every 2 ps

nstenergy = 1 ; save energies every 2 ps

nstlog = 1 ; update log file every 2 ps

; Bond parameters

continuation = no

constraints = none

; Neighborsearching

ns_type = simple ; search neighboring grid cells

nstlist = 0 ; 10 fs

rlist = 0 ; short-range neighborlist cutoff (in nm)

; Infinite box with no cutoffs

pbc = no

rcoulomb = 0 ; short-range electrostatic cutoff (in nm)

coulombtype = cut-off

vdwtype = cut-off

rvdw = 0 ; short-range van der Waals cutoff (in nm)

epsilon_rf = 0

comm_mode = angular ; remove angular and com motions

; implicit solvent

implicit_solvent = GBSA

gb_algorithm = OBC

gb_epsilon_solvent = 80.0

sa_surface_tension = 2.25936

rgbradii = 0

sa_algorithm = Ace-approximation

nstgbradii = 1

; Temperature coupling is on

Tcoupl = v-rescale

tau_t = 0.1

tc_grps = system

ref_t = 298

; Velocity generation

gen_vel = yes ; Velocity generation is on

gen_temp = 298.0

gen_seed = -1

Then after grompp I am trying to run the simulation on the cluster:

mdrun -pd -deffnm nvt500ps

My log file:

Back Off! I just backed up nvt500ps.log to ./#nvt500ps.log.1#

Reading file nvt500ps.tpr, VERSION 4.5.4 (single precision)

Starting 8 threads

Back Off! I just backed up nvt500ps.trr to ./#nvt500ps.trr.1#

Back Off! I just backed up nvt500ps.xtc to ./#nvt500ps.xtc.1#

Back Off! I just backed up nvt500ps.edr to ./#nvt500ps.edr.1#

starting mdrun 'Protein'

1000000 steps, 500.0 ps.

Segmentation fault



Do you have any clue what is happening?

thank you

Steven

Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Justin A. Lemkul



Steven Neumann wrote:

Dear Gmx Users,
 
I am trying to run nvt simulation (equilibration) of the protein in 
implicit solvent. My mdp:
 


integrator = md ; leap-frog integrator

nsteps = 1000000 ; 0.0005 * 1000000 = 0.5 ns

dt = 0.0005 ; 0.5 fs

; Output control

nstxout = 1

nstxtcout = 1 ; xtc compressed trajectory output every 2 ps

nstenergy = 1 ; save energies every 2 ps

nstlog = 1 ; update log file every 2 ps

; Bond parameters

continuation = no

constraints = none

; Neighborsearching

ns_type = simple ; search neighboring grid cells

nstlist = 0 ; 10 fs

rlist = 0 ; short-range neighborlist cutoff (in nm)

; Infinite box with no cutoffs

pbc = no

rcoulomb = 0 ; short-range electrostatic cutoff (in nm)

coulombtype = cut-off

vdwtype = cut-off

rvdw = 0 ; short-range van der Waals cutoff (in nm)

epsilon_rf = 0

comm_mode = angular ; remove angular and com motions

; implicit solvent

implicit_solvent = GBSA

gb_algorithm = OBC

gb_epsilon_solvent = 80.0

sa_surface_tension = 2.25936

rgbradii = 0

sa_algorithm = Ace-approximation

nstgbradii = 1

; Temperature coupling is on

Tcoupl = v-rescale

tau_t = 0.1

tc_grps = system

ref_t = 298

; Velocity generation

gen_vel = yes ; Velocity generation is on

gen_temp = 298.0

gen_seed = -1

Then after grompp I am trying to run the simulation on the cluster:

mdrun -pd -deffnm nvt500ps

My log file:

Back Off! I just backed up nvt500ps.log to ./#nvt500ps.log.1#

Reading file nvt500ps.tpr, VERSION 4.5.4 (single precision)

Starting 8 threads

Back Off! I just backed up nvt500ps.trr to ./#nvt500ps.trr.1#

Back Off! I just backed up nvt500ps.xtc to ./#nvt500ps.xtc.1#

Back Off! I just backed up nvt500ps.edr to ./#nvt500ps.edr.1#

starting mdrun 'Protein'

1000000 steps, 500.0 ps.

Segmentation fault

 


Do you have any clue what is happening?



Try running in serial or with a maximum of 2 threads.  Your problem could be 
related to http://redmine.gromacs.org/issues/777.  You will need to upgrade to 
4.5.5 (serial should work on 4.5.4).
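
In 4.5.x the thread count can be capped directly on the command line, e.g.

mdrun -nt 1 -pd -deffnm nvt500ps

to test in serial, or -nt 2 for the two-thread case.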


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
On Thu, Mar 1, 2012 at 2:55 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 Steven Neumann wrote:

 Dear Gmx Users,
  I am trying to run nvt simulation (equilibration) of the protein in
 implicit solvent. My mdp:

 integrator = md ; leap-frog integrator

 nsteps = 1000000 ; 0.0005 * 1000000 = 0.5 ns

 dt = 0.0005 ; 0.5 fs

 ; Output control

 nstxout = 1

 nstxtcout = 1 ; xtc compressed trajectory output every 2 ps

 nstenergy = 1 ; save energies every 2 ps

 nstlog = 1 ; update log file every 2 ps

 ; Bond parameters

 continuation = no

 constraints = none

 ; Neighborsearching

 ns_type = simple ; search neighboring grid cells

 nstlist = 0 ; 10 fs

 rlist = 0 ; short-range neighborlist cutoff (in nm)

 ; Infinite box with no cutoffs

 pbc = no

 rcoulomb = 0 ; short-range electrostatic cutoff (in nm)

 coulombtype = cut-off

 vdwtype = cut-off

 rvdw = 0 ; short-range van der Waals cutoff (in nm)

 epsilon_rf = 0

 comm_mode = angular ; remove angular and com motions

 ; implicit solvent

 implicit_solvent = GBSA

 gb_algorithm = OBC

 gb_epsilon_solvent = 80.0

 sa_surface_tension = 2.25936

 rgbradii = 0

 sa_algorithm = Ace-approximation

 nstgbradii = 1

 ; Temperature coupling is on

 Tcoupl = v-rescale

 tau_t = 0.1

 tc_grps = system

 ref_t = 298

 ; Velocity generation

 gen_vel = yes ; Velocity generation is on

 gen_temp = 298.0

 gen_seed = -1

 Then after grompp I am trying to run the simulation on the cluster:

 mdrun -pd -deffnm nvt500ps

 My log file:

 Back Off! I just backed up nvt500ps.log to ./#nvt500ps.log.1#

 Reading file nvt500ps.tpr, VERSION 4.5.4 (single precision)

 Starting 8 threads

 Back Off! I just backed up nvt500ps.trr to ./#nvt500ps.trr.1#

 Back Off! I just backed up nvt500ps.xtc to ./#nvt500ps.xtc.1#

 Back Off! I just backed up nvt500ps.edr to ./#nvt500ps.edr.1#

 starting mdrun 'Protein'

 1000000 steps, 500.0 ps.

 Segmentation fault


 Do you have any clue what is happening?


 Try running in serial or with a maximum of 2 threads.  Your problem could
 be related to http://redmine.gromacs.org/issues/777.
  You will need to upgrade to 4.5.5 (serial should work on 4.5.4).

 -Justin


Thank you. What do you mean by running in serial? Well... with 2 threads it
does not make sense to use implicit solvent. Will 4.5.5 resolve this
problem?

Steven





Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
On Thu, Mar 1, 2012 at 3:32 PM, Steven Neumann s.neuman...@gmail.com wrote:



  On Thu, Mar 1, 2012 at 2:55 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 Steven Neumann wrote:

 [.mdp options and mdrun log quoted above snipped]


 Do you have any clue what is happening?


 Try running in serial or with a maximum of 2 threads.  Your problem could
 be related to http://redmine.gromacs.org/issues/777.
  You will need to upgrade to 4.5.5 (serial should work on 4.5.4).

 -Justin


 Thank you. What do you mean by running in serial? Well... with 2 threads
 it does not make sense to use implicit solvent. Will 4.5.5 resolve this
 problem?



Indeed, it works in serial. Will version 4.5.5 resolve it if I run it on e.g.
12 nodes?

Steven


  Steven




Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
On Thu, Mar 1, 2012 at 3:58 PM, Steven Neumann s.neuman...@gmail.com wrote:



 On Thu, Mar 1, 2012 at 3:32 PM, Steven Neumann s.neuman...@gmail.com wrote:



 On Thu, Mar 1, 2012 at 2:55 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 Steven Neumann wrote:

 [.mdp options and mdrun log quoted above snipped]


 Do you have any clue what is happening?


 Try running in serial or with a maximum of 2 threads.  Your problem
 could be related to http://redmine.gromacs.org/issues/777.
  You will need to upgrade to 4.5.5 (serial should work on 4.5.4).

 -Justin


 Thank you. What do you mean by running in serial? Well... with 2 threads
 it does not make sense to use implicit solvent. Will 4.5.5 resolve this
 problem?



 Indeed, it works in serial. Will version 4.5.5 resolve it if I run it on
 e.g. 12 nodes?



Sorry, the same problem :


Initial temperature: 298.398 K

Started mdrun on node 0 Thu Mar 1 15:56:04 2012

Step Time Lambda

0 0.0 0.0

Energies (kJ/mol)

Bond U-B Proper Dih. Improper Dih. CMAP Dih.

2.27391e+02 4.27461e+02 1.27733e+03 2.73432e+01 -6.95645e+02

GB Polarization Nonpolar Sol. LJ-14 Coulomb-14 LJ (SR)

inf 3.59121e+02 6.85627e+02 3.68572e+04 -7.57140e+02

Coulomb (SR) Potential Kinetic En. Total Energy Conserved En.

-3.30643e+04 inf nan nan nan

Temperature Pressure (bar)

nan 0.0e+00



Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Justin A. Lemkul



Steven Neumann wrote:



 [earlier messages in this thread quoted above snipped]

 Sorry, the same problem :
 


I think, as stated in the redmine issue cited before, the limit is 2 
threads/processors.


-Justin




Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
On Thu, Mar 1, 2012 at 5:43 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 Steven Neumann wrote:

 [earlier messages in this thread quoted above snipped]

 Sorry, the same problem :


 I think, as stated in the redmine issue cited before, the limit is 2
 threads/processors.

 -Justin



I tried on one thread and on two threads and the error is still the same.
Without particle decomposition the problem remains. Any suggestions?

Steven



Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Justin A. Lemkul



Steven Neumann wrote:



 [earlier messages in this thread quoted above snipped]

I think, as stated 
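In GROMACS 4.5.x, running in serial simply means restricting mdrun to a
single thread; a minimal sketch, reusing the poster's file stem:

  mdrun -nt 1 -deffnm nvt500ps

The -nt flag caps the number of thread-MPI threads, sidestepping the
threaded implicit-solvent code path implicated in the redmine issue above.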

Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
On Thu, Mar 1, 2012 at 10:07 PM, Justin A. Lemkul jalem...@vt.edu wrote:

 ..
 snip
 ..

Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Mark Abraham

On 2/03/2012 9:22 AM, Steven Neumann wrote:

 ..
 snip
 ..


Re: [gmx-users] Segmentation fault - Implicit solvent

2012-03-01 Thread Steven Neumann
On Thu, Mar 1, 2012 at 10:26 PM, Mark Abraham mark.abra...@anu.edu.au wrote:

 ..
 snip
 ..

[gmx-users] Segmentation fault

2012-01-01 Thread Saba Ferdous
Dear Gromacs Experts,

I am having a problem executing a command in Gromacs,

that is, when I use dssp for secondary structure analysis.

It gives this error:

Reading file md_0_10.tpr, VERSION 4.5.5 (single precision)
Reading file md_0_10.tpr, VERSION 4.5.5 (single precision)
Segmentation fault (core dumped)
saba@linuxserver:~/complex/MD

I used the commands: ulimit -s unlimited
 ulimit -c unlimited
but in vain; the problem still persists.
How can I fix it?

I urgently need to study the secondary structure during my simulations.

Thanks...
-- 
Saba Ferdous
Research Scholar (M. Phil)
National Center for Bioinformatics
Quaid-e-Azam University, Islamabad
Pakistan

Re: [gmx-users] Segmentation fault

2012-01-01 Thread Mark Abraham

On 2/01/2012 12:35 AM, Saba Ferdous wrote:


..
snip
..



Assuming other GROMACS tools work, you've done something wrong with the 
DSSP installation, but it's impossible for us to say what. Note that the 
version of DSSP released in the last year or two is unsuitable - get the 
old one.


Mark
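GROMACS locates the dssp executable through the DSSP environment variable,
so a working setup with the old binary looks roughly like this (the paths
and the dsspcmbi file name are illustrative):

  # put the classic DSSP binary on disk and make it executable
  cp dsspcmbi /usr/local/bin/dssp
  chmod +x /usr/local/bin/dssp
  # point GROMACS at it, then run the analysis
  export DSSP=/usr/local/bin/dssp
  do_dssp -s md_0_10.tpr -f md_0_10.xtc -o ss.xpm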

[gmx-users] Segmentation fault error from mdrun

2011-12-07 Thread rainy908
Hi,

I encounter the following error when trying to execute mdrun:

# Running Gromacs: read TPR and write output to /gpfs disk
 $MPIRUN  $MDRUN -v -nice 0 -np $NSLOTS \
 -s n12_random_50_protein_all.tpr \
 -o n12_random_50_protein_all.trr \
 -c n12_random_50_protein_all.gro \
 -g n12_random_50_protein_all.log \
 -x n12_random_50_protein_all.xtc \
 -e n12_random_50_protein_all.edr

Error:

[compute-0-7:12377] Failing at address: 0x7159fd0
[compute-0-30:07435] [ 1] mdrun [0x761971]
[compute-0-30:07435] *** End of error message ***
[compute-0-29:15535] [ 0] /lib64/libpthread.so.0 [0x39df60e7c0]
[compute-0-29:15535] [ 1] mdrun [0x761d60]
[compute-0-29:15535] *** End of error message ***
[compute-1-29:19799] [ 0] /lib64/libpthread.so.0 [0x33aac0e7c0]
[compute-1-29:19799] [ 1] mdrun [0x762065]
[compute-1-29:19799] *** End of error message ***
[compute-0-29:15537] [ 0] /lib64/libpthread.so.0 [0x39df60e7c0]
[compute-0-29:15537] [ 1] mdrun [0x762065]
[compute-0-29:15537] *** End of error message ***
[compute-0-29:15536] [ 0] /lib64/libpthread.so.0 [0x39df60e7c0]
[compute-0-29:15536] [ 1] mdrun [0x762065]
[compute-0-29:15536] *** End of error message ***
[compute-1-31:11981] [ 0] /lib64/libpthread.so.0 [0x374f00e7c0]
[compute-1-31:11981] [ 1] mdrun [0x761d60]
[compute-1-31:11981] *** End of error message ***
[compute-1-31:11982] [ 0] /lib64/libpthread.so.0 [0x374f00e7c0]
[compute-1-31:11982] [ 1] mdrun [0x761960]
[compute-1-31:11982] *** End of error message ***
[compute-0-29:15538] [ 0] /lib64/libpthread.so.0 [0x39df60e7c0]
[compute-0-29:15538] [ 1] mdrun [0x761960]
[compute-0-29:15538] *** End of error message ***
[compute-0-7:12377] [ 0] /lib64/libpthread.so.0 [0x387c60e7c0]
[compute-0-7:12377] [ 1] mdrun [0x729641]
[compute-0-7:12377] *** End of error message ***
[compute-1-29:19796] [ 0] /lib64/libpthread.so.0 [0x33aac0e7c0]
[compute-1-29:19796] [ 1] mdrun [0x762065]
[compute-1-29:19796] *** End of error message ***
[compute-1-31.local][[50630,1],32][btl_tcp_frag.c:216:mca_btl_tcp_frag_recv] 
mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
--
mpirun noticed that process rank 35 with PID 32477 on node compute-1-8.local 
exited on signal 11 (Segmentation fault).
--

This is a parallel job that hit a segmentation fault on compute-1-8, thus 
causing the entire job to fail.

Any input would be most appreciated.

Lily


Re: [gmx-users] Segmentation fault error from mdrun

2011-12-07 Thread Mark Abraham

On 8/12/2011 7:36 AM, rainy908 wrote:

Hi,

I encounter the following error when trying to execute mdrun:

# Running Gromacs: read TPR and write output to /gpfs disk
  $MPIRUN  $MDRUN -v -nice 0 -np $NSLOTS \
  -s n12_random_50_protein_all.tpr \
  -o n12_random_50_protein_all.trr \
  -c n12_random_50_protein_all.gro \
  -g n12_random_50_protein_all.log \
  -x n12_random_50_protein_all.xtc \
  -e n12_random_50_protein_all.edr


You can save yourself some typing with mdrun -deffnm. Also note that 
mdrun -np does nothing for GROMACS 4.x.
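As a sketch, the same invocation with -deffnm collapses to a single file
stem (names taken from the script above):

  # -deffnm supplies default names for -s/-o/-c/-g/-x/-e in one go
  $MPIRUN $MDRUN -v -nice 0 -deffnm n12_random_50_protein_all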




Error:

[compute-0-7:12377] Failing at address: 0x7159fd0
..
snip
..
mpirun noticed that process rank 35 with PID 32477 on node compute-1-8.local 
exited on signal 11 (Segmentation fault).

This is a parallel job that caused segmentation fault on compute-1-8, thus 
causing the entire job to fail.


You need to look at stderr, stdout (some of which are above) and the 
.log file to find out what GROMACS thought caused the crash. You also 
need to use an mpi-enabled mdrun.


Mark
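A minimal sketch of launching an MPI-enabled build (the mdrun_mpi binary
name and the rank count are assumptions; match them to your installation
and queue script):

  mpirun -np 8 mdrun_mpi -deffnm n12_random_50_protein_all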


Re: [gmx-users] segmentation fault from power6 kernel

2011-11-03 Thread Fabio Affinito

Thank you, Mark.
Using GMX_NOOPTIMIZEDKERNELS=1 everything runs fine on power6.
I also tried to run on a Linux cluster and it went OK.


Fabio



The most likely issue is some normal blowing up scenario leading to a
table-lookup-overrun segfault in the 3xx series kernels. I don't know
why the usual error messages in such scenarios did not arise on this
platform. Try setting the environment variable GMX_NOOPTIMIZEDKERNELS to
1 to see if this is a power6-specific kernel issue. Try running the .tpr
on another platform.

Mark
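In shell terms, the suggested check is simply (a sketch assuming a
bash-like shell; the .tpr name is illustrative):

  # fall back to the generic C kernels instead of the power6 assembly ones
  export GMX_NOOPTIMIZEDKERNELS=1
  mdrun_d -s topol.tpr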


--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] segmentation fault from power6 kernel

2011-11-03 Thread Mark Abraham

On 3/11/2011 7:59 PM, Fabio Affinito wrote:

Thank you, Mark.
Using GMX_NOOPTIMIZEDKERNELS=1 everything runs fine on power6.
I also tried to run on a linux cluster and it went ok.


Sounds like a bug. Please file a report at http://redmine.gromacs.org 
including your observations and a .tpr that will reproduce them.


Thanks,

Mark









Re: [gmx-users] segmentation fault from power6 kernel

2011-11-03 Thread Fabio AFFINITO
Hi Mark,
today I tested version 4.5.5. The problem does not seem to be present there. 
Anyway, I will take a few days to run more tests.

Thank you,

Fabio

- Original message -
From: Mark Abraham mark.abra...@anu.edu.au
To: Discussion list for GROMACS users gmx-users@gromacs.org
Sent: Thursday, 3 November 2011 14:40:39
Subject: Re: [gmx-users] segmentation fault from power6 kernel


On 3/11/2011 7:59 PM, Fabio Affinito wrote:

 ..
 snip
 ..





[gmx-users] segmentation fault from power6 kernel

2011-11-02 Thread Fabio AFFINITO
Dear all,
I've been trying to run a simulation on an IBM Power6 cluster. At the
beginning of the simulation I got a segmentation fault. I investigated with
TotalView and found that this segmentation violation originates in
pwr6kernel310.F.
So far I still haven't found what is behind this violation, so I would like
to ask if anybody is aware of a bug in this function.
The simulation uses Gromacs 4.5.3 compiled in double precision.
The options that I specified in the configure are:
--disable-threads --enable-power6 --enable-mpi

The log file doesn't provide much information:

Log file opened on Wed Nov  2 20:11:02 2011
Host: sp0202  pid: 11796682  nodeid: 0  nnodes:  1
The Gromacs distribution was built Thu Dec 16 14:44:40 GMT+01:00 2010 by
propro01@sp0201 (AIX 1 00C3E6444C00)


 :-)  G  R  O  M  A  C  S  (-:

   Gromacs Runs One Microsecond At Cannonball Speeds

:-)  VERSION 4.5.3  (-:

Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
  Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra,
Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2010, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  mdrun_d (double precision)  (-:


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  

Input Parameters:
   integrator   = md
   nsteps   = 250
   init_step= 0
   ns_type  = Grid
   nstlist  = 10
   ndelta   = 2
   nstcomm  = 10
   comm_mode= Linear
   nstlog   = 2500
   nstxout  = 2500
   nstvout  = 2500
   nstfout  = 0
   nstcalcenergy= 10
   nstenergy= 2500
   nstxtcout= 2500
   init_t   = 0
   delta_t  = 0.002
   xtcprec  = 1000
   nkx  = 50
   nky  = 50
   nkz  = 50
   pme_order= 4
   ewald_rtol   = 1e-05
   ewald_geometry   = 0
   epsilon_surface  = 0
   optimize_fft = TRUE
   ePBC = xyz
   bPeriodicMols= FALSE
   bContinuation= FALSE
   bShakeSOR= FALSE
   etc  = Nose-Hoover
   nsttcouple   = 10
   epc  = No
   epctype  = Isotropic
   nstpcouple   = -1
   tau_p= 1
   ref_p (3x3):
  ref_p[0]={ 0.0e+00,  0.0e+00,  0.0e+00}
  ref_p[1]={ 0.0e+00,  0.0e+00,  0.0e+00}
  ref_p[2]={ 0.0e+00,  0.0e+00,  0.0e+00}
   compress (3x3):
  compress[0]={ 0.0e+00,  0.0e+00,  0.0e+00}
  compress[1]={ 0.0e+00,  0.0e+00,  0.0e+00}
  compress[2]={ 0.0e+00,  0.0e+00,  0.0e+00}
   refcoord_scaling = No
   posres_com (3):
  posres_com[0]= 0.0e+00
  posres_com[1]= 0.0e+00
  posres_com[2]= 0.0e+00
   posres_comB (3):
  posres_comB[0]= 0.0e+00
  posres_comB[1]= 0.0e+00
  posres_comB[2]= 0.0e+00
   andersen_seed= 815131
   rlist

[gmx-users] Segmentation fault

2011-10-10 Thread ITHAYARAJA
Hi

When I perform mdrun for energy minimization, I get a segmentation fault. 
Could someone please explain what might cause this?

-- 
**
Ithayaraja M,
Research Scholar,
Department of Bionformatics,
Bharathiar University,
Coimbatore 641 046,
Tamil Nadu
India

[gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread rainy908
Dear gmx-users:

Thanks Justin for your help.  But now I am experiencing a Segmentation fault 
error when executing mdrun.  I've perused the archives but found none of the 
threads on segmentation faults similar to my case.  I believe the 
segmentation fault is caused by the awkward positioning of atoms 8443 and 8446 
with respect to one another, but am not 100% sure.  Any advice would be 
especially welcome.

My files are as follows:

md.mdp

title   = 1JFF MD
cpp = /lib/cpp ; location of cpp on SGI
constraints = all-bonds
integrator  = md
dt  = 0.0001 ; ps
nsteps  = 25000 ;
nstcomm = 1
nstxout = 500 ; output coordinates every 1.0 ps
nstvout = 0
nstfout = 0
nstlist = 10
ns_type = grid
rlist   = 0.9
coulombtype = PME
rcoulomb= 0.9
rvdw= 1.0
fourierspacing  = 0.12
fourier_nx= 0
fourier_ny= 0
fourier_nz= 0
pme_order = 6
ewald_rtol= 1e-5
optimize_fft  = yes
; Berendsen temperature coupling is on in four groups
Tcoupl= berendsen
tau_t = 0.1
tc-grps   = system
ref_t = 310
; Pressure coupling is on
Pcoupl  = berendsen
pcoupltype  = isotropic
tau_p   = 0.5
compressibility = 4.5e-5
ref_p   = 1.0
; Generate velocites is on at 310 K.
gen_vel = yes
gen_temp = 310.0
gen_seed = 173529




error output file:

..
..
Back Off! I just backed up md.log to ./#md.log.1#
Getting Loaded...
Reading file 1JFF_md.tpr, VERSION 4.5.3 (single precision)
Starting 8 threads
Loaded with Money
 
Making 3D domain decomposition 2 x 2 x 2
 
Back Off! I just backed up 1JFF_md.trr to ./#1JFF_md.trr.1#

Back Off! I just backed up 1JFF_md.edr to ./#1JFF_md.edr.1#

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.046849, max 1.014038 (between atoms 8541 and 8539)

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.001453, max 0.034820 (between atoms 315 and 317)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.048739, max 1.100685 (between atoms 8422 and 8421)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
..
..
snip
..
..

starting mdrun 'TUBULIN ALPHA CHAIN'
25000 steps, 50.0 ps.
Warning: 1-4 interaction between 8443 and 8446 at distance 2.853 which is 
larger than the 1-4 table size 2.000 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size   
..
..
snip
..
..
step 0: Water molecule starting at atom 23781 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Back Off! I just backed up step0b_n0.pdb to ./#step0b_n0.pdb.1#

Back Off! I just backed up step0b_n1.pdb to ./#step0b_n1.pdb.1#

Back Off! I just backed up step0b_n5.pdb to ./#step0b_n5.pdb.2#

Back Off! I just backed up step0b_n3.pdb to ./#step0b_n3.pdb.2#

Back Off! I just backed up step0c_n0.pdb to ./#step0c_n0.pdb.1#

Back Off! I just backed up step0c_n1.pdb to ./#step0c_n1.pdb.1#

Back Off! I just backed up step0c_n5.pdb to ./#step0c_n5.pdb.2#

Back Off! I just backed up step0c_n3.pdb to ./#step0c_n3.pdb.2#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
^Mstep 0/opt/sge/jacobson/spool/node-2-05/job_scripts/1097116: line 21:  1473 
Segmentation fault  (core dumped) $MDRUN -machinefile $TMPDIR/machines -np 
$NSLOTS $MDRUN -v -nice 0 -np $NSLOTS -s 1JFF_md.tpr -o 1JFF_md.trr -c 
1JFF_pmd.gro -x 1JFF_md.xtc -e 1JFF_md.edr



On 16 August 2011 10:58, Justin A. Lemkul jalem...@vt.edu wrote:



rainy908 wrote:

Hi,

I get the error Atomtype CR1 not found when I execute grompp.  After 
perusing the gmx archives, I understand this error has to do with the lack of 
CR1 being specified in the force field.  However, I did include the 
appropriate .itp files in my .top file (shown below).  As you can see, 
obviously CR1 is specified in taxol.itp and gtp.itp.  Therefore, I'm not sure 
what exactly is the problem here.


You're mixing and matching force fields.  

Re: [gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread Justin A. Lemkul



rainy908 wrote:

..
snip
..

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.046849, max 1.014038 (between atoms 8541 and 8539)

Step 0, time 0 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.001453, max 0.034820 (between atoms 315 and 317)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length



If mdrun is failing at step 0, it indicates that your system is physically 
unreasonable.  Either the starting configuration has atomic clashes that have 
not been resolved (and thus you need better EM and/or equilibration) or that the 
parameters assigned to the molecules in your system are unreasonable.


-Justin
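One way to rule out the first point is to confirm that the preceding
minimization actually converged; a minimal sketch (the .gro/.top names are
illustrative, the .tpr name follows this thread):

  grompp -f em.mdp -c 1JFF_solvated.gro -p topol.top -o 1JFF_em.tpr
  mdrun -v -deffnm 1JFF_em
  # the end of the log should report something like "converged to Fmax < emtol"
  tail -n 20 1JFF_em.log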



Re: [gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread rainy908
Hi Justin,

Thanks for the input.  So I traced back to my energy minimization steps, and am 
getting the error message below after I execute the following line:

$mdrun -s 1JFF_em.tpr -o 1JFF_em.trr -c 1JFF_b4pr.gro -e em.edr

output:
Back Off! I just backed up md.log to ./#md.log.2#
Reading file 1JFF_em.tpr, VERSION 4.5.3 (single precision)
Starting 24 threads

Will use 15 particle-particle and 9 PME only nodes
This is a guess, check the performance at the end of the log file

---
Program mdrun, VERSION 4.5.3
Source code file: domdec.c, line: 6428

Fatal error:
There is no domain decomposition for 15 nodes that is compatible with the given 
box and a minimum cell size of 2.92429 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

---

I figure the problem must lie within my em.mdp file:

title = 1JFF
cpp = /lib/cpp ; location of cpp on SGI
define = -DFLEX_SPC ; Use Ferguson’s Flexible water model [4]
constraints = none
integrator = steep
dt = 0.001 ; ps !
nsteps = 1
nstlist = 10
ns_type = grid
rlist = 0.9
coulombtype = PME ; Use particle-mesh ewald
rcoulomb = 0.9
rvdw = 1.0
fourierspacing = 0.12
fourier_nx = 0
fourier_ny = 0
fourier_nz = 0
pme_order = 4
ewald_rtol = 1e-5
optimize_fft = yes
;
; Energy minimizing stuff
;
emtol = 1000.0
emstep = 0.01
~

I figure this is an issue related to PME and the Fourier spacing?

Thanks,

rainy908



On 17 August 2011 17:55, Justin A. Lemkul jalem...@vt.edu wrote:

 ..
 snip
 ..

Re: [gmx-users] Segmentation fault after mdrun for MD simulation

2011-08-17 Thread Mark Abraham

On 18/08/2011 2:41 PM, rainy908 wrote:

Hi Justin,

THanks for the input.  So I traced back to my energy minimization steps, and am 
getting the error message after I execute the following line:

$mdrun -s 1JFF_em.tpr -o 1JFF_em.trr -c 1JFF_b4pr.gro -e em.edr

output:
Back Off! I just backed up md.log to ./#md.log.2#
Reading file 1JFF_em.tpr, VERSION 4.5.3 (single precision)
Starting 24 threads

Will use 15 particle-particle and 9 PME only nodes
This is a guess, check the performance at the end of the log file

---
Program mdrun, VERSION 4.5.3
Source code file: domdec.c, line: 6428

Fatal error:
There is no domain decomposition for 15 nodes that is compatible with the given 
box and a minimum cell size of 2.92429 nm
Change the number of nodes or mdrun option -rdd
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

---

I figure the problem must lie within my em.mdp file:


It could, but if you follow the above advice you will learn about some 
other considerations.


Mark
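Concretely, the two suggestions in the error message amount to something
like this (the thread count and -rdd value are illustrative):

  # use fewer ranks so each domain can satisfy the 2.92 nm minimum cell size
  mdrun -nt 8 -deffnm 1JFF_em
  # or override the estimated bonded-interaction distance (use with care)
  mdrun -nt 24 -rdd 1.4 -deffnm 1JFF_em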




[gmx-users] Segmentation fault

2011-07-13 Thread Sayan Bagchi
Hello All,

I was trying to run an MD simulation of a 17-amino-acid peptide. At the
position restraint step, the program crashed after running ~536 ps.

It gave the error message:

t=536.242 ps: Water molecule starting at atom 6101 cannot be settled.
Check for bad contacts and/or reduce the timestep. Wrote pdb files with
previous and current coordinates.

So I looked into the pdb files for atom 6101:

In the last but one step, the pdb file looks like:

ATOM   6101  OW  SOL  1970  44.896  30.613  17.849  1.00  0.00
ATOM   6102  HW1 SOL  1970  45.375  30.296  18.668  1.00  0.00
ATOM   6103  HW2 SOL  1970  44.264  29.904  17.537  1.00  0.00

In the last step, the pdb file looks like:

ATOM   6101  OW  SOL  1970    395154652137521152.000  -290190255628222464.000   695407981780533248.000  1.00  0.00
ATOM   6102  HW1 SOL  1970   -2104656020830683136.000  5806908484133847040.000 -6427192471385538560.000  1.00  0.00
ATOM   6103  HW2 SOL  1970    1131095442881249280.000   593978790032441344.000  1018902650772520960.000  1.00  0.00

So, there is clearly something wrong. What should I do now to solve this
problem?

Thanks,
Sayan.

-- 
 Sayan Bagchi

Re: [gmx-users] Segmentation fault

2011-07-13 Thread Justin A. Lemkul



Sayan Bagchi wrote:

..
snip
..

So, there is clearly something wrong. What should I do now to solve this 
problem?




http://www.gromacs.org/Documentation/Errors#LINCS.2fSETTLE.2fSHAKE_warnings

-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault.

2011-05-20 Thread sreelakshmi ramesh
Dear gmx-users,
 During equilibration I get the following error. Any
suggestions, please.

*grompp*

WARNING 1 [file npt.mdp]:
  The sum of the two largest charge group radii (2.369582) is larger than
  rlist (1.00)


This run will generate roughly 73 Mb of data

There was 1 warning

Back Off! I just backed up npt.tpr to ./#npt.tpr.5#

gcq#113: I Don't Like Dirt (The Breeders)

*
mdrun*

WARNING: For the 1498 non-zero entries for table 2 in table_Na_Cl.xvg the
forces deviate on average -2147483648% from minus the numerical derivative
of the potential


WARNING: For the 1498 non-zero entries for table 2 in table_Na_Cl.xvg the
forces deviate on average -2147483648% from minus the numerical derivative
of the potential

Making 1D domain decomposition 2 x 1 x 1

Back Off! I just backed up npt.trr to ./#npt.trr.6#

Back Off! I just backed up ener.edr to ./#ener.edr.23#
starting mdrun 'NA SODIUM ION in water'
50000 steps, 50.0 ps.

step 0: Water molecule starting at atom 6153 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Back Off! I just backed up step0b_n1.pdb to ./#step0b_n1.pdb.23#

step 0: Water molecule starting at atom 4410 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Back Off! I just backed up step0b_n0.pdb to ./#step0b_n0.pdb.23#

Back Off! I just backed up step0c_n1.pdb to ./#step0c_n1.pdb.23#

Back Off! I just backed up step0c_n0.pdb to ./#step0c_n0.pdb.23#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
*Segmentation fault*

*mdp file*
title   = nacl
define  = -DPOSRES  ; position restrain the protein
; Run parameters
integrator  = md; leap-frog integrator
nsteps  = 50000 ; 2 * 50000 = 100 ps
dt  = 0.001 ; 2 fs
; Output control
nstxout = 100   ; save coordinates every 0.2 ps
nstvout = 100   ; save velocities every 0.2 ps
nstenergy   = 100   ; save energies every 0.2 ps
nstlog  = 100   ; update log file every 0.2 ps
; Bond parameters
continuation= no; Restarting after NVT
constraint_algorithm = lincs; holonomic constraints
constraints = all-bonds ; all bonds (even heavy atom-H bonds)
constrained
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Neighborsearching
ns_type = grid  ; search neighboring grid cells
nstlist = 5 ; 10 fs
rlist   = 1.0   ; short-range neighborlist cutoff (in nm)
rcoulomb= 1.0   ; short-range electrostatic cutoff (in nm)
rvdw= 1.0   ; short-range van der Waals cutoff (in nm)
; Electrostatics

coulombtype = user
energygrps = Na Cl Sol
energygrp_table = Na Cl
vdwtype= user

fourierspacing  = 0.16  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps = SOL Na Cl ; two coupling groups - more accurate
tau_t   = 0.1   0.1 0.1 ; time constant, in ps
ref_t   = 300   300 300 ; reference temperature, one for each group,
in K
; Pressure coupling is on
pcoupl  = Parrinello-Rahman ; Pressure coupling on in NPT
pcoupltype  = isotropic ; uniform scaling of box vectors
tau_p   = 2.0   ; time constant, in ps
ref_p   = 1.0   ; reference pressure, in bar
compressibility = 4.5e-5; isothermal compressibility of water,
bar^-1
; Periodic boundary conditions
pbc = xyz   ; 3-D PBC


; Velocity generation
gen_vel= yes
gen_temp= 300.0
gen_seed= -1

Re: [gmx-users] segmentation fault.

2011-05-20 Thread Justin A. Lemkul



sreelakshmi ramesh wrote:

Dear gmx-users,
 During equilibration I get the following 
error. Any suggestions, please.


*grompp*

WARNING 1 [file npt.mdp]:
  The sum of the two largest charge group radii (2.369582) is larger than
  rlist (1.00)




This message suggests your topology is somehow broken.


This run will generate roughly 73 Mb of data

There was 1 warning

Back Off! I just backed up npt.tpr to ./#npt.tpr.5#

gcq#113: I Don't Like Dirt (The Breeders)

*
mdrun*

WARNING: For the 1498 non-zero entries for table 2 in table_Na_Cl.xvg 
the forces deviate on average -2147483648% from minus the numerical 
derivative of the potential



WARNING: For the 1498 non-zero entries for table 2 in table_Na_Cl.xvg 
the forces deviate on average -2147483648% from minus the numerical 
derivative of the potential




This still indicates a problem with the table.  Take Chris' advice and try your 
simulation without tabulated potentials.  This will tell you whether the problem 
is from a bad starting structure or because your tables are somehow not usable.
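A minimal sketch of such a control run: switch the electrostatics block of
the .mdp back to built-in potentials and leave everything else untouched
(cutoff values are illustrative):

  ; control run without user tables
  coulombtype = PME
  rcoulomb    = 1.0
  vdwtype     = cut-off
  rvdw        = 1.0
  ; drop the energygrp_table line entirely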



Making 1D domain decomposition 2 x 1 x 1

Back Off! I just backed up npt.trr to ./#npt.trr.6#

Back Off! I just backed up ener.edr to ./#ener.edr.23#
starting mdrun 'NA SODIUM ION in water'
5 steps, 50.0 ps.

step 0: Water molecule starting at atom 6153 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Back Off! I just backed up step0b_n1.pdb to ./#step0b_n1.pdb.23#

step 0: Water molecule starting at atom 4410 can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

Back Off! I just backed up step0b_n0.pdb to ./#step0b_n0.pdb.23#

Back Off! I just backed up step0c_n1.pdb to ./#step0c_n1.pdb.23#

Back Off! I just backed up step0c_n0.pdb to ./#step0c_n0.pdb.23#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
*Segmentation fault*



Step 0 failures indicate either (1) your starting configuration is unreasonable, 
(2) your .mdp settings are inappropriate, or (3) the tabulated potential is 
causing the system to collapse.  Did you do energy minimization?  Was it 
successful?  Good EM is the way around point 1.



*mdp file*
title   = nacl  
define  = -DPOSRES  ; position restrain the protein


What are you restraining?

snip


tc-grps = SOL Na Cl ; two coupling groups - more accurate


Never couple solvent and ions separately.  This alone can be a reason for 
instability.


http://www.gromacs.org/Documentation/Terminology/Thermostats#What_Not_To_Do

-Justin
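In .mdp terms, the fix is a single combined coupling group; a sketch using
the built-in System group, which covers all atoms:

  tc-grps = System
  tau_t   = 0.1
  ref_t   = 300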

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Vigneshwar Ramakrishnan
Dear All,

I am using the gromacs 4.0.7 version and I was trying to calculate the
momentum autocorrelation function by using the -m flag. However, I get a
segmentation fault as follows:

trn version: GMX_trn_file (double precision)
Reading frame   0 time0.000   Segmentation fault

When I don't use the -m option, I have no problem.

Upon searching the users list, I found this thread:
http://lists.gromacs.org/pipermail/gmx-users/2010-October/054813.html and a
patch, but I don't find any related bugs reported elsewhere.

So, I am just wondering if I should go ahead and use the patch or if there
could be something else that is wrong.

Will appreciate any kind of pointers.

Sincerely,
Vignesh
-- 
R.Vigneshwar
Graduate Student,
Dept. of Chemical & Biomolecular Engg,
National University of Singapore,
Singapore

Strive for Excellence, Never be satisfied with the second Best!!

I arise in the morning torn between a desire to improve the world and a
desire to enjoy the world. This makes it hard to plan the day. (E.B. White)

Re: [gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Justin A. Lemkul



Vigneshwar Ramakrishnan wrote:

..
snip
..


Either apply the patch or upgrade to a newer version of Gromacs that contains 
this bug fix.
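
If you go the patch route, the usual procedure is something like the following
(assuming the patch from that thread is saved under a hypothetical name such as
g_velacc.patch at the top of the source tree; adjust the -p level to match the
paths inside the patch):

cd gromacs-4.0.7
patch -p1 < g_velacc.patch
make && make install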


-Justin



Sincerely, 
Vignesh

--
R.Vigneshwar
Graduate Student,
Dept. of Chemical & Biomolecular Engg,
National University of Singapore,
Singapore

Strive for Excellence, Never be satisfied with the second Best!!

I arise in the morning torn between a desire to improve the world and a 
desire to enjoy the world. This makes it hard to plan the day. (E.B. White)




--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Carsten Kutzner
Hi,

apparently this bug fix made it to 4.5, but not to 4.0.
I will apply the fix also there.

Carsten

On Feb 1, 2011, at 1:58 PM, Justin A. Lemkul wrote:

 
 
 Vigneshwar Ramakrishnan wrote:
 Dear All,
 I am using the gromacs 4.0.7 version and I was trying to calculate the 
 momentum autocorrelation function by using the -m flag. However, I get a 
 segmentation fault as follows:
 trn version: GMX_trn_file (double precision)
 Reading frame   0 time    0.000   Segmentation fault
 When I don't use the -m option, I have no problem.
 Upon searching the users list, I found this thread: 
 http://lists.gromacs.org/pipermail/gmx-users/2010-October/054813.html and a 
 patch, but I don't find any related bugs reported elsewhere. So, I am just 
 wondering if I should go ahead and use the patch or if there could be 
 something else that is wrong. Will appreciate any kind of pointers. 
 
 Either apply the patch or upgrade to a newer version of Gromacs that contains 
 this bug fix.
 
 -Justin
 
 Sincerely, Vignesh
 -- 
 R.Vigneshwar
 Graduate Student,
 Dept. of Chemical & Biomolecular Engg,
 National University of Singapore,
 Singapore
 Strive for Excellence, Never be satisfied with the second Best!!
 I arise in the morning torn between a desire to improve the world and a 
 desire to enjoy the world. This makes it hard to plan the day. (E.B. White)
 
 -- 
 
 
 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






Re: [gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Carsten Kutzner
Hi Vigneshwar, 

the problem is fixed now in the release-4-0-patches branch. 

Carsten


On Feb 1, 2011, at 2:00 PM, Carsten Kutzner wrote:

 Hi,
 
 apparently this bug fix made it to 4.5, but not to 4.0.
 I will apply the fix also there.
 
 Carsten
 
 On Feb 1, 2011, at 1:58 PM, Justin A. Lemkul wrote:
 
 
 
 Vigneshwar Ramakrishnan wrote:
 Dear All,
 I am using the gromacs 4.0.7 version and I was trying to calculate the 
 momentum autocorrelation function by using the -m flag. However, I get a 
 segmentation fault as follows:
 trn version: GMX_trn_file (double precision)
  Reading frame   0 time    0.000   Segmentation fault
 When I don't use the -m option, I have no problem.
  Upon searching the users list, I found this thread: 
  http://lists.gromacs.org/pipermail/gmx-users/2010-October/054813.html and a 
  patch, but I don't find any related bugs reported elsewhere. So, I am just 
  wondering if I should go ahead and use the patch or if there could be 
  something else that is wrong. Will appreciate any kind of pointers. 
 
 Either apply the patch or upgrade to a newer version of Gromacs that 
 contains this bug fix.
 
 -Justin
 
 Sincerely, Vignesh
 -- 
 R.Vigneshwar
 Graduate Student,
 Dept. of Chemical & Biomolecular Engg,
 National University of Singapore,
 Singapore
 Strive for Excellence, Never be satisfied with the second Best!!
 I arise in the morning torn between a desire to improve the world and a 
 desire to enjoy the world. This makes it hard to plan the day. (E.B. White)
 
 -- 
 
 
 Justin A. Lemkul
 Ph.D. Candidate
 ICTAS Doctoral Scholar
 MILES-IGERT Trainee
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
 
 
 
 --
 Dr. Carsten Kutzner
 Max Planck Institute for Biophysical Chemistry
 Theoretical and Computational Biophysics
 Am Fassberg 11, 37077 Goettingen, Germany
 Tel. +49-551-2012313, Fax: +49-551-2012302
 http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
 
 
 
 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






Re: [gmx-users] segmentation fault: g_velacc

2011-02-01 Thread Vigneshwar Ramakrishnan
Thanks very much, Dr. Kutzner!

On Tue, Feb 1, 2011 at 9:14 PM, Carsten Kutzner ckut...@gwdg.de wrote:

 Hi Vigneshwar,

 the problem is fixed now in the release-4-0-patches branch.

 Carsten


 On Feb 1, 2011, at 2:00 PM, Carsten Kutzner wrote:

  Hi,
 
  apparently this bug fix made it to 4.5, but not to 4.0.
  I will apply the fix also there.
 
  Carsten
 
  On Feb 1, 2011, at 1:58 PM, Justin A. Lemkul wrote:
 
 
 
  Vigneshwar Ramakrishnan wrote:
  Dear All,
  I am using the gromacs 4.0.7 version and I was trying to calculate the
 momentum autocorrelation function by using the -m flag. However, I get a
 segmentation fault as follows:
  trn version: GMX_trn_file (double precision)
  Reading frame   0 time    0.000   Segmentation fault
  When I don't use the -m option, I have no problem.
  Upon searching the users list, I found this thread:
 http://lists.gromacs.org/pipermail/gmx-users/2010-October/054813.html and
 a patch, but I don't find any related bugs reported elsewhere. So, I am just
 wondering if I should go ahead and use the patch or if there could be
 something else that is wrong. Will appreciate any kind of pointers.
 
  Either apply the patch or upgrade to a newer version of Gromacs that
 contains this bug fix.
 
  -Justin
 
  Sincerely, Vignesh
  --
  R.Vigneshwar
  Graduate Student,
  Dept. of Chemical & Biomolecular Engg,
  National University of Singapore,
  Singapore
  Strive for Excellence, Never be satisfied with the second Best!!
  I arise in the morning torn between a desire to improve the world and a
 desire to enjoy the world. This makes it hard to plan the day. (E.B. White)
 
  --
  
 
  Justin A. Lemkul
  Ph.D. Candidate
  ICTAS Doctoral Scholar
  MILES-IGERT Trainee
  Department of Biochemistry
  Virginia Tech
  Blacksburg, VA
  jalemkul[at]vt.edu | (540) 231-9080
  http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
  
 
 
  --
  Dr. Carsten Kutzner
  Max Planck Institute for Biophysical Chemistry
  Theoretical and Computational Biophysics
  Am Fassberg 11, 37077 Goettingen, Germany
  Tel. +49-551-2012313, Fax: +49-551-2012302
  http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
 
 
 
 


 --
 Dr. Carsten Kutzner
 Max Planck Institute for Biophysical Chemistry
 Theoretical and Computational Biophysics
 Am Fassberg 11, 37077 Goettingen, Germany
 Tel. +49-551-2012313, Fax: +49-551-2012302
 http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne








-- 
R.Vigneshwar
Graduate Student,
Dept. of Chemical & Biomolecular Engg,
National University of Singapore,
Singapore

Strive for Excellence, Never be satisfied with the second Best!!

I arise in the morning torn between a desire to improve the world and a
desire to enjoy the world. This makes it hard to plan the day. (E.B. White)

[gmx-users] segmentation fault while running eneconv

2011-01-25 Thread Anna Marabotti
Dear all,
 
I launched a first simulation of 5 ns on my system, then I prolonged it to
50 ns using 
tpbconv -s tpr1_5ns.tpr -until 5 -o tpr2_50ns.tpr
and then 
mdrun -s tpr2_50ns.tpr -deffnm md2_50ns -cpi md1_5ns.cpt
Since my simulation was interrupted several times, every time I relaunched
it by simply doing:
mdrun -s tpr2_50ns.tpr -cpi md2_50ns.cpt -deffnm md2_50ns_2/3/4
 
At the end of these simulations I obtained the following files:
- md1_5ns.xtc and .edr: files obtained from the first MD of 5 ns long
- md2_50ns.xtc and .edr: files obtained by prolonging the first MD until
50ns
- md2_50ns_2.xtc and .edr: files obtained by restarting the previous
dynamics that was interrupted before 50 ns
- md2_50ns_3.xtc and .edr: same as before
- md2_50ns_4.xtc and .edr: same as before
 
After all these runs, I want to concatenate all the dynamics in order to
have a single .xtc file md_50ns_tot and a single .edr file md_50ns_tot.edr.
For the first, I used:
trjcat -f md1_5ns.xtc md2_50ns.xtc md2_50ns_2.xtc md2_50ns_3.xtc
md2_50ns_4.xtc -o md_50ns_tot.xtc
and all worked fine: I obtained the output file with no errors (there are
also no errors in the .log files).
 
On the contrary, when I tried to do the same with eneconv:
eneconv -f md1_5ns.edr md2_50ns.edr md2_50ns_2.edr md2_50ns_3.edr
md2_50ns_4.edr -o md_50ns_tot.edr
I obtained the following output:
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file
Reading energy frame  1 time  100.000
Opened 2GH9openmod4_pH10_50ns.edr as double precision energy file
Reading energy frame  0 time    0.000
Opened 2GH9openmod4_pH10_50ns_2.part0002.edr as double precision energy file
Reading energy frame  0 time 14900.000
Opened 2GH9openmod4_pH10_50ns_3.part0003.edr as double precision energy file
Reading energy frame  0 time 27800.000
Opened 2GH9openmod4_pH10_50ns_4.part0004.edr as double precision energy file
Reading energy frame  0 time 38800.000
 
Summary of files and start times used:
 
  FileStart time
-
2GH9openmod4_pH10_5ns.edr0.000
2GH9openmod4_pH10_50ns.edr0.000
2GH9openmod4_pH10_50ns_2.part0002.edr14900.000
2GH9openmod4_pH10_50ns_3.part0003.edr27800.000
2GH9openmod4_pH10_50ns_4.part0004.edr38800.000
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file
Segmentation fault

Looking for some hints in the gmx-users list the only thing I found that
could be similar to my problem is this old message:
http://lists.gromacs.org/pipermail/gmx-users/2007-January/025657.html
 
I see in the output error message that the start time for the first two
simulations is the same: could this be the problem for my system?
However, I did use tpbconv each time to make restarts of my simulations, so I
really don't know why the start time is 0.000 in the first two cases. 
Is there a problem in the results of simulations if these two simulations
have the same start time? Practically, what can I do to concatenate my .edr
files? 
 
Many thanks in advance and best regards
Anna Marabotti
 

Anna Marabotti, Ph.D.
Laboratory of Bioinformatics and Computational Biology
Institute of Food Science, CNR
Via Roma, 64
83100 Avellino (Italy)
Phone: +39 0825 299651
Fax: +39 0825 781585
Email: anna.marabo...@isa.cnr.it
Skype account: annam1972
Web page: http://bioinformatica.isa.cnr.it/anna/anna.htm
 
When a man with a gun meets a man with a pen, the man with a gun is a dead
man
 

Re: [gmx-users] segmentation fault while running eneconv

2011-01-25 Thread Justin A. Lemkul



Anna Marabotti wrote:

Dear all,
 
I launched a first simulation of 5 ns on my system, then I prolonged it 
to 50 ns using

tpbconv -s tpr1_5ns.tpr -until 5 -o tpr2_50ns.tpr
and then
mdrun -s tpr2_50ns.tpr -deffnm md2_50ns -cpi md1_5ns.cpt
Since my simulation was interrupted several times, every time I 
relaunched it by simply doing:

mdrun -s tpr2_50ns.tpr -cpi md2_50ns.cpt -deffnm md2_50ns_2/3/4
 
At the end of these simulations I obtained the following files:

- md1_5ns.xtc and .edr: files obtained from the first MD of 5 ns long
- md2_50ns.xtc and .edr: files obtained by prolonging the first MD until 
50ns
- md2_50ns_2.xtc and .edr: files obtained by restarting the previous 
dynamics that was interrupted before 50 ns

- md2_50ns_3.xtc and .edr: same as before
- md2_50ns_4.xtc and .edr: same as before
 
After all these runs, I want to concatenate all the dynamics in order to 
have a single .xtc file md_50ns_tot and a single .edr file 
md_50ns_tot.edr. For the first, I used:
trjcat -f md1_5ns.xtc md2_50ns.xtc md2_50ns_2.xtc md2_50ns_3.xtc 
md2_50ns_4.xtc -o md_50ns_tot.xtc
and all worked fine: I obtained the output file with no errors (there 
are also no errors in the .log files)
 
On the contrary, when I tried to do the same with eneconv:
eneconv -f md1_5ns.edr md2_50ns.edr md2_50ns_2.edr md2_50ns_3.edr 
md2_50ns_4.edr -o md_50ns_tot.edr

I obtained the following output:
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file

Reading energy frame  1 time  100.000
Opened 2GH9openmod4_pH10_50ns.edr as double precision energy file
Reading energy frame  0 time    0.000
Opened 2GH9openmod4_pH10_50ns_2.part0002.edr as double precision energy file
Reading energy frame  0 time 14900.000
Opened 2GH9openmod4_pH10_50ns_3.part0003.edr as double precision energy file
Reading energy frame  0 time 27800.000
Opened 2GH9openmod4_pH10_50ns_4.part0004.edr as double precision energy file
Reading energy frame  0 time 38800.000
 
Summary of files and start times used:
 
  FileStart time

-
2GH9openmod4_pH10_5ns.edr0.000
2GH9openmod4_pH10_50ns.edr0.000
2GH9openmod4_pH10_50ns_2.part0002.edr14900.000
2GH9openmod4_pH10_50ns_3.part0003.edr27800.000
2GH9openmod4_pH10_50ns_4.part0004.edr38800.000
 
Opened 2GH9openmod4_pH10_5ns.edr as double precision energy file

Segmentation fault
Looking for some hints in the gmx-users list the only thing I found that 
could be similar to my problem is this old message:

http://lists.gromacs.org/pipermail/gmx-users/2007-January/025657.html
 


What Gromacs version are you using?  If it is not 4.5.3, then you're probably 
running into a bug regarding double precision .edr files that was fixed some 
time ago.


I see in the output error message that the start time for the first two 
simulations is the same: could this be the problem for my system? 
However, I did use tpbconv each time to make restarts of my simulations, 
so I really don't know why the start time is 0.000 in the first two cases.


Well, your commands don't agree with the output of eneconv.  The names are 
different.  Perhaps you've confused what files you think you're using, or 
otherwise attempted to append to a file and then gave it a new name.  In any 
case, gmxcheck is your friend here.
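
For example, to see what time range and frames each energy file really
contains:

gmxcheck -e md1_5ns.edr
gmxcheck -e md2_50ns.edr

and if the recorded start times turn out to be wrong, eneconv can override
them interactively via its -settime flag:

eneconv -f md1_5ns.edr md2_50ns.edr md2_50ns_2.edr md2_50ns_3.edr
md2_50ns_4.edr -o md_50ns_tot.edr -settime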


Is there a problem in the results of simulations if these two 
simulations have the same start time? Practically, what can I do to 
concatenate my .edr files?
 


Presumably, yes.  As long as the .edr files have no internal corruptions (which, 
unfortunately, is quite possible if the job frequently went down), then you 
should be able to concatenate them.  That also depends on the version of Gromacs 
you're using, if you're running into the old bug.  It's always helpful to state 
right up front which version you're using when reporting a problem.


-Justin


Many thanks in advance and best regards
Anna Marabotti
 


Anna Marabotti, Ph.D.
Laboratory of Bioinformatics and Computational Biology
Institute of Food Science, CNR
Via Roma, 64
83100 Avellino (Italy)
Phone: +39 0825 299651
Fax: +39 0825 781585
Email: anna.marabo...@isa.cnr.it mailto:anna.marabo...@isa.cnr.it
Skype account: annam1972
Web page: http://bioinformatica.isa.cnr.it/anna/anna.htm
 
When a man with a gun meets a man with a pen, the man with a gun is a 
dead man
 



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin



[gmx-users] Segmentation Fault in EM

2011-01-17 Thread TJ Mustard



  

  
Hi all,



I have been running a lot of simulations on protein-ligand interactions, and my settings/setup/mdp files worked great for one system. Then when we moved to a larger and more complicated system we started getting mdrun segmentation faults during steep energy minimization. This happens on our cluster and on our iMacs.



Any help would be appreciated. Also I can attach my mdp files.



Thank you



TJ Mustard
Email: musta...@onid.orst.edu
  


Re: [gmx-users] Segmentation Fault in EM

2011-01-17 Thread Justin A. Lemkul



TJ Mustard wrote:



Hi all,

 

I have been running a lot of simulations on protein-ligand interactions, 
and my settings/setup/mdp files worked great for one system. Then when 
we moved to a larger and more complicated system we started getting 
mdrun segmentation faults during steep energy minimization.  This 
happens on our cluster and on our iMacs.


 


Any help would be appreciated. Also I can attach my mdp files.

 


There are a whole host of things that could be going wrong.  Without 
substantially more information, including even more (like a thorough description 
of what these systems are and the exact commands of what worked before), then 
you won't get any useful advice.


-Justin



Thank you

 


TJ Mustard
Email: musta...@onid.orst.edu



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Segmentation Fault in EM

2011-01-17 Thread TJ Mustard



  

  



  On January 17, 2011 at 1:20 PM Justin A. Lemkul jalem...@vt.edu wrote:
  
  
  
   TJ Mustard wrote:
   
   
Hi all,
   
   
   
I have been running a lot of simulations on protein-ligand interactions,
and my settings/setup/mdp files worked great for one system. Then when
we moved to a larger and more complicated system we started getting
mdrun segmentation faults during steep energy minimization. This
happens on our cluster and on our iMacs.
   
   
   
Any help would be appreciated. Also I can attach my mdp files.
   
   
  
   There are a whole host of things that could be going wrong. Without
   substantially more information, including even more (like a thorough description
   of what these systems are and the exact commands of what worked before), then
   you won't get any useful advice.
  
   -Justin
  


Ok, system 1 that worked is biotin and streptavidin in a water box, and the larger system is just rifampicin in a water box for hydration energies. Both ligands are being removed via FEP.



As for the commands, they are identical, as we have made a systematic script that sets up our systems.



It is:




pdb2gmx -f base.pdb -o base.gro -p base.top



===Here we put the ligand .gro and the protein base .gro together.



editconf -bt cubic -f base.gro -o base.gro -c -d 3.5

genbox -cp base.gro -cs spc216.gro -o base_b4ion.gro -p base.top


grompp -f em.mdp -c base_b4ion.gro -p base.top -o base_b4ion.tpr -maxwarn 2

genion -s base_b4ion.tpr -o base_b4em.gro -neutral -conc 0.01 -pname NA -nname CL -g base_ion.log -p base.top



==Here select SOL



grompp -f em.mdp -c base_b4em.gro -p base.top -o base_em.tpr

mdrun -v -s base_em.tpr -c base_after_em.gro -g emlog.log -cpo stat_em.cpt



===Segmentation fault occurs here.


grompp -f pr.mdp -c base_after_em.gro -p base.top -o base_pr.tpr

mdrun -v -s base_pr.tpr -e pr.edr -c base_after_pr.gro -g prlog.log -cpi state_pr.cpt -cpo state_pr.cpt -dhdl dhdl-pr.xvg

grompp -f md.mdp -c base_after_pr.gro -p base.top -o base_md.tpr

mdrun -v -s base_md.tpr -o base_md.trr -c base_after_md.gro -g md.log -e md.edr -cpi state_md.cpt -cpo state_md.cpt -dhdl dhdl-md.xvg

grompp -f FEP.mdp -c base_after_md.gro -p base.top -o base_fep.tpr

mdrun -v -s base_fep.tpr -o base_fep.trr -c base_after_fep.gro -g fep.log -e fep.edr -cpi state_fep.cpt -cpo state_fep.cpt -dhdl dhdl-fep.xvg



I can include mdp files if that would help.



Thank you,

TJ Mustard






   
Thank you
   
   
   
TJ Mustard
Email: musta...@onid.orst.edu
   
  
   --
   
  
   Justin A. Lemkul
   Ph.D. Candidate
   ICTAS Doctoral Scholar
   MILES-IGERT Trainee
   Department of Biochemistry
   Virginia Tech
   Blacksburg, VA
   jalemkul[at]vt.edu | (540) 231-9080
   http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
  
   
  




TJ Mustard
Email: musta...@onid.orst.edu
  


Re: [gmx-users] Segmentation Fault in EM

2011-01-17 Thread Justin A. Lemkul



TJ Mustard wrote:



 


On January 17, 2011 at 1:20 PM Justin A. Lemkul jalem...@vt.edu wrote:

 
 
  TJ Mustard wrote:
  
  
   Hi all,
  
   
  

   I have been running a lot of simulations on protein-ligand interactions,
   and my settings/setup/mdp files worked great for one system. Then when
   we moved to a larger and more complicated system we started getting
   mdrun segmentation faults during steep energy minimization.  This
   happens on our cluster and on our iMacs.
  
   
  

   Any help would be appreciated. Also I can attach my mdp files.
  
   
 

  There are a whole host of things that could be going wrong.  Without
  substantially more information, including even more (like a thorough 
description
  of what these systems are and the exact commands of what worked 
before), then

  you won't get any useful advice.
 
  -Justin
 

Ok, system 1 that worked is biotin and streptavidin in a water box, and 
the larger system is just rifampicin in a water box for hydration 
energies. Both ligands are being removed via FEP.


 

As for the commands, they are identical, as we have made a systematic script 
that sets up our systems.


 


It is:

 



pdb2gmx -f base.pdb -o base.gro -p base.top

 


===Here we put the ligand .gro and the protein base .gro together.

 


editconf -bt cubic -f base.gro -o base.gro -c -d 3.5

genbox -cp base.gro -cs spc216.gro -o base_b4ion.gro -p base.top


grompp -f em.mdp -c base_b4ion.gro -p base.top -o base_b4ion.tpr -maxwarn 2

genion -s base_b4ion.tpr -o base_b4em.gro -neutral -conc 0.01 -pname NA 
-nname CL -g base_ion.log -p base.top


 


==Here select SOL

 


grompp -f em.mdp -c base_b4em.gro -p base.top -o base_em.tpr

mdrun -v -s base_em.tpr -c base_after_em.gro -g emlog.log -cpo stat_em.cpt

 


===Segmentation fault occurs here.


grompp -f pr.mdp -c base_after_em.gro -p base.top -o base_pr.tpr

mdrun -v -s base_pr.tpr -e pr.edr -c base_after_pr.gro -g prlog.log -cpi 
state_pr.cpt -cpo state_pr.cpt -dhdl dhdl-pr.xvg


grompp -f md.mdp -c base_after_pr.gro -p base.top -o base_md.tpr

mdrun -v -s base_md.tpr -o base_md.trr -c base_after_md.gro -g md.log -e 
md.edr -cpi state_md.cpt -cpo state_md.cpt -dhdl dhdl-md.xvg


grompp -f FEP.mdp -c base_after_md.gro -p base.top -o base_fep.tpr

mdrun -v -s base_fep.tpr -o base_fep.trr -c base_after_fep.gro -g 
fep.log -e fep.edr -cpi state_fep.cpt -cpo state_fep.cpt -dhdl dhdl-fep.xvg


 


I can include mdp files if that would help.



Yes, please do.

-Justin

 


Thank you,

TJ Mustard

 

 


  
   Thank you
  
   
  

   TJ Mustard
   Email: musta...@onid.orst.edu
  
 
  --
  
 
  Justin A. Lemkul
  Ph.D. Candidate
  ICTAS Doctoral Scholar
  MILES-IGERT Trainee
  Department of Biochemistry
  Virginia Tech
  Blacksburg, VA
  jalemkul[at]vt.edu | (540) 231-9080
  http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
  
 

 


TJ Mustard
Email: musta...@onid.orst.edu



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] segmentation fault

2011-01-15 Thread leila separdar
I have Gromacs 4.0.7. I have simulated 1000 atoms of argon with a Lennard-Jones
potential in a cubic box with a linear size of 3.34, but when I reduced units
I am confronted with this error:
Back Off! I just backed up md.log to ./#md.log.5#
Reading file argon.tpr, VERSION 4.0.7 (single precision)

Back Off! I just backed up argon.trr to ./#argon.trr.3#

Back Off! I just backed up ener.edr to ./#ener.edr.3#
starting mdrun 'Built with Packmol'
100 steps,   1000.0 ps.
Segmentation fault

There is no error in the energy minimization or the grompp command. Could you
please help me?
Here is my md.out file in reduced units.
cpp = /usr/bin/cpp
integrator  = md
dt  = 0.001
nsteps  = 100
nstcomm = 100
nstxout = 100
nstvout = 100
nstfout = 0
nstlog  = 100
nstenergy   = 100
nstlist = 10
ns_type = grid
rlist   = 2.9
coulombtype = PME
rcoulomb= 2.9
rvdw= 2.9
fourierspacing  = 0.35
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
pme_order   = 4
ewald_rtol  = 1e-5
optimize_fft= yes

;Berendsen temperature coupling is on in three groups
Tcoupl = berendsen
tau_t  = 0.1
tc_grps= GAS
ref_t  = 2.5

;generate velocities
gen_vel= yes
gen_temp= 2.5
gen_seed   = 173529

Re: [gmx-users] segmentation fault

2011-01-15 Thread Mark Abraham

On 15/01/2011 8:50 PM, leila separdar wrote:
I have Gromacs 4.0.7. I have simulated 1000 atoms of argon with a Lennard-Jones 
potential in a cubic box with a linear size of 3.34, but when I 
reduced units I am confronted with this error:

Back Off! I just backed up md.log to ./#md.log.5#
Reading file argon.tpr, VERSION 4.0.7 (single precision)

Back Off! I just backed up argon.trr to ./#argon.trr.3#

Back Off! I just backed up ener.edr to ./#ener.edr.3#
starting mdrun 'Built with Packmol'
100 steps,   1000.0 ps.
Segmentation fault


What was your command line? What does the end of the logfile say? Why 
are you using PME for an LJ simulation that presumably has no 
electrostatic interactions?
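
(For a system with no charges, plain cut-off electrostatics is the obvious
choice; a sketch of the relevant .mdp lines, matching the 2.9 cutoffs above:

coulombtype = Cut-off
rcoulomb    = 2.9
vdw-type    = Cut-off
rvdw        = 2.9

With all charges zero the Coulomb term contributes nothing either way, but a
plain cut-off avoids the PME grid setup entirely.)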


Mark



There is no error in the energy minimization or the grompp command. Could you 
please help me?

Here is my md.out file in reduced units.
cpp = /usr/bin/cpp
integrator  = md
dt  = 0.001
nsteps  = 100
nstcomm = 100
nstxout = 100
nstvout = 100
nstfout = 0
nstlog  = 100
nstenergy   = 100
nstlist = 10
ns_type = grid
rlist   = 2.9
coulombtype = PME
rcoulomb= 2.9
rvdw= 2.9
fourierspacing  = 0.35
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
pme_order   = 4
ewald_rtol  = 1e-5
optimize_fft= yes

;Berendsen temperature coupling is on in three groups
Tcoupl = berendsen
tau_t  = 0.1
tc_grps= GAS
ref_t  = 2.5

;generate velocities
gen_vel= yes
gen_temp= 2.5
gen_seed   = 173529 




Re: [gmx-users] segmentation fault

2011-01-15 Thread leila separdar
I managed to run my simulation, but which kind of coulombtype can I use other
than PME? Also, I get different averages for the kinetic and potential energies
than before reducing units. I think these numbers must be the same, because
E* = E/epsilon (epsilon is about 0.9977 for argon). Could you please help me
with this issue? (I have reduced the box size, dt, C6 = 4, C12 = 4, mass = 1,
and the temperature. Before reducing, the temperature is 300; after reducing it
is 2.5.)

On Sun, Jan 16, 2011 at 12:23 AM, Mark Abraham mark.abra...@anu.edu.auwrote:

 On 15/01/2011 8:50 PM, leila separdar wrote:

 I have Gromacs 4.0.7. I have simulated 1000 atoms of argon with a Lennard-Jones
 potential in a cubic box with a linear size of 3.34, but when I reduced
 units I am confronted with this error:
 Back Off! I just backed up md.log to ./#md.log.5#
 Reading file argon.tpr, VERSION 4.0.7 (single precision)

 Back Off! I just backed up argon.trr to ./#argon.trr.3#

 Back Off! I just backed up ener.edr to ./#ener.edr.3#
 starting mdrun 'Built with Packmol'
 100 steps,   1000.0 ps.
 Segmentation fault


 What was your command line? What does the end of the logfile say? Why are
 you using PME for an LJ simulation that presumably has no electrostatic
 interactions?

 Mark



 There is no error in the energy minimization or the grompp command. Could you
 please help me?
 Here is my md.out file in reduced units.
 cpp = /usr/bin/cpp
 integrator  = md
 dt  = 0.001
 nsteps  = 100
 nstcomm = 100
 nstxout = 100
 nstvout = 100
 nstfout = 0
 nstlog  = 100
 nstenergy   = 100
 nstlist = 10
 ns_type = grid
 rlist   = 2.9
 coulombtype = PME
 rcoulomb= 2.9
 rvdw= 2.9
 fourierspacing  = 0.35
 fourier_nx  = 0
 fourier_ny  = 0
 fourier_nz  = 0
 pme_order   = 4
 ewald_rtol  = 1e-5
 optimize_fft= yes

 ;Berendsen temperature coupling is on in three groups
 Tcoupl = berendsen
 tau_t  = 0.1
 tc_grps= GAS
 ref_t  = 2.5

 ;generate velocities
 gen_vel= yes
 gen_temp= 2.5
 gen_seed   = 173529




[gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread leila karami
Dear gromacs users

I used g_hbond tool for hydrogen bond analysis between protein and solvent
(water molecules).

I have encountered this:


Select a group: 3
Selected 3: 'Protein'
Select a group: 15
Selected 15: 'SOL'
Checking for overlap in atoms between Protein and SOL
Calculating hydrogen bonds between Protein (825 atoms) and SOL (22218 atoms)
Found 7441 donors and 7654 acceptors
Making hbmap structure...done.
Reading frame   0 time    0.000
Will do grid-seach on 14x14x14 grid, rcut=0.35

Back Off! I just backed up donor.xvg to ./#donor.xvg.2#
Reading frame 400 time 1600.000
Found 27249 different hydrogen bonds in trajectory
Found 33939 different atom-pairs within hydrogen bonding distance
Merging hbonds with Acceptor and Donor swapped
1/7441 *Segmentation fault*


How to fix it?

Any help will be highly appreciated.

Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

Hi,

Upgrade to 4.5.x and see if the problem persists. I've hacked g_hbond 
quite a bit since 4.0.5.


Erik

leila karami skrev 2010-12-18 14.02:

Dear gromacs users

I'm using gromacs 4.0.5 with the following command:

g_hbond -f .xtc -s .tpr -n .ndx -num -g -hbn

My system contains protein, DNA and water.

When I use the above command for protein and DNA, there is no problem; the 
segmentation fault occurs only for protein and water.


--
Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group




--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/



[gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread leila karami
Dear Erik

I used g_hbond in 4.5.1 but the problem was not solved.

-- 

Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

[gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread leila karami
Dear Erik

thanks for your attention.

In version 4.5.1, when I use g_hbond -f .xtc -s .tpr -n .ndx -num -g -hbn,
gromacs gives me only hbnum.xvg, with a segmentation fault.
When I use g_hbond -f .xtc -s .tpr -n .ndx -num -g, gromacs gives me
hbnum.xvg without a segmentation fault.

How do I file a bugzilla and attach a tpr and xtc/trr? I don't know how.

The xtc file is large; how can I attach it for the list?


-- 

Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

leila karami skrev 2010-12-18 15.38:

Dear Erik

thanks for your attention.

In version 4.5.1, when I use g_hbond -f .xtc -s .tpr -n .ndx -num -g 
-hbn, gromacs gives me only hbnum.xvg, with a segmentation fault.
When I use g_hbond -f .xtc -s .tpr -n .ndx -num -g, gromacs gives me 
only hbnum.xvg without a segmentation fault.
I'd like to see the actual output too, although it may not be necessary 
if I can reproduce the error myself.


How do I file a bugzilla and attach a tpr and xtc/trr? I don't know how.

The xtc file is large; how can I attach it for the list?


you could upload it somewhere and post a link to it.


--
Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group




--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/



Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

Erik Marklund skrev 2010-12-18 15.40:

leila karami skrev 2010-12-18 15.38:

Dear Erik

thanks for your attention.

In version 4.5.1, when I use g_hbond -f .xtc -s .tpr -n .ndx -num -g 
-hbn, gromacs gives me only hbnum.xvg, with a segmentation fault.
When I use g_hbond -f .xtc -s .tpr -n .ndx -num -g, gromacs gives me 
only hbnum.xvg without a segmentation fault.
I'd like to see the actual output too, although it may not be necessary 
if I can reproduce the error myself.


How do I file a bugzilla and attach a tpr and xtc/trr? I don't know how.

The xtc file is large; how can I attach it for the list?


you could upload it somewhere and post a link to it.

...and here's the bugzilla tracker.


--
Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group







--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/



[gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread leila karami
Dear Erik

Excuse me, I sent the .xtc and .tpr files to your e-mail.



-- 

Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

leila karami skrev 2010-12-18 16.18:

Dear Erik

Excuse me, I sent the .xtc and .tpr files to your e-mail.



--
Leila Karami
Ph.D. student of Physical Chemistry

K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

I can reproduce the segfault. It doesn't happen without -hbn. I'll have 
a crack at fixing it. I won't promise to have it done before Christmas, 
but I'll try.


--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/


Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

Erik Marklund skrev 2010-12-18 16.23:

leila karami skrev 2010-12-18 16.18:

Dear Erik

Excuse me, I sent the .xtc and .tpr files to your e-mail.



--
Leila Karami
Ph.D. student of Physical Chemistry

K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

I can reproduce the segfault. It doesn't happen without -hbn. I'll 
have a crack at fixing it. I won't promise to have it done before 
Christmas, but I'll try.
Fixed it. There's a call to clearPshift in do_merge which causes a 
segfault if g_hbond is run without -geminate. Here's what you do:


In gmx_hbond.c, enclose the call in an if-statement:

if (hb->per->pHist)
{
    clearPshift(&(hb->per->pHist[a1][a2]));
}

I'll push this to the master and release branches some time today.

Thanks for reporting this.

Regards,

Erik



--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/


Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

Erik Marklund skrev 2010-12-18 16.36:

Erik Marklund skrev 2010-12-18 16.23:

leila karami skrev 2010-12-18 16.18:

Dear Erik

Excuse me, I sent the .xtc and .tpr files to your e-mail.



--
Leila Karami
Ph.D. student of Physical Chemistry

K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

I can reproduce the segfault. It doesn't happen without -hbn. I'll 
have a crack at fixing it. I won't promise to have it done before 
Christmas, but I'll try.
Fixed it. There's a call to clearPshift in do_merge which causes a 
segfault if g_hbond is run without -geminate. Here's what you do:


In gmx_hbond.c, enclose the call in an if-statement:

if (hb->per->pHist)
{
    clearPshift(&(hb->per->pHist[a1][a2]));
}

I'll push this to the master and release branches some time today.

Thanks for reporting this.

Regards,

Erik
As it turns out, I (or possibly someone else) had already fixed this 
issue in the master and release branches. Hence the solution is again to 
update your gromacs installation.


Erik


--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/


[gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread leila karami
Dear Erik

There are several occurrences of hb->per->pHist in gmx_hbond.c.

Please tell me exactly in which part of the gmx_hbond.c file the if-statement
should be placed?


If I install gromacs 4.5.2 or 4.5.3, will this problem (segmentation fault)
be gone?

-- 

Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

leila karami skrev 2010-12-18 16.52:

Dear Erik

There are several occurrences of hb->per->pHist in gmx_hbond.c.

Please tell me exactly in which part of the gmx_hbond.c file 
the if-statement should be placed?



If I install gromacs 4.5.2 or 4.5.3, will this problem 
(segmentation fault) be gone?


--
Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

As I said: There's a call to clearPshift in do_merge which causes a 
segfault. There's only one call to clearPshift in do_merge as far as I 
can remember.


--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/



Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Erik Marklund

Erik Marklund skrev 2010-12-18 21.15:

leila karami skrev 2010-12-18 16.52:

Dear Erik

There are several occurrences of hb->per->pHist in gmx_hbond.c.

Please tell me exactly in which part of the gmx_hbond.c file 
the if-statement should be placed?



If I install gromacs 4.5.2 or 4.5.3, will this problem 
(segmentation fault) be gone?

Sorry, forgot to answer this one.

I could have a look, but so could you. I would *think* that 4.5.3 is ok 
in this respect. If you check out the release-4-5-patches branch from 
git.gromacs.org you're definitely safe.
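
For example (clone URL as advertised on git.gromacs.org at the time; branch
name as above):

git clone git://git.gromacs.org/gromacs.git
cd gromacs
git checkout -b release-4-5-patches origin/release-4-5-patches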


--
Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

As I said: There's a call to clearPshift in do_merge which causes a 
segfault. There's only one call to clearPshift in do_merge as far as I 
can remember.





--
---
Erik Marklund, PhD student
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537    fax: +46 18 511 755
er...@xray.bmc.uu.se    http://folding.bmc.uu.se/



Re: [gmx-users] Segmentation fault in g_hbond

2010-12-18 Thread Justin A. Lemkul



Erik Marklund wrote:

leila karami skrev 2010-12-18 16.52:

Dear Erik

There are several occurrences of hb->per->pHist in gmx_hbond.c.

Please tell me exactly in which part of the gmx_hbond.c file 
the if-statement should be placed?



If I install gromacs 4.5.2 or 4.5.3, will this problem 
(segmentation fault) be gone?


--
Leila Karami
Ph.D. student of Physical Chemistry
K.N. Toosi University of Technology
Theoretical Physical Chemistry Group

As I said: There's a call to clearPshift in do_merge which causes a 
segfault. There's only one call to clearPshift in do_merge as far as I 
can remember.




There is, and there are no code modifications necessary if the OP simply 
upgrades to version 4.5.3, which contains the proper code already.


-Justin
--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault at g_traj

2010-08-12 Thread Jorge Alberto Jover Galtier
Dear friends:

Running several simulations, we have found a problem with the 'g_traj'
utility. It doesn't properly finish the files it generates, and gives a
segmentation fault. This is what we have done:

We are working with double precision. With the .mdp file that is at the end
of the mail, we have used 'grompp_d' to generate the .tpr file:

grompp_d -maxwarn 1000 -f temp_prueba_mail-list_001_mdpmdp -c
../data/in/pep_ini.gro -r ../data/in/pep_ini.gro -p ../data/in/pep.top -o
temp_prueba_mail-list_001_tpr.tpr

After that, we ran the simulation with 'mdrun_d':

mdrun_d -s temp_prueba_mail-list_001_tpr.tpr -o temp_prueba_mail-list_001_trr.trr
-c ../data/out/gro_unconstr_1.00_01.gro -g ../data/out/log_unconstr_1.00_01.log
-e temp_prueba_mail-list_001_edr.edr

Then we tried to get the coordinates of the atoms with 'g_traj_d':

g_traj_d -f temp_prueba_mail-list_001_trr.trr -s temp_prueba_mail-list_001_tpr.tpr
-ox temp_prueba_mail-list_001_ox.xvg

At the terminal, we tell the program to get the coordinates from group 0
(System), although the error also appears for other groups.

Here is where the problem appears. When the program is about to finish, it hits
a segmentation fault and ends abruptly. The .xvg file has only some of the last
lines missing, but those are the lines we are interested in. We have tried
different ways: we have used different numbers of steps, and we have extracted
velocities and forces instead of coordinates... and the same problem always
appears.

We would be very thankful if someone could tell us what is going wrong.

Best wishes,
Jorge Alberto Jover Galtier
Universidad de Zaragoza, Spain

---
; VARIOUS PREPROCESSING OPTIONS
title                = Yo
cpp                  = /usr/bin/cpp
include              = 
define               = 

; RUN CONTROL PARAMETERS
integrator           = md
; Start time and timestep in ps
tinit                = 0
dt                   = 0.001000
nsteps               = 10
; For exact run continuation or redoing part of a run
init_step            = 0
; mode for center of mass motion removal
comm-mode            = none
; number of steps for center of mass motion removal
nstcomm              = 1
; group(s) for center of mass motion removal
comm-grps            = 

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout              = 1
nstvout              = 1
nstfout              = 1
; Checkpointing helps you continue after crashes
nstcheckpoint        = 1000
; Output frequency for energies to log file and energy file
nstlog               = 1000
nstenergy            = 1
nstcalcenergy        = 1
; Output frequency and precision for xtc file
nstxtcout            = 50
xtc-precision        = 1000
; This selects the subset of atoms for the xtc file. You can
; select multiple groups. By default all atoms will be written.
xtc-grps             = 
; Selection of energy groups
energygrps           = 

; NEIGHBORSEARCHING PARAMETERS
; nblist update frequency
nstlist              = -1
; ns algorithm (simple or grid)
ns_type              = grid
; Periodic boundary conditions: xyz (default), no (vacuum)
; or full (infinite systems only)
pbc                  = no
; nblist cut-off
rlist                = 20
domain-decomposition = no

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype          = Reaction-Field-zero
rcoulomb-switch      = 0
rcoulomb             = 4
; Dielectric constant (DC) for cut-off or DC of reaction field
epsilon-r            = 1
epsilon-rf           = 0
; Method for doing Van der Waals
vdw-type             = Shift
; cut-off lengths
rvdw-switch          = 0
rvdw                 = 4
; Apply long range dispersion corrections for Energy and Pressure
DispCorr             = no
; Extension of the potential lookup tables beyond the cut-off
table-extension      = 1

; IMPLICIT SOLVENT (for use with Generalized Born electrostatics)
implicit_solvent     = No

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling
Tcoupl               = no
; Groups to couple separately
tc-grps              = System
; Time constant (ps) and reference temperature (K)
tau_t                = 0.1
ref_t                = 300
; Pressure coupling
Pcoupl               = no
Pcoupltype           = isotropic
; Time constant (ps), compressibility (1/bar) and reference P (bar)
tau_p                = 1.0
compressibility      = 4.5e-5
ref_p                = 1.0
; Random seed for Andersen thermostat
andersen_seed        = 815131

; GENERATE VELOCITIES FOR STARTUP RUN
gen_vel              = yes
gen_temp             = 300
gen_seed             = 556380

; OPTIONS FOR BONDS
constraints          = none
; Type of constraint algorithm
constraint-algorithm = Shake
; Do not constrain the start configuration
unconstrained-start  = yes
; Use successive overrelaxation to reduce the number of shake iterations
Shake-SOR            = no
; Relative tolerance of

Re: [gmx-users] Segmentation fault at g_traj

2010-08-12 Thread Justin A. Lemkul



Jorge Alberto Jover Galtier wrote:

Dear friends:
Running several simulations, we have found a problem with the 'g_traj' 
utility. It doesn't properly finish the files it generates, and gives a 
segmentation fault. This is what we have done:


We are working with double precision. With the .mdp file that is at the 
end of the mail, we have used 'grompp_d' to generate the .tpr file:


grompp_d -maxwarn 1000 -f temp_prueba_mail-list_001_mdpmdp -c 
../data/in/pep_ini.gro -r ../data/in/pep_ini.gro -p ../data/in/pep.top 
-o temp_prueba_mail-list_001_tpr.tpr


After that, we ran the simulation with 'mdrun_d':

mdrun_d -s temp_prueba_mail-list_001_tpr.tpr -o 
temp_prueba_mail-list_001_trr.trr -c 
../data/out/gro_unconstr_1.00_01.gro -g 
../data/out/log_unconstr_1.00_01.log -e 
temp_prueba_mail-list_001_edr.edr


Then we tried to get the coordinates of the atoms with 'g_traj_d':

g_traj_d -f temp_prueba_mail-list_001_trr.trr -s 
temp_prueba_mail-list_001_tpr.tpr -ox temp_prueba_mail-list_001_ox.xvg


At the terminal, we tell the program to get the coordinates from group 0 
(System), although the error also appears for other groups.


Here is where the problem appears. When the program is about to finish, 
it gives a segmentation fault and ends abruptly. The .xvg file has only 
some of the last lines missing, but those are the lines we are 
interested in. We have tried different ways: we have used different 
numbers of steps, we have extracted velocities and forces instead of 
coordinates... and always the same problem appears.


We would be very thankful if someone could tell us what is going wrong.



You're probably running out of memory.  Your .mdp file indicates that you save 
full-precision coordinates every step (yikes!) over 100,000 steps.  If you're 
trying to print the coordinate of every atom at every time, then the file that 
g_traj is trying to produce will be enormous, and you'll potentially use up all 
the memory your machine has.
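
For a rough sense of the scale involved (back-of-the-envelope arithmetic, 
assuming a hypothetical system of ~10,000 atoms, since the actual atom count 
was not given in the thread):

   3 coords/atom x 10,000 atoms x ~13 chars/field  ~  0.4 MB per frame line
   0.4 MB/frame  x 100,001 frames (nstxout = 1)    ~  40 GB of .xvg output

At that size g_traj would plausibly also buffer a comparable amount in memory 
while it works, which would be consistent with a crash right at the end of 
the run.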


Other diagnostic information that would be useful would be the number of atoms 
in the system (to see if I'm on to something or completely guessing).  Does 
g_traj work if you just try to output a single frame, or just a few using -b and -e?
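
For example, a minimal check along these lines (file names taken from the 
commands above; the output name here is just an example; -b and -e select 
the time window in ps):

g_traj_d -f temp_prueba_mail-list_001_trr.trr -s temp_prueba_mail-list_001_tpr.tpr -ox test_frame0_ox.xvg -b 0 -e 0

If that completes cleanly, a size/memory limit is the more likely culprit 
than a corrupt trajectory.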


-Justin


Best wishes,
Jorge Alberto Jover Galtier
Universidad de Zaragoza, Spain

---

; VARIOUS PREPROCESSING OPTIONS
title= Yo
cpp  = /usr/bin/cpp
include  =
define   =

; RUN CONTROL PARAMETERS
integrator   = md
; Start time and timestep in ps
tinit= 0
dt = 0.001000
nsteps   = 10
; For exact run continuation or redoing part of a run
init_step= 0
; mode for center of mass motion removal
comm-mode= none
; number of steps for center of mass motion removal
nstcomm  = 1
; group(s) for center of mass motion removal
comm-grps=

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout  = 1
nstvout  = 1
nstfout  = 1
; Checkpointing helps you continue after crashes
nstcheckpoint= 1000
; Output frequency for energies to log file and energy file
nstlog   = 1000
nstenergy= 1
nstcalcenergy = 1
; Output frequency and precision for xtc file
nstxtcout= 50
xtc-precision= 1000
; This selects the subset of atoms for the xtc file. You can
; select multiple groups. By default all atoms will be written.
xtc-grps =
; Selection of energy groups
energygrps   =

; NEIGHBORSEARCHING PARAMETERS
; nblist update frequency
nstlist  = -1
; ns algorithm (simple or grid)
ns_type  = grid
; Periodic boundary conditions: xyz (default), no (vacuum)
; or full (infinite systems only)
pbc  = no
; nblist cut-off   
rlist= 20

domain-decomposition = no

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype  = Reaction-Field-zero
rcoulomb-switch  = 0
rcoulomb = 4
; Dielectric constant (DC) for cut-off or DC of reaction field
epsilon-r= 1
epsilon-rf = 0
; Method for doing Van der Waals
vdw-type = Shift
; cut-off lengths  
rvdw-switch  = 0

rvdw = 4
; Apply long range dispersion corrections for Energy and Pressure
DispCorr = no
; Extension of the potential lookup tables beyond the cut-off
table-extension  = 1

; IMPLICIT SOLVENT (for use with Generalized Born electrostatics)
implicit_solvent = No

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling 
Tcoupl   = no

; Groups to couple separately
tc-grps  = System
; Time constant (ps) and reference temperature (K)
tau_t

[gmx-users] Segmentation Fault with g_dielectric

2010-07-20 Thread Jennifer Casey
Hello,

I am trying to calculate the dielectric constant for pure tetrahydrofuran
(THF) at 298K.  I keep running into problems though.  I have looked through
the gmx user list to see if others have had these problems, but I didn't see
any mention of them (although I did see that others were asked to report
issues with g_dipoles to bugzilla).

The first thing I do is run g_dipoles using the command (I do this in order
to get the ACF to use in g_dielectric):
g_dipoles -f nvt10ns.xtc -s previous.tpr -corr mol -mu 1.75
*I would have liked to attach the tpr and xtc file, but the message was too
big.  I can send them if they will help*
When I do this, I get the following output to terminal:


There are 255 molecules in the selection
Using volume from topology:  34.3361 nm^3
Last Frame 3 time 12000.001
t0 0, t 12000, teller 30001
**then there is a long pause (approx 5 minutes)**
Dipole Moment (Debye)
__
Average = 1.9194 Std. Dev.  = 0.0085 Error = 0.

**Then it lists the different dipole moments, kirkwood factors, and finally
an epsilon = 4.47756**
(I won't bother to write all of the info down)

I wanted to include the output files, but the e-mail was too big and
wouldn't go through.  I can send them later.

It seems that the g_dipoles is working fine for me.


Once I have the autocorrelation function (dipcorr.xvg), I want to use
g_dielectric.  Before I talk about the problems I have here, I wanted to
verify a few things about the various options:
epsRF - the default here is 78.5, even though the default in g_dipoles is 0
(infinity).  I wanted it to be infinity, so I assume I change it.
eps0 - this is the epsilon of my liquid - but is it the epsilon that was
calculated from g_dipoles (4.47756)?

When I run the command:
g_dielectric -f dipcorr.xvg -epsRF 0 -d -o -c
I get a segmentation fault before anything happens:
Read data set containing 2 columns and 15001 rows
Assuming (from data) that timestep is 0.4, nxtail = 1249
Creating standard deviation numbers ...
nbegin = 13, x[nbegin] = 5.2, tbegin = 5
Segmentation Fault

If I leave out the -epsRF, I still get the same error.  If I include eps0, I
still get a segmentation fault.  It seems strange to me since GROMACS
generates the input and yet has an issue with it.

I would like to point out that the manual states to use dipcorr.xvg to get
the dielectric constant, but after reading the paper GROMACS references, it
seems that Mtot^2 is more appropriate. I tried running the command
g_dielectric -f Mtot.xvg, and the segmentation fault went away.  Instead,
lambda went to infinity and there was a fatal error (nparm = 0 in the file
../../../../src/tools/exptfit.c, line 466).

I am probably missing something obvious, but I am having a hard time
figuring out what it is.  I appreciate any help.

Thank you for your time,
Jenny

[gmx-users] segmentation fault in position restrained step

2010-04-25 Thread Moeed
Dear Justin,

1- Could you please check if I have grouped the atoms properly? At last, I
could generate the .tpr file for the PR step. Since I am getting a
segmentation fault in the next step, I thought maybe there is something
wrong with the charge groups.

However, I have a fundamental question. I am to compute interaction
parameters for a ternary system of hexane/polyethylene/ethylene. So far I have
only hexane as solvent; later, polyethylene and ethylene will be added. My
question is: for this apolar system, do I need to worry about electrostatic
interactions between atoms? I mean, could I skip building charge groups if I
am interested in calculation of interaction parameters BETWEEN hexane,
polyethylene and ethylene?

[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass   typeB  chargeB  massB
     1   opls_157  1   HEX  C1  1  -0.18 12.011   ; qtot -0.18
     2   opls_158  1   HEX  C2  2  -0.12 12.011   ; qtot -0.3
     3   opls_158  1   HEX  C3  3  -0.12 12.011   ; qtot -0.42
     4   opls_158  1   HEX  C4  4  -0.12 12.011   ; qtot -0.54
     5   opls_158  1   HEX  C5  5  -0.12 12.011   ; qtot -0.66
     6   opls_157  1   HEX  C6  6  -0.18 12.011   ; qtot -0.84
     7   opls_140  1   HEX  H1  1   0.06  1.008   ; qtot -0.78
     8   opls_140  1   HEX  H2  1   0.06  1.008   ; qtot -0.72
     9   opls_140  1   HEX  H3  1   0.06  1.008   ; qtot -0.66
    10   opls_140  1   HEX  H4  2   0.06  1.008   ; qtot -0.6
    11   opls_140  1   HEX  H5  2   0.06  1.008   ; qtot -0.54
    12   opls_140  1   HEX  H6  3   0.06  1.008   ; qtot -0.48
    13   opls_140  1   HEX  H7  3   0.06  1.008   ; qtot -0.42
    14   opls_140  1   HEX  H8  4   0.06  1.008   ; qtot -0.36
    15   opls_140  1   HEX  H9  4   0.06  1.008   ; qtot -0.3
    16   opls_140  1   HEX H10  5   0.06  1.008   ; qtot -0.24
    17   opls_140  1   HEX H11  5   0.06  1.008   ; qtot -0.18
    18   opls_140  1   HEX H12  6   0.06  1.008   ; qtot -0.12
    19   opls_140  1   HEX H13  6   0.06  1.008   ; qtot -0.06
    20   opls_140  1   HEX H14  6   0.06  1.008   ; qtot 0

2- I tried to run the position restrained simulation (VERSION 4.0.7) with
mdrun -s Hexane_pr.tpr -o Hexane_pr.tpr -c Hexane_b4md -v > output.mdrun_pr.
After a few seconds I get a segmentation fault. I did
a thorough search of the mailing list and found a similar situation where Mr.
Mark Abraham pointed to
http://oldwiki.gromacs.org/index.php/blowing_up but the link is not
working. Do I need to change the constraint algorithm? Could
you please tell me why I am getting the segmentation error? Does it have to do
with kinetic energy?

output.mdrun_pr

starting mdrun 'HEX'
500 steps,  1.0 ps.
step 0
Step 26, time 0.052 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000193, max 0.001508 (between atoms 81 and 100)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   2541   2558   31.8    0.1090   0.1090   0.1090

Step 29, time 0.058 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000200, max 0.001497 (between atoms 1661 and 1678)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
    301    318   31.4    0.1091   0.1090   0.1090
   1521   1538   30.7    0.1091   0.1091   0.1090

Step 31, time 0.062 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000184, max 0.001189 (between atoms 1582 and 1596)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
    321    338   30.4    0.1091   0.1091   0.1090
   1861   1878   31.4    0.1091   0.1090   0.1090
   3481   3498   30.1    0.1091   0.1091   0.1090
   4841   4858   30.3    0.1091   0.1091   0.1090

Step 33, time 0.066 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000191, max 0.001602 (between atoms 4542 and 4556)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   3101   3118   30.5    0.1091   0.1091   0.1090

Step 38, time 0.076 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000242, max 0.001985 (between atoms 2885 and 2891)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   2885   2891   33.2    0.1090   0.1092   0.1090

Step 39, time 0.078 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.000234, max 0.002949 (between atoms 1465 and 1471)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   1425

Re: [gmx-users] segmentation fault in position restrained step

2010-04-25 Thread Justin A. Lemkul



Moeed wrote:

Dear Justin,

1- Could you please check if I have grouped the atoms properly? At last, I 
could generate the .tpr file for the PR step. Since I am getting a 
segmentation fault in the next step, I thought maybe there is something 
wrong with the charge groups.


I don't know how your labeling is set up.  If your CH3 and CH2 groups are all 
your charge groups, then you should be fine.  In fact, this is what grompp 
suggested in the note.  You can get a sense of what might be appropriate by 
looking at your force field's .rtp file and looking at the functional groups.
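
For instance, on a 4.0.x installation the OPLS-AA residue definitions can 
usually be inspected like this (the path is the typical one, not guaranteed; 
GMXLIB is set when you source GMXRC):

less $GMXLIB/ffoplsaa.rtp     # adjust if your install prefix differs

In each [ atoms ] block the last column is the charge group number, so you 
can see directly how the force field keeps the atoms of one functional group 
together in one charge group.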




However, I have a fundamental question. I am to compute interaction 
parameters for a ternary system of hexane/polyethylene/ethylene. So far I 
have only hexane as solvent; later, polyethylene and ethylene will be 
added. My question is: for this apolar system, do I need to worry about 
electrostatic interactions between atoms? I mean, could I skip building 
charge groups if I am interested in calculation of interaction parameters 
BETWEEN hexane, polyethylene and ethylene?


Your atoms have partial charges, do they not?  Then you certainly need to 
consider proper electrostatics treatment.
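
For reference, proper treatment here would typically mean PME; a minimal 
.mdp fragment as a sketch (illustrative values only, not tuned for this 
system):

; electrostatics sketch -- illustrative values only
coulombtype     = PME
rcoulomb        = 1.0      ; real-space cut-off (nm)
fourierspacing  = 0.12     ; FFT grid spacing (nm)
pme_order       = 4        ; interpolation order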




[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass   typeB  chargeB  massB
     1   opls_157  1   HEX  C1  1  -0.18 12.011   ; qtot -0.18
     2   opls_158  1   HEX  C2  2  -0.12 12.011   ; qtot -0.3
     3   opls_158  1   HEX  C3  3  -0.12 12.011   ; qtot -0.42
     4   opls_158  1   HEX  C4  4  -0.12 12.011   ; qtot -0.54
     5   opls_158  1   HEX  C5  5  -0.12 12.011   ; qtot -0.66
     6   opls_157  1   HEX  C6  6  -0.18 12.011   ; qtot -0.84
     7   opls_140  1   HEX  H1  1   0.06  1.008   ; qtot -0.78
     8   opls_140  1   HEX  H2  1   0.06  1.008   ; qtot -0.72
     9   opls_140  1   HEX  H3  1   0.06  1.008   ; qtot -0.66
    10   opls_140  1   HEX  H4  2   0.06  1.008   ; qtot -0.6
    11   opls_140  1   HEX  H5  2   0.06  1.008   ; qtot -0.54
    12   opls_140  1   HEX  H6  3   0.06  1.008   ; qtot -0.48
    13   opls_140  1   HEX  H7  3   0.06  1.008   ; qtot -0.42
    14   opls_140  1   HEX  H8  4   0.06  1.008   ; qtot -0.36
    15   opls_140  1   HEX  H9  4   0.06  1.008   ; qtot -0.3
    16   opls_140  1   HEX H10  5   0.06  1.008   ; qtot -0.24
    17   opls_140  1   HEX H11  5   0.06  1.008   ; qtot -0.18
    18   opls_140  1   HEX H12  6   0.06  1.008   ; qtot -0.12
    19   opls_140  1   HEX H13  6   0.06  1.008   ; qtot -0.06
    20   opls_140  1   HEX H14  6   0.06  1.008   ; qtot 0


2- I tried to run the position restrained simulation (VERSION 4.0.7) 
with mdrun -s Hexane_pr.tpr -o Hexane_pr.tpr -c Hexane_b4md -v > output.mdrun_pr. 
After a few seconds I get a segmentation fault. I 
did a thorough search of the mailing list and found a similar situation where 
Mr. Mark Abraham pointed to 
http://oldwiki.gromacs.org/index.php/blowing_up but the link is not 
working. Do I need to change the constraint algorithm? Could you please 
tell me why I am getting the segmentation error? Does it have to do with 
kinetic energy?




Did you do energy minimization?  Usually instabilities like this arise because 
either the system is energetically unstable due to atomic clashes, or something 
about the underlying model physics is broken.  You haven't mentioned how you 
built your system or if you energy minimized it, so I assume that you simply 
haven't resolved the clashes in your system.
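
For completeness, the usual minimization step looks roughly like this (a 
sketch; file names and values are illustrative, not taken from this thread, 
and the remaining .mdp options are left at their defaults):

; em.mdp -- minimal steepest-descent energy minimization (sketch)
integrator  = steep
emtol       = 1000.0   ; stop once max force < 1000 kJ/mol/nm
emstep      = 0.01     ; initial step size (nm)
nsteps      = 5000     ; upper bound on minimizer steps

grompp -f em.mdp -c Hexane_box.gro -p topol.top -o Hexane_em.tpr
mdrun -s Hexane_em.tpr -c Hexane_em.gro -v

Only once the minimized structure (Hexane_em.gro above, a hypothetical name) 
looks sane is it worth attempting equilibration.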


snip


define  =  -DPOSRES


If your system is just a box of hexane, restraining anything doesn't make sense 
to me.  One usually employs position restraints on a solute (like a protein) to 
relax the solvent (usually water) around it.  If you're trying to equilibrate a 
hexane system, you're just wasting your time by restraining any or all of the 
molecules.
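
For reference, position restraints are normally gated in the topology with a 
preprocessor block like the following (a sketch of the common pdb2gmx 
pattern, not Moeed's actual files):

; in the molecule's topology (common pattern)
#ifdef POSRES
#include "posre.itp"   ; harmonic position restraints on selected atoms
#endif

Dropping define = -DPOSRES from the .mdp then turns the restraints off 
without any topology edits.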


-Justin

--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Segmentation fault with gromacs executables

2010-04-25 Thread Sikandar Mashayak
Hi

I am facing a strange problem of segmentation faults while executing
mpi-enabled gromacs executables on a remote server.

I source GMXRC so that I can access the executables from any directory without
specifying the full path to gromacs/bin. And when I execute, say, grompp, I get
a segmentation fault.
But when I use the same command by specifying the full path
/home/.../gromacs/bin/grompp, it executes without any issues!!

What's going wrong here?
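
One way to see which binary each invocation actually resolves to (a 
diagnostic sketch; the install path is the elided one from the post):

which grompp                                 # what the shell finds after sourcing GMXRC
echo $PATH | tr ':' '\n' | grep -i gromacs   # every gromacs entry on PATH
ls -l /home/.../gromacs/bin/grompp           # the full path known to work

If 'which' reports something other than /home/.../gromacs/bin/grompp, an 
older or differently built binary is shadowing the working one.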

thanks
sikandar
