Re: [gmx-users] Error in system_inflate.gro coordinates does not match
hello gromacs users,

I used VMD and both the "pbc box" command and periodic images in all 6 directions (+X -X +Y -Y +Z -Z) and self, but this cannot put the protein inside the box; the "pbc box" action only creates a box around the protein and lipid, and about 1/4 of the protein is inside the box. How do I put it inside? Kindly help.

On Wed, Jul 16, 2014 at 5:24 PM, Justin Lemkul jalem...@vt.edu wrote:

On 7/16/14, 6:53 AM, RINU KHATTRI wrote:
hello gromacs users, I am working on a protein complex with a POPC membrane. At the time of the minimization and shrinking steps my protein complex is out of the lipid membrane; the protein is in the lipid membrane until inflate.gro, but after that it is out.

This indicates that the protein is not properly centered or the box is of insufficient size to accommodate the components of the system. Use "pbc box" in the Tcl console of VMD on the input and output structures and it should be obvious what is going on.

-Justin

--
==
Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow
Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201
jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul
==

--
Gromacs Users mailing list
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
* For (un)subscribe requests visit https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-requ...@gromacs.org.
Re: [gmx-users] Error in system_inflate.gro coordinates does not match
On 7/17/14, 4:52 AM, RINU KHATTRI wrote:
hello gromacs users, I used VMD and both the "pbc box" command and periodic images in all 6 directions (+X -X +Y -Y +Z -Z) and self, but this cannot put the protein inside the box; "pbc box" only creates a box around the protein and lipid, and about 1/4 of the protein is inside the box.

All of this simply suggests that you built the system wrong, either by placing the protein in the wrong location with respect to the membrane and/or setting the box size too small for the system at hand, such that the protein jumps across a boundary. As Dallas requested, it would be extremely helpful if you posted some images of what's going on.

-Justin
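If VMD alone can't produce a sensibly wrapped structure, trjconv can usually re-wrap molecules and center the protein before visualizing. A sketch only (file names are placeholders for your own, and the interactive group choices are assumptions):

```
# Make molecules whole and center the protein in a compact unit cell
# (GROMACS 4.x tool names; replace file names with your own).
trjconv -s topol.tpr -f system_inflate.gro -o system_centered.gro \
        -pbc mol -ur compact -center
# When prompted, a typical choice is "Protein" for centering
# and "System" for output.
```

Inspecting system_centered.gro in VMD with "pbc box" should then show whether the box itself is simply too small.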
Re: [gmx-users] Regarding segmentation fault in GPU_gromacs 4.6.1
On 7/17/14, 1:51 AM, DeBaJiT DeY wrote:
Hello, I am trying to simulate a peptide-membrane system on GPU using gromacs 4.6.1. However, whenever I invoke the mdrun command I get an error: *srun: error: coe3: task 0: Segmentation fault (core dumped)*. Can anybody please suggest what might be the problem?

*Script file used to submit the job*:

#!/bin/bash
#SBATCH --job-name=GRMCS-GPU
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --nodelist=coe3
#SBATCH -e slurm.err
#SBATCH -v
#cd $SLURM_SUBMIT_DIR
echo CWD: $SLURM_SUBMIT_DIR
echo NODELIST: $SLURM_JOB_NODELIST
export PATH=/opt/cuda-5.0/bin:$PATH
export LD_LIBRARY_PATH=/opt/cuda-5.0/lib64:/opt/cuda-5.0/lib:$LD_LIBRARY_PATH
srun -N1 -s /opt/apps/gromacs_GPU/bin/mdrun_mpi -deffnm EM

*.mdp file for energy minimization*:

title       = EM of B2AR-POPC system  ; Title of the minimization run
; Parameters describing the details of the energy minimization protocol
define      =           ; Define any restraint (-DPOSRES, -DSTRONG_POSRES, or leave blank if there is no restraint)
integrator  = steep     ; EM algorithm (steep = steepest descent; cg = conjugate gradient; l-bfgs = low-memory Broyden-Fletcher-Goldfarb-Shanno quasi-Newtonian minimizer)
emtol       = 100.0     ; Minimization is stopped when the maximum force on an atom is less than the given value (kJ/mol/nm)
emstep      = 0.01      ; Initial step size (nm)
nsteps      = 2         ; Maximum number of (energy minimization) steps to be performed
; Parameters describing neighbor searching and details about interaction calculations
nstlist     = 1         ; Neighbor list update frequency (after every given number of steps)
ns_type     = grid      ; Neighbor list search method (simple, grid)
rlist       = 1.2       ; Neighbor list search cut-off (nm)
coulombtype = PME       ; Long-range electrostatic interactions treatment (cut-off, Ewald, PME)
rcoulomb    = 1.2       ; Short-range electrostatic cut-off (nm)
rvdw        = 1.2       ; Short-range van der Waals cut-off (nm)
pbc         = xyz       ; Directions in which to use periodic boundary conditions (xyz, xy, no)

Hard to say without any actual information from Gromacs.
Can you tell us:
1. Does the EM work on CPU?
2. Do other systems run on the GPU?
3. Can you post your .log file?

-Justin
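For question 1, a quick way to try the same EM input without the GPU is a CPU-only run. A sketch (assumes a non-MPI GROMACS 4.6.x mdrun on the PATH, an existing EM.tpr, and the Verlet cutoff-scheme, for which the -nb option selects the nonbonded kernels):

```
# Run the same energy minimization with CPU-only nonbonded kernels,
# bypassing the GPU, to see whether the segfault persists.
mdrun -nb cpu -deffnm EM
```

If this completes cleanly, the problem is more likely in the GPU/CUDA setup than in the system itself.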
Re: [gmx-users] Graphene topology file
Hi gromacs users,

I read that for slab geometries where you need pbc in xy, you can use pbc = xyz, leave a slab of vacuum between the periodic images in z, and use ewald3dc (the Yeh and Berkowitz method). So I changed my mdp file, extended my box in the z direction (with no solvent molecules), and did the energy minimization and NPT again. In EM the box looks fine, but with NPT I again get the same output, i.e. water molecules fly away and the box size in the z direction increases drastically, though the vacuum layer is still maintained. Then I changed my pressure coupling to Parrinello-Rahman instead of Berendsen. This time NPT runs some steps and gives an error message:

Fatal error: One of the box vectors has become shorter than twice the cut-off length or box_yy-|box_zy| or box_zz has become smaller than the cut-off.

Can anyone please let me know how I can solve this problem? If we have a layer of vacuum, won't the box size reduce in NPT and cause this error? Should I skip NPT and directly do NVT?

Regards
Sukriti

Sukriti Gupta (Ms) | PhD Student | Energy Research Institute @ NTU (ERI@N) | Nanyang Technological University
N1.3-B4-14, 50 Nanyang Avenue, Singapore 639798
Tel: (65) 81164191 GMT+8h | Email: sukriti...@e.ntu.edu.sg | Web: erian.ntu.edu.sg

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of #SUKRITI GUPTA# sukriti...@e.ntu.edu.sg
Sent: Wednesday, July 16, 2014 11:20 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Graphene topology file

Dear Justin, Thanks so much for your help. I am still unable to solve the problem of the box blowing up. Once I am able to do a correct simulation I will update here.
Regards
Sukriti

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Justin Lemkul jalem...@vt.edu
Sent: Saturday, July 12, 2014 5:17 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Graphene topology file

On 7/11/14, 5:42 AM, #SUKRITI GUPTA# wrote:
Hi Justin, Thanks for the reply. I changed define = -dflexible to -dposres_water, removed the frozen graphene group in my .mdp file, and ran the energy minimization and NPT again. This time, for pbc = xyz, the water doesn't fly away, but the graphene sheet curves and does not remain in the xy plane. Is it ok for the sheet to bend during the simulation, and will it not affect the pbc?

Applying pressure along the plane of the sheet can cause deformation. Whether or not this is physically relevant, I have no idea.

Also for pbc = xy, the same problem persists as before, i.e. the following error occurs:

Step 20: The charge group starting at atom 796 moved more than the distance allowed by the domain decomposition in direction X
distance out of cell -0.290927
New coordinates: 2.411 2.006 0.982
Old cell boundaries in direction X: 0.000 2.702
New cell boundaries in direction X: 0.000 2.702
---
Program mdrun, VERSION 4.5.5
Source code file: /build/buildd/gromacs-4.5.5/src/mdlib/domdec.c, line: 4124
Fatal error: A charge group moved too far between two domain decomposition steps. This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors.

Can you please suggest what can be causing the error to occur?

That's a generic error suggesting the system is blowing up.
http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System

I know nothing about using walls, so that's the best I can suggest.

-Justin
Re: [gmx-users] Graphene topology file
On 7/17/14, 7:16 AM, #SUKRITI GUPTA# wrote:
Hi gromacs users, I read that for slab geometries where you need pbc in xy, you can use pbc = xyz, leave a slab of vacuum between the periodic images in z, and use ewald3dc (the Yeh and Berkowitz method). ... If we have a layer of vacuum, won't the box size reduce in NPT and cause this error? Should I skip NPT and directly do NVT?

If you need to maintain a vacuum layer, your box needs to be incompressible along that direction, i.e. use semiisotropic coupling and set compressibility = 0 in the .mdp file for the z-direction.

-Justin
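As a concrete sketch, that suggestion might look like this in the .mdp; the tau_p and ref_p values here are illustrative assumptions, not recommendations:

```
; Pressure coupling for a slab with a vacuum layer in z:
; couple x/y normally, but make z incompressible so the vacuum
; gap cannot collapse (or expand) under the barostat.
pcoupl           = Parrinello-Rahman
pcoupltype       = semiisotropic
tau_p            = 5.0
ref_p            = 1.0 1.0         ; xy, z (bar)
compressibility  = 4.5e-5 0        ; xy compressible, z fixed
```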
[gmx-users] Segmentation fault after warning regarding Listed nonbonded interaction
Hi everybody,

I am trying to simulate an enzyme containing a metal ion, using a dummy model for it. I have inserted all the parameters, generated the topology, and solvated the system without any problem. When I try to run a first equilibration, I get the following warning:

WARNING: Listed nonbonded interaction between particles 127 and 134* at distance 3f which is larger than the table limit 3f nm. This is likely either a 1,4 interaction, or a listed interaction inside a smaller molecule you are decoupling during a free energy calculation. Since interactions at distances beyond the table cannot be computed, they are skipped until they are inside the table limit again. You will only see this message once, even if it occurs for several interactions.

*These particles are close to the dummy model for the metal. I believe the problem might be related to that, but I had no problem in generating topologies or anything like that. After that warning, I get a Segmentation fault.

Here is my mdp file for the test equilibration that I was trying:

define      = -DPOSRES -DPOSRES_WATER  ; position restrain the protein
; Run parameters
integrator  = md        ; leap-frog integrator
nsteps      = 5         ; 2 * 5 = 100 ps
dt          = 0.001     ; 1 fs
; Output control
nstxout     = 500       ; save coordinates every 1.0 ps
nstvout     = 500       ; save velocities every 1.0 ps
nstenergy   = 500       ; save energies every 1.0 ps
nstlog      = 500       ; update log file every 1.0 ps
; Bond parameters
continuation = no       ; first dynamics run
; constraint_algorithm = lincs  ; holonomic constraints
constraints = none      ; all bonds (even heavy atom-H bonds) constrained
; Neighborsearching
cutoff-scheme = Verlet
ns_type     = grid      ; search neighboring grid cells
nstlist     = 30        ; 20 fs, largely irrelevant with Verlet
rcoulomb    = 1.0       ; short-range electrostatic cutoff (in nm)
rvdw        = 1.0       ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME       ; Particle Mesh Ewald for long-range electrostatics
pme_order   = 4         ; cubic interpolation
fourierspacing = 0.16
; grid spacing for FFT
; Temperature coupling is on
tcoupl      = V-rescale ; modified Berendsen thermostat
tc-grps     = Protein Non-Protein  ; two coupling groups - more accurate
tau_t       = 0.1 0.1   ; time constant, in ps
ref_t       = 1 1       ; reference temperature, one for each group, in K
; Pressure coupling is off
pcoupl      = no        ; no pressure coupling in NVT
; Periodic boundary conditions
pbc         = xyz       ; 3-D PBC
; Dispersion correction
DispCorr    = EnerPres  ; account for cut-off vdW scheme
; Velocity generation
gen_vel     = yes       ; assign velocities from Maxwell distribution
gen_temp    = 1         ; temperature for Maxwell distribution
gen_seed    = -1        ; generate a random seed

Any clue about what is going on?

Cheers,
Alex
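When this warning appears, a quick sanity check is to measure the distance between the two offending particles in the starting structure. A minimal sketch in Python (the two-atom .gro content below is invented for illustration; note that particle numbers in the GROMACS warning are 1-based global indices):

```python
import math

def gro_positions(gro_text):
    """Parse atom positions (nm) from fixed-column .gro format."""
    lines = gro_text.splitlines()
    n_atoms = int(lines[1])
    pos = []
    for line in lines[2:2 + n_atoms]:
        # After the 20-char name/number fields, x, y, z occupy
        # three 8-character columns, in nm.
        pos.append(tuple(float(line[20 + 8*i:28 + 8*i]) for i in range(3)))
    return pos

def distance(a, b):
    """Euclidean distance between two points (ignores PBC)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Two-atom toy structure (coordinates invented for illustration).
gro = """toy
    2
    1ZN      ZN    1   1.000   1.000   1.000
    1DUM     DU    2   1.300   1.400   1.000
   5.00000   5.00000   5.00000"""

p = gro_positions(gro)
print(distance(p[0], p[1]))  # distance in nm
```

For a real system you would read the equilibration input .gro and index particles 127 and 134 (i.e. list entries 126 and 133); a distance far beyond bonded range points at a topology problem around the dummy construction.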
Re: [gmx-users] Segmentation fault after warning regarding Listed nonbonded interaction
Hey Alex,

I was also having the same error for two days, although my topology files and mdp files were (I think) correct. Just two hours ago I corrected the problem: I changed the timestep from 0.002 to 0.0002 and it worked. You can also try this.

Regards,
Vivek Sinha

On Fri, Jul 18, 2014 at 12:42 AM, Alexandre Barrozo barrozo...@gmail.com wrote:
Hi everybody, I am trying to simulate an enzyme containing a metal ion, using a dummy model for it. When I try to run a first equilibration, I get a warning about a listed nonbonded interaction between particles 127 and 134 being beyond the table limit; these particles are close to the dummy model for the metal. After that warning, I get a Segmentation fault.
[gmx-users] Regarding Performance Tuning for GROMACS
Hai Justin,

Thank you for your previous reply. Are there any restrictions on the number of processors (in a cluster) for performance tuning in gromacs 4 and later versions, especially when I use PME? When I surfed I found the following link:
http://www.mpibpc.mpg.de/262136/PosterHuenfeld2009.pdf

Please give me brief details and guidance. Thanks in advance.
Re: [gmx-users] Segmentation fault after warning regarding Listed nonbonded interaction
Hey Vivek,

Thanks for the suggestion. I tried this, but I still get the same problem.

Cheers,
Alex

2014-07-17 17:58 GMT+02:00 vivek sinha viveksinha20...@gmail.com:
Hey Alex, I was also having the same error for two days although my topology and mdp files were (I think) correct. Just two hours ago I corrected the problem: I changed the timestep from 0.002 to 0.0002 and it worked. You can also try this. Regards, Vivek Sinha
Re: [gmx-users] Segmentation fault after warning regarding Listed nonbonded interaction
On 7/17/14, 1:18 PM, Alexandre Barrozo wrote:
Hey Vivek, Thanks for the suggestion. I tried this, but I still get the same problem.

Reducing the timestep is just a band-aid for the underlying problems. Sometimes it can be useful in initial preparation for very sensitive systems, but in this case I think it is clear that the topology is not sane.

-Justin
Re: [gmx-users] Regarding Performance Tuning for GROMACS
On 7/17/14, 12:39 PM, vidhya sankar wrote:
Hai Justin, Thank you for your previous reply. Are there any restrictions on the number of processors (in a cluster) for performance tuning in gromacs 4 and later versions, especially when I use PME? When I surfed I found the following link: http://www.mpibpc.mpg.de/262136/PosterHuenfeld2009.pdf Please give me brief details and guidance.

Benchmarking and tuning should always be done. There are limits to the number of processors, but they are dependent upon system size and related to DD. Generally Gromacs can scale to a few hundred atoms per core.

-Justin
Re: [gmx-users] Regarding Performance Tuning for GROMACS
Hi,

Benchmarking and tuning is generally quite machine-specific, but you could have a look at this great work done by Carsten Kutzner on SuperMUC:
http://www.mpibpc.mpg.de/11832367/kutzner13talk-Parco.pdf
https://www.mpibpc.mpg.de/14613164/Kutzner_2014_ParCo-conf2013.pdf

On Thu, Jul 17, 2014 at 7:24 PM, Justin Lemkul jalem...@vt.edu wrote:
Benchmarking and tuning should always be done. There are limits to the number of processors, but they are dependent upon system size and related to DD. Generally Gromacs can scale to a few hundred atoms per core.

Note that depending on the machine (especially the network) and the simulation system (as well as the desired minimum parallel efficiency, i.e. the definition of "scale"), as Carsten also showed, the limit can be even lower!

Cheers,
--
Szilárd
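Justin's few-hundred-atoms-per-core rule of thumb is easy to turn into a back-of-the-envelope estimate before benchmarking. A sketch (the atoms-per-core figure is the rough heuristic from this thread, not a hard limit, and real scaling depends on network and PME setup):

```python
def max_useful_cores(n_atoms, atoms_per_core=200):
    """Rough upper bound on core count before domain decomposition
    scaling breaks down, using the few-hundred-atoms-per-core
    heuristic from this thread."""
    return max(1, n_atoms // atoms_per_core)

# Example: a ~100k-atom membrane system.
print(max_useful_cores(100_000))  # -> 500
```

Actual benchmarking on the target machine is still the only reliable way to pick a core count.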
[gmx-users] Can't allocate memory problem
Hi,

I am currently experiencing a "Can't allocate memory" problem on Gromacs 4.6.5 with GPU acceleration. I am running my simulations on the Stampede/TACC supercomputer with their GPU queue. My first experience was that when the simulation length is longer than 10 ns, the system starts to throw out the "Can't allocate memory" problem as follows:

Fatal error:
Not enough memory. Failed to realloc 1403808 bytes for f_t->f, f_t->f=0xa912a010
(called from file /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c, line 3840)
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors
---
"These Gromacs Guys Really Rock" (P.J. Meulenhoff)
: Cannot allocate memory
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi_gpu on CPU 0 out of 4
---
Program mdrun_mpi_gpu, VERSION 4.6.5
Source code file: /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/smalloc.c, line: 241
Fatal error:
Not enough memory. Failed to realloc 1403808 bytes for f_t->f, f_t->f=0xaa516e90
(called from file /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c, line 3840)
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors
---

Recently, this error has occurred even when I run a short NVT equilibration. The problem also exists when I use Gromacs 5.0 with GPU acceleration. I looked up the Gromacs errors website to check the reasons for this, but it seems that none of those reasons fits this situation. I use a very good computer, Stampede, and I run short simulations. And I know gromacs uses nanometers as units. I tried all the solutions I could figure out, but the problem became more severe. Does anybody have an idea on solving this issue? Thank you.
Yunlong (Davis) Liu
BCMB - Second Year PhD Candidate
School of Medicine
The Johns Hopkins University
E-mail: yliu...@jhmi.edu
[gmx-users] Library file table.xvg not found in current dir nor in default directories
Dear Users,

My system includes 5 types of beads (coarse-grained), hence 15 tabulated potentials. My mdp:

integrator = md
tinit = 0.0
dt = 0.01
nsteps = 140 ; 10 ns and 10 nm
nstcomm = 100
nstcalcenergy = 100
nstxout = 0
nstvout = 0
nstenergy = 0
nstlog = 1000
nstxtcout = 100
energygrps = ACI BAS GLY NON POL
energygrp_table = ACI ACI ACI BAS ACI GLY ACI NON ACI POL BAS BAS BAS GLY GLY GLY GLY NON GLY POL NON BAS NON NON POL BAS POL NON POL POL
table-extension = 1.0
nstlist = 10
ns_type = grid
pbc = xyz
rlist = 2.0
coulombtype = User
rcoulomb = 2.0
vdw_type = User
rvdw = 2.0
tcoupl = V-rescale
tc_grps = System
tau_t = 0.1
ref_t = 300
pcoupl = no
gen_vel = no
continuation = yes
constraints = none
constraint_algorithm = Lincs
lincs_iter = 1
lincs_order = 4

I have 15 tables for the tabulated potentials, named table_X_X.xvg as listed in the mdp above. When I run grompp I get the error:

Library file table.xvg not found in current dir nor in default directories

It seems an additional group is being included, since gmx is looking for table.xvg. So I inspected the tpr file:

energygrp_flags[  0]: 2 2 2 2 2 0
energygrp_flags[  1]: 2 2 2 2 2 0
energygrp_flags[  2]: 2 2 2 2 2 0
energygrp_flags[  3]: 2 2 2 2 2 0
energygrp_flags[  4]: 2 2 2 2 2 0
energygrp_flags[  5]: 0 0 0 0 0 0

And indeed six energy groups are read, but in my mdp there are only five. Would you please advise?

Steven
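For reference, with energygrps and energygrp_table set as above, mdrun looks for one table_X_Y.xvg file per listed pair, plus a generic table.xvg that it uses for any interaction not covered by a pair table; in GROMACS that includes the implicit "rest" group, which is why a sixth group shows up in the tpr dump. The snippet below is only a sketch that enumerates the 16 filenames involved (the pair list is copied from the mdp above).

```shell
# Enumerate the table files mdrun expects: the generic table.xvg plus one
# table_X_Y.xvg per energygrp_table pair (pairs copied from the mdp above).
pairs="ACI:ACI ACI:BAS ACI:GLY ACI:NON ACI:POL BAS:BAS BAS:GLY GLY:GLY GLY:NON GLY:POL NON:BAS NON:NON POL:BAS POL:NON POL:POL"
files="table.xvg"
for p in $pairs; do
    files="$files table_${p%%:*}_${p#*:}.xvg"
done
printf '%s\n' $files        # 16 files in total
```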
Re: [gmx-users] Library file table.xvg not found in current dir nor in default directories
Sorry, I get the error with mdrun, not grompp; grompp works fine.

On Fri, Jul 18, 2014 at 1:41 PM, Steven Neumann s.neuman...@gmail.com wrote:
> [original message quoted in full above]
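One workaround worth trying (my assumption, not something stated in this thread): since mdrun falls back to the generic table.xvg for interactions of the implicit "rest" group, supplying such a file, e.g. a copy of one of the existing pair tables, can let mdrun start. A minimal sketch, with a stand-in file in place of a real table:

```shell
# Stand-in for a real tabulated-potential file (three columns shown only
# as a placeholder; real GROMACS tables have 7 columns: x, f, -f', g, -g', h, -h').
printf '0.0 0.0 0.0\n' > table_ACI_ACI.xvg
# Provide the generic table that mdrun falls back to for the "rest" group:
cp table_ACI_ACI.xvg table.xvg
# Then (GROMACS 4.x): mdrun -s topol.tpr -table table.xvg
ls table.xvg table_ACI_ACI.xvg
```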