[gmx-users] Overall charge -0.0002
Hi,
In an amino acid-water simulation, grompp is throwing one warning:

NOTE 2 [file topol.top, line 27]:
System has non-zero total charge: -0.000200
Total charge should normally be an integer. See
http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
for discussion on how close it should be to an integer.

WARNING 1 [file topol.top, line 27]:
You are using Ewald electrostatics in a system with net charge. This can
lead to severe artifacts, such as ions moving into regions with low
dielectric, due to the uniform background charge. We suggest to
neutralize your system with counter ions, possibly in combination with a
physiological salt concentration.

In this case, is it suitable to use the -maxwarn option in grompp?

--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
* For (un)subscribe requests visit https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-requ...@gromacs.org.
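For what it's worth, the distinction grompp draws here can be sketched in a few lines of Python. This is only an illustration of the rule of thumb, not GROMACS code; the function name and the tolerance threshold are my own choices:

```python
def classify_total_charge(total_charge, tol=1e-2):
    """Rough triage of a grompp 'non-zero total charge' note.

    tol is an assumed threshold below which the deviation from an
    integer is treated as single-precision rounding noise.
    """
    nearest_int = round(total_charge)
    residual = abs(total_charge - nearest_int)
    if residual < tol and nearest_int == 0:
        return "rounding noise"    # e.g. -0.000200: safe to suppress
    if residual < tol:
        return "real net charge"   # e.g. -13.00: neutralise with counterions
    return "topology error"        # far from any integer: check the .itp charges

print(classify_total_charge(-0.0002))  # rounding noise
print(classify_total_charge(-13.00))   # real net charge
```

So a residual of -0.0002 is the harmless case, and suppressing that one warning with -maxwarn is the usual practice; a charge like -13.00 is the case the Ewald warning is really about.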
[gmx-users] Generate OPLS Parameters for a modified residue
Hello gromacs users,
Any suggestions on how to generate OPLS force-field parameters for a modified residue? I found a few servers (listed below):
- TPPmktop: http://erg.biophys.msu.ru/tpp/
- LigParGen: http://zarbi.chem.yale.edu/ligpargen/
Could I get your insights if you have used any one of these tools?
Many thanks,
Neena
Re: [gmx-users] Kindly help
Here you can use -maxwarn 1. Almost any system has some overall charge before ions are added; to neutralise the system you have to add counterions.

On Tue 25 Jun, 2019, 5:40 PM Abhishek Acharya wrote:
> Dear Kalpana,
>
> As you will note in the output files, grompp throws one warning, and when
> warnings are raised no .tpr file is generated. For cases where you know a
> warning is not relevant in the given context, grompp has a -maxwarn flag;
> in your case, adding -maxwarn 1 to the grompp command should work.
> [rest of quoted thread trimmed]
Re: [gmx-users] appropriate force field
Do it the easy way: find some literature that approximates your simulation and, after verifying their results against more of the literature, replicate their work.

PB

> On Jun 25, 2019, at 9:38 AM, starlight wrote:
>
> Hi, I want to perform some simulations to study the interaction of 2 very
> small peptides with each other in water, and would like to know which
> force field and water model are appropriate.
> [rest of quoted message trimmed]
Re: [gmx-users] Kindly help
Dear Kalpana,

As you will note in the output files, grompp throws one warning. Warnings in grompp exist to disallow obviously inconsistent choices of options, so if any warnings are raised while running grompp, no .tpr file is generated. However, there are situations (as in your case) where you know the warning is not relevant in the given context. For such cases, grompp has a -maxwarn flag, which can be used to suppress these warnings and direct grompp to produce the .tpr file. In your case, adding -maxwarn 1 to the grompp command should work.

Just a note of caution: the -maxwarn flag should only be used when you are quite sure the warnings really are irrelevant; otherwise you risk gross misuse of the flag. ;)

Hope this helps.
Abhishek

On Mon, Jun 24, 2019 at 4:19 PM kalpana wrote:
> Dear all,
> I have worked with the same commands and settings in a previous version of
> Ubuntu and GROMACS. Now, after upgrading the system, I am facing a
> problem: grompp stops with "Too many warnings (1)" after warning about a
> net charge of -13.00 with Ewald electrostatics.
> [rest of quoted message trimmed]
[gmx-users] from n2t to topology with similar groups
Dear All,
I am trying to generate the topology file for a structure of highly substituted GO sheets via an n2t file. I have written an n2t file accounting for every group present in the structure. With the charges I have set for each atom I should get a neutral structure, but I don't. I have worked out that the problem lies in some atoms, such as the O atoms and the H atoms of the -OH and -COOH groups, being too similar in the coordinate file: when building the topology, GROMACS assigns all O and H atoms to the first atomtype present in the n2t file. I have thought I could force the positions of the similar atoms, in order to obtain differences in distance, via VMD, and then re-write the n2t. That would require a massive amount of time, so I beg you all for better ideas before I start.
Thank you for reading this far; I hope to hear from you soon,
Claire
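Since n2t matching works on the element plus the pattern of bonded neighbours and distances, one alternative to moving atoms is to distinguish the ambiguous atoms by their connectivity and give each environment its own name/type before writing the n2t. A toy sketch of the idea; the coordinates, atom names, and the 0.17 nm bond cutoff here are all assumptions for illustration:

```python
import math

# Toy coordinates (nm) for an assumed -COOH fragment; the atom naming is mine.
atoms = {
    "C":  (0.000, 0.000, 0.000),
    "O1": (0.123, 0.000, 0.000),   # carbonyl O: bonded to C only
    "O2": (-0.068, 0.115, 0.000),  # hydroxyl O: bonded to C and H
    "H":  (-0.010, 0.205, 0.000),
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def neighbour_count(name, cutoff=0.17):
    """Count atoms within an assumed covalent-bond cutoff of 0.17 nm."""
    return sum(1 for other, pos in atoms.items()
               if other != name and dist(atoms[name], pos) < cutoff)

# A carbonyl O has one bonded neighbour and a hydroxyl O has two, so the two
# oxygens can be told apart (and typed differently) without moving any atoms.
print(neighbour_count("O1"))  # 1
print(neighbour_count("O2"))  # 2
```

Looping this over the whole coordinate file and renaming atoms by neighbour signature (e.g. O with 1 neighbour vs O with 2) would let each n2t line match only the intended environment.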
Re: [gmx-users] Minimization stops without reaching the requested precision Fmax < 1000
Already replied: https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-June/125707.html

which just links to http://www.gromacs.org/Documentation/Errors#Stepsize_too_small.2c_or_no_change_in_energy._Converged_to_machine_precision.2c_but_not_to_the_requested_precision which explains what the error message means.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
- When the only tool you own is a hammer, every problem begins to resemble a nail.

On Tue, 25 Jun 2019 at 19:51, Mahdi Bagherpoor wrote:
> Dear Gromacs Users,
>
> I am trying to add a tetrahedral zinc dummy-atom model to the CHARMM36
> force field in GROMACS. Energy minimization stops with a maximum force of
> 3.2106902e+07 on atom 3526, the zinc dummy atom.
> [rest of quoted message trimmed]
[gmx-users] Minimization stops without reaching the requested precision Fmax < 1000
Dear Gromacs Users,

I am trying to add a tetrahedral zinc dummy-atom model to the CHARMM36 force field in GROMACS. In my system, this dummy zinc interacts with four cysteine residues (CYM). The parameters for the zinc are available in CHARMM format, which I have previously used in the NAMD package. After adding the appropriate parameters, energy minimization gives this error:

Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 1000 (which may not be possible for your system).
...
Steepest Descents converged to machine precision in 1045 steps,
but did not reach the requested Fmax < 1000.
Potential Energy  = -1.5922652e+06
Maximum force     =  3.2106902e+07 on atom 3526
Norm of force     =  1.4857810e+05

Atom 3526 is a dummy atom of the zinc ion. The dummy parameters added to the force-field files are:

-- ffbonded.itp
;zinc dummy model, bonds
HK  ZD  1  0.0900  451872.00
HK  HK  1  0.1470  451872.00
;zinc dummy model, angles
HK  ZD  HK  5  109.50  460.24  0.00  0.00
HK  HK  HK  5   60.00  460.24  0.00  0.00
HK  HK  ZD  5   35.25  460.24  0.00  0.00
;zinc dummy model, dihedrals
ZD  HK  HK  HK  9   35.30  0.00  2
HK  ZD  HK  HK  9  120.00  0.00  2
HK  HK  HK  HK  9   70.00  0.00  2

--- ffnonbonded.itp
;zinc dummy model
ZD  30  61.38     0.000  A  0.3100  0.1
HK   1   1.008000  0.500  A  0.0000  0.0

=
In contrast to GROMACS, when I do the minimization in NAMD there is no such error. I should mention that I am using the single-precision build of GROMACS. I would appreciate it if you could share any ideas about this problem.

Cheers,
Mahdi
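As a quick sanity check on the dummy geometry: for a tetrahedral dummy construction, the HK-HK separation implied by the HK-ZD bond length (0.0900 nm) should match the listed HK-HK bond length (0.1470 nm). A short Python check, assuming the model intends the ideal tetrahedral angle of 109.471° at the Zn centre:

```python
import math

r_zn_dummy = 0.0900   # HK-ZD bond length from ffbonded.itp (nm)
d_listed   = 0.1470   # HK-HK bond length from ffbonded.itp (nm)
theta      = 109.471  # assumed ideal tetrahedral angle at the Zn centre (deg)

# Chord length between two dummies at radius r separated by angle theta
d_expected = 2.0 * r_zn_dummy * math.sin(math.radians(theta / 2.0))

print(f"expected HK-HK distance: {d_expected:.4f} nm")  # ~0.1470
```

The bonded geometry is internally consistent, so a huge force on the dummy atom may instead point to a transcription problem when porting the CHARMM parameters, e.g. unit conversion (kcal/mol/Å² vs kJ/mol/nm², and CHARMM's K(b-b0)² vs GROMACS's ½k(b-b0)² convention) or a misaligned column in the .itp; that is a guess worth checking, not a diagnosis.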
[gmx-users] appropriate force field
Hi,
I want to perform some simulations to study the interaction of two very small peptides with each other in water. I want to first put each peptide separately in water to obtain a structure, and then simulate them together in water to study the peptide-peptide interaction. I need to know the positions of the hydrogen bonds that form between these peptides in water.
So I would like to know which force field and water model are appropriate for these simulations. I found a few articles that apply various force fields to such simulations, but they do not say which force field is more appropriate than the others. Would you please help me with this and recommend some articles? Thank you.
Re: [gmx-users] Fatal error at grompp
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-June/125706.html

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
- When the only tool you own is a hammer, every problem begins to resemble a nail.

On Tue, 25 Jun 2019 at 17:05, kalpana wrote:
> Dear all,
> I have worked with the same commands and setting in previous version of
> ubuntu and gromacs. Now with new system and up-gradation, I am facing
> problem.
> [rest of quoted message trimmed]
[gmx-users] Fatal error at grompp
Dear all,
I have worked with the same commands and settings in a previous version of Ubuntu and GROMACS. Now, after upgrading the system, I am facing a problem. First kindly see the gmx information, then the fatal error I am getting at grompp. Kindly find the attached ions.mdp as well, see the other .mdp files too, and guide me.
Thanks & Regards
Kalpana

1.
gmx --version

GROMACS version: 2019.3
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX_512
FFT library: fftw-3.3.8-sse2-avx
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: hwloc-1.11.6
Tracing support: disabled
C compiler: /usr/bin/gcc GNU 8.3.0
C compiler flags: -mavx512f -mfma -g -fno-inline
C++ compiler: /usr/bin/c++ GNU 8.3.0
C++ compiler flags: -mavx512f -mfma -std=c++11 -g -fno-inline
CUDA compiler: /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver; Copyright (c) 2005-2019 NVIDIA Corporation; Built on Wed_Apr_24_19:10:27_PDT_2019; Cuda compilation tools, release 10.1, V10.1.168
CUDA compiler flags: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;-D_FORCE_INLINES; -mavx512f;-mfma;-std=c++11;-g;-fno-inline;
CUDA driver: 10.10
CUDA runtime: N/A

2.
gmx pdb2gmx -f 1model1A1.pdb -o model1_processed.gro -water tip3p
No warnings or notes in the pdb2gmx run.

3.
gmx editconf -f model1_processed.gro -o model1_newbox.gro -c -d 1.0 -bt dodecahedron
No warnings or notes in editconf.

4.
gmx solvate -cp model1_newbox.gro -cs spc216.gro -o model1_solv.gro -p topol.top

WARNING: Masses and atomic (Van der Waals) radii will be guessed
based on residue and atom names, since they could not be
definitively assigned from the information in your input
files. These guessed numbers might deviate from the mass
and radius of the atom type. Please check the output
files if necessary.

NOTE: From version 5.0 gmx solvate uses the Van der Waals radii
from the source below. This means the results may be different
compared to previous GROMACS versions.

PLEASE READ AND CITE THE FOLLOWING REFERENCE
A. Bondi
van der Waals Volumes and Radii
J. Phys. Chem. 68 (1964) pp. 441-451
--- Thank You ---

5.
gmx grompp -f ions.mdp -c model1_solv.gro -p topol.top -o ions.tpr

NOTE 1 [file topol.top, line 60959]:
System has non-zero total charge: -13.00
Total charge should normally be an integer. See
http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
for discussion on how close it should be to an integer.

WARNING 1 [file topol.top, line 60959]:
You are using Ewald electrostatics in a system with net charge. This can
lead to severe artifacts, such as ions moving into regions with low
dielectric, due to the uniform background charge. We suggest to
neutralize your system with counter ions, possibly in combination with a
physiological salt concentration.

PLEASE READ AND CITE THE FOLLOWING REFERENCE
J. S. Hub, B. L. de Groot, H. Grubmueller, G. Groenhof
Quantifying Artifacts in Ewald Simulations of Inhomogeneous Systems with a Net Charge
J. Chem. Theory Comput. 10 (2014) pp. 381-393
--- Thank You ---

Removing all charge groups because cutoff-scheme=Verlet
Analysing residue names:
There are: 424 Protein residues
There are: 16060 Water residues
Analysing Protein...
Number of degrees of freedom in T-Coupling group rest is 115683.00
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 80x80x80, spacing 0.116 0.116 0.116
Estimate for the relative computational load of the PME mesh part: 0.34
This run will generate roughly 4 Mb of data

There was 1 note

There was 1 warning

---
Program: gmx grompp, version 2019.3
Source file: src/gromacs/gmxpreprocess/grompp.cpp (line 2315)

Fatal error:
Too many warnings (1).
If you are sure all warnings are harmless, use the -maxwarn option.

For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors
---
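For the -13.00 net charge above, the usual fix is to run gmx genion on ions.tpr with -neutral (optionally with -conc 0.15 for physiological salt) rather than suppressing the warning. The counterion arithmetic behind that step can be sketched in Python; the helper name and the ~55.5 mol/L molarity of water are my own assumptions:

```python
def ion_counts(net_charge, n_waters, conc=0.15):
    """Return (n_cations, n_anions) to neutralise `net_charge` and reach
    roughly `conc` mol/L of added salt, given `n_waters` solvent molecules."""
    n_salt = round(conc * n_waters / 55.5)          # salt pairs for target conc.
    n_cations = n_salt + max(0, -int(net_charge))   # extra NA+ to cancel negative charge
    n_anions  = n_salt + max(0,  int(net_charge))   # extra CL- to cancel positive charge
    return n_cations, n_anions

print(ion_counts(-13, 16060))  # (56, 43): 43 NaCl pairs plus 13 extra NA+
```

With the 16060 waters reported by grompp, that is 13 extra NA+ on top of roughly 43 NaCl pairs at 0.15 mol/L, after which grompp should report an integer total charge of zero and no Ewald warning.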