[gmx-users] Error running mdrun (v-5.1.4) on a linux cluster.

2018-06-18 Thread Abhishek Acharya
Dear GROMACS users,

I installed mpi-enabled GROMACS (mdrun only) on a linux cluster. The
administrators recommended using Intel MPI so I used that. I also installed
a local version of gcc-5.5.0 (with required dependencies), as the version
available system-wide (4.4.6 I believe) was not compatible with C11
standard. I am using FFTW-3.3.7 and the latest BLAS and LAPACK libraries.
The compilation ran without any errors.

I could not run any regtests as runs are only allowed via PBS scripts.
However, I tried to run a test simulation using a tpr file generated using
the same gromacs version.

On submitting the job using qsub, I find that the queue status is C. The
output err file shows the following error, which I have never seen before.

.
.
GROMACS:  mdrun_mpi, VERSION 5.1.4
Executable:   /home/acusers/pbalaji/install/gromacs-514-impi/bin/mdrun_mpi

---
Program: mdrun_mpi, VERSION 5.1.4
Source file: src/gromacs/commandline/cmdlineparser.cpp (line 234)
Function:    void gmx::CommandLineParser::parse(int*, char**)

Error in user input:
Invalid command-line options
  In command-line option -s
File 'testfile.tpr' does not exist or is not accessible.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
Halting parallel program mdrun_mpi on rank 3 out of 12

(the same block is printed, interleaved, by ranks 5, 6, 11, ...)
.
.
and more of the same.

Any clues as to how this issue can be solved would certainly help.
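For reference, a minimal PBS script sketch for such a run (the queue and
resource lines are placeholders; the cd into $PBS_O_WORKDIR matters because
mdrun resolves a relative 'testfile.tpr' against the job's working
directory, which is not the submission directory by default):

#!/bin/bash
#PBS -N gmx-test
#PBS -l nodes=1:ppn=12
#PBS -l walltime=01:00:00
# run from the directory the job was submitted from,
# so that relative paths like testfile.tpr resolve
cd $PBS_O_WORKDIR
mpirun -np 12 \
  /home/acusers/pbalaji/install/gromacs-514-impi/bin/mdrun_mpi \
  -s testfile.tpr -deffnm testrun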

Thanks in advance.

Abhishek


[gmx-users] nvt.mdp for water surface tension

2018-06-18 Thread Rana Ali
Dear users,

It would be a great help if somebody could provide an nvt.mdp file for
calculating the surface tension of water.
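For reference, a minimal NVT .mdp sketch of the kind typically used for a
water-slab surface-tension run (all values are illustrative placeholders,
not a validated protocol):

integrator    = md
dt            = 0.002
nsteps        = 5000000      ; 10 ns, illustrative
nstenergy     = 100
cutoff-scheme = Verlet
coulombtype   = PME
rcoulomb      = 1.0
rvdw          = 1.0
tcoupl        = v-rescale
tc-grps       = System
tau_t         = 0.5
ref_t         = 300
pcoupl        = no           ; NVT: water slab with vacuum along z
constraints   = h-bonds

The surface tension itself is then obtained with gmx energy from the
#Surf*SurfTen term, divided by the number of interfaces in the box.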

Thanks in advance

Ranadeepu
India


Re: [gmx-users] spatial restraints

2018-06-18 Thread Simon Kit Sang Chu
Hi Stefano,

You may consider using "gmx make_ndx" to identify the atoms to restrain,
then appending a position-restraint section ([ position_restraints ],
conventionally guarded by #ifdef POSRES) to the relevant moleculetype in
your existing topology, listing those atom indices.

I think this is a minimal alteration to your topology.
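A minimal sketch of such a section (atom numbers and force constants are
placeholders; the indices refer to atoms within the moleculetype):

#ifdef POSRES
[ position_restraints ]
; atom  funct   fcx     fcy     fcz    ; kJ/mol/nm^2
    25      1   1000    1000    1000
    31      1   1000    1000    1000
#endif

It is then switched on at grompp time with define = -DPOSRES in the .mdp.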

Regards,
Simon

On Tue, Jun 19, 2018 at 08:17, Stefano Guglielmo wrote:

> Hello gromacs users,
>
> I would like to know if it is possible to add spatial restraints to a
> selected subgroup of atoms of my system (protein+membrane+ligand+water)
> without generating a new topology: I have a top generated with parmed, fed
> with the suitable Amber ff, and I would like to just add the atoms that
> should be restrained to the existing topology.
> Thanks in advance
> Stefano
>
> --
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178

[gmx-users] spatial restraints

2018-06-18 Thread Stefano Guglielmo
Hello gromacs users,

I would like to know if it is possible to add spatial restraints to a
selected subgroup of atoms of my system (protein+membrane+ligand+water)
without generating a new topology: I have a top generated with parmed, fed
with the suitable Amber ff, and I would like to just add the atoms that
should be restrained to the existing topology.
Thanks in advance
Stefano

-- 
Stefano GUGLIELMO PhD
Assistant Professor of Medicinal Chemistry
Department of Drug Science and Technology
Via P. Giuria 9
10125 Turin, ITALY
ph. +39 (0)11 6707178


Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-18 Thread Alex
Persistence is enabled so I don't have to overclock again. To be honest, I
am still not entirely comfortable with the notion of ranks, after reading
the acceleration document a bunch of times. Parts of the log file are below,
and I will obviously appreciate suggestions/clarifications:

Command line:
  gmx mdrun -nt 4 -ntmpi 2 -npme 1 -pme gpu -nb gpu -s run_unstretch.tpr -o
traj_unstretch.trr -g md.log -c unstretched.gro
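For comparison, the single-rank, OpenMP-only variant that Szilárd
recommends in his reply would look roughly like this (a sketch, not a
tuned recommendation):

gmx mdrun -ntmpi 1 -ntomp 4 -nb gpu -pme gpu -s run_unstretch.tpr \
    -o traj_unstretch.trr -g md.log -c unstretched.gro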

GROMACS version:2018
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  SSE4.1
FFT library:fftw-3.3.5-sse2
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  disabled
Tracing support:disabled
Built on:   2018-02-13 19:43:29
Built by:   smolyan@MINTbox [CMAKE]
Build OS/arch:  Linux 4.4.0-112-generic x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Xeon(R) CPU   W3530  @ 2.80GHz
Build CPU family:   6   Model: 26   Stepping: 5
Build CPU features: apic clfsh cmov cx8 cx16 htt intel lahf mmx msr
nonstop_tsc pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
C compiler: /usr/bin/cc GNU 5.4.0
C compiler flags:-msse4.1 -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /usr/bin/c++ GNU 5.4.0
C++ compiler flags:  -msse4.1 -std=c++11   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
driver;Copyright (c) 2005-2017 NVIDIA Corporation;Built on
Fri_Nov__3_21:07:56_CDT_2017;Cuda compilation tools, release 9.1, V9.1.85
CUDA compiler
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;
;-msse4.1;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:9.10
CUDA runtime:   9.10


Running on 1 node with total 4 cores, 4 logical cores, 1 compatible GPU
Hardware detected:
  CPU info:
Vendor: Intel
Brand:  Intel(R) Xeon(R) CPU   W3530  @ 2.80GHz
Family: 6   Model: 26   Stepping: 5
Features: apic clfsh cmov cx8 cx16 htt intel lahf mmx msr nonstop_tsc
pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
  Hardware topology: Basic
Sockets, cores, and logical processors:
  Socket  0: [   0]
  Socket  1: [   1]
  Socket  2: [   2]
  Socket  3: [   3]
  GPU info:
Number of GPUs detected: 1
#0: NVIDIA Tesla K40c, compute cap.: 3.5, ECC:  no, stat: compatible



M E G A - F L O P S   A C C O U N T I N G

 NB=Group-cutoff nonbonded kernels    NxN=N-by-N cluster Verlet kernels
 RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
 W3=SPC/TIP3p  W4=TIP4p (single or pairs)
 V&F=Potential and force  V=Potential only  F=Force only

 Computing:                        M-Number          M-Flops    % Flops
-----------------------------------------------------------------------------
 Pair Search distance check     547029.956656      4923269.610      0.0
 NxN Ewald Elec. + LJ [F]    485658021.416832  32053429413.511     98.0
 NxN Ewald Elec. + LJ [V]      4905656.839680    524905281.846      1.6
 1,4 nonbonded interactions     140625.005625     12656250.506      0.0
 Reset In Box                     4599.000000        13797.000      0.0
 CG-CoM                           4599.018396        13797.055      0.0
 Bonds                           48000.001920      2832000.113      0.0
 Angles                          94650.003786     15901200.636      0.0
 RB-Dihedrals                   186600.007464     46090201.844      0.1
 Pos. Restr.                      2600.000104           13.005      0.0
 Virial                           4610.268441        82984.832      0.0
 Stop-CM                            91.998396          919.984      0.0
 Calc-Ekin                       45990.036792      1241730.993      0.0
 Constraint-V                   318975.012759      2551800.102      0.0
 Constraint-Vir                   3189.762759        76554.306      0.0
 Settle                         106325.004253     34342976.374      0.1
 Virtual Site 3                 107388.258506      3973365.565      0.0
-----------------------------------------------------------------------------
 Total                                         32703165544.282    100.0
-----------------------------------------------------------------------------


D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S

 av. #atoms communicated per step for force:  2 x 0.0
 av. #atoms communicated per step for vsites: 3 x 0.0
 av. #atoms communicated per step 

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-18 Thread Szilárd Páll
On Mon, Jun 18, 2018 at 2:22 AM, Alex  wrote:

> Thanks for the heads up. With the K40c instead of GTX 960 here's what I
> did and here are the results:
>
> 1. Enabled persistence mode and overclocked the card via nvidia-smi:
> http://acceleware.com/blog/gpu-boost-nvidias-tesla-k40-gpus


Note that persistence mode is only for convenience.


> 2. Offloaded PME's FFT to GPU (which wasn't the case with GTX 960), this
> brought the "pme mesh / force" ratio to something like 1.07.
>

I still think you are running multiple ranks, which is unlikely to be ideal,
but without seeing a log file it's hard to tell.

> The result is a solid increase in performance on a small-ish system (20K
> atoms): 90 ns/day instead of 65-70. I don't use this box for anything
> except prototyping, but still the swap + tweaks were pretty useful.


>
> Alex
>
>
>
> On 6/15/2018 1:20 PM, Szilárd Páll wrote:
>
>> Hi,
>>
>> Regarding the K40 vs GTX 960 question, the K40 will likely be a bit
>> faster (though it'l consume more power if that matters). The
>> difference will be at most 20% in total performance, I think -- and
>> with small systems likely negligible (as a smaller card with higher
>> clocks is more efficient at small tasks than a large card with lower
>> clocks).
>>
>> Regarding the load balance note, you are correct, the "pme mesh/force"
>> means the ratio of time spent in computing PME forces on a separate
>> task/rank and the rest of the forces (including nonbonded, bonded,
>> etc.). With GPU offload this is a bit more tricky as the observed time
>> is the time spent waiting for the GPU results, but the take-away is
>> the same: when a run shows "pme mesh/force" far from 1, there is
>> imbalance affecting performance.
>>
>> However, note that with a single GPU I've yet to see a case where you
>> get better performance by running multiple ranks rather than simply
>> running OpenMP-only. Also note that what counts as a "weak GPU" varies
>> case by case, so I recommend taking the 1-2 minutes to do a short run
>> and check whether, for a certain hardware + simulation setup, it is
>> better to offload all of PME or keep the FFTs on the CPU.
>>
>> We'll do our best to automate more of these choices, but for now if
>> you care about performance it's useful to test before doing long runs.
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>> On Thu, Jun 14, 2018 at 2:09 AM, Alex  wrote:
>>
>>> Question: in the DD output (md.log) that looks like "DD  step xx  pme
>>> mesh/force 1.229," what is the ratio? Does it mean the pme calculations
>>> take longer by the shown factor than the nonbonded interactions?
>>> With GTX 960, the ratio is consistently ~0.85, with Tesla K40 it's ~1.25.
>>> My mdrun line contains  -pmefft cpu (per Szilard's advice for weak GPUs,
>>> I
>>> believe). Would it then make sense to offload the fft to the K40?
>>>
>>> Thank you,
>>>
>>> Alex
>>>
>>> On Wed, Jun 13, 2018 at 4:53 PM, Alex  wrote:
>>>
>>> So, swap, then? Thank you!



 On Wed, Jun 13, 2018 at 4:49 PM, paul buscemi  wrote:

   flops trumps clock speed…..
>
> On Jun 13, 2018, at 3:45 PM, Alex  wrote:
>>
>> Hi all,
>>
>> I have an old "prototyping" box with a 4-core Xeon and an old GTX 960.
>>
> We
>
>> have a Tesla K40 laying around and there's only one PCIE slot
>> available
>>
> in
>
>> this machine. Would it make sense to swap the cards, or is it already
>> bottlenecked by the CPU? I compared the specs and 960 has a higher
>> clock
>> speed, while K40's FP performance is better. Should I swap the GPUs?
>>
>> Thanks,
>>
>> Alex

Re: [gmx-users] Shell (Drude) model for polarization in GROMACS

2018-06-18 Thread Justin Lemkul




On 6/18/18 4:05 PM, Eric Smoll wrote:

Justin,

Thank you so much for the rapid and clear reply!  Sorry to ask for a bit
more clarification.

The thole_polarization isn't in the manual at all.  Is it structured the
same way as the [ polarization ] directive in the manual:

[ thole_polarization ]
; Atom i j type alpha
1 2 1 0.001

If I want Thole corrections, am I correct in assuming that I should list
*all shells* in the system under this thole_polarization directive with (as
you pointed out) "i" or "j" as the shell?  If "i" is the shell, "j" is the
core. If "j" is the shell, "i" is the core.


You have to list everything explicitly, including the shielding factor,
and the entries are between dipole pairs (Thole isn't between an atom and
its shell, it's between neighboring dipoles). I honestly don't know what
the format is; I completely re-wrote the Thole code for our Drude
implementation (still not officially incorporated into a release due to DD
issues, but we're close to a fix...)
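For orientation only — the directive appears to take two core-shell pairs
per entry plus the Thole damping factor and both polarizabilities. This is
a sketch to be verified against the topology reader, not a documented
format, and the numbers are placeholders:

[ thole_polarization ]
; ai  aj  bi  bj  funct    a       alpha1      alpha2
   1   2   3   4      1   2.6     0.00145     0.00145   ; placeholder values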



The code for "init_shell_flexcon" was very helpful.  Thank you!
  nstcalcenergy must be set to 1.  The code says that domain decomposition
is not supported so multi-node MPI calculations are not allowed.  I can
still use an MPI-enabled GROMACS executable on a single node for shell MD,
correct?  Thread parallelization is still permitted, correct?


Presumably you're limited to OpenMP, but again I have no idea about
this code. I've never actually used it.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Shell (Drude) model for polarization in GROMACS

2018-06-18 Thread Eric Smoll
Justin,

Thank you so much for the rapid and clear reply!  Sorry to ask for a bit
more clarification.

The thole_polarization isn't in the manual at all.  Is it structured the
same way as the [ polarization ] directive in the manual:

[ thole_polarization ]
; Atom i j type alpha
1 2 1 0.001

If I want Thole corrections, am I correct in assuming that I should list
*all shells* in the system under this thole_polarization directive with (as
you pointed out) "i" or "j" as the shell?  If "i" is the shell, "j" is the
core. If "j" is the shell, "i" is the core.

The code for "init_shell_flexcon" was very helpful.  Thank you!
 nstcalcenergy must be set to 1.  The code says that domain decomposition
is not supported so multi-node MPI calculations are not allowed.  I can
still use an MPI-enabled GROMACS executable on a single node for shell MD,
correct?  Thread parallelization is still permitted, correct?

Thanks again, Justin. I hope my questions are clear and easy to answer.

Best,
Eric

On Mon, Jun 18, 2018 at 12:07 PM, Justin Lemkul  wrote:

>
>
> On 6/18/18 3:00 PM, Eric Smoll wrote:
>
>> Hello GROMACS users,
>>
>> I am looking over the shell (Drude) model for polarization in GROMACS.
>> There isn't much information available in the manual (probably because
>> this
>> feature is rarely used).  I was hoping someone knowledgeable about
>> polarizable simulations in GROMACS could help answer a few of my
>> questions:
>>
>> (1) How exactly are Thole functions turned on? The 2018.1 manual does not
>> specify. I am guessing they are hard-coded into all shell molecular
>> dynamics simulations. Am I correct?
>>
>
> They're activated with a [thole_polarization] directive.
>
> (2) To add a shell to a topology, I suspect I must specify a shell atomtype
>> (setting the particle type to "S" for shell) and list the shells in the
>> atoms directives.  Using the "[ atomtypes ]" format in oplsaa.ff for a
>> molecule with a res-name of "ABC", I assume the following will produce two
>> shell particles with a charge of +1 and -1. Am I correct?
>>
>> [ atomtypes ]
>> ; atom-type bond-type atomic-number mass charge particle-type sigma
>> epsilon
>> opls_ Sh 1 0 0 S 0 0
>> ; I am guessing that
>>
>> [ atoms ]
>> ; atom-number atom-type res-number res-name atom-name charge-grp charge
>> 1 opls_ 1 ABC Sh1 1 +1
>> 2 opls_ 1 ABC Sh2 2 -1
>> 
>>
>
> Looks right.
>
> (3) I suppose connectivity of each shell to its core is indicated with the
>> "[ polarizability ]" directive. I am guessing "i" is the core atom and "j"
>> is the shell particle.
>>
>
> It actually doesn't matter. mdrun figures it out on the fly based on
> particle type - see init_shell_flexcon in shellfc.cpp.
>
> (4) Since the code carries out an SCF calculation to relax the position of
>> the shells at every timestep, I assume there is no need to specify shell
>> masses or to thermostat the shell DOF.  Does GROMACS omit shell particles
>> from thermostats?  The manual does not specify.
>>
>
> Yes. Massless particles are not subjected to dynamical update routines.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Shell (Drude) model for polarization in GROMACS

2018-06-18 Thread Justin Lemkul




On 6/18/18 3:00 PM, Eric Smoll wrote:

Hello GROMACS users,

I am looking over the shell (Drude) model for polarization in GROMACS.
There isn't much information available in the manual (probably because this
feature is rarely used).  I was hoping someone knowledgeable about
polarizable simulations in GROMACS could help answer a few of my
questions:

(1) How exactly are Thole functions turned on? The 2018.1 manual does not
specify. I am guessing they are hard-coded into all shell molecular
dynamics simulations. Am I correct?


They're activated with a [thole_polarization] directive.


(2) To add a shell to a topology, I suspect I must specify a shell atomtype
(setting the particle type to "S" for shell) and list the shells in the
atoms directives.  Using the "[ atomtypes ]" format in oplsaa.ff for a
molecule with a res-name of "ABC", I assume the following will produce two
shell particles with a charge of +1 and -1. Am I correct?

[ atomtypes ]
; atom-type bond-type atomic-number mass charge particle-type sigma epsilon
opls_ Sh 1 0 0 S 0 0
; I am guessing that

[ atoms ]
; atom-number atom-type res-number res-name atom-name charge-grp charge
1 opls_ 1 ABC Sh1 1 +1
2 opls_ 1 ABC Sh2 2 -1



Looks right.


(3) I suppose connectivity of each shell to its core is indicated with the
"[ polarizability ]" directive. I am guessing "i" is the core atom and "j"
is the shell particle.


It actually doesn't matter. mdrun figures it out on the fly based on 
particle type - see init_shell_flexcon in shellfc.cpp.



(4) Since the code carries out an SCF calculation to relax the position of
the shells at every timestep, I assume there is no need to specify shell
masses or to thermostat the shell DOF.  Does GROMACS omit shell particles
from thermostats?  The manual does not specify.


Yes. Massless particles are not subjected to dynamical update routines.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-18 Thread Justin Lemkul




On 6/18/18 9:11 AM, Own 12121325 wrote:

thanks Justin for the suggestions!

but generally this antiquated approach produces correct thermodynamic
ensembles (since I do not call grompp again), doesn't it? Following this
method the last snapshot from the trajectory will be taken to continue the
job, right?


Possibly. I haven't done this since the 3.3.x series, though, because 
checkpoints guarantee an exact continuation. The combination of .trr + 
.edr is not exact but is close. It probably all comes out in the noise 
of the simulation, but if you can choose between an exact continuation 
and an approximate one, why not be exact? :)
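The exact route is just (file names are placeholders):

gmx convert-tpr -s prev.tpr -extend 1000 -o next.tpr
gmx mdrun -s next.tpr -cpi prev.cpt -deffnm next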


-Justin


2018-06-15 14:22 GMT+02:00 Justin Lemkul :



On 6/14/18 4:01 AM, Own 12121325 wrote:


Hello,

I would like to know whether it's necessary to provide the .edr file for the
completed part of the simulation in order to continue the MD job, assuming
that I provide the trajectory (with coordinates and velocities) using
gmx convert-tpr.

Will the ensembles produced by mdrun be the same following these two
methods of continuation:

gmx convert-tpr -s prev.tpr -f prev.trr -o next.tpr -extend 1000
gmx mdrun -v -deffnm next

compared to

gmx convert-tpr -s prev.tpr -f prev.trr *-e prev.edr* -o next.tpr -extend
1000
gmx mdrun -v -deffnm next


You don't use either .trr or .edr files. Just generate the new .tpr file
with however much more time you want and pick up from the exact time point
you were at with mdrun -cpi prev.cpt. Use of .trr and .edr files to extend
simulations is an antiquated approach.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] Shell (Drude) model for polarization in GROMACS

2018-06-18 Thread Eric Smoll
Hello GROMACS users,

I am looking over the shell (Drude) model for polarization in GROMACS.
There isn't much information available in the manual (probably because this
feature is rarely used).  I was hoping someone knowledgeable about
polarizable simulations in GROMACS could help answer a few of my
questions:

(1) How exactly are Thole functions turned on? The 2018.1 manual does not
specify. I am guessing they are hard-coded into all shell molecular
dynamics simulations. Am I correct?

(2) To add a shell to a topology, I suspect I must specify a shell atomtype
(setting the particle type to "S" for shell) and list the shells in the
atoms directives.  Using the "[ atomtypes ]" format in oplsaa.ff for a
molecule with a res-name of "ABC", I assume the following will produce two
shell particles with a charge of +1 and -1. Am I correct?

[ atomtypes ]
; atom-type bond-type atomic-number mass charge particle-type sigma epsilon
opls_ Sh 1 0 0 S 0 0
; I am guessing that

[ atoms ]
; atom-number atom-type res-number res-name atom-name charge-grp charge
1 opls_ 1 ABC Sh1 1 +1
2 opls_ 1 ABC Sh2 2 -1


(3) I suppose connectivity of each shell to its core is indicated with the
"[ polarizability ]" directive. I am guessing "i" is the core atom and "j"
is the shell particle.

(4) Since the code carries out an SCF calculation to relax the position of
the shells at every timestep, I assume there is no need to specify shell
masses or to thermostat the shell DOF.  Does GROMACS omit shell particles
from thermostats?  The manual does not specify.

Thanks for the help!

Best,
Eric


Re: [gmx-users] Protein potential energy

2018-06-18 Thread Justin Lemkul




On 6/18/18 4:15 AM, Ming Tang wrote:

Dear list,

I pulled a protein in water. In order to get the trend of the potential energy
terms of the protein, I defined energygrps and reran the system. May I ask
whether I can get the right trend of the protein potential energy terms using
this approach?


No. You'd have to strip away all non-protein components from the 
trajectory, make a matching .tpr file, and use mdrun -rerun. Then you'll 
get the protein's potential energy, which also happens to be a 
meaningless quantity.
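If you do want it anyway, the workflow would look roughly like this (file
names are placeholders, and the protein-only .top must contain just the
protein moleculetype):

# extract a protein-only trajectory (choose the Protein group when asked)
gmx trjconv -f traj.xtc -s full.tpr -o protein.xtc
# build a matching protein-only run input
gmx grompp -f md.mdp -c protein.gro -p protein_only.top -o protein.tpr
# recompute energies over the stripped trajectory
gmx mdrun -s protein.tpr -rerun protein.xtc -deffnm protein_rerun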


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Heavy water H-H radial distribution function

2018-06-18 Thread Justin Lemkul



On 6/18/18 3:47 AM, Haelee Hyun wrote:


Dear GROMACS users,

I'm wondering how I can correctly describe the H-H radial distribution
function of heavy water.

Please check the attached file HH_rdf.PNG, which is a calculated result of
the H-H radial distribution from my simulation.



The list does not accept attachments.

The first peak is due to the intramolecular interaction of water
molecules.

It shows almost 7 at 0.13 nm, but when comparing this result with
experimental data, the experimental data shows only about 2 at the
first peak.

I have tried many simulations but I couldn't find why this
huge difference is caused.

I used the tip4p/2005f water model; the potential used is attached below.



You need to use the -excl flag to read topology exclusions. Doing so 
will get rid of this intramolecular peak.
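Something along these lines (a sketch; the selections are placeholders for
your hydrogen atoms):

gmx rdf -f nve.xtc -s topol.tpr -excl \
    -ref 'name HW1 HW2' -sel 'name HW1 HW2' -o hh_rdf.xvg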


-Justin


[ defaults ]
; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
1  3  yes  0.5 0.5

[ moleculetype ]
; molname nrexcl
SOL  2

[ atoms ]
; id at type res nr  residu name at name cg nr charge
1   opls_113    1   SOL  OW 1   0.0
2   opls_114    1   SOL HW1 1   0.5564
3   opls_114    1   SOL HW2 1   0.5564
4   opls_115    1   SOL  MW 1  -1.1128

;[nonbond_params]
; i j funct q   V    W
;1 2 1 0.5564  3.16440e-01  7.74907e-01
;1 3 1   0.5564  3.16440e-01  7.74907e-01


#ifndef FLEXIBLE
[ settles ]
; OW    funct   doh    dhh
1   1   0.09664    0.1

#else

[ bonds ]
; i j funct length   D   beta
1 2  3    0.09419   432.581   22.87   ; TIP4P/2005f water b0, D, beta
1 3  3    0.09419   432.581   22.87   ; TIP4P/2005f water b0, D, beta


[ angles ]
; i j k funct angle force.c.
2 1 3 1 107.4 367.81
#endif

[ exclusions ]
1 2 3 4
2 1 3 4
3 1 2 4
4 1 2 3

; The position of the virtual site is computed as follows:
;
; const = distance (OD) / [ cos (angle(DOH))  * distance (OH) ]
;   0.015 nm / [ cos (52.26 deg) * 0.09572 nm ]

; Vsite pos x4 = x1 + a*(x2-x1) + b*(x3-x1)

[ virtual_sites3 ]
; Vsite from   funct a  b
4 1 2 3 1 0.13288  0.13288


I used the -DFLEXIBLE option and ran energy minimization, NVT and NPT
equilibration, and an NVE production run.


If someone finds anything wrong, please let me know.

Thank you.

Haelee Hyun








--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] Compilation issue, MacOS 10.13.5 - Gromacs 2018 - CUDA 9.2

2018-06-18 Thread Kevin Boyd
Hi,

In general you're not supposed to mix C compilers. I've had linking
errors in the past, e.g. when using different versions of GCC for
-DCMAKE_C_COMPILER and -DCUDA_HOST_COMPILER.

See this post for a discussion.

https://www.mail-archive.com/gromacs.org_gmx-users@maillist.sys.kth.se/msg32682.html
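For example, a variant of the cmake line quoted below that uses the same
clang for both the host build and CUDA would look like this (a sketch;
whether CUDA 9.2 accepts the MacPorts clang as host compiler needs
checking):

cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ \
  -DCMAKE_C_COMPILER=/opt/local/bin/clang-mp-6.0 \
  -DCMAKE_CXX_COMPILER=/opt/local/bin/clang++-mp-6.0 \
  -DCUDA_HOST_COMPILER=/opt/local/bin/clang++-mp-6.0 \
  -DGMX_SIMD=AVX2_256 -DGMX_FFT_LIBRARY=fftw3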

Kevin

On Mon, Jun 18, 2018 at 2:14 PM, Владимир Богданов
 wrote:
>
>
> HI,
>
>
>
> I tried to install gromacs with cuda support on MacBook Pro 2015 (macOS
> 10.13.4) + eGPU (nvidia titan xp) many times and always got errors. I
> didn't try to install Ubuntu on my MacBook and then install gromacs with
> cuda, but I guess it could work. Excuse me for my English.
>
>
>
> Vlad.
>
>
>
> 15.06.2018, 12:36, "Florian Nachon" :
>
> Hi,
>
> I’m struggling to install Gromacs 2018 on my MacBook Pro (late 2013) with
> cuda support for the NVIDIA GeForce GT 750M.
>
> I’m using Clang-6.0 installed with macports for OpenMP support, and clang
> from Xcode 9.2 for cuda 9.2 support, and the cmake stage is fine using:
>
> cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/
> -DCMAKE_C_COMPILER=/opt/local/bin/clang-mp-6.0
> -DCMAKE_CXX_COMPILER=/opt/local/bin/clang++-mp-6.0 -DGMX_SIMD=AVX2_256
> -DCUDA_HOST_COMPILER=/usr/bin/clang -DGMX_FFT_LIBRARY=fftw3
>
>
> But apparently I have a linking issue for gpu_utilstest_cuda at the end of
> the make stage:
>
> Scanning dependencies of target gpu_utilstest_cuda
> [ 98%] Linking CXX shared library
> ../../../../lib/libgpu_utilstest_cuda.dylib
> Undefined symbols for architecture x86_64:
>   "gmx::formatString(char const*, ...)", referenced from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "gmx::GromacsException::setInfo(std::__1::type_index const&,
> std::__1::unique_ptr std::__1::default_delete >&&)", referenced
> from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "gmx::GromacsException::GromacsException(gmx::ExceptionInitializer
> const&)", referenced from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "gmx::internal::assertHandler(char const*, char const*, char const*, char
> const*, int)", referenced from:
>   gmx::doDeviceTransfers(gmx_gpu_info_t const&, gmx::ArrayRef const>, gmx::ArrayRef) in
> gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "gmx::internal::IExceptionInfo::~IExceptionInfo()", referenced from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   gmx::ExceptionInfo gmx::ThrowLocation>::~ExceptionInfo() in
> gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   gmx::ExceptionInfo gmx::ThrowLocation>::~ExceptionInfo() in
> gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "typeinfo for gmx::InternalError", referenced from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "typeinfo for gmx::internal::IExceptionInfo", referenced from:
>   typeinfo for gmx::ExceptionInfo gmx::ThrowLocation> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   "vtable for gmx::InternalError", referenced from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   NOTE: a missing vtable usually means the first non-inline virtual member
> function has no definition.
>   "vtable for gmx::GromacsException", referenced from:
>   gmx::(anonymous namespace)::throwUponFailure(cudaError, char const*)
> in gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   gmx::InternalError::~InternalError() in
> gpu_utilstest_cuda_generated_devicetransfers.cu.o
>   NOTE: a missing vtable usually means the first non-inline virtual member
> function has no definition.
> ld: symbol(s) not found for architecture x86_64
> clang: error: linker command failed with exit code 1 (use -v to see
> invocation)
> src/gromacs/gpu_utils/tests/CMakeFiles/gpu_utilstest_cuda.dir/build.make:79:
> recipe for target 'lib/libgpu_utilstest_cuda.dylib' failed
> make[2]: *** [lib/libgpu_utilstest_cuda.dylib] Error 1
> CMakeFiles/Makefile2:3382: recipe for target
> 'src/gromacs/gpu_utils/tests/CMakeFiles/gpu_utilstest_cuda.dir/all' failed
> make[1]: ***
> [src/gromacs/gpu_utils/tests/CMakeFiles/gpu_utilstest_cuda.dir/all] Error 2
> Makefile:162: recipe for target 'all' failed
> make: *** [all] Error 2
>
>
> Any clue?
>
> Florian
>
>
>

Re: [gmx-users] Compilation issue, MacOS 10.13.5 - Gromacs 2018 - CUDA 9.2

2018-06-18 Thread Владимир Богданов
HI,

I tried to install gromacs with cuda support on MacBook Pro 2015 (macOS
10.13.4) + eGPU (nvidia titan xp) many times and always got errors. I
didn't try to install Ubuntu on my MacBook and then install gromacs with
cuda, but I guess it could work. Excuse me for my English.

Vlad.

15.06.2018, 12:36, "Florian Nachon" wrote:
[Florian's message and the full linker error are quoted in the reply above]

--
Best regards,
Vladimir A. Bogdanov

Re: [gmx-users] Only 2 ns/day

2018-06-18 Thread Kevin Boyd
Hi,

One source of poor performance is certainly that you don't have SIMD
enabled. Try recompiling with SIMD enabled (the log file suggests
AVX_128_FMA). If you are compiling gromacs on the same node
architecture that you plan to run it on (and you really should be
doing this), it should detect the SIMD compatibility automatically.
See the install guide.

http://manual.gromacs.org/documentation/2016/install-guide/

Also, it looks like your version of gromacs was compiled with the PGI
C and C++ compilers, which is not recommended. Consider using GCC. The
age of the compiler version matters as well - use as recent a version
as possible for performance and feature compatibility.
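For example, a configure line along these lines (a sketch; the SIMD level
is the one your log suggests, and compiler names are placeholders):

cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
  -DGMX_MPI=ON -DGMX_SIMD=AVX_128_FMA -DGMX_BUILD_OWN_FFTW=ON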

Kevin

On Mon, Jun 18, 2018 at 6:38 PM, Alex  wrote:
> Dear all,
> I use 2 nodes (each has 32 cores) for a system of 41933 atoms (containing
> water and short polymers on a solid surface). The performance is very poor,
> only 2.43 ns/day, although the imbalance is normal, around 4.9%.
>
> The submission command I used is:
> gmx_mpi mdrun -ntomp 1 -deffnm eql1 -s eql1.tpr -rdd 1.5 -dds 0. -npme
> 4 -ntomp_pme 1 -g eql1.log -v
>
> I had to use -rdd 1.5 otherwise the DD error would show up. I also tested
> manually different -npme and among all those only -npme 4 and -npme 24
> works with these 64 cores.
> I checked the -npme manually because I am not familiar with the gmx
> tune_pme.
>
> Below I have shared the log file of the simulation, and I would be very
> appreciative if someone could help me improve the performance.
>
> https://drive.google.com/open?id=12fX5URhvYZexST76pw3Q8wQrEPwSyzKL
>
> Thank you very much.
>
> Regards,
> Alex


[gmx-users] Only 2 ns/day

2018-06-18 Thread Alex
Dear all,
I use 2 nodes (each has 32 cores) for a system of 41933 atoms (containing
water and short polymers on a solid surface). The performance is very poor,
only 2.43 ns/day, although the imbalance is normal, around 4.9%.

The submission command I used is:
gmx_mpi mdrun -ntomp 1 -deffnm eql1 -s eql1.tpr -rdd 1.5 -dds 0. -npme
4 -ntomp_pme 1 -g eql1.log -v

I had to use -rdd 1.5 otherwise the DD error would show up. I also tested
manually different -npme and among all those only -npme 4 and -npme 24
works with these 64 cores.
I checked the -npme manually because I am not familiar with the gmx
tune_pme.

Below I have shared the log file of the simulation, and I would be very
appreciative if someone could help me improve the performance.

https://drive.google.com/open?id=12fX5URhvYZexST76pw3Q8wQrEPwSyzKL

Thank you very much.

Regards,
Alex


[gmx-users] angle calculation

2018-06-18 Thread SHAHEE ISLAM
Hi,
I want to calculate the angle between the two beta sheets of a
protein. For this reason I have made an index file containing two
beta-sheet groups. Can anyone please suggest how I can do this?
I am using this command:
gmx gangle -f ../pbc340.xtc/md-340k-0-1 -s ../equilibration.gro -n
b-1-3-sheet-p1.ndx -oav angle.xvg -dt 10
the error is
Inconsistency in user input:
Number of positions in selection 1 in the first group not divisible by 3
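If it helps: the default -g1 angle geometry expects triplets of positions,
so each group must contain a multiple of three atoms. For the angle
between two sheets, one option is to define each sheet's plane from three
representative atoms per sheet — a sketch, with the group names as
placeholders:

gmx gangle -f traj.xtc -s ../equilibration.gro -n b-1-3-sheet-p1.ndx \
    -g1 plane -g2 plane \
    -group1 'group "Sheet1_3atoms"' -group2 'group "Sheet2_3atoms"' \
    -oav angle.xvg -dt 10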

Thanking you,
Shahee


Re: [gmx-users] Continuation of the gromacs job using gmx convert-tpr

2018-06-18 Thread Own 12121325
thanks Justin for the suggestions!

but generally this antiquated approach produces correct thermodynamic
ensembles (since I do not call grompp again), doesn't it? Following this
method the last snapshot from the trajectory will be taken to continue the
job, right?

2018-06-15 14:22 GMT+02:00 Justin Lemkul :

>
>
> On 6/14/18 4:01 AM, Own 12121325 wrote:
>
>> Hello,
>>
>> I would like to know whether it's necessary to provide the .edr file for
>> the completed part of the simulation in order to continue the MD job,
>> assuming that I provide the trajectory (with coordinates and velocities)
>> using gmx convert-tpr.
>>
>> Will the ensembles produced by mdrun be the same following these two
>> methods of continuation:
>>
>> gmx convert-tpr -s prev.tpr -f prev.trr -o next.tpr -extend 1000
>> gmx mdrun -v -deffnm next
>>
>> compared to
>>
>> gmx convert-tpr -s prev.tpr -f prev.trr *-e prev.edr* -o next.tpr -extend
>> 1000
>> gmx mdrun -v -deffnm next
>>
>
> You don't use either .trr or .edr files. Just generate the new .tpr file
> with however much more time you want and pick up from the exact time point
> you were at with mdrun -cpi prev.cpt. Use of .trr and .edr files to extend
> simulations is an antiquated approach.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Restarting the job on a remote cluster

2018-06-18 Thread Quyen V. Vu
Hi,


> I had two MD simulations running on a remote cluster through my putty
> session. One for 50 ns and the other for 30 ns. However, due to an
> unfortunate event, my session was closed as the windows machine shut down.
> I have two questions now.
>
> 1. How do I find out up to what point the simulation was completed?
>

You can check your simulation's *.log file.


>
> 2. How do I continue the MD run if possible, from where the run stopped
> abruptly?
>

http://www.gromacs.org/Documentation/How-tos/Doing_Restarts
GROMACS will use the checkpoint to restart the simulation.
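In practice (file names are placeholders):

# see how far the run got
tail -n 60 md.log
# continue from the last checkpoint, appending to the existing output
gmx mdrun -s topol.tpr -cpi state.cpt -deffnm md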

Best,
Quyen

[gmx-users] Restarting the job on a remote cluster

2018-06-18 Thread sai manohar
 Greetings and a good day.

I had two MD simulations running on a remote cluster through my putty
session. One for 50 ns and the other for 30 ns. However, due to an
unfortunate event, my session was closed as the windows machine shut down.
I have two questions now.

1. How do I find out up to what point the simulation was completed?
2. How do I continue the MD run if possible, from where the run stopped
abruptly?


Re: [gmx-users] FEL generated by dPCA and radius of gyration vs RMSD

2018-06-18 Thread David van der Spoel

Den 2018-06-18 kl. 12:04, skrev Seera Suryanarayana:

Dear gromacs users,

I have generated a free energy landscape by two methods: dPCA, and radius of
gyration vs. RMSD to the average structure. With dPCA I got fewer metastable
conformational states than with the radius of gyration vs. RMSD method. Can I
use the second method for my paper submission?

Thanks in advance
Surya
Graduate student
India.

You have to compare the two; in particular, you have to analyze what the
dPCA states mean in Cartesian space. They may be very similar.

Then you have to ask yourself whether the results have any
predictive value that can be evaluated experimentally...


--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se


[gmx-users] FEL generated by dPCA and radius of gyration vs RMSD

2018-06-18 Thread Seera Suryanarayana
Dear gromacs users,

I have generated a free energy landscape by two methods: dPCA, and radius of
gyration vs. RMSD to the average structure. With dPCA I got fewer metastable
conformational states than with the radius of gyration vs. RMSD method. Can I
use the second method for my paper submission?

Thanks in advance
Surya
Graduate student
India.


Re: [gmx-users] membrane-protein system by using charmm36 ff

2018-06-18 Thread Alex
What happened after 10 ns? In any case, reasonable equilibration is quite
important in these simulations to ensure lipid integrity and also to avoid
protein distortions in production. Whether the structure you got after 10
ns is any good, I guess no one would know, so I'd just try to follow the
protocol for membrane equilibration.

Alex

On Jun 18, 2018 3:33 AM, "Olga Press"  wrote:

Alex, thank you for your advice.
I have a problem with the pre-equilibration of the membrane
before embedding the protein into it.
I've used the charmm-gui membrane builder website and followed the README
file as it is, including the length of the MD production, and continued the
protocol as you mentioned. The problem is that at the time I started the
simulation I had produced only 10 ns of equilibration of the membrane. When
I checked the pressure it didn't reach 1 bar, and I continued the protocol
by performing a long NPT equilibration (200 ns) of the entire system
(protein+membrane).
So, my question is: should I start from the beginning or should I continue?
Thank you,
Olga


2018-06-18 12:02 GMT+03:00 Alex :

> in point #1, 'it' refers to the protein. ;)
>
>
>
> On 6/18/2018 3:00 AM, Alex wrote:
>
>> I haven't done lipid+protein simulations in a while, but your NVT
>> equilibration appears to be a bit strange, because equilibration under
>> pressure is very important for the lipid.
>>
>> Here is my general suggestion -- it may be too careful, but this is from
>> some experience with very poorly behaving porins:
>>
>> 1. Embed protein into a pre-equilibrated (semiisotropic NPT) membrane and
>> restrain it.
>>
>> 2. Run NPT equilibration of the system in multiple steps (say, a few ns
>> each), gradually reducing protein restraint.
>>
>> 3. NPT or NVT production.
>>
>> The choices for thermostats/barostats for all equilibration and
>> production runs should be appropriate.
>>
>> Alex
>>
>>
>> On 6/18/2018 2:50 AM, Olga Press wrote:
>>
>>> Thank you for your help!
>>> How important is it to make a good pre-equilibration before embedding a
>>> protein into the membrane if I'm going to perform long (200-300 ns)
>>> equilibration of the whole system (membrane+protein) using NVT followed
>>> by NPT ensemble before production MD simulation?
>>> Thank you all for your help.
>>>
>>>
>>> Olga
>>>
>>>
>>>
>>>
>>> 2018-06-17 15:34 GMT+03:00 Shreyas Kaptan :
>>>
>>> Hi.

 Maybe you already know this but you can also build the whole embedded
 system with charmm-gui. Also, your parameters appear reasonable to me
at
 first glance.

 As for the equilibration, that is a system specific question. If you
 have a
 "simple" uniform lipid content in the bilayer I would say from my
 experience, that the equilibration depends on the lipid heads and
tails.
 Large heads and long tails generally imply a longer equilibration.
Mixed
 lipids can require up to "microseconds" worth of equilibratio. I would
 take
 the saturation to a nearly fixed value of the Area per lipid and the
 bilayer thickness as an indication that it is safe to consider the
 "equilibration" enough.

 Do not use the 0.495 ns as some timescale. It is in fact quite short.



 On Sun, Jun 17, 2018 at 1:25 PM Olga Press 
 wrote:

 Dear Gromacs users,
> I'm new in the field of Molecular Dynamics especially in using
Gromacs.
> I have several questions regarding mdp file and I'll be very grateful
> if
> you can help me with them.
> I'm using a membrane-protein system with Charmm36 ff. After I have
> constructed bilayer membrane by using CHARMM-GUI membrane builder I
> have
> run the README file as it, without changing the equilibration time
> (total
> equilibration time of 0.475ns). Followed by embedded protein into the
> membrane by using g_membed and performed solvation and minimization of
>
 the

> entire system as was described in the KALP15-DPPC  tutorial by Dr.
> Justin
> A.Lemkul.
>
> those are my questions:
> 1. Does the pre-equilibration of 0.475ns is enough before embedding
>
 protein

> into the membrane and followed by long equilibration of the whole
> system
> for 200ns  by using NVT followed by NPT equilibration?
>
> 2. I've read that when using CHARMM36 ff in gromacs is better to
switch
>
 the

> following parameters
>   constraints = h-bonds
> cutoff-scheme = Verlet
> vdwtype = cutoff
> vdw-modifier = force-switch
> rlist = 1.2
> rvdw = 1.2
> rvdw-switch = 1.0
> coulombtype = PME
> rcoulomb = 1.2
> DispCorr = no
>
> I'm using the original mdout.mdp files produces by gromacs.Are those
> parameters optimal for a membrane-protein system or just for the
> lipids?
>
> Thank you all for your help.
> Olga
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> 

Re: [gmx-users] membrane-protein system by using charmm36 ff

2018-06-18 Thread Shreyas Kaptan
Hi.

It is quite important that you do a pre-equilibration in general. This is
assuming that you are starting with membranes built from scratch. If you
have a patch that you obtained from the charmm-gui website or other sources
that someone has previously equilibrated, then, of course, you can forgo
the pre-equilibration. The reason pre-equilibration is so important is that
membranes generated by insertion methods tend to have a lot of clashes. NVT
simulations (even short ones, approx. 1 ns) ensure that, given a box size,
you can accommodate the lipids and relax them before you attach the
pressure coupling.

An equilibration of 200-300 ns *might* be overkill if your protein does
not have too large a hydrophobic mismatch, but of course, more
equilibration only helps. Once again, use parameters appropriate for the
system to decide whether the equilibration is sufficient instead of using
some default timescale.
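
A minimal way to monitor that saturation, assuming a hypothetical bilayer of
128 lipids and the usual ener.edr output (the area per lipid follows from
the lateral box dimensions):

  # lateral box dimensions over time; gmx energy reads term names on stdin
  echo "Box-X Box-Y" | gmx energy -f ener.edr -o box.xvg
  # area per lipid at the last frame: 2 * Lx * Ly / N_lipids
  tail -n 1 box.xvg | awk '{print 2 * $2 * $3 / 128}'

Plot box.xvg (and similarly the bilayer thickness) and look for a plateau
rather than relying on a fixed wall-clock time.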

Shreyas


Re: [gmx-users] membrane-protein system by using charmm36 ff

2018-06-18 Thread Olga Press
Alex, thank you for your advice.
I have a problem with the pre-equilibration of the membrane
before embedding the protein into it.
I used the CHARMM-GUI membrane builder website and followed the README file
as it is, including the length of the MD production, and continued the
protocol as you mentioned. The problem is that by the time I started the
simulation I had produced only 10 ns of membrane equilibration. When I
checked the pressure it hadn't reached 1 bar, and I continued the protocol
by performing a long NPT equilibration (200 ns) of the entire system
(protein+membrane).
So, my question is: should I start from the beginning or should I continue?
Thank you,
Olga


[gmx-users] FEP from .top gathered with TOPOGROMACS from NAMD-CHARMM MD

2018-06-18 Thread Francesco Pietra
Hello:

I am new to GROMACS.

I would like to run ligand-protein absolute FEP simulations with
GROMACS-CHARMM36 ff. My aim is to compare with the same simulations I
carried out with NAMD-CHARMM36 ff.

I got the .top file, inclusive of all parameters, via TOPOGROMACS. I plan to
obtain all the files needed to continue the MD equilibrations with GROMACS
by passing the NAMD-equilibrated .pdb file and the .top file to grompp (I
am looking for directions on how to do this).
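
(For reference, a minimal sketch of that grompp step, with hypothetical file
names; the .mdp carries the usual run parameters:)

  # build a run input from the TOPOGROMACS topology and the
  # NAMD-equilibrated coordinates
  gmx grompp -f md.mdp -c equilibrated.pdb -p system.top -o md.tpr
  gmx mdrun -deffnm md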

The thermodynamic cycle I used with NAMD is similar to the one in the 2016
GROMACS FEP tutorial. However, I am rather confused about how to add the flags
and restraints after the MD equilibration. With NAMD, FEP flags and
restraints are simply added to the general MD configuration file.

Thanks for advice.

francesco pietra


Re: [gmx-users] membrane-protein system by using charmm36 ff

2018-06-18 Thread Alex

in point #1, 'it' refers to the protein. ;)



[gmx-users] Error : symtab get_symtab_handle 367 not found

2018-06-18 Thread ARNAB MUKHERJEE
Hi,

I am simulating a Martini coarse-grained DNA-protamine system: one infinite
DNA plus one protamine. The Martini python script removes the two phosphate
atoms of the terminal base pairs during coarse-graining, so in order to
obtain a correct infinite-DNA system I built two extra base pairs, removed
them manually afterwards, and renumbered the atoms in the topology .itp
file to match the .gro file. My periodic cuboidal box length in the Z
direction (the DNA is aligned along Z) is (number of BPs) * 3.38 Angstrom,
so that the PBC along Z makes the DNA infinite.
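
(As a quick sanity check on that box length, for a hypothetical 50-BP DNA:)

  # Z box vector for an infinite DNA: n_BP * 0.338 nm per base pair
  awk 'BEGIN { print 50 * 0.338 }'   # -> 16.9 nm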

The problem is that when I simulate this system, during the short NPT
equilibration, it shows this error:

  Fatal error:
symtab get_symtab_handle 367 not found

The strange thing is that if I submit a short run, e.g. in interactive mode,
it runs fine as long as the session stays connected. But when I submit the
complete run on the cluster, it shows this error before even starting step
0. I tried to google this error but didn't find much information. I have
been running this simulation in version 5.0.6. I also checked with the
newer GROMACS 5.1.4: it shows running status, but the .log file shows this:

Started mdrun on rank 0 Thu Jun 14 01:29:37 2018
           Step           Time         Lambda
              0        0.00000        0.00000


Not all bonded interactions have been properly assigned to the domain
decomposition cells

Are the 2 different errors that I get in the different versions connected?

To test whether there is a problem in the force field (.itp file), since I
had to modify it manually to renumber the atoms, I ran a finite 50-BP DNA
with the modified .itp file, and it runs fine.

I am not able to understand what the problem is. I am pasting the input .mdp
parameters that I used for the run. I used semiisotropic pressure coupling
as I want to keep the Z dimension of the box constant. I have also frozen 4
atoms at the two ends of the DNA to keep it aligned along Z, since I later
want to apply an E field along the Z direction.

title   = NVT equilibration with position restraint on all solute (topology modified)
; Run parameters
integrator  = md; leap-frog integrator
;nsteps = 3000  ; 1 * 50 = 500 ps
nsteps  = 50
dt  = 0.001 ; 1 fs
; Output control
nstxout = 0 ; do not write full-precision coordinates
nstvout = 0 ; do not write velocities
nstcalcenergy   = 50
nstenergy   = 1000  ; save energies every 1 ps
nstxtcout   = 2500
;nstxout-compressed  = 5000   ; save compressed coordinates every 1.0 ps
 ; nstxout-compressed replaces nstxtcout
;compressed-x-grps  = System  ; replaces xtc-grps
nstlog  = 1000  ; update log file every 1 ps
; Bond parameters
continuation= no   ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = none  ; no bond constraints
;lincs_iter = 2 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
epsilon_r   = 15
; Neighborsearching
cutoff-scheme   = Verlet
ns_type = grid  ; search neighboring grid cells
nstlist = 10; 20 fs
rvdw_switch = 1.0
rlist   = 1.2   ; short-range neighborlist cutoff (in nm)
rcoulomb= 1.2   ; short-range electrostatic cutoff (in nm)
rvdw= 1.2   ; short-range van der Waals cutoff (in nm)
vdwtype = Cut-off   ; Twin range cut-offs rvdw >= rlist
;vdw-modifier= Force-switch
;Electrostatics
coulombtype = PME   ; Particle Mesh Ewald for long-range electrostatics
pme_order   = 4 ; cubic interpolation
fourierspacing  = 0.12  ; grid spacing for FFT
; Temperature coupling is on
tcoupl  = v-rescale
tc_grps = System
tau_t   = 1.0
ref_t   = 300

;energygrps = DNA W_ION_Protein
;energygrp-excl = DNA DNA
freezegrps = DNA-Frozen-Atoms
freezedim = Y Y Y

; Pressure coupling is off
;pcoupl = no; no pressure coupling in NVT
Pcoupl = parrinello-rahman
Pcoupltype  = semiisotropic
tau_p   = 5.0
compressibility = 3e-4 0
ref_p   = 1.0 1.0
; Periodic boundary conditions
pbc = xyz   ; 3-D PBC
; Dispersion correction
DispCorr= no; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes   ; assign velocities from Maxwell distribution
gen_temp= 300   ; temperature for Maxwell distribution
gen_seed= -1; generate a random seed
; COM motion removal
; These options remove motion of the protein/bilayer relative to the solvent/ions
nstcomm = 50
comm-mode   = Linear
comm-grps   = System
;
refcoord_scaling = com
;refcoord_scaling = all

I would highly appreciate any help.

Thank you in advance,

Regards,

Arnab Mukherjee

Re: [gmx-users] membrane-protein system by using charmm36 ff

2018-06-18 Thread Alex
I haven't done lipid+protein simulations in a while, but your NVT 
equilibration appears to be a bit strange, because equilibration under 
pressure is very important for the lipid.


Here is my general suggestion -- it may be too careful, but this is from 
some experience with very poorly behaving porins:


1. Embed protein into a pre-equilibrated (semiisotropic NPT) membrane 
and restrain it.


2. Run NPT equilibration of the system in multiple steps (say, a few ns
each), gradually reducing the protein restraint (see the sketch below).


3. NPT or NVT production.

The choices for thermostats/barostats for all equilibration and 
production runs should be appropriate.
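
A minimal sketch of step 2, with hypothetical file names; it assumes the
topology wraps its position restraints in #ifdef POSRES, takes the force
constant from a POSRES_FC macro, and that npt_eq.mdp already contains a
define line:

  # prev.gro = output of the embedding/minimization step
  # stage the protein restraint down over successive NPT runs
  for fc in 1000 500 100 10; do
      sed "s/^define.*/define = -DPOSRES -DPOSRES_FC=${fc}/" npt_eq.mdp > eq_${fc}.mdp
      gmx grompp -f eq_${fc}.mdp -c prev.gro -r start.gro -p topol.top -o eq_${fc}.tpr
      gmx mdrun -deffnm eq_${fc}
      cp eq_${fc}.gro prev.gro
  done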


Alex




Re: [gmx-users] membrane-protein system by using charmm36 ff

2018-06-18 Thread Olga Press
Thank you for your help!
How important is it to do a good pre-equilibration before embedding a
protein into the membrane if I'm going to perform a long (200-300 ns)
equilibration of the whole system (membrane+protein) using NVT followed by
an NPT ensemble before the MD production run?
Thank you all for your help.


Olga




2018-06-17 15:34 GMT+03:00 Shreyas Kaptan :

> Hi.
>
> Maybe you already know this but you can also build the whole embedded
> system with charmm-gui. Also, your parameters appear reasonable to me at
> first glance.
>
> As for the equilibration, that is a system-specific question. If you have a
> "simple" uniform lipid content in the bilayer, I would say from my
> experience that the equilibration depends on the lipid heads and tails.
> Large heads and long tails generally imply a longer equilibration. Mixed
> lipids can require up to "microseconds" worth of equilibration. I would
> take the saturation of the area per lipid and the bilayer thickness to a
> nearly fixed value as an indication that it is safe to consider the
> "equilibration" enough.
>
> Do not use the 0.495 ns as a reference timescale. It is in fact quite short.
>
>
>
> On Sun, Jun 17, 2018 at 1:25 PM Olga Press  wrote:
>
> > Dear Gromacs users,
> > I'm new in the field of Molecular Dynamics, especially in using Gromacs.
> > I have several questions regarding the mdp file and I'll be very grateful
> > if you can help me with them.
> > I'm using a membrane-protein system with the Charmm36 ff. After
> > constructing the bilayer membrane with the CHARMM-GUI membrane builder, I
> > ran the README file as it is, without changing the equilibration time
> > (total equilibration time of 0.475 ns). I then embedded the protein into
> > the membrane using g_membed and performed solvation and minimization of
> > the entire system as described in the KALP15-DPPC tutorial by Dr. Justin
> > A. Lemkul.
> >
> > These are my questions:
> > 1. Is a pre-equilibration of 0.475 ns enough before embedding the protein
> > into the membrane, followed by a long equilibration of the whole system
> > for 200 ns using NVT and then NPT equilibration?
> >
> > 2. I've read that when using the CHARMM36 ff in gromacs it is better to
> > switch to the following parameters:
> >  constraints = h-bonds
> > cutoff-scheme = Verlet
> > vdwtype = cutoff
> > vdw-modifier = force-switch
> > rlist = 1.2
> > rvdw = 1.2
> > rvdw-switch = 1.0
> > coulombtype = PME
> > rcoulomb = 1.2
> > DispCorr = no
> >
> > I'm using the original mdout.mdp files produced by gromacs. Are those
> > parameters optimal for a membrane-protein system or just for the lipids?
> >
> > Thank you all for your help.
> > Olga


[gmx-users] Protein potential energy

2018-06-18 Thread Ming Tang
Dear list,

I pulled a protein in water. In order to get the trend of the potential-energy
terms of the protein, I defined energygrps and reran the system. May I ask:
will this approach give the right trend for the protein's potential-energy
terms?
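
(For reference, a minimal sketch of that rerun workflow, with hypothetical
file names:)

  # 1. tpr rebuilt from an .mdp that defines e.g. energygrps = Protein SOL
  gmx grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
  # 2. re-evaluate the energies frame by frame without integrating
  gmx mdrun -s rerun.tpr -rerun traj.xtc -deffnm rerun
  # 3. select the per-group terms (e.g. LJ-SR:Protein-Protein) interactively
  gmx energy -f rerun.edr -o protein_energy.xvg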

Your help is appreciated.
Thanks,
Tammy



[gmx-users] Heavy water H-H radial distribution function

2018-06-18 Thread Haelee Hyun
Dear GROMACS users,
 
I'm wondering how I can correctly describe the H-H radial distribution
function of heavy water. Please check the attached file HH_rdf.PNG, which
shows the H-H radial distribution calculated from my simulation. The first
peak is due to the intramolecular interaction of the water molecules. It is
almost 7 at 0.13 nm, whereas the experimental data show only about 2 at the
first peak. I have run the simulation many times but couldn't find the
cause of this large difference. I used the tip4p/2005f water model; the
potential I used is attached below.
 
[ defaults ]
; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
1  3  yes  0.5 0.5

[ moleculetype ]
; molname nrexcl
SOL  2

[ atoms ]
; id  at type  res nr  residu name  at name  cg nr  charge
1   opls_113    1   SOL  OW  1   0.0
2   opls_114    1   SOL HW1  1   0.5564
3   opls_114    1   SOL HW2  1   0.5564
4   opls_115    1   SOL  MW  1  -1.1128

;[nonbond_params]
; i j funct q   V    W
;1 2 1 0.5564  3.16440e-01  7.74907e-01
;1 3 1 0.5564  3.16440e-01  7.74907e-01

#ifndef FLEXIBLE
[ settles ]
; OW    funct   doh    dhh
1   1   0.09664    0.1
#else
[ bonds ]
; i j funct length   D   beta
1 2  3    0.09419   432.581   22.87   ; For TIP4P/2005f Water b0, D, beta
1 3  3    0.09419   432.581   22.87   ; For TIP4P/2005f Water b0, D, beta

[ angles ]
; i j k funct angle force.c.
2 1 3 1 107.4 367.81
#endif

[ exclusions ]
1 2 3 4
2 1 3 4
3 1 2 4
4 1 2 3

; The position of the virtual site is computed as follows:
;
; const = distance (OD) / [ cos (angle(DOH)) * distance (OH) ]
;   0.015 nm / [ cos (52.26 deg) * 0.09572 nm ]
; Vsite pos x4 = x1 + a*(x2-x1) + b*(x3-x1)

[ virtual_sites3 ]
; Vsite from   funct a  b
4 1 2 3 1 0.13288  0.13288
I used the -DFLEXIBLE option and ran energy minimization, NVT and NPT
equilibration, and an NVE production run.
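
(For reference, one way to separate the intermolecular part of the curve is
the selection-based gmx rdf; file names here are hypothetical and the option
names are from recent versions, so check gmx rdf -h:)

  # H-H RDF; -excl drops pairs excluded in the topology, i.e. the
  # intramolecular H-H pair that dominates the 0.13 nm peak
  gmx rdf -f traj.trr -s topol.tpr -ref 'name HW1 HW2' -sel 'name HW1 HW2' \
          -excl -o hh_rdf.xvg
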
If someone can spot something wrong, please let me know.
 
Thank you.
Haelee Hyun
 



[gmx-users] from unit cell to supercell

2018-06-18 Thread Bukunmi Akinwunmi
Hi,
I would like to increase my model size to have a bigger model in all
directions. When I used the genconf command, i.e. gmx genconf -f in.gro -o
out.gro -nbox 2 2 2, my model increased in the x direction but was
multiplied in the y and z directions, giving me 4 molecules instead of one
big single molecule. How do I fix this?
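
(For reference: genconf replicates the entire input system on a grid, so
-nbox 2 2 2 yields 2 x 2 x 2 = 8 periodic copies rather than one larger
bonded molecule; growing a single bonded structure also requires merging
the copies at the topology level.)

  # stacks 2 x 2 x 2 = 8 copies of the contents of in.gro
  gmx genconf -f in.gro -o out.gro -nbox 2 2 2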

Best,
Bukunmi.


Re: [gmx-users] domain decomposition error

2018-06-18 Thread Mark Abraham
Hi,

The implicit solvent support got a bit broken between 4.5 and 4.6, and
nobody has yet worked out how to fix it, sorry. If you can run with 1 CPU,
do that. Otherwise, please use GROMACS 4.5.7.
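
(A single-core run can be requested explicitly; a minimal sketch with a
hypothetical run name, for thread-MPI builds:)

  # one thread in total, so no domain decomposition is attempted
  # (in the 4.x series the binary is plain "mdrun" rather than "gmx mdrun")
  gmx mdrun -nt 1 -deffnm md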

Mark



[gmx-users] domain decomposition error

2018-06-18 Thread Chhaya Singh
I am running a simulation of a protein in implicit solvent using the amber
ff99sb force field with GBSA.
I am not able to use more than one CPU: it always gives a domain
decomposition error if I use more than one. When I tried running on one CPU
it gave me this error:
"Fatal error:
Too many LINCS warnings (12766)
If you know what you are doing you can adjust the lincs warning threshold
in your mdp file
or set the environment variable GMX_MAXCONSTRWARN to -1,
but normally it is better to fix the problem".
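
(For completeness, the override named in the error message is an environment
variable set before launching mdrun; it only silences the check and does not
fix the underlying instability:)

  export GMX_MAXCONSTRWARN=-1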