Re: [gmx-users] Residue XXX not found in residue topology database

2019-08-26 Thread Justin Lemkul




On 8/26/19 8:27 PM, Neena Susan Eappen wrote:

Hello gromacs users,

I saw the following error for a modified residue I added, even though I
edited all the necessary files, including the PDB file. I also read the
GROMACS documentation on this error.

Residue XXX not found in residue topology database.

Any hints on what might be happening?


Not without a precise listing of exactly what you did. 99% of the cases 
that raise this error are specifically addressed by 
http://manual.gromacs.org/current/user-guide/run-time-errors.html#residue-xxx-not-found-in-residue-topology-database 
in concert with http://manual.gromacs.org/current/how-to/topology.html
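
For reference, adding a modified residue usually means giving it an entry in 
the force field's .rtp file (and, where needed, matching entries in 
residuetypes.dat and the .hdb file). A minimal sketch of the shape of an .rtp 
entry; the residue name, atom types, and charges below are placeholders, not 
real parameters:

  [ XXX ]            ; hypothetical modified residue
   [ atoms ]
      N    NH1   -0.47   0
      H    H      0.31   0
      CA   CT1    0.07   0
   [ bonds ]
      N    H
      N    CA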


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] Residue XXX not found in residue topology database

2019-08-26 Thread Neena Susan Eappen
Hello gromacs users,

I saw the following error for a modified residue I added, even though I
edited all the necessary files, including the PDB file. I also read the
GROMACS documentation on this error.

Residue XXX not found in residue topology database.

Any hints on what might be happening?

Many thanks,
Neena


Re: [gmx-users] HtoD cudaMemcpyAsync failed: invalid argument

2019-08-26 Thread Navneet Kumar Singh
Thank you, Sir.

I installed GROMACS 2016.6 again (the final version of the 2016.x series),
and it throws the same error (HtoD cudaMemcpyAsync failed: invalid argument).
It runs on the CPU, but not on the GPU, so 3 days of work will take 15 days.
I have already completed the simulations for the other complexes; this
complex is causing problems due to virtual_sites3. PLEASE HELP, THE DEADLINE
IS APPROACHING.

On Tue, Aug 27, 2019 at 12:23 AM Mark Abraham 
wrote:

> Hi,
>
> If you have a reason to have to use 2016.x, then please get the latest
> version (which is always advisable when starting new work) because this
> issue is fixed there. You also don't have to run energy minimization on
> GPUs, so you could just append -nb cpu to your mdrun command line to avoid
> the problem.
>
> Mark
>
> On Mon, 26 Aug 2019 at 20:44, Navneet Kumar Singh 
> wrote:
>
> > Currently I am using only GROMACS 2016.5, because of this note: "PLEASE
> > NOTE that the current versions do support lone pair construction on
> > halogens, however the current construction is only compatible with
> > GROMACS-2016.x and by using gmx grompp -maxwarn 1 to override the
> > warning about lone pair construction."
> >
> > I was unable to understand this: "For all other GROMACS versions, you will
> > have to manually edit the topology to use "3fad" construction and
> > appropriate atom numbers.", as I was using 2018.4. So I switched to the
> > 2016 version, but am now stuck with this "HtoD cudaMemcpyAsync failed:
> > invalid argument" error.
> >
> > On Tue, Aug 27, 2019 at 12:09 AM Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > You're running 2016.x which had a bug, not the 2018.x you thought you
> > were
> > > using. Use GMXRC or your cluster's modules to select the version you
> want
> > > to use in the terminal or script that you want to use.
> > >
> > > Mark
> > >
> > > On Mon, 26 Aug 2019 at 20:34, Navneet Kumar Singh <
> navneet...@gmail.com>
> > > wrote:
> > >
> > > > What kind of error is this? Previously GROMACS 2018.4 was running
> > > > fine using the GPU, but now I am getting this error.
> > > >
> > > > [full mdrun log snipped; see the original post below]

Re: [gmx-users] Question about Gromacs

2019-08-26 Thread David van der Spoel

On 2019-08-26 at 20:53, Najla Hosseini wrote:

Dear David,

Hope you are doing well.
I am a GROMACS user, and I need to change the partial charge of molecules
in the force field or .itp file as a function of distance during a run in
GROMACS. Is that possible? How should I do that?

Please pose your questions on the mailing list. However, this is not
possible to do dynamically.


If not, should I use two .itp files, with a condition for finishing one of
them and then starting a new .tpr file based on the other .itp file? How
should I do that?
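
A minimal sketch of that two-stage workflow, with hypothetical file names
(topol_A.top, topol_B.top, md_B.mdp), assuming the second stage continues
from the final state of the first:

  # Stage 1: run with the first set of charges
  gmx grompp -f md_A.mdp -c start.gro -p topol_A.top -o md_A.tpr
  gmx mdrun -deffnm md_A

  # Stage 2: build a new .tpr from the final state, using the second topology
  gmx grompp -f md_B.mdp -c md_A.gro -t md_A.cpt -p topol_B.top -o md_B.tpr
  gmx mdrun -deffnm md_B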


Thank you so much.
I really appreciate your consideration and time.

Best Regards, Najla

--
Kind Regards,
Najla



--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se


Re: [gmx-users] HtoD cudaMemcpyAsync failed: invalid argument

2019-08-26 Thread Mark Abraham
Hi,

If you have a reason to have to use 2016.x, then please get the latest
version (which is always advisable when starting new work) because this
issue is fixed there. You also don't have to run energy minimization on
GPUs, so you could just append -nb cpu to your mdrun command line to avoid
the problem.
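
In practice that just means appending the flag to the existing command, for
example:

  gmx mdrun -v -deffnm em -nb cpu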

Mark

On Mon, 26 Aug 2019 at 20:44, Navneet Kumar Singh 
wrote:

> Currently I am using only GROMACS 2016.5, because of this note: "PLEASE
> NOTE that the current versions do support lone pair construction on
> halogens, however the current construction is only compatible with
> GROMACS-2016.x and by using gmx grompp -maxwarn 1 to override the
> warning about lone pair construction."
>
> I was unable to understand this: "For all other GROMACS versions, you will
> have to manually edit the topology to use "3fad" construction and
> appropriate atom numbers.", as I was using 2018.4. So I switched to the
> 2016 version, but am now stuck with this "HtoD cudaMemcpyAsync failed:
> invalid argument" error.
>
> On Tue, Aug 27, 2019 at 12:09 AM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > You're running 2016.x which had a bug, not the 2018.x you thought you
> were
> > using. Use GMXRC or your cluster's modules to select the version you want
> > to use in the terminal or script that you want to use.
> >
> > Mark
> >
> > On Mon, 26 Aug 2019 at 20:34, Navneet Kumar Singh 
> > wrote:
> >
> > > What kind of error is this? Previously GROMACS 2018.4 was running
> > > fine using the GPU, but now I am getting this error.
> > >
> > > [full mdrun log snipped; see the original post below]

Re: [gmx-users] (no subject)

2019-08-26 Thread Navneet Kumar Singh
I have seen this in the GROMACS manual.

[ virtual_sites2 ]
; Site  from       funct  a
  5     1 2        1      0.7439756

for type 3 like this:

[ virtual_sites3 ]
; Site  from       funct  a          b
  5     1 2 3      1      0.7439756  0.128012

for type 3fd like this:

[ virtual_sites3 ]
; Site  from       funct  a          d
  5     1 2 3      2      0.5        -0.105

for type 3fad like this:

[ virtual_sites3 ]
; Site  from       funct  theta      d
  5     1 2 3      3      120        0.5

for type 3out like this:

[ virtual_sites3 ]
; Site  from       funct  a          b          c
  5     1 2 3      4      -0.4       -0.4       6.9281

for type 4fdn like this:

[ virtual_sites4 ]
; Site  from       funct  a          b          c
  5     1 2 3 4    2      1.0        0.9        0.105

__


I have this from the unk.itp file.

[ virtual_sites3 ]
; Site  from       funct  a    d
  52    1 23 23    2      0    -0.164

[ exclusions ]
; ai  aj
  52   1
  52  23
  52  21
  52  22
  52  18
  52  19
  52  51
  52  50


Now I have to add this to the topology file. Is this supposed to be added to
system.top?
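
For illustration only, a 3fad (funct 3) version of the entry above would swap
the function number and parameters for an angle theta and a distance d, and
needs non-duplicate constructing atoms chosen for the actual molecule; the
atom numbers and values below are placeholders:

  [ virtual_sites3 ]
  ; Site  from       funct  theta  d
    52    1 23 24    3      120    0.164   ; hypothetical atoms and values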

On Tue, Aug 27, 2019 at 12:11 AM Justin Lemkul  wrote:

>
>
> On 8/26/19 2:33 PM, Navneet Kumar Singh wrote:
> > I can't understand the meaning of this: "For all other GROMACS versions,
> > you will have to manually edit the topology to use "3fad" construction
> > and appropriate atom numbers." if I am using a version other than
> > GROMACS 2016.x.
> >
> > Can I get an example of any topology file where this kind of construction
> > for a lone pair has been done manually?
>
> Please see the manual for examples of what the different construction
> types are.
>
> -Justin
>
> > [remainder of the quoted thread snipped]

Re: [gmx-users] HtoD cudaMemcpyAsync failed: invalid argument

2019-08-26 Thread Navneet Kumar Singh
Currently I am using only GROMACS 2016.5, because of this note: "PLEASE NOTE
that the current versions do support lone pair construction on halogens,
however the current construction is only compatible with GROMACS-2016.x and
by using gmx grompp -maxwarn 1 to override the warning about lone pair
construction."

I was unable to understand this: "For all other GROMACS versions, you will
have to manually edit the topology to use "3fad" construction and
appropriate atom numbers.", as I was using 2018.4. So I switched to the 2016
version, but am now stuck with this "HtoD cudaMemcpyAsync failed: invalid
argument" error.

On Tue, Aug 27, 2019 at 12:09 AM Mark Abraham 
wrote:

> Hi,
>
> You're running 2016.x which had a bug, not the 2018.x you thought you were
> using. Use GMXRC or your cluster's modules to select the version you want
> to use in the terminal or script that you want to use.
>
> Mark
>
> On Mon, 26 Aug 2019 at 20:34, Navneet Kumar Singh 
> wrote:
>
> > What kind of error is this? Previously GROMACS 2018.4 was running
> > fine using the GPU, but now I am getting this error.
> >
> > [full mdrun log and signature snipped; see the original post below]

Re: [gmx-users] (no subject)

2019-08-26 Thread Justin Lemkul




On 8/26/19 2:33 PM, Navneet Kumar Singh wrote:

I can't understand the meaning of this: "For all other GROMACS versions, you
will have to manually edit the topology to use "3fad" construction and
appropriate atom numbers." if I am using a version other than GROMACS 2016.x.

Can I get an example of any topology file where this kind of construction
for a lone pair has been done manually?


Please see the manual for examples of what the different construction 
types are.


-Justin


[remainder of the quoted thread, signatures, and list footers snipped]

Re: [gmx-users] HtoD cudaMemcpyAsync failed: invalid argument

2019-08-26 Thread Navneet Kumar Singh
This is the make check result:


100% tests passed, 0 tests failed out of 27

Label Time Summary:
GTest =   1.24 sec*proc (18 tests)
IntegrationTest   =   5.93 sec*proc (2 tests)
MpiIntegrationTest=   0.44 sec*proc (1 test)
UnitTest  =   1.24 sec*proc (18 tests)

Total Test time (real) = 204.82 sec
[100%] Built target run-ctest
[100%] Built target check
___

This means the installation was fine.

I used gcc-6, as previously it threw an error telling me not to use a gcc
version later than 6.0.

On Tue, Aug 27, 2019 at 12:00 AM Navneet Kumar Singh 
wrote:

> What kind of error is this? Previously GROMACS 2018.4 was running
> fine using the GPU, but now I am getting this error.
>
> [full mdrun log and signature snipped; see the original post below]

Re: [gmx-users] HtoD cudaMemcpyAsync failed: invalid argument

2019-08-26 Thread Mark Abraham
Hi,

You're running 2016.x which had a bug, not the 2018.x you thought you were
using. Use GMXRC or your cluster's modules to select the version you want
to use in the terminal or script that you want to use.

Mark

On Mon, 26 Aug 2019 at 20:34, Navneet Kumar Singh 
wrote:

> What kind of error is this? Previously GROMACS 2018.4 was running
> fine using the GPU, but now I am getting this error.
>
> [full mdrun log and signature snipped; see the original post below]


Re: [gmx-users] (no subject)

2019-08-26 Thread Navneet Kumar Singh
I can't understand the meaning of this: "For all other GROMACS versions, you
will have to manually edit the topology to use "3fad" construction and
appropriate atom numbers." if I am using a version other than GROMACS 2016.x.

Can I get an example of any topology file where this kind of construction
for a lone pair has been done manually?

On Mon, Aug 26, 2019 at 11:57 PM Justin Lemkul  wrote:

>
>
> On 8/26/19 1:14 PM, Navneet Kumar Singh wrote:
> > Do I have to add vsite3 information from unk.itp to system.top files?
>
> That directive belongs in the topology to which it corresponds. If it is
> in unk.itp, then it is already in system.top. You don't need to include
> anything else.
>
> -Justin
>
> [remainder of the quoted thread snipped]

[gmx-users] HtoD cudaMemcpyAsync failed: invalid argument

2019-08-26 Thread Navneet Kumar Singh
What kind of error is this? Previously, GROMACS 2018.4 was running fine
using the GPU, but now I am getting this error.

_

gmx mdrun -v -deffnm em
  :-) GROMACS - gmx mdrun, 2016.5 (-:

GROMACS is written by:
     Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par Bjelkmar
 Aldert van Buuren   Rudi van Drunen     Anton Feenstra       Gerrit Groenhof
 Christoph Junghans   Anca Hamuraru    Vincent Hindriksen  Dimitrios Karkoulis
    Peter Kasson        Jiri Kraus       Carsten Kutzner        Per Larsson
  Justin A. Lemkul    Magnus Lundborg   Pieter Meulenhoff     Erik Marklund
   Teemu Murtola       Szilard Pall       Sander Pronk         Roland Schulz
  Alexey Shvetsov     Michael Shirts     Alfons Sijbers       Peter Tieleman
  Teemu Virolainen  Christian Wennberg    Maarten Wolf
                           and the project leaders:
        Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2016.5
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /home/nitttr/Desktop/Hydroxy_chloroquine
Command line:
  gmx mdrun -v -deffnm em


Back Off! I just backed up em.log to ./#em.log.7#

Running on 1 node with total 16 cores, 32 logical cores, 1 compatible GPU
Hardware detected:
  CPU info:
Vendor: Intel
Brand:  Intel(R) Xeon(R) Silver 4110 CPU @ 2.10GHz
SIMD instructions most likely to fit this hardware: AVX_512
SIMD instructions selected at GROMACS compile time: AVX_512

  Hardware topology: Basic
  GPU info:
Number of GPUs detected: 1
#0: NVIDIA Tesla P4, compute cap.: 6.1, ECC: yes, stat: compatible

Reading file em.tpr, VERSION 2016.5 (single precision)
Using 1 MPI thread
Using 32 OpenMP threads

1 compatible GPU is present, with ID 0
1 GPU auto-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0

Application clocks (GPU clocks) for Tesla P4 are (3003,1531)

Back Off! I just backed up em.trr to ./#em.trr.7#

Back Off! I just backed up em.edr to ./#em.edr.7#

Steepest Descents:
   Tolerance (Fmax)   =  1.0e+03
   Number of steps=5

---
Program: gmx mdrun, version 2016.5
Source file: src/gromacs/gpu_utils/cudautils.cu (line 105)

Fatal error:
HtoD cudaMemcpyAsync failed: invalid argument

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

-- 
Thanks & Regards
___

NAVNEET KUMAR
Doctoral Student
Dept. of Pharmacoinformatics
National Institute of Pharmaceutical Education and Research, Sector 67,
S.A.S. Nagar - 160062, Punjab (INDIA)
P +918017967647 | E navneet...@gmail.com

Please consider your environmental responsibility. Before printing this
e-mail message, ask yourself whether you really need a hard copy.


Re: [gmx-users] (no subject)

2019-08-26 Thread Justin Lemkul




On 8/26/19 1:14 PM, Navneet Kumar Singh wrote:

Do I have to add vsite3 information from unk.itp to system.top files?


That directive belongs in the topology to which it corresponds. If it is 
in unk.itp, then it is already in system.top. You don't need to include 
anything else.
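
For reference, that just means system.top already carries include lines along
these lines (a sketch based on the file names mentioned in this thread; the
exact force-field include may differ):

  #include "charmm36-jul2017.ff/forcefield.itp"
  #include "unk.prm"
  #include "unk.itp"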


-Justin


[remainder of the quoted thread, signatures, and list footers snipped]

Re: [gmx-users] CHOL still not recognised

2019-08-26 Thread Mark Abraham
Hi,

You should follow the error message's instructions: "... they should not have
the same chain ID as the adjacent protein chain". You know that chain ID is X;
make the protein have a different chain ID from the rest.

Mark
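
As an illustration only (the chain letters M and W are arbitrary choices, not
from the original files), giving each molecule type its own chain ID in
column 22 of the PDB records would look like:

  ATOM      1  C3  CHL M   1      -1.678   2.702  17.328  1.00  0.00
  ATOM   9473  OH2 TIP W   1       0.049   2.793  20.554  1.00  0.00

Nonstandard residue names such as CHL may additionally need entries in
residuetypes.dat, as the error message itself suggests.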

On Mon, 26 Aug 2019 at 19:27, Ayesha Fatima 
wrote:

> Dear Justin,
> Thank you for your earlier response. I opened the file in VMD, and it
> shows a 3-letter residue name and an X chain identifier for the whole
> molecule.
>
> [quoted PDB excerpt and error message snipped; see the original post below]


[gmx-users] CHOL still not recognised

2019-08-26 Thread Ayesha Fatima
Dear Justin,
Thank you for your earlier response. I opened the file in VMD, and it shows
a 3-letter residue name and an X chain identifier for the whole molecule.

CRYST1    0.000    0.000    0.000  90.00  90.00  90.00 P 1           1
ATOM      1  C3  CHL X   1      -1.678   2.702  17.328  1.00  0.00      MEMB C
ATOM      2  H3  CHL X   1      -0.722   3.268  17.289  1.00  0.00      MEMB H
ATOM      3  O3  CHL X   1      -2.187   2.686  18.672  1.00  0.00      MEMB O
ATOM      4  H3' CHL X   1      -1.369   2.855  19.145  1.00  0.00      MEMB H
..
ATOM   9473  OH2 TIP X   1       0.049   2.793  20.554  1.00  0.00      TIP3 O
ATOM   9474  H1  TIP X   1       0.881   2.777  20.081  1.00  0.00      TIP3 H
ATOM   9475  H2  TIP X   1       0.235   2.351  21.383  1.00  0.00      TIP3 H
ATOM   9476  OH2 TIP X   2      -1.458  -2.368  19.953  1.00  0.00      TIP3 O
ATOM   9477  H1  TIP X   2      -2.185  -2.795  20.406  1.00  0.00      TIP3 H

Now when I run pdb2gmx, the error is:
Fatal error:
The residues in the chain CHL1--TIP3130 do not have a consistent type. The
first residue has type 'Other', while residue TIP1 is of type 'Water'.
Either
there is a mistake in your chain, or it includes nonstandard residue names
that have not yet been added to the residuetypes.dat file in the GROMACS
library directory. If there are other molecules such as ligands, they should
not have the same chain ID as the adjacent protein chain since it's a
separate
molecule.

I can't find a solution to this error online.
Would you or anyone on the list be able to guide me?
Thank you
regards


Re: [gmx-users] (no subject)

2019-08-26 Thread Navneet Kumar Singh
Do I have to add vsite3 information from unk.itp to system.top files?

I checked the -maxwarn 1 flag in the GROMACS 2016 version, and it runs there.
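
For reference, the flag goes on the grompp command that produces the .tpr
file; a sketch with assumed file names based on this thread:

  gmx grompp -f em.mdp -c solvated.gro -p system.top -o em.tpr -maxwarn 1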

On Mon, 26 Aug 2019, 00:58 Justin Lemkul,  wrote:

>
>
> On 8/25/19 2:17 PM, Navneet Kumar Singh wrote:
> > Yeah! I have read that.
> >
> > uses -maxwarn 1 to produce .tpr file using grompp command as mentioned in
>
> Since you are getting an error rather than a warning, that means you are
> not using GROMACS 2016.x, as the instructions I pointed to you say.
>
> If you want to use a different GROMACS version, you have to change the
> topology. Again see the link I provided for a description of what to do.
>
> -Justin
>
> > python script cgenff_charmm2gmx_py2.py
> >
> > output of cgenff_charmm2gmx_py2.py
> >
> > _
> >
> > NOTE 1: Code tested with python 2.7.12. Your version: 2.7.16 (default,
> > Mar 26 2019, 10:00:46)
> > [GCC 5.4.0 20160609]
> >
> > NOTE 2: Please be sure to use the same version of CGenFF in your
> > simulations that was used during parameter generation:
> > --Version of CGenFF detected in  unk.str : 4.0
> > --Version of CGenFF detected in  charmm36-jul2017.ff/forcefield.doc : 4.0
> >
> > NOTE 3: To avoid duplicated parameters, do NOT select the 'Include
> > parameters that are already in CGenFF' option when uploading a molecule
> > into CGenFF.
> >
> > NOTE 4: 1 lone pairs found in topology that are not in the mol2 file.
> > This is not a problem, just FYI!
> >
> >  DONE 
> > Conversion complete.
> > The molecule topology has been written to unk.itp
> > Additional parameters needed by the molecule are written to unk.prm,
> > which needs to be included in the system .top
> >
> > PLEASE NOTE: lone pair construction requires duplicate host atom numbers,
> > which will make grompp complain
> > To produce .tpr files, the user MUST use -maxwarn 1 to circumvent this
> > check
> >
> 
> >
> > After that I am using the same -maxwarn 1, but it still gives an error.
> > It may be some silly mistake; please let me know.
> >
> >
> > On Sun, Aug 25, 2019 at 9:43 PM Justin Lemkul  wrote:
> >
> >>
> >> On 8/25/19 11:49 AM, Navneet Kumar Singh wrote:
> >>> Thank You Sir!
> >>>
> >>> Attached file can be downloaded from following link.
> >>>
> >>> https://fil.email/OR7Nsh0f
> >>>
> >>> Error
> >>>
> >>> ERROR 1 [file unk.itp, line 497]:
> >>>   Duplicate atom index (23) in virtual_sites3
> >>> ___
> >>>
> >>> It supports lone pair construction for halogens only. Please suggest
> >>> some alternatives.
> >> The link I provided before describes what you need to do. Please consult
> >> that information.
> >>
> >> -Justin

Re: [gmx-users] [HELP] Residue 'NI' not found in residue topology database

2019-08-26 Thread Justin Lemkul




On 8/26/19 12:04 PM, Edjan Silva wrote:

Dear users,

I am trying to perform a simulation with a protein which contains two
nickel atoms in the active site.

When using the pdb2gmx command the following error appears:

'NI' not found in residue topology database

I have edited the ions.itp file in the force field directory used (opls)
but the same error still appears.


pdb2gmx does not read .itp files; it reads .rtp files. You need to add a
residue definition there.
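
A minimal sketch of such a definition for a monoatomic ion; the atom type
(opls_XXX) and the +2 charge are placeholders that must match a nonbonded
type actually defined in the force field files:

  [ NI ]
   [ atoms ]
      NI   opls_XXX   2.000   0    ; hypothetical atom type and charge

The residue name may also need to be listed in residuetypes.dat so the rest
of the tool chain classifies it correctly.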


-Justin



Re: [gmx-users] Gromacs 2019.3 compilation with GPU support

2019-08-26 Thread Mark Abraham
Hi,

All versions of icc requires a standard library from an installation of
gcc. There are various dependencies between them, and your system admins
should have an idea which one is known to work well in your case. If you
need to help the GROMACS build find the right one, do check out the GROMACS
install guide for how to direct a particular gcc to be used with icc. I
would suggest nothing earlier than gcc 5.
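
As a rough sketch (module name and paths are hypothetical -- check with your
admins and the install guide), pointing both icc and nvcc at a newer gcc
might look like:

module load gcc/7.3.0
CC=mpiicc CXX=mpiicpc cmake .. -DGMX_MPI=ON -DGMX_GPU=ON \
    -DCMAKE_CXX_FLAGS="-gcc-name=$(which gcc)" \
    -DCUDA_HOST_COMPILER=$(which g++)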

Mark


On Mon, 26 Aug 2019 at 17:51, Prithwish Nandi 
wrote:

> Hi,
> I am trying to compile Gromacs-2019.3 on our HPC cluster. I successfully
> compiled the single- and double-precision versions, but the GPU build is
> producing errors. (The error message is pasted below.)
>
> I am using Intel/2018 update 4 and CUDA/10.0. The base gcc version is
> 4.8.5. I am using MKL as the FFT library, mpiicc as the C compiler, and
> mpiicpc as the CXX compiler.
>
> The error I am getting is given below.
>
> Do you have any clue for this?
>
> Thanks, //PN
>
> The error message:
>
>   Error generating file
>
> /xxx/xxx/gromacs/intel/2019.3/kay/gromacs-2019.3/build_gpu/src/gromacs/CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/./libgromacs_generated_nbnxn_cuda_kernel_pruneonly.cu.o
>
> make[2]: ***
> [src/gromacs/CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda_kernel_pruneonly.cu.o]
> Error 1
>
> [the long listing of "_mm_* is undefined" errors from
> /usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h is snipped
> here; the full listing appears in the original post below]

[gmx-users] [HELP] Residue 'NI' not found in residue topology database

2019-08-26 Thread Edjan Silva
Dear users,

I am trying to perform a simulation with a protein which contains two
nickel atoms in the active site.

When using the pdb2gmx command, the following error appears:

'NI' not found in residue topology database

I have edited the ions.itp file in the force field directory I am using 
(OPLS), but the same error still appears.

Best regards,

Edjan.


[gmx-users] Gromacs 2019.3 compilation with GPU support

2019-08-26 Thread Prithwish Nandi
Hi,
I am trying to compile Gromacs-2019.3 on our HPC cluster. I successfully 
compiled the single- and double-precision versions, but the GPU build is 
producing errors. (The error message is pasted below.)

I am using Intel/2018 update 4 and CUDA/10.0. The base gcc version is 4.8.5. 
I am using MKL as the FFT library, mpiicc as the C compiler, and mpiicpc as 
the CXX compiler.

The error I am getting is given below.

Do you have any clue for this?

Thanks, //PN

The error message:

  Error generating file
  
/xxx/xxx/gromacs/intel/2019.3/kay/gromacs-2019.3/build_gpu/src/gromacs/CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/./libgromacs_generated_nbnxn_cuda_kernel_pruneonly.cu.o


make[2]: *** 
[src/gromacs/CMakeFiles/libgromacs.dir/mdlib/nbnxn_cuda/libgromacs_generated_nbnxn_cuda_kernel_pruneonly.cu.o]
 Error 1
/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(68): error: 
identifier "_mm_set1_epi64x" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(70): error: 
identifier "_mm_set1_pd" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(112): error: 
identifier "_mm_set_epi64x" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(123): error: 
identifier "_mm_set_epi64x" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(123): error: 
identifier "_mm_and_si128" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(172): error: 
identifier "_mm_set_epi64x" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(175): error: 
identifier "_mm_or_si128" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(176): error: 
identifier "_mm_sub_pd" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(177): error: 
identifier "_mm_mul_pd" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(178): error: 
identifier "_mm_hadd_pd" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(178): error: 
identifier "_mm_cvtsd_f64" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(185): error: 
identifier "_mm_mul_pd" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(185): error: 
identifier "_mm_add_pd" is undefined

/usr/include/c++/4.8.5/x86_64-redhat-linux/bits/opt_random.h(187): error: 
identifier "_mm_storeu_pd" is undefined

[the same 14 errors from opt_random.h are then reported a second time]

14 errors detected in the compilation of 
"/localscratch/397189/tmpxft_6280_-6_gpubonded-impl.cpp4.ii".







Re: [gmx-users] wham analysis

2019-08-26 Thread Jochen Hub

Hi,

You can also use the pullf output for WHAM (option -if); this may be easier.
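
For example, with the file lists your output already mentions
(tpr-files.dat and pullf-files.dat), a minimal invocation could be:

gmx wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg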

Cheers, Jochen

On 26.08.19 at 12:59, Negar Parvizi wrote:


Dear all,
I used Justin's tutorial (Tutorial 3: Umbrella Sampling: GROMACS Tutorial) 
for my file, which is a protein-ligand complex. The pulling force was in the 
Y direction. When umbrella sampling finished, "wham" couldn't analyse the 
data because wham is in the z direction. What should I do now for the wham 
analysis? How can I change it to the Y direction? Justin said the following, 
which I didn't understand:
"WHAM does not presuppose the axis or vector; it does what you tell it. If 
you're referring to the x-axis label in the PMF profile being "z," that is 
just a generic (and perhaps imprecise) label that should be changed to the 
Greek character xi, per conventional notation."

So I decided to copy the error.
Here is the error:

Found 25 tpr and 25 pull force files in tpr-files.dat and pullf-files.dat, 
respectively
Reading 12 tpr and pullf files
Automatic determination of boundaries...
Reading file umbrella0.tpr, VERSION 5.1.4 (single precision)
File umbrella0.tpr, 1 coordinates, geometry "distance", dimensions [N N Y], (1 
dimensions)

     Pull group coordinates not expected in pullx files.
     crd 0) k = 1000   position = 0.840198
     Use option -v to see this output for all input tpr files


Reading pull force file with pull geometry distance and 1 pull dimensions
Expecting these columns in pull file:

     0 reference columns for each individual pull coordinate
     1 data columns for each pull coordinate

With 1 pull groups, expect 2 columns (including the time column)
Reading file umbrella71.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella98.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella111.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella119.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella139.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella146.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella157.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella180.tpr, VERSION 5.1.4 (single precision)
Reading file umbrella202.tpr, VERSION 5.1.4 (single precision)



I would appreciate any help.
Thanks in advance,
Negar



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] gromacs is not recognising opls ff

2019-08-26 Thread Justin Lemkul




On 8/26/19 7:04 AM, Ayesha Fatima wrote:

Dear All,
I have come across another issue.
When I want to use the OPLS .itp for cholesterol, it gives me this error: 
"Fatal error:
Residue 'OL' not found in residue topology database"
It does not take CHOL as the residue name, as given below:


That suggests your input file has incorrect formatting. If it is a PDB 
file, the column positions are fixed. The error is consistent with the 
"CHOL" residue name having been shifted by two characters/columns.


-Justin


[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass  typeB
chargeB  massB
 1  opls_158   1   CHOL  C1  1   0.2050  12.011
 2  opls_140   1   CHOL  H1  1   0.0600  1.008
 3  opls_154   1   CHOL  O1  1  -0.6830  15.9994

Any suggestions?
Thank you
regards


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] How to restrain small molecules in a region where the small molecules can move freely?

2019-08-26 Thread Justin Lemkul




On 8/26/19 5:41 AM, wtzou wrote:

Dear all,
I want to restrain some water molecules to a big cluster, such that the 
water can move freely within the cluster but cannot move out of it. How can 
I implement this restraint?


Check the manual for flat-bottom restraints.
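
As a minimal sketch (values illustrative): a flat-bottomed position 
restraint on a water oxygen, added to the water topology, keeps the atom 
within a sphere of radius r around its reference position while leaving it 
free inside:

[ position_restraints ]
;  ai  funct  g   r (nm)   k (kJ mol^-1 nm^-2)
    1    2    1    2.0       1000

Here funct 2 selects the flat-bottomed form, g=1 a spherical region, and a 
positive r means the potential acts only once the atom is outside the 
sphere. The reference positions come from the structure passed to grompp -r.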

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] wham analysis

2019-08-26 Thread John Whittaker
Hi,

>
> Dear all,
> I used Justin's tutorial (Tutorial 3: Umbrella Sampling: GROMACS Tutorial)
> for my file, which is a protein-ligand complex. The pulling force was in
> the Y direction. When umbrella sampling finished, "wham" couldn't analyse
> the data because wham is in the z direction. What should I do now for the
> wham analysis? How can I change it to the Y direction? Justin said the
> following, which I didn't understand:
> "WHAM does not presuppose the axis or vector; it does what you tell it. 
> If you're referring to the x-axis label in the PMF profile being "z," 
> that is just a generic (and perhaps imprecise) label that should be 
> changed to the Greek character xi, per conventional notation."
>
> So I decided to copy the error.
> Here is the error:
>>Found 25 tpr and 25 pull force files in tpr-files.dat and
>> pullf-files.dat, respectively
>>Reading 12 tpr and pullf files
>>Automatic determination of boundaries...
>>Reading file umbrella0.tpr, VERSION 5.1.4 (single precision)
>>File umbrella0.tpr, 1 coordinates, geometry "distance", dimensions [N N
>> Y], (1 dimensions)

Well, according to this ^ output, the pull coordinate was acting in the z
direction, *and* you were printing the distance between the COMs in the z
direction, not the y direction as you said.

>     Pull group coordinates not expected in pullx files.
>     crd 0) k = 1000   position = 0.840198
>     Use option -v to see this output for all input tpr files

It appears that you are printing not just the distance between the COMs of
your groups, but also the pull group coordinates, in your output files.

The WHAM code is telling you that it doesn't expect coordinates in the
pullx file; it only expects the time (in the first column) and the
distance between the COMs, calculated from whichever components you chose
in "pull-coord1-dim" (in the second column).

For example, this snippet shows the expected two-column format:

---

0.  1.74222
0.1000  1.72375
0.2000  1.72106
0.3000  1.71755
0.4000  1.72203
0.5000  1.72997
0.6000  1.71092
0.7000  1.69819
0.8000  1.6921
0.9000  1.70055
1.  1.69953
1.1000  1.7005
1.2000  1.69984
1.3000  1.70274
1.4000  1.70807
1.5000  1.71815
1.6000  1.73492
1.7000  1.74009
1.8000  1.76735
1.9000  1.77583
2.  1.76892
...



You should copy/paste your .mdp file and a snippet of your pullx.xvg
output. It seems like you might have done something different from what you
thought.
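
For reference, a y-direction pull in the .mdp would look something like the
following (group names and values are illustrative, not taken from your
setup):

pull                 = yes
pull-ngroups         = 2
pull-ncoords         = 1
pull-group1-name     = Protein
pull-group2-name     = LIG
pull-coord1-type     = umbrella
pull-coord1-geometry = distance
pull-coord1-groups   = 1 2
pull-coord1-dim      = N Y N    ; use only the y component of the distance
pull-coord1-k        = 1000     ; kJ mol^-1 nm^-2
pull-nstxout         = 500      ; writes pullx.xvg
pull-nstfout         = 500      ; writes pullf.xvg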

- John


[gmx-users] gromacs is not recognising opls ff

2019-08-26 Thread Ayesha Fatima
Dear All,
I have come across another issue.
When I want to use the OPLS .itp for cholesterol, it gives me this error: 
"Fatal error:
Residue 'OL' not found in residue topology database"
It does not take CHOL as the residue name, as given below:

[ atoms ]
;   nr   type  resnr residue  atom   cgnr charge   mass  typeB
   chargeB  massB
1  opls_158   1   CHOL  C1  1   0.2050  12.011
2  opls_140   1   CHOL  H1  1   0.0600  1.008
3  opls_154   1   CHOL  O1  1  -0.6830  15.9994

Any suggestions?
Thank you
regards


[gmx-users] wham analysis

2019-08-26 Thread Negar Parvizi

Dear all,
I used Justin's tutorial (Tutorial 3: Umbrella Sampling: GROMACS Tutorial) 
for my file, which is a protein-ligand complex. The pulling force was in the 
Y direction. When umbrella sampling finished, "wham" couldn't analyse the 
data because wham is in the z direction. What should I do now for the wham 
analysis? How can I change it to the Y direction? Justin said the following, 
which I didn't understand:
"WHAM does not presuppose the axis or vector; it does what you tell it. If 
you're referring to the x-axis label in the PMF profile being "z," that is 
just a generic (and perhaps imprecise) label that should be changed to the 
Greek character xi, per conventional notation."

So I decided to copy the error.
Here is the error:
>Found 25 tpr and 25 pull force files in tpr-files.dat and pullf-files.dat, 
>respectively
>Reading 12 tpr and pullf files
>Automatic determination of boundaries...
>Reading file umbrella0.tpr, VERSION 5.1.4 (single precision)
>File umbrella0.tpr, 1 coordinates, geometry "distance", dimensions [N N Y], (1 
>dimensions)
    Pull group coordinates not expected in pullx files.
    crd 0) k = 1000   position = 0.840198
    Use option -v to see this output for all input tpr files

>Reading pull force file with pull geometry distance and 1 pull dimensions
>Expecting these columns in pull file:
    0 reference columns for each individual pull coordinate
    1 data columns for each pull coordinate
>With 1 pull groups, expect 2 columns (including the time column)
>Reading file umbrella71.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella98.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella111.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella119.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella139.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella146.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella157.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella180.tpr, VERSION 5.1.4 (single precision)
>Reading file umbrella202.tpr, VERSION 5.1.4 (single precision)


I would appreciate any help.
Thanks in advance,
Negar


[gmx-users] How to restrain small molecules in a region where the small molecules can move freely?

2019-08-26 Thread wtzou
Dear all,
I want to restrain some water molecules to a big cluster, such that the 
water can move freely within the cluster but cannot move out of it. How can 
I implement this restraint?
Thank you very much!
Sincerely,
Wentian






--
Institute of Theoretical and Computational Chemistry,Nanjing University
NO. 163 Xianlin Avenue,Qixia District,Nanjing ,Jiangsu Province
Nanjing 210023,China