Re: [gmx-users] Best force Field for Protein

2018-11-19 Thread Kovalskyy, Dmytro
Justin,

>Then there are force fields that do better than others, but
>the choice must be yours based on a thorough review of available
>literature; 

Can you provide a list of FFs more suitable for folding and IDP simulations?

Thank you,

Dmytro


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Justin Lemkul 

Sent: Monday, November 19, 2018 5:46 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Best force Field for Protein

On 11/18/18 10:46 AM, Edjan Silva wrote:
> Dear Gromacs users, I will perform a simulation of a protein with explicit
> solvent to verify structural changes in temperatures of 300 k and 310 k.

I doubt any force field will give you any meaningful differences at such
a small temperature interval.

> Given the various force fields available on the gromacs platform, which one
> would be most appropriate for my experiment?

There's no real way to know. Different force fields do well at different
things. Simulating "a protein" can mean a lot of different things. Are
you simulating a well-folded protein with a stable tertiary structure?
If so, just about any modern force field will perform equivalently. Are
you trying to fold a protein or simulate something intrinsically
disordered? Then there are force fields that do better than others, but
the choice must be yours based on a thorough review of available
literature; it's not something that strangers on an Internet forum can
conclude for you :)

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Running GPU issue

2018-11-15 Thread Kovalskyy, Dmytro
The error you saw is clear evidence of a bug. There are a few things you
might try to help narrow things down.
* Is the error reproducible on each run of mdrun?

Yes



* 2018.3 was not designed or tested on CUDA 10, which was released rather
later. If you have an earlier CUDA, please see if building with it
alleviates the error

No, I have only CUDA 10.0 installed. Given that I did get it running (see my replies 
to the next questions), please let me know whether I should go with CUDA 9.



* The error message could be triggered by multiple parts of the code; what
do you get with mdrun -nb gpu -pme cpu?

This way I get MD running with no crash, using all 72 cores (dual Xeon Gold 6140).

With my system I got:

$ gmx mdrun -deffnm md -v -nb gpu -pme cpu

               (ns/day)    (hour/ns)
Performance:     56.867        0.422
$ nvidia-smi
|   0  Quadro P5000        On   | :D5:00.0      On |          Off |
| 35%   56C    P0    63W / 180W |   1400MiB / 16256MiB |     37%      Default |



* Do you get any more diagnostics from running with a build with cmake
-DCMAKE_BUILD_TYPE=Debug?

Here is where it gets interesting. GROMACS built with Debug = ON runs MD stably with 
no additional parameters, i.e.
$ gmx mdrun -deffnm md -v
runs with no crash. However, only half of the Xeon cores are used (monitored 
with htop).


               (ns/day)    (hour/ns)
Performance:     69.577        0.345
$ nvidia-smi
|   0  Quadro P5000        On   | :D5:00.0      On |          Off |
| 38%   62C    P0    68W / 180W |   1430MiB / 16256MiB |     56%      Default |
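
(As a hedged sketch of what I could try to engage all 72 hardware threads explicitly; 
the counts below are just illustrative for this dual Xeon Gold 6140 node, and whether 
this beats the default of one thread per physical core is workload-dependent:

$ gmx mdrun -deffnm md -v -ntmpi 2 -ntomp 36 -pin on
)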



If I run
$ gmx mdrun -deffnm md -v -nb gpu -pme cpu
then all CPU cores are busy but the performance is poor:
               (ns/day)    (hour/ns)
Performance:     20.249        1.185
$ nvidia-smi
|   0  Quadro P5000        On   | :D5:00.0      On |          Off |
| 37%   57C    P0    58W / 180W |   1408MiB / 16256MiB |     22%      Default |


Finally, if I run the same MD with no GPU at all (i.e. -nb cpu), I get:

               (ns/day)    (hour/ns)
Performance:     47.536        0.505


Does this help?


Dmytro



From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Wednesday, November 14, 2018 1:37 PM
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Running GPU issue

Hi,

I expect that that warning is fine (for now).

The error you saw is clear evidence of a bug. There are a few things you
might try to help narrow things down.
* Is the error reproducible on each run of mdrun?
* 2018.3 was not designed or tested on CUDA 10, which was released rather
later. If you have an earlier CUDA, please see if building with it
alleviates the error
* The error message could be triggered by multiple parts of the code; what
do you get with mdrun -nb gpu -pme cpu?
* Do you get any more diagnostics from running with a build with cmake
-DCMAKE_BUILD_TYPE=Debug?
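
(For the last point, a minimal sketch of a separate Debug configure, assuming a fresh 
out-of-tree build directory with an illustrative name and otherwise the same options 
as your original build:

$ mkdir gromacs-2018.3/build-debug && cd gromacs-2018.3/build-debug
$ cmake .. -DGMX_GPU=ON -DCMAKE_BUILD_TYPE=Debug
$ make -j 36
)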

Mark

On Wed, Nov 14, 2018 at 1:09 PM Kovalskyy, Dmytro 
wrote:

> I forgot to add. While compiling GROMACS I got the following error at the very
> beginning:
>
>
> [  3%] Built target gpu_utilstest_cuda
> /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu: In
> function 'int do_sanity_checks(int, cudaDeviceProp*)':
> /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu:258:28:
> warning: 'cudaError_t cudaThreadSynchronize()' is deprecated
> [-Wdeprecated-declarations]
>  if (cudaThreadSynchronize() != cudaSuccess)
> ^
> /usr/local/cuda/include/cuda_runtime_api.h:947:46: note: declared here
>  extern __CUDA_DEPRECATED __host__ cudaError_t CUDARTAPI
> cudaThreadSynchronize(void);
>
>
> But make completed its job without failing.
>
>
>
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Tuesday, November 13, 2018 10:29 PM
> To: gmx-us...@gromacs.org
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Running GPU issue
>
> Hi,
>
> It can share.
>
> Mark
>
> On Mon, Nov 12, 2018 at 10:19 PM Kovalskyy, Dmytro 
> wrote:
>
> > Hi,
> >
> >
> >
> > To run GROMACS on a GPU, does it require an exclusive GPU card, or can
> > GROMACS share the video card with the X server?
> >
> >
> > Thank you
> >
> >
> > Dmytro

Re: [gmx-users] Running GPU issue

2018-11-14 Thread Kovalskyy, Dmytro
I forgot to add. While compiling GROMACS I got the following error at the very 
beginning:


[  3%] Built target gpu_utilstest_cuda
/usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu: In function 
'int do_sanity_checks(int, cudaDeviceProp*)':
/usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu:258:28: 
warning: 'cudaError_t cudaThreadSynchronize()' is deprecated 
[-Wdeprecated-declarations]
 if (cudaThreadSynchronize() != cudaSuccess)
^
/usr/local/cuda/include/cuda_runtime_api.h:947:46: note: declared here
 extern __CUDA_DEPRECATED __host__ cudaError_t CUDARTAPI 
cudaThreadSynchronize(void);


But make completed its job without failing.





From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Tuesday, November 13, 2018 10:29 PM
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Running GPU issue

Hi,

It can share.

Mark

On Mon, Nov 12, 2018 at 10:19 PM Kovalskyy, Dmytro 
wrote:

> Hi,
>
>
>
> To run GROMACS on a GPU, does it require an exclusive GPU card, or can
> GROMACS share the video card with the X server?
>
>
> Thank you
>
>
> Dmytro


Re: [gmx-users] Running GPU issue

2018-11-14 Thread Kovalskyy, Dmytro
 sigma= 0
   y:
 E0   = 0
 omega= 0
 t0   = 0
 sigma= 0
   z:
 E0   = 0
 omega= 0
 t0   = 0
 sigma= 0
grpopts:
   nrdf: 16011.7  141396
   ref-t: 300 300
   tau-t: 0.1 0.1
annealing:  No  No
annealing-npoints:   0   0
   acc:0   0   0
   nfreeze:   N   N   N
   energygrp-flags[  0]: 0

Changing nstlist from 20 to 80, rlist from 0.931 to 1.049

Using 1 MPI thread
Using 36 OpenMP threads 

1 GPU auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:0,PME:0
Application clocks (GPU clocks) for Quadro P5000 are (4513,1733)
Application clocks (GPU clocks) for Quadro P5000 are (4513,1733)
Pinning threads with an auto-selected logical core stride of 2
System total charge: -0.000
Will do PME sum in reciprocal space for electrostatic interactions.

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen 
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
  --- Thank You ---  

Using a Gaussian width (1/beta) of 0.288146 nm for Ewald
Potential shift: LJ r^-12: -3.541e+00 r^-6: -1.882e+00, Ewald -1.111e-05
Initialized non-bonded Ewald correction tables, spacing: 8.85e-04 size: 1018

Long Range LJ corr.:  3.3851e-04
Generated table with 1024 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1024 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 1024 data points for LJ12.
Tabscale = 500 points/nm
Generated table with 1024 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1024 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1024 data points for 1-4 LJ12.
Tabscale = 500 points/nm

Using GPU 8x8 nonbonded short-range kernels

Using a dual 8x4 pair-list setup updated with dynamic, rolling pruning:
  outer list: updated every 80 steps, buffer 0.149 nm, rlist 1.049 nm
  inner list: updated every 10 steps, buffer 0.003 nm, rlist 0.903 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
  outer list: updated every 80 steps, buffer 0.292 nm, rlist 1.192 nm
  inner list: updated every 10 steps, buffer 0.043 nm, rlist 0.943 nm

Using Lorentz-Berthelot Lennard-Jones combination rule


Initializing LINear Constraint Solver

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
  --- Thank You ---  

The number of constraints is 8054

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
  --- Thank You ---  


Intra-simulation communication will occur every 20 steps.
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
  --- Thank You ---  

There are: 78646 Atoms

Started mdrun on rank 0 Tue Nov 13 16:16:25 2018
           Step           Time
              0            0.0


---
Program: gmx mdrun, version 2018.3
Source file: src/gromacs/gpu_utils/cudautils.cu (line 110)

Fatal error:
HtoD cudaMemcpyAsync failed: invalid argument

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


Thank you,

Dmytro


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Tuesday, November 13, 2018 10:29 PM
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Running GPU issue

Hi,

It can share.

Mark

On Mon, Nov 12, 2018 at 10:19 PM Kovalskyy, Dmytro 
wrote:

> Hi,
>
>
>
> To run GROMACS on a GPU, does it require an exclusive GPU card, or can
> GROMACS share the video card with the X server?
>
>
> Thank you
>
>
> Dmytro

[gmx-users] Running GPU issue

2018-11-12 Thread Kovalskyy, Dmytro
Hi,



To run GROMACS on a GPU, does it require an exclusive GPU card, or can GROMACS 
share the video card with the X server?


Thank you


Dmytro