[gmx-users] Free-energy on GMX-2019.1 (lower performance on GPU)

2019-03-14 Thread praveen kumar
Dear All

I am trying to run a free-energy simulation using the TI method in GROMACS
2019.1 on a GPU machine (containing two NVIDIA GeForce 1080 Ti cards).
Unfortunately, I am unable to get the free-energy run to use the GPU.

A normal MD simulation (without free energy) runs perfectly well on the GPU
and gives us an excellent speed-up: for example, a 100 K atom system reaches
~80 ns per day on one GPU card (>80% GPU usage).
When I run the free-energy simulation for the same system, the performance
drops drastically to ~0.02 ns per day (0% GPU usage).
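
To narrow this down, it helps to request the offload explicitly and then read
the log. A minimal sketch (standard gmx mdrun options; the -deffnm name is a
placeholder):

gmx mdrun -deffnm npt_fe -nb gpu -v
# afterwards, check npt_fe.log: the hardware-detection section near the top
# lists the GPUs found and which tasks were assigned to them, and the cycle
# table at the end shows where the wall time actually went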

I am pasting the MDP files for the normal MD simulation and the free-energy
simulation below.
npt.mdp (MD simulation)

#
title                   = MD simulation
; Run parameters
integrator              = md                 ; leap-frog integrator
nsteps                  = 100000000          ; 2 fs * 100,000,000 steps = 200 ns
dt                      = 0.002              ; 2 fs
; Output control
nstxout                 = 5000               ; save coordinates every 10.0 ps
nstvout                 = 5000               ; save velocities every 10.0 ps
nstfout                 = 5000               ; save forces every 10.0 ps
nstenergy               = 5000               ; save energies every 10.0 ps
nstlog                  = 5000               ; update log file every 10.0 ps
nstxout-compressed      = 5000               ; save compressed coordinates every 10.0 ps; replaces nstxtcout
compressed-x-grps       = System             ; replaces xtc-grps
; Bond parameters
continuation            = yes                ; restarting after NVT
constraint_algorithm    = lincs              ; holonomic constraints
constraints             = h-bonds            ; H bonds constrained
lincs_iter              = 1                  ; accuracy of LINCS
lincs_order             = 4                  ; also related to accuracy
; Neighbor searching
cutoff-scheme           = Verlet
ns_type                 = grid               ; search neighboring grid cells
nstlist                 = 10                 ; 20 fs, largely irrelevant with Verlet
rcoulomb                = 1.2                ; short-range electrostatic cutoff (in nm)
rvdw                    = 1.2                ; short-range van der Waals cutoff (in nm)
rvdw-switch             = 1.0
vdwtype                 = cutoff
vdw-modifier            = force-switch
rlist                   = 1.2
; Electrostatics
coulombtype             = PME                ; particle mesh Ewald for long-range electrostatics
pme_order               = 4                  ; cubic interpolation
fourierspacing          = 0.16               ; grid spacing for FFT
; Temperature coupling is on
tcoupl                  = V-rescale          ; modified Berendsen thermostat
tc-grps                 = system             ; Water  ; two coupling groups - more accurate
tau_t                   = 0.1                ; 0.1    ; time constant, in ps
ref_t                   = 360                ; 340    ; reference temperature, one for each group, in K
; Pressure coupling is on
;pcoupl                 = no
pcoupl                  = Parrinello-Rahman  ; pressure coupling on in NPT
pcoupltype              = isotropic          ; uniform scaling of box vectors
tau_p                   = 2.0                ; time constant, in ps
ref_p                   = 1.0                ; 1.0    ; reference pressure, in bar
compressibility         = 4.5e-5             ; 4.5e-5 ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc                     = xyz                ; 3-D PBC
; Dispersion correction
DispCorr                = no                 ; account for cut-off vdW scheme
; Velocity generation
gen_vel                 = no                 ; velocity generation is off
##
npt.mdp (for free-energy simulation)
##

; Run control
integrator              = sd                 ; Langevin dynamics
tinit                   = 0
dt                      = 0.002
nsteps                  = 50000              ; 100 ps
nstcomm                 = 100
; Output control
nstxout                 = 500
nstvout                 = 500
nstfout                 = 0
nstlog                  = 500
nstenergy               = 500
nstxout-compressed      = 0
; Neighbor searching and short-range nonbonded interactions
cutoff-scheme           = Verlet
nstlist                 = 20
ns_type                 = grid
pbc                     = xyz
rlist                   = 1.2
; Electrostatics
coulombtype             = PME
rcoulomb                = 1.2
; van der Waals
vdwtype                 = cutoff
vdw-modifier            = potential-switch
rvdw-switch             = 1.0
rvdw                    = 1.2
; Apply long-range dispersion corrections for energy and pressure
DispCorr                = EnerPres
; Spacing for the PME/PPPM FFT grid
fourierspacing          = 0.12
; EWALD/PME/PPPM parameters
pme_order               = 6
ewald_rtol               = 1e-06
epsilon_surface         = 0
; Temperature coupling
; tcoupl is implicitly handled by the sd integrator
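
The file as pasted ends at the thermostat comment, so the free-energy block
itself appears to be truncated. For a TI run, a section along the following
lines would normally come next; this is only a sketch built from the standard
GROMACS free-energy .mdp options, with illustrative lambda vectors and
soft-core values and a hypothetical couple-moltype name:

; Free energy control (illustrative sketch)
free_energy             = yes
init_lambda_state       = 0
delta_lambda            = 0
calc_lambda_neighbors   = 1
vdw_lambdas             = 0.00 0.25 0.50 0.75 1.00
coul_lambdas            = 0.00 0.25 0.50 0.75 1.00
sc-alpha                = 0.5
sc-power                = 1
sc-sigma                = 0.3
couple-moltype          = MOL                ; hypothetical molecule name
couple-lambda0          = vdw-q
couple-lambda1          = none
couple-intramol         = no
nstdhdl                 = 100

Note that in GROMACS 2019 the perturbed nonbonded interactions are computed in
CPU free-energy kernels, and PME does not run on the GPU when charges are
perturbed, so a free-energy run is expected to lean more heavily on the CPU
than a plain MD run. 0% GPU usage, however, suggests the GPU is not used at
all, which the task-assignment lines in md.log should confirm.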

Re: [gmx-users] Steps for Markov State Modelling

2019-03-14 Thread Justin Lemkul




On 3/14/19 4:31 PM, Dallas Warren wrote:

https://www.google.com/search?q=MSM+analysis+Frank+Noe


Also relevant:

https://www.livecomsjournal.org/article/5965-introduction-to-markov-state-modeling-with-the-pyemma-software-article-v1-0
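
On the packages question that started the thread: the LiveCoMS tutorial above
is built around PyEMMA, so a minimal environment sketch would be (package
names as published on PyPI; any recent Python 3 should do):

pip install pyemma mdtraj matplotlib notebook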

-Justin


Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Fri, 15 Mar 2019 at 06:57, Soham Sarkar  wrote:


Dear all,
I have a protein trajectory in xtc format. I want to do MSM analysis on
this trajectory to see how the process evolves and to identify the
meta-stable states. I have followed the video series on MSM by Frank Noe and
team, but it is not clear to me how to start. I have some questions:
1) What Python packages do I need to install?
2) How should I start?
3) What kind of data have they generated?
Any introductory steps for MSM analysis, links, or hands-on tutorial videos
would be highly appreciated.
- Soham



--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Steps for Markov State Modelling

2019-03-14 Thread Dallas Warren
https://www.google.com/search?q=MSM+analysis+Frank+Noe

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Fri, 15 Mar 2019 at 06:57, Soham Sarkar  wrote:

> Dear all,
> I have a protein trajectory in xtc format. I want to do MSM analysis on
> this trajectory to see how the process evolves and to identify the
> meta-stable states. I have followed the video series on MSM by Frank Noe and
> team, but it is not clear to me how to start. I have some questions:
> 1) What Python packages do I need to install?
> 2) How should I start?
> 3) What kind of data have they generated?
> Any introductory steps for MSM analysis, links, or hands-on tutorial videos
> would be highly appreciated.
> - Soham


[gmx-users] Steps for Markov State Modelling

2019-03-14 Thread Soham Sarkar
Dear all,
I have a protein trajectory in xtc format. I want to do MSM analysis on
this trajectory to see how the process evolves and to identify the
meta-stable states. I have followed the video series on MSM by Frank Noe and
team, but it is not clear to me how to start. I have some questions:
1) What Python packages do I need to install?
2) How should I start?
3) What kind of data have they generated?
Any introductory steps for MSM analysis, links, or hands-on tutorial videos
would be highly appreciated.
- Soham


Re: [gmx-users] PDB file that can be read in Gromacs

2019-03-14 Thread RAHUL SURESH
Hi,

First, please go through the GROMACS documentation and tutorials; you will
find the clues you need there.

The topology of any non-protein molecule has to be generated with third-party
software or scripts, depending on the force field. Have a detailed look at the
tutorials; that will definitely help you. One such route is sketched below.
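
As one concrete example of such a route (a sketch, assuming the GAFF/Amber
force-field family and that the acpype tool is installed; "substance.pdb" is
a placeholder):

acpype -i substance.pdb -c bcc   # writes GROMACS .itp/.top files with GAFF
                                 # atom types and AM1-BCC charges

Whether GAFF is appropriate depends on the force field used for the rest of
your system.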

Thank you


On Fri 15 Mar, 2019, 12:56 AM Phuong Chau,  wrote:

> Hello everyone,
>
> I want to generate a GROMACS topology for a substance (a single chemical)
> for which I have a PDB file generated by RDKit from the substance's SMILES
> representation (MolToPDBFile). However, when I input the PDB file generated
> by RDKit, it shows the error "Residue 'UNL' not found in residue topology
> database".
>
> The general idea is:
> Input: name of a substance (single chemical)
> Output: a PDB file of the substance (it does not have to be generated by
> RDKit) and the topology file of that substance, generated by GROMACS.
>
> Could anyone tell me a possible solution to this problem?
>
> I am new to GROMACS.
>
> Thank you so much for your help.
> Phuong Chau
> Smith College '20
> Engineering and Data Science Major


Re: [gmx-users] PDB file that can be read in Gromacs

2019-03-14 Thread Justin Lemkul




On 3/14/19 3:21 PM, Phuong Chau wrote:

Hello everyone,

I want to generate a GROMACS topology for a substance (a single chemical)
for which I have a PDB file generated by RDKit from the substance's SMILES
representation (MolToPDBFile). However, when I input the PDB file generated
by RDKit, it shows the error "Residue 'UNL' not found in residue topology
database".

The general idea is:
Input: name of a substance (single chemical)
Output: a PDB file of the substance (it does not have to be generated by
RDKit) and the topology file of that substance, generated by GROMACS.

Could anyone tell me a possible solution to this problem?


pdb2gmx isn't magic :)

http://manual.gromacs.org/current/user-guide/run-time-errors.html#residue-xxx-not-found-in-residue-topology-database

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] PDB file that can be read in Gromacs

2019-03-14 Thread Phuong Chau
Hello everyone,

I want to generate a GROMACS topology for a substance (a single chemical)
for which I have a PDB file generated by RDKit from the substance's SMILES
representation (MolToPDBFile). However, when I input the PDB file generated
by RDKit, it shows the error "Residue 'UNL' not found in residue topology
database".

The general idea is:
Input: name of a substance (single chemical)
Output: a PDB file of the substance (it does not have to be generated by
RDKit) and the topology file of that substance, generated by GROMACS.

Could anyone tell me a possible solution to this problem?

I am new to GROMACS.

Thank you so much for your help.
Phuong Chau
Smith College '20
Engineering and Data Science Major


Re: [gmx-users] Simulation is very slow

2019-03-14 Thread Benson Muite

Dear Yeongkyu,

In what sense is it slow? What are you comparing it to, and for what input
data?

Are you using both GPUs, or only one at a time?

Do you have any idea of the relative performance difference between the two
GPUs you have? A recent inquiry on the list asked for a comparison between
the two GPUs you mention.


Benson

On 3/14/19 5:56 PM, 이영규 wrote:

Dear gromacs users,

I installed gromacs 2019 today. When I run gromacs, it is really slow. I
don't know the reason. I am using GTX 1080 TI and TITAN XP for GPU and I
have 8 cores. Please help me.

Sincerely



Re: [gmx-users] Simulation is very slow

2019-03-14 Thread Moir, Michael (MMoir)
Did you remember to use the -DGMX_GPU=ON CMake option when you built the new
version?
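
That is, when configuring, something along these lines (a sketch; the install
prefix and job count are placeholders):

cmake .. -DGMX_GPU=ON -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2019
make -j 8
make install

A GPU-enabled build should then report "GPU support: CUDA" in the version
header at the top of every md.log.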

Mike Moir

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of 이영규
Sent: Thursday, March 14, 2019 8:56 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [**EXTERNAL**] [gmx-users] Simulation is very slow

Dear gromacs users,

I installed gromacs 2019 today. When I run gromacs, it is really slow. I
don't know the reason. I am using GTX 1080 TI and TITAN XP for GPU and I
have 8 cores. Please help me.

Sincerely

-- 

Yeongkyu Lee

M.S student

Department of Physics

501, Jinjudaero, Jinju, Gyeongnam, 52828, Korea

Email: monsterpl...@gmail.com

Phone: +82-10-8771-2190


Re: [gmx-users] Simulation is very slow

2019-03-14 Thread Kevin Boyd
Hi,

We can't help without more information. Have you checked the log file to
make sure the GPUs are being seen/used? Can you post a link to a sample log
file?
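
For a quick first pass before posting the full file (standard tools; md.log is
mdrun's default log name):

grep -i gpu md.log | head    # hardware detection and task-assignment lines
nvidia-smi                   # live GPU utilization while mdrun is running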

Kevin

On Thu, Mar 14, 2019 at 11:57 AM 이영규  wrote:

> Dear gromacs users,
>
> I installed gromacs 2019 today. When I run gromacs, it is really slow. I
> don't know the reason. I am using GTX 1080 TI and TITAN XP for GPU and I
> have 8 cores. Please help me.
>
> Sincerely
>
> --
>
> Yeongkyu Lee
>
> M.S student
>
> Department of Physics
>
> 501, Jinjudaero, Jinju, Gyeongnam, 52828, Korea
>
> Email: monsterpl...@gmail.com
>
> Phone: +82-10-8771-2190

[gmx-users] Simulation is very slow

2019-03-14 Thread 이영규
Dear gromacs users,

I installed gromacs 2019 today. When I run gromacs, it is really slow. I
don't know the reason. I am using GTX 1080 TI and TITAN XP for GPU and I
have 8 cores. Please help me.

Sincerely

-- 

Yeongkyu Lee

M.S student

Department of Physics

501, Jinjudaero, Jinju, Gyeongnam, 52828, Korea

Email: monsterpl...@gmail.com

Phone: +82-10-8771-2190


Re: [gmx-users] User tabulated potential and VdW Modifier

2019-03-14 Thread Kiesel, Matthias
Hello everybody,

I sent a message a week ago with the same title, and I hope this one gets put
into the right thread (I did not know how to reply directly). I have since
done some more testing, and I think I have worked out how everything behaves,
so I am leaving this here in case someone else ever wonders.

vdw-modifier = potential-shift does not shift a user-specified potential. This
can be seen from the substantial energy drift in the NVE simulations in my
first message, which was eliminated by feeding in a user table containing the
already-shifted potential.
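
For reference, what potential-shift does to a built-in potential V(r) with
cutoff r_c is

    V_shift(r) = V(r) - V(r_c)   for r <= r_c,   and 0 beyond the cutoff,

which is what pre-shifting the user table reproduces by hand.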

The modifier does, however, add an additional term to the long-range
correction (LRC), namely the correction for the "lost" potential energy inside
the cutoff sphere (see Frenkel and Smit). This correction is not calculated
from the user-tabulated potential but rather from the LJ potential (and even
using the latter I was not able to reproduce the value completely; I am
3 kJ/mol short, which is approximately 0.2%).

From the behaviour of the potential modifier I used, I would also guess that
the other modifiers do not work with user tables, and that GROMACS overrides
whatever I choose with "none".

kind regards,

Matthias


[gmx-users] Dielectric constant

2019-03-14 Thread Jianna Blocchi
Dear gromacs users,

How can I calculate the dielectric constant of a polymer in GROMACS?
Thanks in advance.
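
One standard route is gmx dipoles, which estimates the static dielectric
constant from the fluctuations of the total dipole moment (a sketch; file
names are placeholders, and -temp should match the simulation temperature):

gmx dipoles -f traj.xtc -s topol.tpr -temp 300

For a polymer, note that the total-dipole fluctuations can converge slowly,
so a long trajectory is usually needed.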


[gmx-users] FW: FW: Issue with CUDA and gromacs

2019-03-14 Thread Tafelmeier, Stefanie
Dear all,

I am not sure whether my earlier email reached you, but again many thanks for
your reply, Szilárd.

As written below, we are still facing a problem with the performance of our
workstation. I wrote before because of the error message that kept occurring
for mdrun simulations:

Assertion failed:
Condition: stat == cudaSuccess
Asynchronous H2D copy failed

As I mentioned, all installed versions (GROMACS, CUDA, nvcc, gcc) are now the
newest ones.

If I run mdrun without further settings, it leads to this error message. If I
run it and choose the thread count directly, mdrun performs well, but only
for -nt values between 1 and 22; higher values again lead to the error
message above.
 
In order to investigate in more detail, I tried different values of -nt,
-ntmpi and -ntomp, also combined with -npme:
-   The best performance in terms of ns/day is with -nt 22 or -ntomp 22
alone, but then only 22 threads are involved. That is fine if I run more than
one mdrun simultaneously, as I can distribute the other 66 threads (a sketch
of that layout follows below this list). The GPU usage is then around 65%.
-   A similarly good performance is reached with mdrun -ntmpi 4 -ntomp 18
-npme 1 -pme gpu -nb gpu, but then 44 threads are involved. The GPU usage is
then around 50%.
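
Concretely, the two-run layout I mean is something like this (a sketch; the
-deffnm names are placeholders, and the pinning options are standard gmx
mdrun flags that keep the two runs off the same cores):

gmx mdrun -deffnm run1 -ntomp 22 -pin on -pinoffset 0  &
gmx mdrun -deffnm run2 -ntomp 22 -pin on -pinoffset 22 &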

I read the information at
http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html,
which was very helpful, but some things are still not clear to me:
Is there any other way to enhance the performance? What is the reason that
the -nt maximum is at 22 threads? Could this be connected to the sockets of
our workstation (see details below)? It is not clear to me how a thread count
(-nt) higher than 22 can lead to the error regarding the asynchronous H2D
copy.

Please excuse all these questions. I would appreciate it a lot if you have a
hint for this problem as well.

Best regards,
Steffi

-

The workstation details are:
Running on 1 node with total 44 cores, 88 logical cores, 1 compatible GPU
Hardware detected:

  CPU info:
Vendor: Intel
Brand:  Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
Family: 6   Model: 85   Stepping: 4
Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh cmov 
cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm 
pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic

Number of AVX-512 FMA units: 2
  Hardware topology: Basic
Sockets, cores, and logical processors:
  Socket  0: [   0  44] [   1  45] [   2  46] [   3  47] [   4  48] [   5  
49] [   6  50] [   7  51] [   8  52] [   9  53] [  10  54] [  11  55] [  12  
56] [  13  57] [  14  58] [  15  59] [  16  60] [  17  61] [  18  62] [  19  
63] [  20  64] [  21  65]
  Socket  1: [  22  66] [  23  67] [  24  68] [  25  69] [  26  70] [  27  
71] [  28  72] [  29  73] [  30  74] [  31  75] [  32  76] [  33  77] [  34  
78] [  35  79] [  36  80] [  37  81] [  38  82] [  39  83] [  40  84] [  41  
85] [  42  86] [  43  87]
  GPU info:
Number of GPUs detected: 1
#0: NVIDIA Quadro P6000, compute cap.: 6.1, ECC:  no, stat: compatible

-



-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of
Szilárd Páll
Sent: Thursday, 31 January 2019 17:15
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] FW: Issue with CUDA and gromacs

On Thu, Jan 31, 2019 at 2:14 PM Szilárd Páll  wrote:
>
> On Wed, Jan 30, 2019 at 5:15 PM Tafelmeier, Stefanie
>  wrote:
> >
> > Dear all,
> >
> > We are facing an issue with the CUDA toolkit.
> > We tried several combinations of GROMACS versions and CUDA toolkits. No
> > toolkit older than 9.2 could be tried, as there are no NVIDIA drivers
> > for a Quadro P6000 available for the older toolkits.
>
> Install the latest 410.xx drivers and it will work; the NVIDIA driver
> download website (https://www.nvidia.com/Download/index.aspx)
> recommends 410.93.
>
> Here's a system with a CUDA 10-compatible driver running on a system with
> a P6000: https://termbin.com/ofzo

Sorry, I misread that as "CUDA >=9.2 was not possible".

Note that the driver is backward compatible, so you can use a new
driver with older CUDA versions.

Also note that the oldest driver for which NVIDIA claims P6000 support is
390.59, which is, as far as I know, one generation older than the 396 that
the CUDA 9.2 toolkit came with. This is, however, not something I'd recommend
pursuing; use a new driver from the official site with any CUDA version that
GROMACS supports and it should be fine.

>
> > GROMACS    CUDA    Error message
> > 2019       10.0    gmx mdrun:
> >                    Assertion failed:
> >                    Condition: stat == cudaSuccess
> >                    Asynchronous H2D copy failed
> > 2019       9.2     gmx mdrun:
> >                    Assertion failed:
> >                    Condition: stat == cudaSuccess
> >                    Asynchronous H2D copy failed
> > 2018.5     9.2     

Re: [gmx-users] gromacs performance

2019-03-14 Thread Никита Шалин
Dear Mr Szilárd Páll,

Please give me some advice. I am going to purchase a video card for MD, but I
am torn between one RTX 2080 Ti and two GTX 1070 Ti cards.
Which would be better?

Thank you in advance


>Wednesday, 13 March 2019, 19:22 +03:00 from Szilárd Páll :
>
>Hi,
>
>First off, please post full log files; these contain much more than just
>the excerpts you paste in.
>
>Secondly, for parallel, multi-node runs this hardware is just too GPU-dense
>to achieve a good CPU-GPU load balance and scaling will be really hard too
>in most cases, but details will depend on the input systems and settings
>(info which we would see in the full log).
>
>Lastly, in general, running a decomposition assuming one rank per core with
>GPUs is generally inefficient, typically 2-3 ranks per GPU are ideal (but
>in this case the CPU-GPU load balance may be a stronger bottleneck).
>
>Cheers,
>--
>Szilárd
>
>
>On Fri, Mar 8, 2019 at 11:12 PM Carlos Rivas < cri...@infiniticg.com > wrote:
>
>> Hey guys,
>> Anybody running GROMACS on AWS?
>>
>> I have a strong IT background, but zero understanding of GROMACS or
>> OpenMPI (even less of using SGE on AWS); I am just trying to help some
>> PhD folks with their work.
>>
>> When I run GROMACS using thread-MPI on a single, very large node on AWS,
>> things work fairly fast.
>> However, when I switch from thread-MPI to OpenMPI, even though everything
>> is detected properly, the performance is horrible.
>> This is what I am submitting to sge:
>>
>> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh
>> #!/bin/bash
>> #
>> #$ -cwd
>> #$ -j y
>> #$ -S /bin/bash
>> #$ -e out.err
>> #$ -o out.out
>> #$ -pe mpi 256
>>
>> cd /shared/charmm-gui/gromacs
>> touch start.txt
>> /bin/bash /shared/charmm-gui/gromacs/run_eq.bash
>> touch end.txt
>>
>> and this is my test script , provided by one of the Doctors:
>>
>> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash
>> #!/bin/bash
>> export GMXMPI="/usr/bin/mpirun --mca btl ^openib
>> /shared/gromacs/5.1.5/bin/gmx_mpi"
>>
>> export MDRUN="mdrun -ntomp 2 -npme 32"
>>
>> export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"
>>
>> for comm in min eq; do
>> if [ $comm == min ]; then
>>echo ${comm}
>>$GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c
>> step5_charmm2gmx.pdb -p topol.top
>>$GMXMPI $MDRUN -deffnm step6.0_minimization
>>
>> fi
>>
>> if [ $comm == eq ]; then
>>   for step in `seq 1 6`;do
>>echo $step
>>if [ $step -eq 1 ]; then
>>   echo ${step}
>>   $GMX grompp -f step6.${step}_equilibration.mdp -o
>> step6.${step}_equilibration.tpr -c step6.0_minimization.gro -r
>> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>>   $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
>>fi
>>if [ $step -gt 1 ]; then
>>   old=`expr $step - 1`
>>   echo $old
>>   $GMX grompp -f step6.${step}_equilibration.mdp -o
>> step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro -r
>> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>>   $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
>>fi
>>   done
>> fi
>> done
>>
>>
>>
>>
>> During the run I see this output, and I get really excited, expecting
>> blazing speeds, and yet it is much worse than on a single node:
>>
>> Command line:
>>   gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization
>>
>>
>> Back Off! I just backed up step6.0_minimization.log to
>> ./#step6.0_minimization.log.6#
>>
>> Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible
>> GPUs
>>   Cores per node:   32
>>   Logical cores per node:   64
>>   Compatible GPUs per node:  8
>>   All nodes have identical type(s) of GPUs
>> Hardware detected on host ip-10-10-5-89 (the node of MPI rank 0):
>>   CPU info:
>> Vendor: GenuineIntel
>> Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
>> SIMD instructions most likely to fit this hardware: AVX2_256
>> SIMD instructions selected at GROMACS compile time: AVX2_256
>>   GPU info:
>> Number of GPUs detected: 8
>> #0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #2: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #3: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #4: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #5: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #6: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>> #7: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
>> compatible
>>
>> Reading file step6.0_minimization.tpr, VERSION 5.1.5 (single precision)
>> Using 256 MPI processes
>> Using 2 OpenMP threads per MPI process
>>
>> On host ip-10-10-5-89 8 compatible GPUs are present, with IDs
>> 0,1,2,3,4,5,6,7
>> On host ip-10-10-5-89 8 GPUs 
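
For scale, Szilárd's "2-3 ranks per GPU" advice above, applied to one of
these 8-GPU nodes, would look something like this sketch (GROMACS 5.1-era
syntax, where -gpu_id lists one device id per PP rank on the node; all counts
are illustrative):

mpirun -np 16 gmx_mpi mdrun -ntomp 4 -gpu_id 0011223344556677 -deffnm step6.0_minimization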

[gmx-users] [MGMS-DS]: Molecular Modelling Workshop + Tim Clark Birthday Symposium: April 08-11, 2019 in Erlangen, Germany

2019-03-14 Thread Harald Lanig

Dear list subscribers,

we are very delighted to remind you of this year's "33rd Molecular Modelling
Workshop (MMWS)" (http://mmws2019.mgms-ds.de), which takes place from Monday,
April 8th, to Wednesday, April 10th, 2019, at the
Friedrich-Alexander-University in Erlangen, Germany.


Please note that registration is still possible until March 22nd, 2019!

The MMWS has a long history of giving young scientists (especially graduate
students) the opportunity to present their work and dive into discussions
with each other and with the "older experts" of the field, to gain valuable
feedback from academic as well as industrial colleagues. Oral and poster
contributions are welcome from all areas of molecular modelling – from the
life sciences, computational biology and chemistry, and cheminformatics to
materials sciences.


Starting the scientific program on Monday after lunch should make it possible
to avoid travelling on the weekend, keeping expenses to a minimum. A hearty
workshop dinner and a traditional joint evening in Erlangen's Steinbach-Bräu
brewery complement the scientific program. The workshop is organised by the
German Section of the Molecular Graphics and Modelling Society
(MGMS-DS e.V.).


### Satellite symposium in honour of Tim Clark's 70th birthday ###
Directly after the MMWS, there will be a one-day symposium on Thursday,
April 11th, 2019, to celebrate Tim Clark's 70th birthday. You are invited to
attend this meeting and extend your stay in Erlangen by an extra day. There
will be invited lectures only and no contributed talks. There is no
registration, conference desk, or extra fee. Nevertheless, we ask that you
indicate your interest in attending on Thursday by checking the corresponding
field when registering for the MMWS.
For further information about our birthday symposium, please refer to
"Tim-Clark-Day" via https://mmws2019.mgms-ds.de/index.php?m=113


### Pre-conference workshop ###
For the second time at our Molecular Modelling Workshop, Schrödinger is
offering a pre-conference workshop entitled "Structure-based Drug Design
using the Schrödinger Suite". If you are interested in participating in the
software session, please check the corresponding field when filling in the
registration form.


### Plenary Speakers ###
We are very happy to announce that four outstanding researchers have accepted
our invitation to present plenary lectures at the workshop:


Matthias Bremer (Merck KGaA, Darmstadt)
"The Role of Quantum Chemistry in the Development of Liquid Crystals for 
Display Applications"


Ruth Brenk (University of Bergen)
"Structure-based design of riboswitch ligands and selective NMT inhibitors"

Bernd Meyer (University of Erlangen)
"Chemistry at the solid-liquid interface"

Rochus Schmid (Ruhr-University Bochum)
"Force fields for porous coordination polymers - a tricky business"

### Poster and Lecture Awards ###
As in past years, there will be two poster awards of EUR 100 each and three
lecture awards for the best contributed oral presentations:


1st winner: travel bursary to attend the Young Modellers' Forum in London,
UK, plus a speaker-slot option at the YMF (travel expenses are reimbursed up
to EUR 500)


2nd winner: EUR 200 travel expenses reimbursement

3rd winner: EUR 100 travel expenses reimbursement

Only undergraduate and graduate research students qualify for the poster 
and lecture awards.


### Registration and poster/talk submission ###
Submit talks and/or poster titles via the registration form accessible 
on the workshop website https://mmws2019.mgms-ds.de/index.php?m=register

The deadline for all submissions is March 22nd, 2019.

### General information ###
Website http://mmws2019.mgms-ds.de will provide all necessary 
information about the meeting.


We are looking forward to meeting you in Erlangen!
- Paul Czodrowski, Scientific Committee Workshop Organisation 2019
- Harald Lanig, Chairman of the MGMS-DS e.V. (http://www.mgms-ds.de)

--

 PD Dr. Harald Lanig   Universitaet Erlangen/Nuernberg
 Zentralinstitut fuer Scientific Computing (ZISC)
 Geschaeftsfuehrer Martensstrasse 5a, 91058 Erlangen

 Fon   +49 9131-85 20781   harald.la...@fau.de
 Fax   +49 9131-85 20785   http://www.zisc.uni-erlangen.de

