Re: [gmx-users] (no subject)

2019-01-30 Thread Saumyak Mukherjee
Hi,

There is a detailed online tutorial.
http://www.mdtutorials.com/gmx/

It has everything you need.
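For orientation: of the files listed below, only the .mdp parameter files are normally written by hand; topol.top and posre.itp are generated by gmx pdb2gmx, and mdout.mdp is written out by grompp. As an illustration (values are typical tutorial-style settings, not authoritative), an energy-minimization file can be as short as:

```
; minim.mdp - minimal energy-minimization parameters (illustrative values)
integrator     = steep      ; steepest-descent minimization
emtol          = 1000.0     ; stop when max force < 1000 kJ/mol/nm
emstep         = 0.01       ; initial step size (nm)
nsteps         = 50000      ; upper bound on minimization steps
cutoff-scheme  = Verlet     ; neighbor-list scheme
coulombtype    = PME        ; particle-mesh Ewald electrostatics
rcoulomb       = 1.0        ; Coulomb cutoff (nm)
rvdw           = 1.0        ; van der Waals cutoff (nm)
pbc            = xyz        ; periodic boundaries in all directions
```

The other .mdp files in the tutorial (ions, nvt, npt, md) use the same key = value format and differ mainly in the integrator and the temperature/pressure-coupling settings.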

Best,
Saumyak

On Thu, 31 Jan 2019 at 10:51, Satya Ranjan Sahoo wrote:

> Sir,
> I am a beginner to GROMACS. I was unable to understand how to create all
> the ions.mdp, md.mdp, mdout.mdp, minim.mdp, newbox.mdp, npt.mdp,
> nvt.mdp, posre.itp, and topol.top input files for molecular simulation of my
> molecule. Please teach me how I can generate or create all of the
> above-mentioned input files for my molecule.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


-- 
===
*Saumyak Mukherjee*
Senior Research Fellow
Solid State and Structural Chemistry Unit
Indian Institute of Science
Bengaluru - 560012
===


[gmx-users] (no subject)

2019-01-30 Thread Satya Ranjan Sahoo
Sir,
I am a beginner to GROMACS. I was unable to understand how to create all
the ions.mdp, md.mdp, mdout.mdp, minim.mdp, newbox.mdp, npt.mdp,
nvt.mdp, posre.itp, and topol.top input files for molecular simulation of my
molecule. Please teach me how I can generate or create all of the
above-mentioned input files for my molecule.


[gmx-users] how to calculate potential energy or short-range and long-range energy and enthalpy per residue

2019-01-30 Thread milad bagheri
I performed an MD simulation of an apoprotein. Now I want to calculate the
potential energy, or the short-range and long-range energies and the enthalpy,
"per residue". Please help me: how can I do this?
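One approach worth trying (my suggestion, not something established in this thread) is to define index groups for the residues of interest and use energygrps together with gmx mdrun -rerun. Note that this decomposes only the short-range nonbonded terms; the reciprocal-space (long-range PME) contribution cannot be assigned to individual groups. A sketch, assuming an existing trajectory md.xtc and hypothetical index groups r_1 and r_2:

```
; rerun.mdp - fragment for per-group energy decomposition (illustrative)
energygrps = r_1 r_2   ; short-range nonbonded energies between these groups

; Workflow (shell commands, shown here as comments):
;   gmx make_ndx -f md.tpr -o index.ndx          ; create per-residue groups
;   gmx grompp -f rerun.mdp -c md.gro -p topol.top -n index.ndx -o rerun.tpr
;   gmx mdrun -s rerun.tpr -rerun md.xtc         ; recompute energies along the trajectory
;   gmx energy -f ener.edr                       ; select the Coul-SR:/LJ-SR: group terms
```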


Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Benson Muite
The redmine ticket you mention gives output obtained from executions with 
verbose output:

gmx mdrun -deffnm md200ns -v

and

gmx mdrun -deffnm md200ns -v -nb gpu -pme cpu

The first option fails but gives a source code line number indicating where to 
check; the second one runs. The build uses -DCMAKE_BUILD_TYPE=RelWithDebug. Is 
it possible to get more information using these run options?

On 1/30/19 7:29 PM, Tafelmeier, Stefanie wrote:

To your question:
For the trials with newer GROMACS (> 2016) versions we simply use (as I 
understood it, it is not necessary to target 6.1 with these versions):
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on

For the trials with older GROMACS (< 2018) versions we used:
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on GMX_CUDA_TARGET_SM=6.1 
GMX_CUDA_TARGET_COMPUTE=6.1

Could it be that the combination of CUDA 9.2 (or 10), GROMACS 2019 and gcc 
7.3.0 is causing trouble?
Does anyone have experience with this?

Many thanks in advance for the answer,
Steffi




[gmx-users] gmx_clusterByFeatures - Features Based Conformational Clustering of MD trajectories

2019-01-30 Thread rajendra kumar
Dear GROMACS users,

I have developed a tool (hybrid C++ and Python) for feature-based
conformational clustering of MD trajectories. Its source code is available
here: https://github.com/rjdkmr/gmx_clusterByFeatures. For
more detail about the tool, please visit
https://gmx-clusterbyfeatures.readthedocs.io. It is very easy to install
(sudo pip3 install gmx-clusterByFeatures).

I have also presented an example of clustering ligand conformations with
respect to the receptor (
https://gmx-clusterbyfeatures.readthedocs.io/en/latest/examples/ligand_cluster.html
). I am also working on other examples.

Feedback is welcome.

Thanks and regards,
Rajendra

--

|==|
|* Dr. Rajendra Kumar   *|
| Post-Doctoral Researcher |
| Department of Chemistry  |
| Umeå University , |
| Umeå, Sweden.|
|==|

Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread pbuscemi
Steffi,

I had to use g++-6 with CUDA 9 and 10; g++-7 did not work in my hands 
(Dec 2018 builds). I also suggest going for the latest GROMACS build and CUDA 10.

Best,
Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Tafelmeier, 
Stefanie
Sent: Wednesday, January 30, 2019 11:30 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] WG: Issue with CUDA and gromacs

To your question:
For the trials with newer GROMACS (> 2016) versions we simply use (as I 
understood it, it is not necessary to target 6.1 with these versions):
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on

For the trials with older GROMACS (< 2018) versions we used:
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on GMX_CUDA_TARGET_SM=6.1 
GMX_CUDA_TARGET_COMPUTE=6.1

Could it be that the combination of CUDA 9.2 (or 10), GROMACS 2019 and gcc 
7.3.0 is causing trouble?
Does anyone have experience with this?

Many thanks in advance for the answer,
Steffi




Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Tafelmeier, Stefanie
To your question:
For the trials with newer GROMACS (> 2016) versions we simply use (as I 
understood it, it is not necessary to target 6.1 with these versions):
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on

For the trials with older GROMACS (< 2018) versions we used:
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on GMX_CUDA_TARGET_SM=6.1 
GMX_CUDA_TARGET_COMPUTE=6.1

Could it be that the combination of CUDA 9.2 (or 10), GROMACS 2019 and gcc 
7.3.0 is causing trouble?
Does anyone have experience with this?

Many thanks in advance for the answer,
Steffi



-Ursprüngliche Nachricht-
Von: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] Im Auftrag von 
Benson Muite
Gesendet: Mittwoch, 30. Januar 2019 18:13
An: gmx-us...@gromacs.org
Betreff: Re: [gmx-users] WG: Issue with CUDA and gromacs

What is your cmake build command?

Have you tried to specify compute capabilities?

http://manual.gromacs.org/documentation/2019/install-guide/index.html#cuda-gpu-acceleration

GMX_CUDA_TARGET_SM=6.1

GMX_CUDA_TARGET_COMPUTE=6.1

References:

https://developer.nvidia.com/cuda-gpus

https://www.myzhar.com/blog/tutorials/tutorial-nvidia-gpu-cuda-compute-capability/


Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Benson Muite
What is your cmake build command?

Have you tried to specify compute capabilities?

http://manual.gromacs.org/documentation/2019/install-guide/index.html#cuda-gpu-acceleration

GMX_CUDA_TARGET_SM=6.1

GMX_CUDA_TARGET_COMPUTE=6.1

References:

https://developer.nvidia.com/cuda-gpus

https://www.myzhar.com/blog/tutorials/tutorial-nvidia-gpu-cuda-compute-capability/
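A small caveat (my reading of the install guide, not verified here): GMX_CUDA_TARGET_SM and GMX_CUDA_TARGET_COMPUTE are CMake cache variables, so on the command line they only take effect with a -D prefix, and the value is written without a dot. An illustrative, untested invocation:

```
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON \
         -DGMX_CUDA_TARGET_SM=61 \
         -DGMX_CUDA_TARGET_COMPUTE=61
```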


[gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Tafelmeier, Stefanie
Please excuse, the tables didn't work, I hope this is better:



Dear all,



We are facing an issue with the CUDA toolkit.

We tried several combinations of GROMACS versions and CUDA toolkits. We could 
not try any toolkit older than 9.2, as no NVIDIA driver supporting the Quadro 
P6000 is available for those.





Gromacs   CUDA   Error message
2019      10.0   gmx mdrun: Assertion failed:
                 Condition: stat == cudaSuccess
                 Asynchronous H2D copy failed
2019      9.2    gmx mdrun: Assertion failed:
                 Condition: stat == cudaSuccess
                 Asynchronous H2D copy failed
2018.5    9.2    gmx mdrun: Fatal error:
                 HtoD cudaMemcpyAsync failed: invalid argument
5.1.5     9.2    installation (make): nvcc fatal: Unsupported gpu architecture 'compute_20'*
2016.2    9.2    installation (make): nvcc fatal: Unsupported gpu architecture 'compute_20'*





*We also tried to set the target CUDA architectures as described in the 
installation guide 
(manual.gromacs.org/documentation/2019/install-guide/index.html). Unfortunately 
it didn't work.

Simulations on CPU only always work, but of course run more slowly than they 
could with the GPU.

The issue #2761 (https://redmine.gromacs.org/issues/2762) seems similar to our 
problem.

Even though this issue is still open, we wanted to ask whether you can give us 
any information on how to solve this problem.



Many thanks in advance.

Best regards,

Stefanie Tafelmeier





Further details if necessary:

The workstation:

2 x Xeon Gold 6152 @ 3.7 GHz (22 cores, 44 threads, AVX512), NVIDIA Quadro 
P6000 with 3840 CUDA cores



The simulation system:

Long-chain alkanes (previously used with GROMACS 5.1.5 and CUDA 7.5 - worked 
perfectly)








ZAE Bayern
Stefanie Tafelmeier
Bereich Energiespeicherung/Division Energy Storage
Thermische Energiespeicher/Thermal Energy Storage
Walther-Meißner-Str. 6
85748 Garching

Tel.: +49 89 329442-75
Fax: +49 89 329442-12
stefanie.tafelme...@zae-bayern.de
http://www.zae-bayern.de


ZAE Bayern - Bayerisches Zentrum für Angewandte Energieforschung e. V.
Vorstand/Board:
Prof. Dr. Hartmut Spliethoff (Vorsitzender/Chairman),
Prof. Dr. Vladimir Dyakonov
Sitz/Registered Office: Würzburg
Registergericht/Register Court: Amtsgericht Würzburg
Registernummer/Register Number: VR 1386


Any declarations of intent, such as quotations, orders, applications and 
contracts, are legally binding for ZAE Bayern only if expressed in a written 
and duly signed form. This e-mail is intended solely for use by the 
recipient(s) named above. Any unauthorised disclosure, use or dissemination, 
whether in whole or in part, is prohibited. If you have received this e-mail in 
error, please notify the sender immediately and delete this e-mail.




[gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Tafelmeier, Stefanie
Dear all,

We are facing an issue with the CUDA toolkit.
We tried several combinations of GROMACS versions and CUDA toolkits. We could 
not try any toolkit older than 9.2, as no NVIDIA driver supporting the Quadro 
P6000 is available for those.
Gromacs   CUDA   Error message
2019      10.0   gmx mdrun: Assertion failed:
                 Condition: stat == cudaSuccess
                 Asynchronous H2D copy failed
2019      9.2    gmx mdrun: Assertion failed:
                 Condition: stat == cudaSuccess
                 Asynchronous H2D copy failed
2018.5    9.2    gmx mdrun: Fatal error:
                 HtoD cudaMemcpyAsync failed: invalid argument
5.1.5     9.2    installation (make): nvcc fatal: Unsupported gpu architecture 'compute_20'*
2016.2    9.2    installation (make): nvcc fatal: Unsupported gpu architecture 'compute_20'*


*We also tried to set the target CUDA architectures as described in the 
installation guide 
(manual.gromacs.org/documentation/2019/install-guide/index.html). Unfortunately 
it didn't work.
Simulations on CPU only always work, but of course run more slowly than they 
could with the GPU.
The issue #2761 (https://redmine.gromacs.org/issues/2762) seems similar to our 
problem.
Even though this issue is still open, we wanted to ask whether you can give us 
any information on how to solve this problem.

Many thanks in advance.
Best regards,
Stefanie Tafelmeier


Further details if necessary:
The workstation:
2 x Xeon Gold 6152 @ 3.7 GHz (22 cores, 44 threads, AVX512)
NVIDIA Quadro P6000 with 3840 CUDA cores

The simulation system:
Long-chain alkanes (previously used with GROMACS 5.1.5 and CUDA 7.5 - worked 
perfectly)




ZAE Bayern
Stefanie Tafelmeier
Bereich Energiespeicherung/Division Energy Storage
Thermische Energiespeicher/Thermal Energy Storage
Walther-Meißner-Str. 6
85748 Garching

Tel.: +49 89 329442-75
Fax: +49 89 329442-12
stefanie.tafelme...@zae-bayern.de
http://www.zae-bayern.de


ZAE Bayern - Bayerisches Zentrum für Angewandte Energieforschung e. V.
Vorstand/Board:
Prof. Dr. Hartmut Spliethoff (Vorsitzender/Chairman),
Prof. Dr. Vladimir Dyakonov
Sitz/Registered Office: Würzburg
Registergericht/Register Court: Amtsgericht Würzburg
Registernummer/Register Number: VR 1386


Any declarations of intent, such as quotations, orders, applications and 
contracts, are legally binding for ZAE Bayern only if expressed in a written 
and duly signed form. This e-mail is intended solely for use by the 
recipient(s) named above. Any unauthorised disclosure, use or dissemination, 
whether in whole or in part, is prohibited. If you have received this e-mail in 
error, please notify the sender immediately and delete this e-mail.




Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-30 Thread pbuscemi
Vlad,

390 is an old driver now. Try something simple like installing the 410.x 
driver and see if that resolves the issue. If you need to update the compiler, 
g++-7 may not work, but g++-6 does.

Do NOT install the video driver from the CUDA toolkit, however. If necessary, 
do that separately from the PPA repository.

Paul
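
If the compiler has to be pinned, it can be selected at configure time rather than system-wide; a sketch (flag names per standard CMake/FindCUDA usage, untested here):

```
# point the GROMACS build at gcc-6/g++-6 for both host code and nvcc
cmake .. -DGMX_GPU=ON \
         -DCMAKE_C_COMPILER=gcc-6 \
         -DCMAKE_CXX_COMPILER=g++-6 \
         -DCUDA_HOST_COMPILER=g++-6
```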

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Benson Muite
Sent: Wednesday, January 30, 2019 10:05 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA

Hi,

Do you get the same build errors with Gromacs 2019?

What operating system are you using?

What GPU do you have?

Do  you have a newer version of version of GCC?

Benson

On 1/30/19 5:56 PM, Владимир Богданов wrote:
Hi,

Yes, I think so, because it seems to be working with NAMD-CUDA right now:

Wed Jan 30 10:39:34 2019
+-+
| NVIDIA-SMI 390.77 Driver Version: 390.77|
|---+--+--+
| GPU  NamePersistence-M| Bus-IdDisp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
|===+==+==|
|   0  TITAN XpOff  | :65:00.0  On |  N/A |
| 53%   83CP2   175W / 250W |   2411MiB / 12194MiB | 47%  Default |
+---+--+--+

+-+
| Processes:   GPU Memory |
|  GPU   PID   Type   Process name Usage  |
|=|
|0  1258  G   /usr/lib/xorg/Xorg40MiB |
|0  1378  G   /usr/bin/gnome-shell  15MiB |
|0  7315  G   /usr/lib/xorg/Xorg   403MiB |
|0  7416  G   /usr/bin/gnome-shell 284MiB |
|0 12510  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12651  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12696  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12737  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12810  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12868  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 20688  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   251MiB |
+-+

After the unsuccessful GROMACS run, I ran NAMD.

Best,

Vlad


30.01.2019, 10:59, "Mark Abraham" 
:

Hi,

Does nvidia-smi report that your GPUs are available to use?

Mark

On Wed, 30 Jan 2019 at 07:37 Владимир Богданов <bogdanov-vladi...@yandex.ru>
wrote:


 Hey everyone!

 I need help, please. When I try to run MD with a GPU I get the following error:

 Command line:

 gmx_mpi mdrun -deffnm md -nb auto



 Back Off! I just backed up md.log to ./#md.log.4#

 NOTE: Detection of GPUs failed. The API reported:

 GROMACS cannot run tasks on a GPU.

 Reading file md.tpr, VERSION 2018.2 (single precision)

 Changing nstlist from 20 to 80, rlist from 1.224 to 1.32



 Using 1 MPI process

 Using 16 OpenMP threads



 Back Off! I just backed up md.xtc to ./#md.xtc.2#



 Back Off! I just backed up md.trr to ./#md.trr.2#



 Back Off! I just backed up md.edr to ./#md.edr.2#

 starting mdrun 'Protein in water'

 3000 steps, 6.0 ps.

 I built gromacs with MPI=on and CUDA=on and the compilation process looked  
good. I ran gromacs 2018.2 with CUDA 5 months ago and it worked, but now it  
doesn't work.

 Information from *.log file:

 GROMACS version: 2018.2

 Precision: single

 Memory model: 64 bit

 MPI library: MPI

 OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)

 GPU support: CUDA

 SIMD instructions: AVX_512

 FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512

 RDTSCP usage: enabled

 TNG support: enabled

 Hwloc support: disabled

 Tracing support: disabled

 Built on: 2018-06-24 02:55:16

 Built by: vlad@vlad [CMAKE]

 Build OS/arch: Linux 4.13.0-45-generic x86_64

 Build CPU vendor: Intel

 Build CPU brand: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz

 Build CPU family: 6 Model: 85 Stepping: 4

 Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl  
clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid  
pclmuldq pdcm pdpe1gb popc

Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-30 Thread Benson Muite
Hi,

Do you get the same build errors with Gromacs 2019?

What operating system are you using?

What GPU do you have?

Do you have a newer version of GCC?

Benson


Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-30 Thread Владимир Богданов

Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-30 Thread Mark Abraham
Hi,

Does nvidia-smi report that your GPUs are available to use?

Mark
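Mark's check can be scripted as well; a hedged sketch that parses the kind of line `nvidia-smi --query-gpu=name,driver_version --format=csv,noheader` returns (the sample value is canned here, since live output depends on the machine):

```shell
#!/bin/sh
# Parse "name, driver_version" as printed by:
#   nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
# A canned sample line stands in for live output.
smi_line="TITAN Xp, 390.77"
gpu_name="${smi_line%%,*}"   # text before the first comma
driver="${smi_line##*, }"    # text after the last ", "
echo "GPU: $gpu_name (driver $driver)"
# prints: GPU: TITAN Xp (driver 390.77)
```

If `nvidia-smi` itself errors out or prints nothing, the driver (not the GROMACS build) is the first thing to fix.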

On Wed, 30 Jan 2019 at 07:37 Владимир Богданов wrote:

> Hey everyone!
>
> I need help, please. When I try to run MD with a GPU I get the following error:
>
> Command line:
>
> gmx_mpi mdrun -deffnm md -nb auto
>
>
>
> Back Off! I just backed up md.log to ./#md.log.4#
>
> NOTE: Detection of GPUs failed. The API reported:
>
> GROMACS cannot run tasks on a GPU.
>
> Reading file md.tpr, VERSION 2018.2 (single precision)
>
> Changing nstlist from 20 to 80, rlist from 1.224 to 1.32
>
>
>
> Using 1 MPI process
>
> Using 16 OpenMP threads
>
>
>
> Back Off! I just backed up md.xtc to ./#md.xtc.2#
>
>
>
> Back Off! I just backed up md.trr to ./#md.trr.2#
>
>
>
> Back Off! I just backed up md.edr to ./#md.edr.2#
>
> starting mdrun 'Protein in water'
>
> 3000 steps, 6.0 ps.
>
> I built gromacs with MPI=on and CUDA=on and the compilation process looked
> good. I ran gromacs 2018.2 with CUDA 5 months ago and it worked, but now it
> doesn't work.
>
> Information from  *.log file:
>
> GROMACS version: 2018.2
>
> Precision: single
>
> Memory model: 64 bit
>
> MPI library: MPI
>
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
>
> GPU support: CUDA
>
> SIMD instructions: AVX_512
>
> FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
>
> RDTSCP usage: enabled
>
> TNG support: enabled
>
> Hwloc support: disabled
>
> Tracing support: disabled
>
> Built on: 2018-06-24 02:55:16
>
> Built by: vlad@vlad [CMAKE]
>
> Build OS/arch: Linux 4.13.0-45-generic x86_64
>
> Build CPU vendor: Intel
>
> Build CPU brand: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz
>
> Build CPU family: 6 Model: 85 Stepping: 4
>
> Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl
> clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid
> pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2
> ssse3 tdt x2apic
>
> C compiler: /usr/bin/cc GNU 5.4.0
>
> C compiler flags: -mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
>
> C++ compiler: /usr/bin/c++ GNU 5.4.0
>
> C++ compiler flags: -mavx512f -mfma -std=c++11 -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
>
> CUDA compiler: /usr/local/cuda-9.2/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
> driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on
> Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, V9.2.88
>
> CUDA compiler
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;
> ;-mavx512f;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
>
> CUDA driver: 9.10
>
> CUDA runtime: 32.64
>
>
>
> NOTE: Detection of GPUs failed. The API reported:
>
> GROMACS cannot run tasks on a GPU.
>
>
> Any idea what I am doing wrong?
>
>
> Best,
> Vlad
>
> --
> Best regards, Владимир А. Богданов
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.