Re: [gmx-users] gmx 2019 running problems

2019-01-15 Thread Tamas Hegedus

Hi,

Thanks for the feedback.
Yes, I could build and run gmx 2019 on a host with the NVIDIA 410 driver.

Tamas

On 01/14/2019 09:06 PM, Tamas Hegedus wrote:

> Hi,
>
> I tried to install and use gmx 2019 on a single node computer with 4 GPUs.
>
> I think that the build was OK, but the run is not:
> there is workload on only 4 cores (despite -nt 16) and
> there is no workload on the GPUs at all.





--
Tamas Hegedus, PhD
Senior Research Fellow
Department of Biophysics and Radiation Biology
Semmelweis University | phone: (36) 1-459 1500/60233
Tuzolto utca 37-47| mailto:ta...@hegelab.org
Budapest, 1094, Hungary   | http://www.hegelab.org


Re: [gmx-users] gmx 2019 running problems

2019-01-14 Thread Mark Abraham
Hi,

On Mon, Jan 14, 2019 at 9:06 PM Tamas Hegedus  wrote:

> Hi,
>
> I tried to install and use gmx 2019 on a single node computer with 4 GPUs.
>
> I think that the build was OK, but the run is not:
> there is workload on only 4 cores (despite -nt 16) and
> there is no workload on the GPUs at all.
>

Sounds like your driver might be incompatible with either the GPUs or the
CUDA version found. Check out the log file written by mdrun, and probably
plan to reinstall CUDA 10 and get the latest drivers.
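
A quick way to check along those lines (a sketch only; the log name follows
from -deffnm md_2 above and gets a .partNNNN suffix with -noappend, and the
grep patterns are assumptions about the usual wording of the mdrun log):

grep -i -A6 "GPU info" md_2.log   # hardware detection: how many GPUs mdrun actually sees
grep -i cuda md_2.log             # CUDA driver/runtime versions reported by GROMACS
nvidia-smi                        # driver version and GPU visibility on the host
nvcc --version                    # CUDA toolkit used for the build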

Mark




Re: [gmx-users] gmx 2019 running problems

2019-01-14 Thread paul buscemi
One other suggestion:

From the PPA repository, install the NVIDIA 410 driver. Your CUDA 9 install
may work with the 410 driver, but more likely you will need to reinstall CUDA.

If so, install the CUDA 10 toolkit, but DO NOT let the toolkit install the
driver when asked; it will revert to the NVIDIA v384 driver.

Hope it works out for your monster.
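
A sketch of those steps on Ubuntu (the PPA, the package name, the runfile name,
and the install paths are assumptions from around that time; adjust to your
distribution and downloads):

# NVIDIA 410 driver from the graphics-drivers PPA
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-driver-410

# CUDA 10 toolkit only, skipping the bundled (older) driver as warned above
sudo sh cuda_10.0.130_410.48_linux.run --silent --toolkit --toolkitpath=$HOME/opt/cuda-10.0

# then rebuild GROMACS against the new toolkit, e.g. by adding to the cmake line:
#   -DCUDA_TOOLKIT_ROOT_DIR=$HOME/opt/cuda-10.0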

Paul

> On Jan 14, 2019, at 2:06 PM, Tamas Hegedus  wrote:
> 
> Hi,
> 
> I tried to install and use gmx 2019 on a single node computer with 4 GPUs.
> 
> I think that the build was OK, but the run is not:
> there is workload on only 4 cores (despite -nt 16) and
> there is no workload on the GPUs at all.


Re: [gmx-users] gmx 2019 running problems

2019-01-14 Thread paul buscemi
Tamas

I have almost the same build as you (only 2 GPUs). I found good results using
one change: from -nt 16 to -ntomp 4, which should map the GPUs and tasks in
the same manner but may be handled differently by mdrun. The two versions run
with different efficiency on my rig.
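
For reference, that change applied to the command from the original post would
look roughly like this (a sketch; keeping -ntmpi 4 so the four thread-MPI ranks
still map onto the four GPUs is an assumption, not something specified above):

gmx mdrun -ntmpi 4 -ntomp 4 -gputasks 0123 -nb gpu -bonded gpu -pme gpu \
    -npme 1 -pin on -v -deffnm md_2 -s md_2_500ns.tpr -cpi md_2.1.cpt -noappend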

Paul

> On Jan 14, 2019, at 2:06 PM, Tamas Hegedus  wrote:
> 
> Hi,
> 
> I tried to install and use gmx 2019 on a single node computer with 4 GPUs.
> 
> I think that the build was OK, but the run is not:
> there is workload on only 4 cores (despite -nt 16) and
> there is no workload on the GPUs at all.

[gmx-users] gmx 2019 running problems

2019-01-14 Thread Tamas Hegedus

Hi,

I tried to install and use gmx 2019 on a single node computer with 4 GPUs.

I think that the build was OK, but the run is not:
there is workload on only 4 cores (despite -nt 16) and
there is no workload on the GPUs at all.

gmx 2018 was deployed on the same computer with the same tools and 
libraries.


CPU: 16 cores + 16 threads
GPU: 4x GeForce GTX 1080 Ti

cmake -j 16 -DCMAKE_C_COMPILER=gcc-6 -DCMAKE_CXX_COMPILER=g++-6 
-DCMAKE_INSTALL_PREFIX=$HOME/opt/gromacs-2019-gpu -DGMX_GPU=ON 
-DCMAKE_PREFIX_PATH=$HOME/opt/OpenBLAS-0.2.20 
-DFFTWF_LIBRARY=$HOME/opt/fftw-3.3.7/lib/libfftw3f.so 
-DFFTWF_INCLUDE_DIR=$HOME/opt/fftw-3.3.7/include ../ | tee out.cmake


-- Looking for NVIDIA GPUs present in the system
-- Number of NVIDIA GPUs detected: 4
-- Found CUDA: /usr (found suitable version "9.1", minimum required is "7.0")


make -j16
make -j16 install # note: a lot of building also happened in this step
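
A quick sanity check of the installed binary (a sketch; the exact wording of
the version output may differ between GROMACS versions):

source $HOME/opt/gromacs-2019-gpu/bin/GMXRC
gmx --version | grep -i -E "gpu|cuda"   # should report "GPU support: CUDA" and the CUDA compiler/driver/runtime it sees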

**
gmx mdrun -nt 16 -ntmpi 4 -gputasks 0123 -nb gpu -bonded gpu -pme gpu 
-npme 1 -pin on -v -deffnm md_2 -s md_2_500ns.tpr -cpi md_2.1.cpt -noappend


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.48                 Driver Version: 390.48                     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | :02:00.0         Off |                  N/A |
|  0%   28C    P8    19W / 250W |    179MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | :03:00.0         Off |                  N/A |
|  0%   28C    P8     8W / 250W |    179MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | :83:00.0         Off |                  N/A |
|  0%   28C    P8     9W / 250W |    179MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 108...  Off  | :84:00.0         Off |                  N/A |
|  0%   27C    P8     9W / 250W |    237MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     20243      C   gmx                                          161MiB |
|    1     20243      C   gmx                                          161MiB |
|    2     20243      C   gmx                                          161MiB |
|    3     20243      C   gmx                                          219MiB |
+-----------------------------------------------------------------------------+
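
GPU utilization can also be sampled continuously while mdrun runs, to see
whether the 0% above persists (a sketch; dmon's columns vary somewhat between
driver versions):

nvidia-smi dmon -s u   # one utilization sample per GPU per interval
# or simply
watch -n 1 nvidia-smi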

Thanks for your suggestions,
Tamas

--
Tamas Hegedus, PhD
Senior Research Fellow
MTA-SE Molecular Biophysics Research Group
Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
Semmelweis University  | fax:   (36) 1-266 6656
Tuzolto utca 37-47 | mailto:ta...@hegelab.org
Budapest, 1094, Hungary| http://www.hegelab.org
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.