Re: [gmx-users] gromacs 2018 for GPU install on cluster with truly static libraries?

2018-07-23 Thread Mark Abraham
Hi, Most of those are going to have the same problem and solution for every installed piece of software that uses CMake, so perhaps you already have some local knowledge to exploit or share? You can use -DCMAKE_EXE_LINKER_FLAGS="-static" and probably also -static-intel (see Intel's docs) in the
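
A sketch of how those flags might be combined with GROMACS's own static-linking options; BUILD_SHARED_LIBS and GMX_PREFER_STATIC_LIBS are documented GROMACS CMake variables, while the remaining flags and values are illustrative assumptions, not the exact line from this thread:

  cmake .. \
      -DGMX_GPU=on \
      -DBUILD_SHARED_LIBS=OFF \
      -DGMX_PREFER_STATIC_LIBS=ON \
      -DCMAKE_EXE_LINKER_FLAGS="-static"
  # Note: glibc pieces such as libdl and librt (needed by the CUDA
  # runtime) commonly resist fully static linking.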

Re: [gmx-users] gromacs 2018 for GPU install on cluster with truly static libraries?

2018-07-22 Thread Shayna Hilburg
Thank you! I see these:

  linux-vdso.so.1 => (0x7ffc4f0df000)
  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f10b6b51000)
  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f10b694d000)
  librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f10b6745000)

Re: [gmx-users] gromacs 2018 for GPU install on cluster with truly static libraries?

2018-07-22 Thread Mark Abraham
Hi, CMake will link to whatever it is allowed to find. What does ldd on the executable report as the libraries being dynamically linked? Those are the ones that cmake found for which there were apparently no static equivalents. Mark On Sun, Jul 22, 2018, 18:16 Shayna Hilburg wrote: > Hi all,
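
For reference, the check Mark describes (the install path below is an assumption for illustration):

  ldd /usr/local/gromacs/bin/gmx
  # A fully static binary would instead report "not a dynamic executable".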

[gmx-users] gromacs 2018 for GPU install on cluster with truly static libraries?

2018-07-22 Thread Shayna Hilburg
Hi all, I'm trying to install GROMACS 2018 for use on GPUs. We typically keep the software on the master node and just call it through a mounted drive on the compute nodes. However, despite using static library flags, it appears there are still dependencies. It works fine on our master node but

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-06 Thread paul buscemi
Mark Yes, it was your suggestions that finally set me on the right $PATH. The examples and analyses work as intended. Thanks Paul > On May 6, 2018, at 2:24 PM, Mark Abraham wrote: > > Hi, > > I already referred you to the install guide for ideas on how to access

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-06 Thread Mark Abraham
Hi, I already referred you to the install guide for ideas on how to access the version of GROMACS that you want. Did you look there? Mark On Sun, May 6, 2018, 02:52 paul buscemi wrote: > Mark, Justin > > I was able to access the GPU using simply : > > cmake ..
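
For reference, the install guide's mechanism for this is sourcing the GMXRC of the build you want; /usr/local/gromacs is the default prefix but may differ on your system:

  source /usr/local/gromacs/bin/GMXRC
  gmx --version   # should now report the 2018 build rather than 5.1.2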

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-05 Thread Justin Lemkul
On 5/5/18 8:51 PM, paul buscemi wrote: Mark, Justin I was able to access the GPU using simply:

  cmake .. -DGMX_BUILD_OWN_FFTW=ON \
           -DREGRESSIONTEST_DOWNLOAD=ON \
           -DGMX_MPI=on \
           -DGMX_GPU=on

the result for the lysozyme MD run (with the appropriate quote) was:

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-05 Thread paul buscemi
Mark, Justin I was able to access the GPU using simply:

  cmake .. -DGMX_BUILD_OWN_FFTW=ON \
           -DREGRESSIONTEST_DOWNLOAD=ON \
           -DGMX_MPI=on \
           -DGMX_GPU=on

the result for the lysozyme MD run (with the appropriate quote) was: Core t (s) Wall

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-05 Thread Mark Abraham
Hi, It's also GROMACS 5.1.2, not the 2018 you reported trying to install. You need to make sure your terminal has been given access to the GROMACS that you want to use (see that part of the install guide). Also, your CMake line tried to use OpenCL, which is not what you want for running on an

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread Justin Lemkul
On 5/4/18 6:53 PM, paul buscemi wrote: Justin, Here is the install script and a snippet from the log file. Gromacs runs normally with this (fresh) install, but without GPU use. Paul

  cmake .. -DGMX_BUILD_OWN_FFTW=ON \
           -DGMX_GPU=on \

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
Justin, Here is the install script and a snippet from the log file. Gromacs runs normally with this (fresh) install, but without GPU use. Paul

  cmake .. -DGMX_BUILD_OWN_FFTW=ON \
           -DGMX_GPU=on \
           -DCUDA_TOOLKIT_ROOT_DIR=/usr/lib/nvidia-cuda-toolkit \

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread Justin Lemkul
On 5/4/18 2:11 PM, paul buscemi wrote: Thank you Justin. Not at the linux system at the moment, but is there anything in particular I should look for in the log file? Just look for "GPU" and you'll find it. -Justin Paul On 4 May 2018, at 12:55 PM, Justin Lemkul
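
A quick way to run Justin's check (assuming the run was started with -deffnm md, so the log is md.log):

  # The hardware-detection section near the top of the log reports any
  # compatible GPUs and whether mdrun selected them.
  grep -i "gpu" md.log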

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
Thank you Justin. Not at the linux system at the moment, but is there anything in particular I should look for in the log file? Paul > On 4 May 2018, at 12:55 PM, Justin Lemkul wrote: > > > > On 5/4/18 1:48 PM, paul buscemi wrote: >> thank you for the prompt response. I

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread Justin Lemkul
On 5/4/18 1:48 PM, paul buscemi wrote: thank you for the prompt response. I will check out the link. Regarding 2) One can immediately determine if the GPU is/isn't running, since Gromacs tells you at the beginning - in my case - that the CPUs are being used, and the Linux system monitor tells you

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
thank you for the prompt response. I will check out the link. Regarding 2) One can immediately determine if the GPU is/isn't running, since Gromacs tells you at the beginning - in my case - that the CPUs are being used, and the Linux system monitor tells you that all CPUs are running - not to mention

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread Mark Abraham
Hi, On Fri, May 4, 2018 at 6:43 PM paul buscemi wrote: > > > I’ve been struggling for several days to get Gromacs-2018 to use my > GPU. I followed the INSTALL instructions (several times!) that are > provided in the 2018 tarball > > I know that the GPU (GTX 1080) is

[gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
I’ve been struggling for several days to get Gromacs-2018 to use my GPU. I followed the INSTALL instructions (several times!) that are provided in the 2018 tarball. I know that the GPU (GTX 1080) is installed properly, in that it works with Schrodinger and the Nvidia self tests.
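
Two standard NVIDIA checks (not from this thread) that are worth running before building, to confirm what the GROMACS build will see:

  nvidia-smi        # should list the GTX 1080 and the driver version
  nvcc --version    # reports the CUDA toolkit that CMake will pick up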

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-13 Thread Gmx QA
Hi Szilard, Thank you for answering. It did indeed show a significant improvement, in particular with: $ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1 -gputasks 0001 I also now understand better how to control each individual simulation. Your point on maximizing
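
For anyone reusing that command, a rough annotation of what each option does (per the GROMACS 2018 mdrun docs; the 8-digit -gputasks string below is illustrative, since 8 ranks imply 8 GPU tasks, whereas the archive quotes "0001"):

  # 8 thread-MPI ranks (-ntmpi 8) with 6 OpenMP threads each (-ntomp 6),
  # one rank dedicated to PME (-npme 1); -gputasks assigns one device-id
  # digit per GPU task, so 00000001 would place the seven nonbonded
  # tasks on GPU 0 and the PME task on GPU 1.
  gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1 -gputasks 00000001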

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-10 Thread Alex
Mark, You're right, sorry. I actually mixed up -nt (total number of threads) with -ntomp (per rank); lack of sleep will do that to me sometimes. Yes. The -ntomp, -nt, and -ntmpi options have always been different from each other, and still work as they always did. See

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Mark Abraham
Hi, On Fri, Feb 9, 2018, 18:05 Alex wrote: > Just to quickly jump in, because Mark suggested taking a look at the > latest doc, and unfortunately I must admit that I didn't understand what > I read. I appear to be especially struggling with the idea of gputasks. > Szilard's

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Mark Abraham
On Fri, Feb 9, 2018, 16:57 Daniel Kozuch wrote: > Szilárd, > > If I may jump in on this conversation, I am having the reverse problem > (which I assume others may encounter also) where I am attempting a large > REMD run (84 replicas) and I have access to, say, 12 GPUs and 84

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Alex
Just to quickly jump in, because Mark suggested taking a look at the latest doc, and unfortunately I must admit that I didn't understand what I read. I appear to be especially struggling with the idea of gputasks. Can you please explain what is happening in this line? -pme gpu -nb gpu -ntmpi 8

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Daniel Kozuch
Szilárd, If I may jump in on this conversation, I am having the reverse problem (which I assume others may encounter also) where I am attempting a large REMD run (84 replicas) and I have access to, say, 12 GPUs and 84 CPUs. Basically I have fewer GPUs than simulations. Is there a logical approach
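
One logical approach (a hedged sketch, not from this thread): REMD in GROMACS requires a real MPI build (gmx_mpi) with one rank per replica, launched over the replica directories with -multidir; on each node, mdrun then shares that node's GPUs among the local ranks automatically. The directory names and exchange interval below are hypothetical:

  # 84 replicas, one MPI rank each; -replex sets the exchange attempt
  # interval in steps. GPUs detected on each node are split among the
  # ranks running there.
  mpirun -np 84 gmx_mpi mdrun -multidir rep_{0..83} -replex 1000 -deffnm remd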

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Szilárd Páll
On Fri, Feb 9, 2018 at 4:25 PM, Szilárd Páll wrote: > Hi, > > First of all, have you read the docs (admittedly somewhat brief): > http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks > > The current PME GPU implementation was optimized for

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Szilárd Páll
Hi, First of all, have you read the docs (admittedly somewhat brief): http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks The current PME GPU implementation was optimized for single-GPU runs. Using multiple GPUs with PME offloaded works, but this mode hasn't been an
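
The single-GPU case that the offload was tuned for needs no task string at all; a minimal sketch (the -deffnm name is illustrative):

  # Offload both short-range nonbonded and PME to the one GPU, as in the
  # docs linked above.
  gmx mdrun -deffnm md -nb gpu -pme gpu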

[gmx-users] Gromacs 2018 and GPU PME

2018-02-08 Thread Gmx QA
Hi list, I am trying out the new gromacs 2018 (really nice so far), but have a few questions about what command line options I should specify, specifically with the new GPU PME implementation. My computer has two CPUs (with 12 cores each, 24 with hyperthreading) and two GPUs, and I currently