Hi,
Most of those are going to have the same problem and solution for every
installed piece of software that uses CMake, so perhaps you already have
some local knowledge to exploit or share?
You can use -DCMAKE_EXE_LINKER_FLAGS="-static" and probably also
-static-intel (see Intel's docs) in the
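A minimal sketch of such a configure line (the options besides the linker flag are assumptions of mine, not named in this message; -static-intel applies only when building with Intel compilers):
cmake .. -DGMX_BUILD_OWN_FFTW=ON \
    -DBUILD_SHARED_LIBS=OFF \
    -DGMX_PREFER_STATIC_LIBS=ON \
    -DCMAKE_EXE_LINKER_FLAGS="-static"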
Thank you!
I see these:
linux-vdso.so.1 => (0x7ffc4f0df000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f10b6b51000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f10b694d000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f10b6745000)
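(Output like the above comes from running ldd on the installed binary, e.g., with an assumed install path:
ldd /usr/local/gromacs/bin/gmx
Every line still showing => /lib/... is a library that was linked dynamically.)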
Hi,
CMake will link to whatever it is allowed to find. What does ldd on the
executable report as the libraries being dynamically linked? Those are the
ones that CMake found for which there were apparently no static equivalents.
Mark
On Sun, Jul 22, 2018, 18:16 Shayna Hilburg wrote:
Hi all,
I'm trying to install GROMACS 2018 for use on GPUs. We typically keep the
software on the master node and just call it through a mounted drive on the
compute nodes. However, despite using static library flags, it appears there
are still dependencies. It works fine on our master node but
Mark,
Yes, it was your suggestions that finally set me on the right $PATH. The
examples and analyses work as intended.
Thanks
Paul
> On May 6, 2018, at 2:24 PM, Mark Abraham wrote:
Hi,
I already referred you to the install guide for ideas on how to access the
version of GROMACS that you want. Did you look there?
Mark
On Sun, May 6, 2018, 02:52 paul buscemi wrote:
Mark, Justin
I was able to access the GPU using simply:
cmake .. -DGMX_BUILD_OWN_FFTW=ON \
    -DREGRESSIONTEST_DOWNLOAD=ON \
    -DGMX_MPI=on \
    -DGMX_GPU=on
The result for the lysozyme MD run (with the appropriate quote) was:
Core t (s) Wall
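For completeness, the steps that normally follow a configure line like that, per the install guide (the job count and the use of sudo are assumptions):
make -j 8
make check
sudo make install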
Hi,
It's also GROMACS 5.1.2, not the 2018 version you reported trying to install.
You need to make sure your terminal has been given access to the GROMACS that
you want to use (see that part of the install guide).
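For example, assuming the default install prefix:
source /usr/local/gromacs/bin/GMXRC
gmx --version    # should now report 2018, not 5.1.2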
Also, your CMake line tried to use OpenCL, which is not what you want for
running on an
On 5/4/18 6:53 PM, paul buscemi wrote:
Justin,
Here is the install script and a snippet from the log file.
GROMACS runs normally with this (fresh) install, but without GPU use.
Paul
cmake .. -DGMX_BUILD_OWN_FFTW=ON \
    -DGMX_GPU=on \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/lib/nvidia-cuda-toolkit \
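The line above is cut off in the archive; a plausible complete form, assembled from the flags that appear elsewhere in this thread, would be:
cmake .. -DGMX_BUILD_OWN_FFTW=ON \
    -DGMX_GPU=on \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/lib/nvidia-cuda-toolkit \
    -DREGRESSIONTEST_DOWNLOAD=ON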
On 5/4/18 2:11 PM, paul buscemi wrote:
Thank you Justin.
Not at the Linux system at the moment, but is there anything in particular I
should look for in the log file?
Just look for "GPU" and you'll find it.
-Justin
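For example (log file name assumed):
grep -i gpu md.log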
On 5/4/18 1:48 PM, paul buscemi wrote:
Thank you for the prompt response. I will check out the link. Regarding 2):
one can immediately determine if the GPU is/isn't running, since GROMACS tells
you at the beginning - in my case - that the CPUs are being used, and the Linux
system monitor tells you that all CPUs are running - not to mention
Hi,
On Fri, May 4, 2018 at 6:43 PM paul buscemi wrote:
I've been struggling for several days to get GROMACS 2018 to use my GPU.
I followed the INSTALL instructions (several times!) that are provided in
the 2018 tarball.
I know that the GPU (GTX 1080) is installed properly in that it works with
Schrodinger and the Nvidia self-tests.
Hi Szilárd,
Thank you for answering. It did indeed show a significant improvement
with, in particular,
$ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1 -gputasks 0001
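For readers puzzling over -gputasks: the string has one digit per GPU task, in rank order, naming the device each task runs on. With -ntmpi 8 -npme 1 there are seven short-range nonbonded tasks plus one PME task, so a two-GPU mapping would normally be eight digits, e.g. (hypothetical):
gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1 -gputasks 00000001
which puts the seven PP ranks on GPU 0 and the PME rank on GPU 1.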
I also now understand better how to control each individual
simulation. Your point on maximizing
Mark,
You're right, sorry. I actually mixed up -nt (total number of threads)
with -ntomp (threads per rank); lack of sleep will do that to me sometimes.
Yes. The -ntomp, -nt, and -ntmpi options have always been different from
each other, and still work as they always did. See
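A quick contrast of the three (thread counts hypothetical):
gmx mdrun -nt 24             # 24 threads total; mdrun picks the rank/thread split
gmx mdrun -ntmpi 4 -ntomp 6  # 4 thread-MPI ranks x 6 OpenMP threads each = 24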
Hi,
On Fri, Feb 9, 2018, 18:05 Alex wrote:
Szilárd's
Just to quickly jump in, because Mark suggested taking a look at the
latest docs, and unfortunately I must admit that I didn't understand what
I read. I appear to be especially struggling with the idea of gputasks.
Can you please explain what is happening in this line?
-pme gpu -nb gpu -ntmpi 8
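Taking those flags one at a time, per the 2018 docs:
-pme gpu    offload the long-range PME electrostatics task to a GPU
-nb gpu     offload the short-range nonbonded task to a GPU
-ntmpi 8    start eight thread-MPI ranks within the single mdrun process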
Szilárd,
If I may jump in on this conversation, I am having the reverse problem
(which I assume others may encounter also) where I am attempting a large
REMD run (84 replicas) and I have access to, say, 12 GPUs and 84 CPUs.
Basically I have fewer GPUs than simulations. Is there a logical approach
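One common pattern for that situation (directory names hypothetical, and whether it scales to 84 replicas on 12 GPUs is exactly the open question here) is to launch all replicas in one MPI job and let mdrun share each node's GPUs among its ranks:
mpirun -np 84 gmx_mpi mdrun -multidir rep_* -replex 1000 -nb gpu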
On Fri, Feb 9, 2018 at 4:25 PM, Szilárd Páll wrote:
Hi,
First of all, have you read the docs (admittedly somewhat brief):
http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks
The current PME GPU implementation was optimized for single-GPU runs. Using
multiple GPUs with PME offloaded works, but this mode hasn't been an
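The single-GPU case it was tuned for looks like this (thread count hypothetical):
gmx mdrun -ntmpi 1 -ntomp 12 -nb gpu -pme gpu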
Hi list,
I am trying out the new GROMACS 2018 (really nice so far), but have a few
questions about what command-line options I should specify, specifically
with the new GPU PME implementation.
My computer has two CPUs (with 12 cores each, 24 with hyperthreading) and
two GPUs, and I currently
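For hardware like that, a plausible starting point (an assumption of mine, not advice from this thread) is one PP rank and one PME rank, each pinned to its own GPU:
gmx mdrun -ntmpi 2 -ntomp 12 -npme 1 -nb gpu -pme gpu -gputasks 01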