On Wed, 12 Dec 2018, 02:03 Mark Abraham wrote:
> Hi,
>
> I would check the documentation of gmx gangle for how it works,
> particularly for how to define a plane.
Thank you so much, Mark. As a GROMACS lover, I would like to make a suggestion; I
think it would be REALLY helpful if you provided some proper
Hi,
In your case the slowdown was partly because, with a single GPU, the PME
work by default went to that GPU. But with two GPUs the default is to leave
the PME work on the CPU (which for your test was very weak), because the
alternative is often not a good idea. You can try it out with the
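For illustration, explicitly offloading PME to one of the GPUs could look roughly
like the following (a sketch for GROMACS 2018 or later; the file name and rank
counts are placeholders, not necessarily the option the excerpt cuts off at):

    # two thread-MPI ranks: one PP rank and one dedicated PME rank, both offloaded to the GPUs
    gmx mdrun -deffnm md -ntmpi 2 -npme 1 -nb gpu -pme gpu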
Szilard,
Thank you very much for the information, and I apologize for how the text
appeared - internet demons at work.
The computer described in the log files is a basic test rig which we use to
iron out models. The workhorse is a many-core AMD machine with one, and
hopefully soon two, 2080 Ti's,
Hi,
I would check the documentation of gmx gangle for how it works,
particularly for how to define a plane. Also, 4.5.4 is prehistoric, please
do yourself a favor and use a version with the seven years of improvements
since then :-)
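For illustration, measuring the angle between a plane and the z axis with a recent
gmx gangle might look roughly like this (a sketch only; the file names and the
three-atom selection are placeholders, and gmx gangle -h has the authoritative syntax):

    gmx gangle -f traj.xtc -s topol.tpr -g1 plane -group1 'name C1 C3 C5' -g2 z -oav plane_angle.xvg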
Mark
On Tue., 11 Dec. 2018, 10:14 rose rahmani wrote:
> Hi,
Hi,
Unfortunately, you can't attach files to the mailing list. Please use a
file sharing service and share the link.
Mark
On Wed., 12 Dec. 2018, 02:20 Tommaso D'Agostino wrote:
> Dear all,
>
> I have a system of 27000 atoms, that I am simulating on both local and
> Marconi-KNL (cineca)
AFAIK the right way to control RPATH using cmake is:
https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
no need to poke the binary.
If you still need to turn off static cudart linking the way to do that
is also via a CMake feature:
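For the RPATH part, a sketch of using the variable documented at the link above
(the install path in the second line is a placeholder):

    # configure GROMACS without baking an RPATH into the installed binaries
    cmake .. -DCMAKE_SKIP_RPATH=ON
    # or, instead, set an explicit runtime search path
    cmake .. -DCMAKE_INSTALL_RPATH=/opt/cuda/8.0/lib64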
Without having read all the details (partly due to the hard-to-read log
files), what I can certainly recommend is: unless you really need to,
avoid running single simulations of only a few tens of thousands of
atoms across multiple GPUs. You'll be _much_ better off using your
limited resources by
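One common way to do that (a sketch only; GPU IDs, core counts, and run names
below are placeholders) is to run one independent simulation per GPU, e.g.:

    # two independent runs, each on its own GPU and its own set of CPU cores
    gmx mdrun -deffnm run0 -nb gpu -gpu_id 0 -ntomp 8 -pin on -pinoffset 0 &
    gmx mdrun -deffnm run1 -nb gpu -gpu_id 1 -ntomp 8 -pin on -pinoffset 8 &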
Hi all,
I have a weird, probably very basic question to ask and I hope it is
appropriate for the mailing list.
I am trying to reproduce the pure DPPC bilayer data found in J. Chem.
Theory Comput., 2016, 12 (1), pp 405–413 (10.1021/acs.jctc.5b00935) using
the recommended protocol given in the
Dear all,
I have a system of 27000 atoms that I am simulating on both local and
Marconi-KNL (CINECA) clusters. In this system, I simulate a small molecule
that has a graphene sheet attached to it, surrounded by water. I have
already simulated this molecule successfully in a system of 6500 atoms,
Hi Tushar,
All parameters from GROMOS 53A6_OXY, along with other improvements, have
been merged into a new parameter set called 2016H66 (see
https://pubs.acs.org/doi/abs/10.1021/acs.jctc.6b00187).
Philippe Hünenberger put some files for GROMACS on his website:
I'm trying to rewrite the RPATH because the shared library paths used by
GROMACS are hardcoded in the binary.
ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
        linux-vdso.so.1 => (0x7ffddf1d3000)
        libgromacs.so.2 =>
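For reference, the RPATH/RUNPATH embedded in the binary can be inspected (and,
if one really must poke the binary, rewritten) with standard ELF tools; a sketch,
reusing the path from the ldd call above:

    readelf -d /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx | grep -E -i 'rpath|runpath'
    # patchelf --set-rpath /new/lib/dir gmx   # rewrites it in place; the CMake route mentioned earlier in the thread is cleaner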