Hi list,
I am trying out the new GROMACS 2018 (really nice so far), but have a few
questions about what command-line options I should specify, specifically
with the new GPU PME implementation.
My computer has two CPUs (with 12 cores each, 24 with hyperthreading) and
two GPUs, and I currently (wi
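(For context, a minimal sketch of the style of invocation discussed elsewhere in
this thread, assuming the two GPUs have device IDs 0 and 1; the rank split is only
illustrative, not a recommendation:

  gmx mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -gputasks 0011

Here -ntmpi 4 -npme 1 gives three PP ranks plus one PME rank, and the four digits
of -gputasks map the four GPU tasks to devices 0, 0, 1 and 1.)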
>
> Do the science, cite the papers, spread the word, help others, make quality
> bug reports :-) Glad you like it!
>
>
>
Oh, I do all of that (except maybe for bug reports), but when people read
my papers, they say "that is a VERY unusual use of Gromacs -- go learn
LAMMPS." So yes, I love GMX to
Hi,
On Thu, Feb 8, 2018 at 9:11 PM Alex wrote:
> Mark, a question about the input parameters you suggested:
>
> > gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu
>
> Where does this specify which GPUs we're running on? Or is this in addition
> to the -gpu_id key?
>
It's rather different now. -gputasks is
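(The message is cut off here; purely as an illustration of the new scheme, and
assuming two GPUs with IDs 0 and 1, the assignment would look something like

  gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu -gputasks 001

where the -gputasks string has one digit per GPU task - here two PP tasks plus
one PME task - mapping each to a device ID, instead of the old per-rank -gpu_id
mapping.)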
Hi,
On Thu, Feb 8, 2018 at 8:50 PM Alex wrote:
> Got it, thanks. Even with the old style input I now have a 42% speed up
> with PME on GPU. How, how can I express my enormous gratitude?!
>
Do the science, cite the papers, spread the word, help others, make quality
bug reports :-) Glad you like
Hello,
I am using g_mdmat to compute the number of contacts formed by the C-alphas
of each residue with the other residues.
The manual says: "Also a count of the number of different atomic
contacts between residues over the whole trajectory can be made."
The title of the .xvg file says: title "I
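(For reference, a hedged sketch of the kind of invocation in question; the file
names and the C-alpha index group are placeholders:

  gmx mdmat -f traj.xtc -s topol.tpr -n calpha.ndx -mean dm.xpm -no num.xvg

where -mean writes the mean distance matrix and -no writes the contact-count
output that the quoted sentence of the manual is describing.)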
Mark, a question about the input parameters you suggested:
> gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu
Where does this specify which GPUs we're running on? Or is this in addition
to the -gpu_id key?
Also, I thought that -ntmpi was by default set to the number of GPUs in the
system -- is this no longe
Got it, thanks. Even with the old style input I now have a 42% speed up
with PME on GPU. How, how can I express my enormous gratitude?!
On Thu, Feb 8, 2018 at 12:44 PM, Mark Abraham
wrote:
> Hi,
>
> Yes. Note the new use of -gputasks. And perhaps check out
> http://manual.gromacs.org/documentati
Hi,
Yes. Note the new use of -gputasks. And perhaps check out
http://manual.gromacs.org/documentation/2018-latest/user-guide/mdrun-performance.html#types-of-gpu-tasks
because
things are now different.
gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu is more like what you want.
Mark
On Thu, Feb 8, 20
With -pme gpu, I am reporting 383.032 ns/day vs 270 ns/day with the 2016.4
version. I _did not_ mistype. The system is close to a cubic box of water
with some ions.
Incredible.
Alex
On Thu, Feb 8, 2018 at 12:27 PM, Szilárd Páll
wrote:
> Note that the actual mdrun performance need not be affect
I think this should be a separate question, given all the recent mess with
the utils tests...
I am testing mdrun (v 2018) on a system that's trivial and close to a 5 x 5
x 5 box filled with water and some ions. We have three GPUs and the run is
with -nt 18 -gpu_id 012 -pme gpu.
nvidia-smi report
Note that the actual mdrun performance need not be affected, whether it's
a driver persistence issue (you'll just see a few seconds of lag at mdrun
startup) or some other CUDA application startup-related lag (an mdrun run
does mostly very different kinds of things than this particular set of unit
t
I keep getting bounce messages from the list, so in case things didn't get
posted...
1. We enabled PM -- still times out.
2. 3-4 days ago we had very fast runs with GPU (2016.4), so I don't know if
we miraculously broke everything to the point where our $25K box performs
worse than Mark's laptop.
On Thu, Feb 8, 2018 at 6:54 PM Szilárd Páll wrote:
> BTW, do you have persistence mode (PM) set (see in the nvidia-smi output)?
> If you do not have PM set, nor is there an X server that keeps the driver
> loaded, the driver gets loaded every time a CUDA application is started.
> This could be
Got it. Given all the messing around, I am rebuilding GMX and if make check
results are the same, will install. We have an angry postdoc here demanding
tools.
Thank you gentlemen.
Alex
On Thu, Feb 8, 2018 at 10:50 AM, Szilárd Páll
wrote:
> On Thu, Feb 8, 2018 at 6:46 PM, Alex wrote:
>
> > Are
BTW, do you have persistence mode (PM) set (see in the nvidia-smi output)?
If you do not have PM set, nor is there an X server that keeps the driver
loaded, the driver gets loaded every time a CUDA application is started.
This could be causing the lag which shows up as long execution time for our
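(For reference, persistence mode can be enabled with the stock NVIDIA tooling,
e.g. as root:

  nvidia-smi -pm 1          # all GPUs
  nvidia-smi -i 0 -pm 1     # only the GPU with ID 0

or, on newer driver stacks, by running the nvidia-persistenced daemon.)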
Hi,
Assuming the other test binary has the same behaviour (succeeds when run
manually), then the build is working correctly and you could install it for
general use. But I suspect its performance will suffer from whatever is
causing the slowdown (e.g. compare with old numbers). That's not really a
On Thu, Feb 8, 2018 at 6:46 PM, Alex wrote:
> Are you suggesting that I should accept these results and install the 2018
> version?
>
Yes, your GROMACS build seems fine.
make check simply runs the test that I suggested you run manually (and
which successfully finished). The 30 s timeout on C
Are you suggesting that I should accept these results and install the 2018
version?
Thanks,
Alex
On Thu, Feb 8, 2018 at 10:43 AM, Mark Abraham
wrote:
> Hi,
>
> PATH doesn't matter, only what ldd thinks matters.
>
> I have opened https://redmine.gromacs.org/issues/2405 to address that the
> imp
Hi,
PATH doesn't matter, only what ldd thinks matters.
I have opened https://redmine.gromacs.org/issues/2405 to address the fact that the
implementation of these tests is perhaps causing more pain than it is worth
(from this thread and others I have seen).
Mark
On Thu, Feb 8, 2018 at 6:41 PM Alex wrote
That is quite weird. We found that I have PATH values pointing to the old
gmx installation while running these tests. Do you think that could cause
issues?
Alex
On Thu, Feb 8, 2018 at 10:36 AM, Mark Abraham
wrote:
> Hi,
>
> Great. The manual run took 74.5 seconds, which exceeds the 30 second timeout.
Hi,
Great. The manual run took 74.5 seconds, which exceeds the 30 second timeout. So
the code is fine.
But you have some crazy large overhead going on - gpu_utils-test runs in 7s
on my 2013 desktop with CUDA 9.1.
Mark
On Thu, Feb 8, 2018 at 6:29 PM Alex wrote:
> uh, no sir.
>
> > 9/39 Test #9: Gp
uh, no sir.
> 9/39 Test #9: GpuUtilsUnitTests ***Timeout 30.43 sec
On Thu, Feb 8, 2018 at 10:25 AM, Mark Abraham
wrote:
> Hi,
>
> Those all succeeded. Does make check now also succeed?
>
> Mark
>
> On Thu, Feb 8, 2018 at 6:24 PM Alex wrote:
>
> > Here you are:
> >
> > [
Hi,
Those all succeeded. Does make check now also succeed?
Mark
On Thu, Feb 8, 2018 at 6:24 PM Alex wrote:
> Here you are:
>
> [==] Running 35 tests from 7 test cases.
> [--] Global test environment set-up.
> [--] 7 tests from HostAllocatorTest/0, where TypeParam = int
Here you are:
[==] Running 35 tests from 7 test cases.
[--] Global test environment set-up.
[--] 7 tests from HostAllocatorTest/0, where TypeParam = int
[ RUN ] HostAllocatorTest/0.EmptyMemoryAlwaysWorks
[ OK ] HostAllocatorTest/0.EmptyMemoryAlwaysWorks (5457 ms)
It might help to know which of the unit test(s) in that group stall? Can
you run it manually (bin/gpu_utils-test) and report back the standard
output?
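For example, from the build directory, something along the lines of

  ./bin/gpu_utils-test

or, to re-run just that group through CTest with full output,

  ctest -R GpuUtilsUnitTests --output-on-failure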
--
Szilárd
On Thu, Feb 8, 2018 at 3:56 PM, Alex wrote:
> Nope, still persists after reboot and no other jobs running:
> 9/39 Test #9: GpuUtil
Here's some additional info:
# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 390.12 Wed Dec 20 07:19:16
PST 2017
GCC version: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.6)
Forwarding my colleague's email below; any suggestions highly appreciated.
Thanks!
Alex
***
I ran the minimal tests suggested in the CUDA installation guide
(bandwidthTest, deviceQuery) and then individually ran 10 of the samples
provided.
However, many of the samples require a graphics interfa
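(For anyone following along, the minimal checks referred to are the standard CUDA
samples. The paths below are only typical and depend on the toolkit version and
install location:

  cd ~/NVIDIA_CUDA-9.1_Samples/1_Utilities/deviceQuery   && make && ./deviceQuery
  cd ~/NVIDIA_CUDA-9.1_Samples/1_Utilities/bandwidthTest && make && ./bandwidthTest)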
The details are in the article you linked and it will do a much better job
of explaining it than I can.
Let me clarify what I said in the previous message: GROMACS is using
the correction described in the paper.
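For concreteness (and not quoting Dan), the usual .mdp combination for the
slab-corrected Ewald setup in GROMACS is something along these lines - check the
manual for your version; the wall atom types are placeholders:

  pbc             = xy
  ewald-geometry  = 3dc
  nwall           = 2
  wall-type       = 9-3
  wall-atomtype   = XXX XXX   ; placeholders; must match atom types in the topology
  wall-ewald-zfac = 3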
On Thu, Feb 8, 2018 at 9:43 AM, Ben Tam wrote:
> Hi Dan,
>
> Thank you for you
I did hear yesterday that CUDA's own tests passed, but will update on
that in more detail as soon as people start showing up -- it's 8 am
right now... :)
Thanks Mark,
Alex
On 2/8/2018 7:59 AM, Mark Abraham wrote:
Hi,
OK, but it's not clear to me whether you followed the other advice - cleaned out all th
Hi,
OK, but it's not clear to me whether you followed the other advice - cleaning out all
the NVIDIA stuff (CUDA, runtime, drivers) - nor whether CUDA's own tests work.
Mark
On Thu, Feb 8, 2018 at 3:57 PM Alex wrote:
> Nope, still persists after reboot and no other jobs running:
> 9/39 Test #9: GpuUtilsUnitTest
OK, sounds suspicious.
Hopefully you have some of these already, but I suggest you prepare two run
inputs (sketched in .mdp form at the end of this message)
* macromolecule in a T-coupling group and water in a T-coupling group
* macromolecule in a T-coupling group and water+sugar in a T-coupling group
and run each .tpr at the two different dump ra
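A hedged sketch of the corresponding .mdp fragments (group names and values are
illustrative only):

  tc-grps            = Macromolecule Water      ; or: Macromolecule Water_Sugar
  tau-t              = 1.0  1.0
  ref-t              = 300  300
  nstxout-compressed = 5000                     ; vs. e.g. 500 for the other dump rate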
Nope, still persists after reboot and no other jobs running:
9/39 Test #9: GpuUtilsUnitTests ***Timeout 30.59 sec
Any additional suggestions?
Hi Mark,
Thank you for the swift reply.
Sorry for not being clear enough. As I tried to explain in my previous
mail, the problem comes from the behavior of the sugar molecules with respect
to the macromolecule, not just from the box size (which I only used as an
example in the first mail to explain we
Hi Dan,
Thank you for your answer. If GROMACS with pbc = xy still treats the z-direction
as periodic, how does GROMACS take care of the Ewald sum in the z-direction?
Best regards,
Ben
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
on behalf of Dan Gil
Sent: Tues
I am rebooting the box and kicking out all the jobs until we figure this
out.
Thanks!
Alex
On 2/8/2018 7:27 AM, Szilárd Páll wrote:
BTW, timeouts can be caused by contention from a stupid number of ranks/tMPI
threads hammering a single GPU (especially with 2 threads/core with HT),
but I'm not
Mark, Peter -- thanks. Your comments make sense.
BTW, timeouts can be caused by contention from a stupid number of ranks/tMPI
threads hammering a single GPU (especially with 2 threads/core with HT),
but I'm not sure if the tests are ever executed with such a huge rank count.
--
Szilárd
On Thu, Feb 8, 2018 at 2:40 PM, Mark Abraham
wrote:
> Hi,
>
Hi,
One simulation at each dump rate doesn't lead to a reliable conclusion that
the change in volume is related to the dump rate, because even replicates run
at the same dump rate would not reproduce each other exactly. If you
run such replicates and there is, or is not, a consistent trend, then t
Hi Peter,
Thank you for the swift reply. Here are some details in response to your answers. I
hope the problem will appear clearer with them.
> Increase/decrease by how much? Is it converged?
The results completely diverge; that is the problem. The interactions we
are observing through the simulations
Hi,
On Thu, Feb 8, 2018 at 2:15 PM Alex wrote:
> Mark and Peter,
>
> Thanks for commenting. I was told that all CUDA tests passed, but I will
> double check on how many of those were actually run. Also, we never
> rebooted the box after CUDA install, and finally we had a bunch of
> gromacs (2016
Yup, start with rebooting before trying anything else. There are probably
still old drivers loaded in the kernel.
Peter
On 08-02-18 14:14, Alex wrote:
> Mark and Peter,
>
> Thanks for commenting. I was told that all CUDA tests passed, but I
> will double check on how many of those were actually r
Mark and Peter,
Thanks for commenting. I was told that all CUDA tests passed, but I will
double check on how many of those were actually run. Also, we never
rebooted the box after CUDA install, and finally we had a bunch of
gromacs (2016.4) jobs running, because we didn't want to interrupt
po
Hi Vivien,
My answers are inline
On 08-02-18 12:28, Vivien WALTER wrote:
> Dear all,
>
> We are experiencing a strange problem with all-atom simulations using
> Gromacs and Charmm force field, and we are having trouble sorting it out.
>
> We are simulating a single-ring sugar and we observe radically
Dear all,
We are experiencing a strange problem with all-atom simulations using
GROMACS and the CHARMM force field, and we are having trouble sorting it out.
We are simulating a single-ring sugar and we observe radically different
behaviors of the molecule as a function of the simulation dump rate.
Startin
Hi
Is there a way to calculate the RMSD for 20 trajectories without joining them?
Kind regards,
Ahmed
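One hedged approach, assuming trajectories named traj_1.xtc ... traj_20.xtc and a
matching topol.tpr (names are placeholders), is simply to loop over them:

  for i in $(seq 1 20); do
      gmx rms -s topol.tpr -f traj_${i}.xtc -o rmsd_${i}.xvg
  done

gmx rms will prompt for the fit and output groups for each run; if the group
numbers for your index are known, they can be pre-supplied with something like
echo 4 4 | gmx rms ...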
Hi,
Or leftovers of the drivers that are now mismatching. That has caused
timeouts for us.
Mark
On Thu, Feb 8, 2018 at 10:55 AM Peter Kroon wrote:
> Hi,
>
>
> with changing failures like this I would start to suspect the hardware
> as well. Mark's suggestion of looking at simpler test programs
Hi,
with changing failures like this I would start to suspect the hardware
as well. Mark's suggestion of looking at simpler test programs than GMX
is a good one :)
Peter
On 08-02-18 09:10, Mark Abraham wrote:
> Hi,
>
> That suggests that your new CUDA installation is differently incomplete. D
Sorry, I see the point of the ndx file now ... I thought there was no default group
for water_and_ions.
However, the second question is still running through my head.
Kind regards,
Ahmed
From: Ahmed Mashaly
To: "gmx-us...@gromacs.org"
Sent: Thursday, February 8, 2018 9:49 AM
Subject: Re: [gm
And why don't we modify the .mdp file to use protein_and_JZ4 in the same way you
did for water_and_ions, instead of making a new index for the group protein_JZ4?
And why do we have some groups duplicated in the default index? For example, in
the tutorial, JZ4 was No. 13 and 19, Ion and CL the same, water
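For reference, a merged group is typically made interactively with make_ndx; the
group numbers below are the ones from that tutorial and will differ per system:

  gmx make_ndx -f em.gro -o index.ndx
  > 1 | 13        (union of Protein and JZ4, saved as Protein_JZ4)
  > q

The new group can then be referenced from the .mdp (e.g. in tc-grps) by passing
-n index.ndx to grompp.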
Hi,
That suggests that your new CUDA installation is differently incomplete. Do
its samples or test programs run?
Mark
On Thu, Feb 8, 2018 at 1:20 AM Alex wrote:
> Update: we seem to have had a hiccup with an orphan CUDA install and that
> was causing issues. After wiping everything off and re
Hi,
You should start with the original literature and/or CHARMM forcefield
distribution for its documentation. That wasn't ported to the force field
files one can use with GROMACS.
Mark
On Thu, Feb 8, 2018 at 7:19 AM Dilip H N wrote:
> Hello,
> I want to simulate beta-alanine amino-acid. But i