might be that you where affected by this before.
Please let us know if the issue shows up again.
Cheers
Paul
On 06/03/2020 18:58, Daniel Kozuch wrote:
> Additional (good) news, the problem appears to be resolved in the 2020.1
> update (at least for the membrane only system). I'll c
On Behalf Of Justin Lemkul
Sent: Friday, March 6, 2020 11:02 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] GMX 2020 - COMM Removal Issue
On 3/6/20 10:00 AM, Daniel Kozuch wrote:
> [Somehow my response got put in a different thread - hopefully this
> works]
>
> Justin,
>
>
] GMX 2020 - COMM Removal Issue
On 3/6/20 10:00 AM, Daniel Kozuch wrote:
> [Somehow my response got put in a different thread - hopefully this
> works]
>
> Justin,
>
> Thanks for your reply. I agree that some COM motion is normal.
> However, this was a very short simulation (
Sent: Tuesday, March 3, 2020 3:02 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] GMX 2020 - COMM Removal Issue
On 3/2/20 9:53 PM, Daniel Kozuch wrote:
> Hello,
>
> I am experimenting with GROMACS 2020. I have compiled the mpi threaded
> version and am using the new settings
don't see the same drift.
Best,
Dan
On Tue, Mar 3, 2020 at 3:03 PM Justin Lemkul wrote:
>
>
> On 3/2/20 9:53 PM, Daniel Kozuch wrote:
> > Hello,
> >
> > I am experimenting with GROMACS 2020. I have compiled the mpi threaded
> > version and am using
Hello,
I am experimenting with GROMACS 2020. I have compiled the mpi threaded
version and am using the new settings
(GMX_GPU_DD_COMMS, GMX_GPU_PME_PP_COMMS, GMX_FORCE_UPDATE_DEFAULT_GPU) as
suggested on at the following link:
https://devblogs.nvidia.com/creating-faster-molecular-dynamics-simulatio
n, but clearly they aren't. If you file a redmine
> issue, I may be able to take a look, but it might take a while to address.
>
> On Wed, Jan 15, 2020 at 8:52 PM Daniel Kozuch
> wrote:
>
> > Hello,
> >
> > I am interested in using simulated tempering in GROMAC
Hello,
I am interested in using simulated tempering in GROMACS (2019.5) under the
expanded ensemble options. Is there a way to monitor the ensemble weights
as the simulation progresses? I think in theory they are supposed to be
printed out in the log file, but it is only printing 0, -nan, and inf:
Hello,
I am running GROMACS 2019.4 (with GPUs) using the following command on two
nodes (each with 28 processors, and 4 GPUs).
srun -n 56 gmx mdrun -s sim -cpi sim -append no -deffnm sim -plumed
plumed.dat -multidir $mydirs -replex 500 -ntomp 1
It starts fine, but when I restart I get the incomp
I am having a problem similar to that mentioned in a previous thread (
https://www.mail-archive.com/gromacs.org_gmx-users@maillist.sys.kth.se/msg35369.html),
but I could not find a solution from that discussion. I am using pdb2gmx
with the flag -inter and the OPLS AA/M force field:
> gmx pdb2gmx -
at 8:32 AM, Kutzner, Carsten wrote:
> Hi Dan,
>
> > On 11. Feb 2018, at 20:13, Daniel Kozuch wrote:
> >
> > Hello,
> >
> > I was recently trying to use the tune_pme tool with GROMACS 2018 with the
> > following command:
> >
> > gmx tune_pme -n
Hello,
I was recently trying to use the tune_pme tool with GROMACS 2018 with the
following command:
gmx tune_pme -np 84 -s my_tpr.tpr -mdrun 'gmx mdrun'
but I'm getting the following error:
"Fatal error:
Cannot execute mdrun. Please check benchtest.log for problems!"
Unfortunately benchtest.lo
Szilárd,
If I may jump in on this conversation, I am having the reverse problem
(which I assume others may encounter also) where I am attempting a large
REMD run (84 replicas) and I have access to say 12 GPUs and 84 CPUs.
Basically I have less GPUs than simulations. Is there a logical approach to
Hello,
I am performing constant pressure replica exchange across a phase
transition, and as one might expect the associated change in volume is
causing exchange issues and many of my replicas are not efficiently
crossing the phase transition.
I noticed some papers that claim volume-temperature re
Hello,
I recently started experiencing a error with GROMACS 2016.3 during a
replica exchange simulation with 80 replicas, 480 cpus, and 40 GPUs:
Assertion failed:
Condition: comm->cycl_n[ddCyclStep] > 0
When we turned on DLB, we should have measured cycles
The simulation then crashes. I turned o
Hello all,
Is there a way to use the free energy code with position restraints
(similar to the way that the free energy code interacts with the pull
code)? From the manual all I can see that might be relevant is
"restraint-lambdas" but that is apparently only for "dihedral restraints,
and the pull
Hello,
I am attempting to restrain an ice layer in a system with liquid water. I
initially considered using position restraints, but it seems like GROMACS
has a few quirks that make that difficult: you have to create a new .itp
and define the crystal water as different from the liquid water, then
Hello,
I am attempting to restrain an ice sheet in a system with liquid water. I
initially considered using position restraints, but it seems like GROMACS
has a few quirks that make that difficult: you have to create a new .itp
and define the crystal water as different from the liquid water and th
Thanks for the quick reply, I was worried that was the case.
Best,
Dan
On Fri, Aug 11, 2017 at 5:05 PM, Justin Lemkul wrote:
>
>
> On 8/11/17 5:02 PM, Daniel Kozuch wrote:
> > Hello,
> >
> > I am using a pull code to increase the end-to-end distance of a protein
Hello,
I am using a pull code to increase the end-to-end distance of a protein
(included below). I am using direction-periodic and would like the distance
between the COM groups to be calculated in three dimensions. However,
setting pull_coord1_dim = Y Y Y appears to have no effect and the distanc
Hello,
I am using a pull code with geometry=direction-periodic and attempting to
use gmx wham to construct the free energy. The pulling code is doing what I
would like it to, but as might be expected from direction-periodic, when
the pull distance is more than half the box length the distance is w
Hello,
I am using a pull code with geometry=direction-periodic and attempting to
use gmx wham to construct the free energy. I believe the pulling code is
doing what I would like it to, but as might be expected from
direction-periodic, when the pull distance is more than half the box length
the dis
7/12/17 3:05 PM, Daniel Kozuch wrote:
> > Hello,
> >
> > Is it possible to do non-periodic COM pulling using the distance function
> > in GMX 5.14 (i.e. where the distance between the two groups is calculated
> > ignoring pbc)?
> >
>
> No, but this is what
Hello,
Is it possible to do non-periodic COM pulling using the distance function
in GMX 5.14 (i.e. where the distance between the two groups is calculated
ignoring pbc)?
In the tutorials/online the solution seems to be to simply use a box twice
the size of the largest pulling distance, but that w
Hello,
I am recompiling GROMACS on a new compute node and I am getting a unit test
failure (shown below). I am compiling with GNU 4.8.5 and the following
cmake commands:
cmake .. -DCMAKE_INSTALL_PREFIX=[redacted] -DGMX_MPI=on -DGMX_GPU=off
-DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX
Hello,
I recently changed the number of cpus I was pairing with each gpu and I
noticed a significant slowdown, more than I would have expected simply due
to a reduction in the number of cpus.
>From the log file it appears that the GPU is resting for a large amount of
time. Is there something I ca
It would be helpful if you include the output of the em run and the log
file for the nvt run.
Best,
Dan
On Thu, May 25, 2017 at 7:12 AM, Kashif wrote:
> Hi
> Whenever I tried to simulate one of my docked complex, the energy
> minimization step converged very fast and complete at 112 steps. And
Hi Marcelo,
That sounds reasonable depending on your time-step and other factors, but I
have not attempted to run with more than one job for GPU.
Maybe Mark can comment more.
Best,
Dan
On Thu, May 25, 2017 at 8:09 AM, Marcelo Depólo
wrote:
> Hi,
>
>
> I had the same struggle benchmarking a sim
, May 24, 2017 at 9:48 PM, Daniel Kozuch
wrote:
> Szilárd,
>
> I think I must be misunderstanding your advice. If I remove the domain
> decomposition and set pin on as suggested by Mark, using:
>
> gmx_gpu mdrun -deffnm my_tpr -dd 1 -pin on
>
> Then I get very poor perfo
Szilárd,
I think I must be misunderstanding your advice. If I remove the domain
decomposition and set pin on as suggested by Mark, using:
gmx_gpu mdrun -deffnm my_tpr -dd 1 -pin on
Then I get very poor performance and the following error:
NOTE: Affinity setting for 6/6 threads failed. This can
toral Research Associate
> University of Tennessee/Oak Ridge National Laboratory
> Center for Molecular Biophysics
>
>
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on be
Hello,
I'm using GROMACS 5.1.4 on 8 CPUs and 1 GPU for a system of ~8000 atoms in
a dodecahedron box, and I'm having trouble getting good performance out of
the GPU. Specifically it appears that there is significant performance loss
to wait times ("Wait + Comm. F" and "Wait GPU nonlocal"). I have
32 matches
Mail list logo