If you mean [1], then yes, I read that, and it recommends using the Verlet
scheme for the new algorithm depicted in the figures. At least that is my
understanding of offloading. If I read the wrong document, or you mean there
are also some other options, please let me know.
[1] http://www.gromacs.org/GPU_a
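For reference, the Verlet scheme is selected in the .mdp file, and is what
the GPU kernels require (a minimal sketch, not a full options list):

  cutoff-scheme = Verlet    ; required for GPU offload; the Group scheme is CPU-only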
Hi,
I am using the parmbsc1 force field
(http://www.gromacs.org/@api/deki/files/260/=amber99bsc1.ff.tgz) in GROMACS.
I am looking for the original paper where the Na+ and Cl- ion 12-6
Lennard-Jones parameters come from, but I am having trouble finding them.
The Amber17 manual suggests that this paper
I was not aware that, with the way I had set it up, the temperature and
pressure from the NVT and NPT equilibrations were not carried over. After
supplying the checkpoint file to grompp via grompp -t, the simulation now
runs smoothly.
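For reference, the full grompp invocation was along these lines (the file
names here are placeholders):

  gmx grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md.tpr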
Thanks for your help.
-Marc
On Thu, Mar 1, 2018 at 4:26 PM, Mark Abraham
wrote:
Hi,
Your temperature before step 0 is effectively zero, so it looks like you
ran grompp from a coordinate file with no velocities, and didn't have a
checkpoint file available when you started the mdrun, so mdrun guessed that
you were OK with no checkpoint file. Thus your equilibration didn't have a
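For illustration, if no checkpoint is available, grompp only produces initial
velocities when told to generate them in the .mdp (the temperature value here
is an assumption):

  gen_vel  = yes    ; assign initial velocities from a Maxwell distribution
  gen_temp = 300    ; generation temperature, K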
Hi Justin,
I followed your advice and compiled GROMACS 2016.4 and used this vanilla
version (no PLUMED patch applied).
I see very similar behavior to the one described before, in that when
starting the simulation on a CPU-only node, the system behaves well
(simulation is now running for 2ns+
Hi,
It would be easier to offer an opinion if we knew what was in the
simulation system (does load imbalance even make sense?) and what the
performance characteristics are without PLUMED. Because of the way PLUMED
works, we can make no promises as to performance or correctness...
Mark
On Thu, Ma
Hi,
I ran a few MD runs with identical input files (the SAME tpr file; mdp
included below) on the same computer with gmx 2018 and observed rather large
performance variations (~50%), as in:

  grep Performance */mcz1.log
  7/mcz1.log:Performance:  98.510  0.244
  7d/mcz1.log:Performance: 140
Hi gromacs-users,
I am currently running a set of TMD simulations with GROMACS 5.1.4 +
PLUMED 2.4.0. The system for all four runs is identical; the only
difference is the force constant for the bias applied by PLUMED.
I am using GROMACS with MPI support, starting it with 32 processes on my
32core pro
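For context, such runs are typically launched along these lines (binary and
file names are assumptions; the -plumed flag is added by the PLUMED patch):

  mpirun -np 32 gmx_mpi mdrun -plumed plumed.dat -deffnm tmd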
You could do something similar in a .top file. Or make unique names for the
parameters to correspond directly to the specific molecule you are
interested in.
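For instance, a minimal sketch of giving a bond its own parameters right in
the topology, so they apply only to this molecule (atom indices and values
are placeholders):

  [ bonds ]
  ;  ai   aj  funct  b0 (nm)  kb (kJ mol^-1 nm^-2)
      1    2      1  0.1090   284512.0

Parameters written out on the line like this take precedence over the
bondtypes lookup in ffbonded.itp.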
-Micholas
===
Micholas Dean Smith, PhD. MRSC
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National
Hello,
Is it possible to include all needed bonded parameters (bond length,
bond angle, ...) in the .rtp file and ignore ffbonded.itp completely?
Such a case would arise if a molecule's structure has specific parameters
which only fit that structure. According to the manual, bond length and
angl
Have you read the "Types of GPU tasks" section of the user guide?
--
Szilárd
On Thu, Mar 1, 2018 at 3:34 PM, Mahmood Naderan
wrote:
> >Again, first and foremost, try running PME on the CPU, your 8-core Ryzen
> will be plenty fast for that.
>
>
> Since I am a computer guy and not a chemist, the
On 3/1/18 9:35 AM, neelam wafa wrote:
Hi!
Dear all, I am running the pdb2gmx command to create the protein topology
but am getting this error. Please guide me on how to fix it.
WARNING: WARNING: Residue 1 named TRP of a molecule in the input file was
mapped
to an entry in the topology database,
Hi,
It is possible, but only using GROMACS's internal optimisers: SD (steep),
CG, or L-BFGS.
And you can only optimise minima, not transition states.
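For orientation, a QM/MM minimisation set-up in the .mdp might look roughly
like this (group name, method, basis, charge, and multiplicity are
assumptions):

  integrator  = steep    ; or cg / l-bfgs
  QMMM        = yes
  QMMM-grps   = QMatoms  ; index group containing the QM region
  QMmethod    = B3LYP
  QMbasis     = 6-31G*
  QMMMscheme  = normal
  QMcharge    = 0
  QMmult      = 1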
Best,
Gerrit
Message: 3
Date: Thu, 1 Mar 2018 12:44:53 +0300
From: nikol...@spbau.ru
To: gmx-us...@gromacs.org
Subject: [gmx-users] QM/MM optim
Hi!
Dear all, I am running the pdb2gmx command to create the protein topology
but am getting this error. Please guide me on how to fix it.
WARNING: WARNING: Residue 1 named TRP of a molecule in the input file was
mapped
to an entry in the topology database, but the atom H used in
an interaction of
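A common way to resolve such hydrogen-name mismatches is to let pdb2gmx
discard the input hydrogens and rebuild them itself (file names are
placeholders):

  gmx pdb2gmx -f protein.pdb -o processed.gro -ignh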
>Again, first and foremost, try running PME on the CPU, your 8-core Ryzen will
>be plenty fast for that.
Since I am a computer guy and not a chemist, this may be a noob question!
What do you mean exactly by running PME on the CPU?
You mean "-nb cpu"? Or do you mean setting the cut-off scheme to Group
instead of Ver
No, that does not seem to help much because the GPU is rather slow at
getting the PME Spread done (there's still 12.6% wait for the GPU to finish
that), and there are slight overheads that end up hurting performance.
Again, first and foremost, try running PME on the CPU, your 8-core Ryzen
will be
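Concretely, that advice corresponds to something like the following
(assuming GROMACS 2018, where the -pme flag exists; the -deffnm name is a
placeholder):

  gmx mdrun -deffnm md -nb gpu -pme cpu    # nonbondeds on GPU, all of PME on CPU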
>- as noted above try offloading only the nonbondeds (or possibly the hybrid
>PME mode -pmefft cpu)
So, with "-pmefft cpu", I don't see any good impact! See the log at
https://pastebin.com/RTYaKSne
I will use other options to see the effect.
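For reference, the hybrid mode tried above amounts to something like this
sketch (the -deffnm name is a placeholder):

  gmx mdrun -deffnm md -nb gpu -pme gpu -pmefft cpu    # only the FFT part of PME runs on the CPU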
Regards,
Mahmood
Doing some test runs to optimize the mdrun settings for my hardware, I
noticed a couple of things I fail to understand (everything below is
gmx-2018 on an Intel CPU, 6 cores, 2 threads each, and a GTX 1060).
1) When I start a run as in, e.g.:

  prompt> gmx mdrun -v -nt 12 -ntmpi 1 -ntomp 12 -deffnm mc
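For comparison, two layouts that both use all 12 hardware threads would be
launched roughly as (a sketch):

  gmx mdrun -v -ntmpi 1 -ntomp 12 -deffnm mc    # 1 thread-MPI rank x 12 OpenMP threads
  gmx mdrun -v -ntmpi 2 -ntomp 6 -deffnm mc     # 2 ranks x 6 threads each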
On 3/1/18 8:03 AM, neelam wafa wrote:
Dear gmx users
I am trying to run a protein-ligand simulation. How can I create a topology
for the ligand? The PRODRG topology is not reliable, so which server or
software can be used? Can a topology be created with tleap from the
AmberTools package for GROMACS?
The me
On 2/28/18 3:28 PM, Abramyan, Tigran wrote:
Dear Gromacs Users,
I am using a solvation shell (the -shell option) around a large nucleosome
system to speed up the simulation, and was wondering what ensemble is
suggested for running such a system in a solvation shell?
After a few trial runs
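For reference, such a shell is typically built with gmx solvate along these
lines (file names and the 1.0 nm shell thickness are assumptions):

  gmx solvate -cp nucleosome.gro -cs spc216.gro -shell 1.0 -o shell.gro -p topol.top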
Dear gmx users
I am trying to run a protein-ligand simulation. How can I create a topology
for the ligand? The PRODRG topology is not reliable, so which server or
software can be used? Can a topology be created with tleap from the
AmberTools package for GROMACS?
Secondly, how do I select the box type, as I am new to
On Thu, Mar 1, 2018 at 8:25 AM, Mahmood Naderan
wrote:
> >(try the other parallel modes)
>
> Do you mean OpenMP and MPI?
>
No, I meant different offload modes.
>
> >- as noted above try offloading only the nonbondeds (or possibly the
> hybrid PME mode -pmefft cpu)
>
> May I know how? Which par
Dear all!
I need to perform a QM/MM optimization through the GROMACS/Gaussian
interface. However, I know that in 2015 this was not possible.
The question: is there such an opportunity nowadays (I use GROMACS 5.1.2),
and what kind of parameters do I need to write in the .mdp file in order to
obtain such