Hi Grita,
Yes, it is. You need to recompile a GPU-enabled version of GROMACS
from source code. You also need to use the Verlet cutoff scheme; that
is, add a new line like
cutoff-scheme = Verlet
in your mdp file.
Finally, run the GPU version of mdrun, adding the parameter -gpu_id 0
if you have more than one GPU.
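In case it helps, a minimal sketch of the two steps (GROMACS 4.6-era
commands; the install prefix and file names are only placeholders):

  # configure and build with GPU (CUDA) support
  cmake .. -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-gpu
  make -j 8 && make install

  # run the GPU build, selecting GPU 0; topol.tpr is a placeholder
  mdrun -deffnm topol -gpu_id 0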
Hi Dallas and Justin,
Thanks for the reply. Yes, I did plot pressure changes over time with
g_energy, and I am aware of the note at
http://www.gromacs.org/Documentation/Terminology/Pressure
The reason I am concerned about the average pressure is that our
experiment shows that our target
True, but thermostats allow temperatures to oscillate on the order of
a few K, and that doesn't happen on the macroscopic level either.
Hence the small disconnect between a system that has thousands of
atoms and one that has millions or trillions. Pressure fluctuations
decrease on the order of 1/sqrt(N), so a small simulated system
fluctuates far more than a macroscopic one.
Justin Lemkul wrote:
On 9/11/13 12:12 AM, Dwey Kauffman wrote:
True, but thermostats allow temperatures to oscillate on the order of
a few K, and that doesn't happen on the macroscopic level either.
Hence the small disconnect between a system that has thousands of
atoms and one that has
I carried out independent NPT processes with different tau_p values =
1.5, 1.0 and 0.5.

## tau_p 1.5
Energy        Average    Err.Est.   RMSD    Tot-Drift
------------------------------------------------------
Pressure      2.62859
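For context, the mdp pressure-coupling block that tau_p belongs to
looks roughly like this (the barostat choice and the other values
shown are illustrative, not the exact settings of these runs):

  ; illustrative NPT pressure-coupling block; only tau_p was varied
  pcoupl           = Parrinello-Rahman   ; barostat choice is an assumption
  pcoupltype       = isotropic
  tau_p            = 1.5                 ; also ran with 1.0 and 0.5
  ref_p            = 1.0                 ; bar
  compressibility  = 4.5e-5              ; 1/bar, water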
Hi Timo,
Can you provide a benchmark with 1 Xeon E5-2680 and 1 Nvidia K20X
GPGPU on the same test of 29420 atoms?
Are these two GPU cards (within the same node) connected by an SLI
(Scalable Link Interface)?
Thanks,
Dwey
Hi Szilard,
Thanks for your suggestions. I am indeed aware of this page. On an
8-core AMD machine with 1 GPU, I am very happy with its performance.
See below. My intention is to obtain an even better one because we
have multiple nodes.
### 8-core AMD with 1 GPU
Force evaluation time GPU/CPU: 4.006
Hi Szilard,
Thanks.
From Timo's benchmark:
1 node          142 ns/day
2 nodes FDR14   218 ns/day
4 nodes FDR14   257 ns/day
8 nodes FDR14   326 ns/day
It looks like an InfiniBand network is required in order to scale up
when running a task across nodes. Is that correct?
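For example, a hypothetical multi-node launch (the host file, rank
count, and file names below are assumptions, not our actual setup):

  # one MPI rank per node, each using GPU 0; needs an MPI-enabled build
  mpirun -np 4 -hostfile hosts.txt mdrun_mpi -deffnm topol -gpu_id 0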
Dwey
Hi Szilard,
Thank you very much for your suggestions.
Actually, I was jumping to conclusions too early: as you mentioned an
AMD cluster, I assumed you must have 12-16-core Opteron CPUs. If you
have an 8-core (desktop?) AMD CPU, then you may not need to run more
than one rank per GPU.
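For illustration, two PP ranks can share one GPU by repeating its id
in -gpu_id (the rank count and file names here are assumptions):

  # two MPI ranks both mapped to GPU 0; needs an MPI-enabled mdrun
  mpirun -np 2 mdrun_mpi -gpu_id 00 -deffnm topol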
Yes, we do
Hi Mark and Szilard,
Thank you both for your suggestions. They are very helpful.
Neither run had a PP-PME work distribution suitable for the hardware
it was running on (and fixing that for each run requires opposite
changes). Adding a GPU and hoping to see scaling requires that there
be
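As a sketch, the PP-PME split can be pinned by hand with -npme (the
rank counts below are assumptions; g_tune_pme can also search for a
good split automatically):

  # illustrative: dedicate 2 of 8 MPI ranks to PME instead of
  # letting mdrun guess the split
  mpirun -np 8 mdrun_mpi -npme 2 -deffnm topol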