Hi,
most likely the Ethernet is the problem here. I compiled some numbers for the DPPC benchmark in the paper "Speeding up parallel GROMACS on high-latency networks",
http://www3.interscience.wiley.com/journal/114205207/abstract?CRETRY=1&SRETRY=0
which are for version 3.3, but PME will behave similarly. If you did not already use separate PME nodes, this is worth a try, since on Ethernet the performance will depend drastically on the number of nodes involved in the FFT. I also have a tool which finds the optimal PME settings for a given number of nodes by varying the number of PME nodes and the Fourier grid settings. I can send it to you if you want.
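Lacking such a tool, a scan of this kind can also be done by hand with mdrun's -npme option, which dedicates the given number of processes to long-range PME work. A rough sketch (the binary name mdrun_mpi, the process counts, and the file names are assumptions for illustration):

```shell
# Scan over dedicated PME node counts at a fixed total of 32 MPI processes;
# compare the ns/day reported at the end of each log to pick the best split.
for npme in 4 8 12 16; do
    mpirun -np 32 mdrun_mpi -npme $npme -s dppc.tpr -deffnm dppc_npme${npme}
done
```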
Carsten
On Nov 9, 2008, at 10:30 PM, Yawar JQ wrote:
I was wondering if anyone could comment on these benchmark results
for the d.dppc benchmark?
Nodes   Cutoff (ns/day)   PME (ns/day)
  4         1.331            0.797
  8         2.564            1.497
 16         4.5              1.92
 32         8.308            0.575
 64        13.5              0.275
128        20.093            -
192        21.6              -
It seems to scale relatively well up to 32-64 nodes without PME. This seems slightly better than the benchmark results for GROMACS 3 on www.gromacs.org.
Can someone comment on the magnitude of the performance hit? The lack of scaling with PME is worrying me.
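To put numbers on that, a small sketch that computes parallel efficiency from the table above (speedup relative to the 4-node run, divided by the node-count ratio):

```python
# Performance figures (ns/day) copied from the benchmark table above.
perf = {
    "cutoff": {4: 1.331, 8: 2.564, 16: 4.5, 32: 8.308, 64: 13.5, 128: 20.093, 192: 21.6},
    "pme":    {4: 0.797, 8: 1.497, 16: 1.92, 32: 0.575, 64: 0.275},
}

def efficiency(series, base=4):
    """Parallel efficiency vs. the base node count: (perf ratio) / (node ratio)."""
    return {n: (v / series[base]) / (n / base) for n, v in series.items()}

for name, series in perf.items():
    eff = efficiency(series)
    print(name, {n: round(e, 2) for n, e in sorted(eff.items())})
```

The cutoff runs stay above 60% efficiency out to 64 nodes, while the PME runs drop below 10% already at 32 nodes, i.e. they run slower in absolute terms than on 16 nodes.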
For the PME runs, I set rlist, rvdw, and rcoulomb to 1.2 and left the rest at the defaults. I can try some other settings, e.g. a larger spacing for the Fourier grid, but I'm not sure how much that would help. Is there a more standardized system I should use for testing PME scaling?
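For reference, those settings correspond to an .mdp fragment roughly like this (the fourierspacing line is an assumption, shown at the GROMACS default of 0.12 nm; a larger value would give a coarser grid and less FFT communication):

```
coulombtype     = PME
rlist           = 1.2
rcoulomb        = 1.2
rvdw            = 1.2
fourierspacing  = 0.12
```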
This is with GNU compilers and parallelization with OpenMPI 1.2. I'm not sure what we're using for FFTW. The compute nodes are Dell m600 blades with 16 GB of RAM and dual quad-core 3 GHz Intel Xeon processors. I believe the interconnect is all Ethernet.
Thanks,
YQ
_______________________________________________
gmx-users mailing list gmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before
posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to [EMAIL PROTECTED]
Can't post? Read http://www.gromacs.org/mailing_lists/users.php