Hi Vivek,

If you use separate PME nodes (-npme), one group of the processors calculates the long-range (reciprocal-space, LR) part of the Coulomb forces while the remaining processors do the short-range (direct-space, SR) part. The goal is to choose the sizes of the two groups such that the LR calculation takes about the same time as the SR calculation; otherwise one of the groups sits idle for part of each time step.
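As a concrete illustration (just a sketch, assuming GROMACS 4.0's mdrun; the binary name, process count, file names, and the -npme value are placeholders you will have to adapt and tune):

    mpirun -np 32 mdrun -npme 8 -s topol.tpr -deffnm dppc_pme

Here 8 of the 32 MPI processes form the LR (PME) group and the remaining 24 do the SR work; the md.log should report the measured PME mesh/force load at the end of the run, which tells you whether the split was reasonable.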

It is also advisable not to have too many nodes in the LR group; a quarter to a third of the total is a good number, as pointed out by Justin and Berk. You can reach such a ratio by shifting work from the LR to the SR part, i.e. by enlarging the grid spacing and at the same time enlarging the Coulomb radius.
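For example (only a sketch with illustrative numbers; make sure rlist and rvdw stay consistent with what your force field requires), scaling rcoulomb and fourierspacing by the same factor keeps the PME accuracy roughly constant while moving work from the reciprocal-space to the direct-space part, so fewer PME nodes are needed:

    coulombtype     = PME
    rcoulomb        = 1.2      ; was 1.0, enlarged by a factor of 1.2
    rlist           = 1.2      ; keep the neighbour-list radius >= rcoulomb
    fourierspacing  = 0.144    ; was 0.12, enlarged by the same factor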

Carsten

On Nov 11, 2008, at 12:06 PM, vivek sharma wrote:

Hi Carsten,
I have also tried scaling GROMACS across a number of nodes, but was not able to get it to scale beyond 20 processors (on 20 nodes, i.e. 1 processor per node). I don't quite understand what optimizing PME for the number of nodes means: can we change the PME parameters of the simulation, or should we use some other Coulomb type instead? Please explain and suggest how to do it.

With Thanks,
Vivek

2008/11/10 Carsten Kutzner <[EMAIL PROTECTED]>
Hi,

Most likely the Ethernet is the problem here. I compiled some numbers for the DPPC benchmark in the paper "Speeding up parallel GROMACS on high-latency networks",
http://www3.interscience.wiley.com/journal/114205207/abstract?CRETRY=1&SRETRY=0
which are for version 3.3, but PME will behave similarly. If you are not already using separate PME nodes, they are worth a try, since on Ethernet the performance depends drastically on the number of nodes involved in the FFT. I also have a tool that finds the optimal PME settings for a given number of nodes by varying the number of PME nodes and the Fourier grid settings. I can send it to you if you want.

Carsten


On Nov 9, 2008, at 10:30 PM, Yawar JQ wrote:

I was wondering if anyone could comment on these benchmark results for the d.dppc benchmark?

Nodes   Cut-off (ns/day)   PME (ns/day)
  4         1.331             0.797
  8         2.564             1.497
 16         4.5               1.92
 32         8.308             0.575
 64        13.5               0.275
128        20.093             -
192        21.6               -

It seems to scale relatively well up to 32-64 nodes without PME. This seems slightly better than the benchmark results for Gromacs 3 on www.gromacs.org.

Can someone comment on the magnitude of the performance hit? The lack of scaling with PME is worrying me.

For the PME runs, I set rlist, rvdw, and rcoulomb to 1.2 and left the rest at the defaults. I can try other settings, e.g. a larger grid spacing, but I'm not sure how much that would help. Is there a more standardized system I should use for testing PME scaling?
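For reference, the relevant part of my .mdp looks roughly like this (everything else at its default):

    coulombtype  = PME
    rlist        = 1.2
    rvdw         = 1.2
    rcoulomb     = 1.2
    ; fourierspacing etc. left at the defaults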

This is with GNU compilers and parallelization with OpenMPI 1.2. I'm not sure which FFTW version we're using. The compute nodes are Dell M600 blades with 16 GB of RAM and dual quad-core 3 GHz Intel Xeon processors. I believe the interconnect is all Ethernet.

Thanks,
YQ