vivek sharma wrote:
Hi Carsten,
I have also tried scaling GROMACS over a number of nodes, but I could not get it to scale beyond 20 processors (20 nodes, i.e. 1 processor per node). I am not getting the point of optimizing PME for the number of nodes: is it that we can change the PME parameters for the MD simulation, or use some other coulomb type instead? Please explain this

This is something I played with for a while; see the thread I started here:

http://www.gromacs.org/pipermail/gmx-users/2008-October/036856.html

I got some great advice there. A big factor is the PME/PP balance, which grompp will estimate for you. For simple rectangular boxes, the goal is to shoot for 0.25 for the PME load (this is printed out by grompp). In the thread above, Berk shared with me some tips on how to get this to happen.
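
To make that concrete, the usual trick is to scale rcoulomb and fourierspacing by the same factor, which keeps the Ewald accuracy roughly constant while shifting work from the PME mesh onto the real-space part. A rough sketch (my own illustration, not Berk's exact numbers):

    ; illustrative .mdp fragment; the values are examples, not recommendations
    coulombtype     = PME
    rlist           = 1.0      ; neighbour-list cutoff (nm)
    rcoulomb        = 1.0      ; real-space Coulomb cutoff (nm)
    fourierspacing  = 0.12     ; PME grid spacing (nm)
    ; scaling both by e.g. 1.2 (rcoulomb = 1.2, fourierspacing = 0.144)
    ; lowers the PME load that grompp reports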

Then you should be able to set -npme for mdrun to however many processors are appropriate. I believe mdrun will try to guess, but I'm in the habit of specifying it myself, just for my own satisfaction :)
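
For example (the process counts are made up, and I'm assuming an MPI-enabled binary called mdrun_mpi):

    # 32 MPI processes in total, 8 of them dedicated PME nodes
    mpirun -np 32 mdrun_mpi -npme 8 -s topol.tpr -deffnm run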

-Justin

and suggest how to do it.

With Thanks,
Vivek

2008/11/10 Carsten Kutzner <[EMAIL PROTECTED]>

Hi,
Most likely the Ethernet is the problem here. I compiled some numbers for the DPPC benchmark in the paper "Speeding up parallel GROMACS on high-latency networks", http://www3.interscience.wiley.com/journal/114205207/abstract?CRETRY=1&SRETRY=0
which are for version 3.3, but PME will behave similarly. If you did not already use separate PME nodes, this is worth a try, since on Ethernet the performance will drastically depend on the number of nodes involved in the FFT. I also have a tool which finds the optimal PME settings for a given number of nodes, by varying the number of PME nodes and the fourier grid settings. I can send it to you if you want.
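
Just to illustrate the idea (this is not the tool itself, and it leaves the fourier grid untouched), you can already do a crude scan by hand over the number of PME nodes and compare the ns/day each run reports:

    # crude illustration: rerun a short benchmark with different -npme values
    for npme in 4 6 8 12 16; do
        mpirun -np 32 mdrun_mpi -npme $npme -s topol.tpr -deffnm npme_$npme
        grep "Performance" npme_$npme.log
    done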

    Carsten


    On Nov 9, 2008, at 10:30 PM, Yawar JQ wrote:

I was wondering if anyone could comment on these benchmark results for the d.dppc benchmark?

    Nodes   Cutoff (ns/day)   PME (ns/day)
      4       1.331             0.797
      8       2.564             1.497
     16       4.5               1.92
     32       8.308             0.575
     64      13.5               0.275
    128      20.093             -
    192      21.6               -

It seems to scale relatively well up to 32-64 nodes without PME, which looks slightly better than the GROMACS 3 benchmark results on www.gromacs.org. Can someone comment on the magnitude of the performance hit with PME? The lack of scaling is worrying me.
For the PME runs, I set rlist, rvdw, and rcoulomb to 1.2 and left the rest at the defaults. I can try some other settings, e.g. a larger spacing for the grid, but I'm not sure how much that would help. Is there a more standardized system I should use for testing PME scaling?
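
In other words, the PME part of the .mdp was essentially the following (a sketch; fourierspacing is shown at its default since I didn't set it explicitly):

    coulombtype     = PME
    rlist           = 1.2
    rvdw            = 1.2
    rcoulomb        = 1.2
    fourierspacing  = 0.12   ; default; increasing it gives a coarser PME grid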
This is with GNU compilers and parallelization with OpenMPI 1.2. I'm not sure which FFTW version we're using. The compute nodes are Dell M600 blades with 16 GB of RAM and dual quad-core 3 GHz Intel Xeon processors; I believe the interconnect is all Ethernet.
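
I suppose I could check which FFT library mdrun is linked against with something like the following (assuming a dynamically linked binary named mdrun_mpi):

    ldd $(which mdrun_mpi) | grep -i fft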
Thanks,
    YQ

--
========================================

Justin A. Lemkul
Graduate Research Assistant
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================