Re: [gmx-users] No scale up beyond 4 processors for 240000 atom system

2007-10-09 Thread Diego Enry
Low cost tip:
Ask your cluster administrator if it is possible to apply channel
bonding to the Gigabit interfaces. You need two network switches for
that to be efficient (a cut-through switch may also help); it can
increase network bandwidth by about 70%. It also helps to use Cat6
cables.
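
Something like this, assuming a Linux node with the bonding driver
(the interface names, the address, and the config file location below
are just examples, check your distribution's docs):

  # /etc/modprobe.conf -- load the bonding driver in round-robin
  # mode, re-checking link state every 100 ms
  alias bond0 bonding
  options bond0 mode=balance-rr miimon=100

  # bring the bond up and enslave both Gigabit NICs to it
  modprobe bonding
  ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1

balance-rr stripes packets across both links, which is where the
extra bandwidth comes from; 802.3ad (LACP) is the alternative if your
switches support it.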

You may try this MPICH2 compilation (the ch3:nemesis channel uses
shared memory for intranode communication):

./configure --with-device=ch3:nemesis --enable-fast --disable-cxx \
    --enable-error-checking=no CFLAGS=-O3 FFLAGS=-O3
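
Then rebuild GROMACS against it and rerun; something like this,
assuming your MPI-enabled binary is named mdrun_mpi and the usual
input file names:

  # GROMACS 3.3.x fixes the node count at preprocessing time
  grompp -np 8 -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
  # launch 8 MPI ranks with the freshly built MPICH2
  mpiexec -n 8 mdrun_mpi -np 8 -s topol.tpr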

There is a patch to run PME faster over Ethernet... probably the one
from the paper by Dr. Carsten that Dr. Christian suggested.

High cost tip:
Migrate to InfiniBand.


Has anyone tried GotoBLAS? Does it work well with GMX?
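
As far as I know, GROMACS only uses BLAS/LAPACK for analysis tasks
like normal-mode calculations, not in the MD kernels, so it should
not change mdrun speed. If anyone wants to test it anyway, linking
would look roughly like this (the flags and the path are guesses,
check ./configure --help):

  export LDFLAGS="-L/opt/gotoblas"
  export LIBS="-lgoto"
  ./configure --enable-mpi --with-external-blas --with-external-lapack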


On 10/9/07, Christian Burisch <[EMAIL PROTECTED]> wrote:
> Berk Hess wrote:
>
> Hi all,
>
> > So this is 4 cores sharing one ethernet connection?
>
> perhaps the two Gigabit NICs were bundled somehow. But I guess this
> doesn't work out of the box, plug'n'play. And latency, not bandwidth,
> may be limiting in this case.
>
> > With such a setup you will never get good scaling.
> You need something like an InfiniBand network.
>
> Or check:
>
> Carsten Kutzner, David van der Spoel, Martin Fechner, Erik Lindahl, Udo
> W. Schmitt, Bert L. de Groot and Helmut Grubmüller: Speeding up parallel
> GROMACS on high-latency networks. J. Comput. Chem. 28, 2075-2084 (2007).
>
> Haven't tried it yet but sounds good!
>
> Regards
>
> Christian


-- 
Diego Enry B. Gomes
Laboratório de Modelagem e Dinamica Molecular
Universidade Federal do Rio de Janeiro - Brasil.


Re: [gmx-users] No scale up beyond 4 processors for 240000 atom system

2007-10-09 Thread Christian Burisch

Berk Hess wrote:

Hi all,


> So this is 4 cores sharing one ethernet connection?


perhaps the two Gigabit NICs were bundled somehow. But I guess this
doesn't work out of the box, plug'n'play. And latency, not bandwidth,
may be limiting in this case.
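
A quick check is to measure the round-trip time between two nodes
(the hostname below is a placeholder):

  # average round-trip latency over 100 packets
  ping -c 100 -q node02

Gigabit Ethernet typically shows round trips of 100 us or more, while
InfiniBand is in the single-digit microsecond range.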



> With such a setup you will never get good scaling.
> You need something like an InfiniBand network.


Or check:

Carsten Kutzner, David van der Spoel, Martin Fechner, Erik Lindahl, Udo
W. Schmitt, Bert L. de Groot and Helmut Grubmüller: Speeding up parallel
GROMACS on high-latency networks. J. Comput. Chem. 28, 2075-2084 (2007).


Haven't tried it yet but sounds good!

Regards

Christian

--
Dr. Christian Burisch
Lehrstuhl für Biophysik
PG Theoretische Biophysik
Ruhr-Universität Bochum
D-44780 Bochum
Raum ND04/67
Fon: +49 234 32 28363
Fax: +49 234 32 14626


RE: [gmx-users] No scale up beyond 4 processors for 240000 atom system

2007-10-09 Thread Berk Hess





From: "maria goranovic" <[EMAIL PROTECTED]>
Reply-To: Discussion list for GROMACS users 
To: gmx-users@gromacs.org
Subject: [gmx-users] No scale up beyond 4 processors for 240000 atom system
Date: Tue, 9 Oct 2007 12:09:50 +0200

Hello,

I was wondering what the scale up was with GROMACS 3.3.1 on 8 or 16
processors. Here are my benchmarks:

Hardware: Dell PowerEdge 2950, 2x 2.66 GHz Intel Woodcrest CPUs, 8 GB RAM,
2x Gigabit Ethernet

GROMACS 3.3.1: 240000 atoms, PME, 1.0 nm real-space cutoff

processors   ns/day
1   0.141
2   0.253
4   0.409
8   0.232
16  0.107


Do I need to recompile with other options, or is this the best I can
get? Would a different type of network improve performance? It seems
communication is the rate-limiting step on 8 or 16 cores.

Thank you for the help,

-Maria


So this is 4 cores sharing one ethernet connection?
With such a setup you will never get good scaling.
You need something like an InfiniBand network.

Gromacs 4.0 will give a big improvement, but I don't know how much.

Berk.





[gmx-users] No scale up beyond 4 processors for 240000 atom system

2007-10-09 Thread maria goranovic
Hello,

I was wondering what the scale up was with GROMACS 3.3.1 on 8 or 16
processors. Here are my benchmarks:

Hardware: Dell PowerEdge 2950, 2x 2.66 GHz Intel Woodcrest CPUs, 8 GB RAM, 2x
Gigabit Ethernet

GROMACS 3.3.1: 240000 atoms, PME, 1.0 nm real-space cutoff

processors   ns/day
1   0.141
2   0.253
4   0.409
8   0.232
16  0.107


Do I need to recompile with other options, or is this the best I can
get? Would a different type of network improve performance? It seems
communication is the rate-limiting step on 8 or 16 cores.
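
For what it's worth, dividing each throughput by the single-core
number makes the collapse explicit:

  speedup(n) = (ns/day on n cores) / (ns/day on 1 core)

   2 cores: 0.253 / 0.141 = 1.79  -> 90% parallel efficiency
   4 cores: 0.409 / 0.141 = 2.90  -> 73%
   8 cores: 0.232 / 0.141 = 1.65  -> 21%
  16 cores: 0.107 / 0.141 = 0.76  -> 5%

So beyond 4 cores the runs are not just scaling poorly, they are
absolutely slower than on 4.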

Thank you for the help,

-Maria


-- 
Maria G.
Technical University of Denmark
Copenhagen