Hi,
Oops, I meant 72000, which is only a factor of 10.
I guess it might be faster on two nodes then, but probably not 2 times faster.
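As a rough sanity check on the corrected number (my arithmetic):

  72000 atoms / 16 cores = 4500 atoms per core

which is well above the ~400 atoms/core scaling limit mentioned below,
so the decomposition itself is no longer the bottleneck; the ethernet
latency is what limits the speedup.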
If you use PME, you can also experiment with putting all the PME nodes
on one machine and the non-PME nodes on the other,
probably with mdrun -ddorder pp_pme.
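A sketch of such a launch, assuming an MPI-enabled mdrun run over OpenMPI
(the -npme count of 8 is only illustrative, chosen here so the PME ranks
fill exactly one 8-core box; the best PP/PME split needs experimenting):

  mpirun -np 16 -hostfile hosts mdrun_mpi -npme 8 -ddorder pp_pme -s topol.tpr

With -ddorder pp_pme the PP ranks are placed first and the PME ranks last,
so with a hostfile listing machine 1 before machine 2 the PME ranks all end
up together on the second machine.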
Gromacs supports close to MAXINT (about two billion) atoms.
The real question is what system size you are scientifically
interested in.
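(On the gro format question below: as far as I recall the format, the atom
count on the second line of a .gro file is read as a free-format integer,
so it is not limited to 5 digits; only the fixed-width per-atom number
columns wrap around after 99999, and Gromacs does not rely on them when
reading. A sketch:

  Protein in water, t= 0.0     <- title line
  123456                       <- atom count, free format
      1SOL     OW    1   0.230   0.628   0.113
  ...)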
Ethernet will never scale very well for small numbers of atoms per core.
Infiniband will scale very well.
Berk
Date: Wed, 18 Feb 2009 12:56:16 -0800
From: lastexile...@yahoo.de
Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core
xeon machines
To: gmx-users@gromacs.org
Hello,
thank you for your answer. I am just wondering though: how am I supposed to have a
system with more than 99999 atoms, when the gro file has a fixed format allowing
only 5 digits for the atom numbers?
What else should I change in order to get better performance out of my
hardware, if I manage to set up a much bigger system? You say that ethernet
has reached its limits...
I was considering using a supercomputing center in Europe, and as far as I know
their nodes are based on the 9-core Cell processor technology. How can someone
there achieve better performance with gromacs 4 using more nodes? What might be
the limit on such machines?
Thank you once again,
Nikos
--- Berk Hess <g...@hotmail.com> wrote on Wed,
18.2.2009:
From: Berk Hess <g...@hotmail.com>
Subject: RE: [gmx-users] gromacs-4.0.2, parallel performance in two quad core
xeon machines
To: lastexile...@yahoo.de
Date: Wednesday, 18 February 2009, 19:16
Hi,
You cannot scale a system of just 7200 atoms
to 16 cores connected by ethernet.
400 atoms per core is already the scaling limit of Gromacs
on current hardware with the fastest available network.
On ethernet a system 100 times as large might scale well to two nodes.
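The arithmetic behind that limit, for the system in question:

  7200 atoms / 16 cores = 450 atoms per core

i.e. right at the ~400 atoms/core limit even with the fastest interconnect,
so over ethernet there is no realistic chance of a speedup on two nodes.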
Berk
Date: Wed, 18 Feb 2009 09:40:28 -0800
From: lastexile...@yahoo.de
To: gmx-users@gromacs.org
Subject: [gmx-users] gromacs-4.0.2, parallel performance in two quad core
xeon machines
Hello,
we have built a cluster whose nodes contain dual-core Intel(R) Xeon(R) E3110
CPUs @ 3.00GHz. Each node has 16 GB of memory. The switch we use is a Dell
PowerConnect model. Each node has a Gigabit ethernet card.
I tested the performance for a system of 7200 atoms on 4 cores of one node, on 8
cores of one node, and on 16 cores of two nodes. Within one node the performance
improves.
The problem is that moving from one node to two, the performance
decreases dramatically (almost two days for a run that finishes in less than 3
hours!).
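(For illustration, a sketch of the kind of launch lines such a comparison
would use, assuming OpenMPI and an MPI-enabled mdrun binary; file and host
names are made up:

  mpirun -np 8 mdrun_mpi -s bench.tpr -deffnm bench_08
  mpirun -np 16 -hostfile hosts mdrun_mpi -s bench.tpr -deffnm bench_16

where the hostfile lists both machines.)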
I have compiled gromacs with the --enable-mpi option. I have also read previous
archive posts from Mr Kutzner, but from what I saw they focus on errors in
gromacs 4 or on problems that previous versions of gromacs had. I get no errors,
just low performance.
Is there any option I must enable in order to get better performance on more
than one node? Or do you think, from your experience, that the switch we use
might be the problem? Or should we perhaps configure something on the nodes?
Thank you in advance,
Nikos