Erik Lindahl wrote:
Hi,
I read on the Gmx site that the DPPC system is
composed of 121,856 atoms. Looking at the Gmx topology files, it
seems that Gmx performs data decomposition on the input to run in parallel
(in our simulation case, using "-np 12" for execution
on 3 nodes, the data space for every process is about 10,155 atoms).
I think the DPPC system is not big enough for someone
to observe the scalability of parallel execution over
Gigabit Ethernet. I mean, to see the cluster's scalability in our configuration,
we should set up a bigger simulation. Please correct me if I'm mistaken.
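The per-process figure above is just the total atom count divided evenly across the MPI processes; a minimal sketch of that arithmetic (the 121,856-atom total is from the benchmark description):

```python
# Atoms per process under GROMACS particle decomposition:
# roughly an even split of the total across MPI processes.
total_atoms = 121856   # DPPC benchmark system size
n_procs = 12           # "-np 12" across 3 nodes

atoms_per_proc = total_atoms / n_procs
print(round(atoms_per_proc))  # ~10155 atoms per process
```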
Well, you can always try different systems (genconf + edit topology),
but the fact that you see low user CPU usage likely means the nodes are
busy waiting for communication (which probably counts as
kernel/system usage).
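Enlarging the system with genconf can be sketched as below; the file names are placeholders, and the "[ molecules ]" counts in the topology must be multiplied by the same factor by hand:

```shell
# Replicate the simulation box 2x2x1, giving a system ~4x larger.
# conf.gro / big.gro are placeholder file names.
genconf -f conf.gro -nbox 2 2 1 -o big.gro
# Then edit the topology and multiply the [ molecules ] counts by 4.
```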
Remember: compared to the benchmark numbers at www.gromacs.org, your
bandwidth is 1/4 and your latency roughly 4 times higher, since four
cores share a single network connection.
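The sharing argument is simple arithmetic; a sketch, assuming nominal gigabit ethernet (~1000 Mbit/s) and 4 cores per NIC as described above:

```python
# Four MPI processes on one node share a single gigabit NIC,
# so each sees roughly a quarter of the nominal bandwidth
# (and correspondingly worse effective latency under contention).
nic_bandwidth_mbit = 1000   # nominal gigabit ethernet
cores_per_node = 4

per_core_bandwidth = nic_bandwidth_mbit / cores_per_node
print(per_core_bandwidth)  # 250.0 Mbit/s per core
```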
Gromacs 4 should scale better on any hardware (significantly better with
PME), but you'll probably never see great scaling with only 4-way shared
gigabit ethernet. It's available in the head branch of CVS for expert
users/voluntary guinea-pigs, but entirely unsupported until we release it.
In addition, with GROMACS 3.3 you want to use the -shuffle option for grompp.
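A sketch of such a grompp invocation for GROMACS 3.3 (input file names are placeholders; in the 3.x series grompp takes the process count via -np):

```shell
# Pre-process for a 12-process run; -shuffle reorders atoms
# to improve communication locality under particle decomposition.
grompp -f grompp.mdp -c conf.gro -p topol.top -np 12 -shuffle -o topol.tpr
```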
--
David van der Spoel, Ph.D.
Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205. Fax: +4618511755.
[EMAIL PROTECTED] [EMAIL PROTECTED] http://folding.bmc.uu.se
_______________________________________________
gmx-users mailing list gmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to [EMAIL PROTECTED]
Can't post? Read http://www.gromacs.org/mailing_lists/users.php