Low cost tip:
Ask your cluster administrator if it is possible to apply channel bonding
to the Gigabit interfaces. You need two network switches for that to
be efficient (a cut-through switch may also help). It can increase
network bandwidth by about 70%. Using Cat6 cables also helps.
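In case it helps, here is a minimal sketch of what such a setup can look
like with the Linux kernel bonding driver (RHEL-style config files; the
interface names, mode, and address are assumptions, not a recipe for your
particular cluster):

  # /etc/modprobe.conf -- load the bonding driver as bond0.
  # mode=balance-rr stripes packets over both NICs (the mode that can
  # raise single-stream bandwidth; it pairs with the one-switch-per-NIC
  # layout described above); miimon=100 polls link state every 100 ms.
  alias bond0 bonding
  options bond0 mode=balance-rr miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (likewise for eth1):
  # enslave both Gigabit NICs to bond0.
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface.
  DEVICE=bond0
  IPADDR=192.168.0.10    # example address
  NETMASK=255.255.255.0
  BOOTPROTO=none
  ONBOOT=yes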
Berk Hess wrote:
Hi all,
So this is 4 cores sharing one ethernet connection?
With such a setup you will never get good scaling.

Perhaps the two Gigabit NICs were bundled somehow, but I guess this
does not work out of the box, plug'n'play. And latency, not bandwidth,
may be the limiting factor in this case.
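Whether latency or bandwidth is the bottleneck is easy to check directly.
A minimal MPI ping-pong sketch, assuming mpi4py is available on the nodes
(run it with two ranks on two different nodes; Gigabit ethernet round
trips are typically on the order of 100 us, versus a few us for
specialized interconnects):

  # ping-pong between rank 0 and rank 1 with a tiny message,
  # so the timing is dominated by latency, not bandwidth
  from mpi4py import MPI
  import time

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()
  reps = 1000
  buf = bytearray(8)  # 8-byte message

  comm.Barrier()
  t0 = time.time()
  for _ in range(reps):
      if rank == 0:
          comm.Send(buf, dest=1)
          comm.Recv(buf, source=1)
      else:
          comm.Recv(buf, source=0)
          comm.Send(buf, dest=0)
  t1 = time.time()

  if rank == 0:
      print("average round-trip: %.1f us" % ((t1 - t0) / reps * 1e6))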
From: "maria goranovic" <[EMAIL PROTECTED]>
Reply-To: Discussion list for GROMACS users
To: gmx-users@gromacs.org
Subject: [gmx-users] No scale up beyond 4 processors for 24 atom system
Date: Tue, 9 Oct 2007 12:09:50 +0200
Hello,
I was wondering what the scale-up was with GROMACS 3.3.1 on 8 or 16
processors. Here are my benchmarks:
Hardware: Dell PowerEdge 2950, 2x 2.66 GHz Intel Woodcrest CPUs, 8 GB RAM,
2x Gigabit Ethernet
GROMACS 3.3.1: 24 atoms, PME, 1.0 nm real-space cutoff
processors    ns/day
1             0.141
2
4
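A quick way to see how far from linear such numbers are is to compute
speedup and parallel efficiency from the ns/day figures. A small sketch
(only the single-processor figure, 0.141 ns/day, is from the post above;
the 2- and 4-processor values below are hypothetical placeholders, since
they were lost from the archive):

  # speedup and parallel efficiency from ns/day throughput
  throughput = {1: 0.141, 2: 0.25, 4: 0.45}  # procs -> ns/day; 2 and 4 assumed

  base = throughput[1]
  for procs in sorted(throughput):
      speedup = throughput[procs] / base   # times faster than one processor
      efficiency = speedup / procs         # 1.0 would be perfect scaling
      print("%2d procs: speedup %.2f, efficiency %3.0f%%"
            % (procs, speedup, efficiency * 100))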