"Kazem Jahanbakhsh" <[EMAIL PROTECTED]>
To: "Discussion list for GROMACS users"
Sent: Friday, July 27, 2007 1:11 AM
Subject: Re: [gmx-users] Parallel Gromacs Benchmarking with Opteron
Dual-Core & Gigabit Ethernet
Erik Lindahl wrote:
>Built-in network cards are usually of lower quality, so there's
>probably only a single processor controlling both ports, and since
>the card probably only has a single driver, requests might even be
>serialized.
>
My cluster nodes have two on-board Intel i82541PI GbE LAN controllers ...

Kazem Jahanbakhsh wrote:

Dear Erik,

On 7/24/2007 5:16 PM, Erik Lindahl wrote:
> Remember - compared to the benchmark numbers at www.gromacs.org, your
> bandwidth is 1/4 and the latency 4 times higher, since you have four
> cores sharing a single network connection.

I agree with you that sharing the GbE link between 4 cores degrades the
performance. Fortunately, every cluster node has two GbE ports. I want to
know: can I configure lamd in such a manner that every processor on every
node (with two cores) uses one of these ports for its communication
purposes ...
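
One way to use both GbE ports without any per-process configuration in lamd
is Linux channel bonding, which aggregates eth0 and eth1 into one logical
interface underneath MPI. A minimal sketch, assuming a RHEL-style 2.6-kernel
node; the device names, address, and file paths below are placeholders, not
anything taken from this thread:

# /etc/modprobe.conf -- load the bonding driver
alias bond0 bonding
options bond0 mode=balance-rr miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregated interface
DEVICE=bond0
IPADDR=192.168.1.11        # placeholder address for this node
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- first GbE port enslaved to bond0
# (ifcfg-eth1 is identical apart from DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

With the ports bonded, lamd and mdrun simply see one faster link instead of
individual processes having to be pinned to individual NICs; balance-rr
stripes packets across both ports in round-robin fashion.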

Erik Lindahl wrote:

Hi,
I read at the Gmx site that the DPPC system is composed of 121,856 atoms.
I looked at the gmx topology files; it seems that Gmx makes a data
decomposition of the input data to run in parallel (in our simulation case,
using "-np 12" for execution on 3 nodes, the data space for every process
is about 10156 atoms ...
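
That per-process share is simply the total divided over the processes:
121,856 / 12 is roughly 10,155 atoms each, which matches the rough figure
above. For reference, a minimal command sequence for such a 12-process run,
assuming GROMACS 3.3-era tools as used elsewhere in this thread (the file
names and the hostfile are placeholders):

grompp -np 12 -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr   # pre-partition the system over 12 processes
lamboot -v hostfile                                                 # start lamd on the 3 nodes listed in hostfile
mpirun -np 12 mdrun -v -deffnm topol                                # 12 MPI processes, 4 per node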
Hi
First of all, thanks for your reply.
On Sun, 22 Jul 2007 18:34, Erik Lindahl wrote:
>Yes, ethernet is definitely limiting you. Not only because the
>latency is high, but since 4 processors share a single network card
>they will only get 1/4 of the bandwidth each (and gigabit ethernet is
>often ...
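
As a rough sanity check on those numbers (back-of-the-envelope figures, not
from the thread): gigabit Ethernet peaks around 125 MB/s, so four cores
behind one port get at most about 31 MB/s each, and GbE/TCP round-trip
latencies are typically tens of microseconds, far above the dedicated
interconnects behind the www.gromacs.org benchmark numbers.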
Hi,
On Jul 22, 2007, at 6:08 PM, Kazem Jahanbakhsh wrote:
mpirun -np 8 mdrun_d -v -deffnm grompp
First, when you run in double precision you will communicate exactly
twice as much data. Since gigabit ethernet is usually both latency
and bandwidth-limiting, you might get better scaling (a ...
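
A direct way to act on that, mirroring the command quoted above, is to run
the single-precision mdrun instead of mdrun_d, which halves the data
exchanged per step (a sketch; whether single precision suffices depends on
the simulation):

mpirun -np 8 mdrun -v -deffnm grompp   # single-precision binary: same run, half the communication volume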
Dear gmx users,
I have set up a Linux cluster consisting of 8 nodes with the following
specification:
Node HW: two dual-core Opteron 2212 CPUs (2 GHz, 1 MB cache per core), i.e.
4 cores per node, 2 GB RAM, and Gigabit Ethernet NICs.
Network Infrastructure: Gigabit ...
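
For reference, a minimal LAM/MPI launch for this layout, using the lamd
setup discussed above (the node names and the 32-process count are
illustrative placeholders):

# hostfile -- LAM boot schema: one line per node, 4 CPUs each
node01 cpu=4
node02 cpu=4
node03 cpu=4
node04 cpu=4
node05 cpu=4
node06 cpu=4
node07 cpu=4
node08 cpu=4

lamboot -v hostfile                     # start a lamd on each of the 8 nodes
mpirun -np 32 mdrun -v -deffnm topol    # one MPI process per core
lamhalt                                 # shut the LAM daemons down afterwards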