Looking VERY briefly at the GAMMA API here:
http://www.disi.unige.it/project/gamma/gamma_api.html
It looks like one could create a GAMMA BTL with a minimal amount of
trouble.
I would encourage your group to do this!
There is quite a bit of information available regarding the BTL interface.
Durga
I guess we have strayed a bit from the original post. My personal opinion is
that a number of codes can run in HPC-like mode over Gigabit Ethernet, not
just the trivially parallelizable ones. The hardware components are one key:
PCI-X and a low-hardware-latency NIC (the Intel PRO/1000 is 6.6 microsecs vs ...)
Very interesting, indeed! Message passing running over raw Ethernet using
cheap COTS PCs is exactly the need of the hour for people like me who have
very shallow pockets. Great work! What would make this effort *really* cool
is a one-to-one mapping of APIs from the MPI domain to the GAMMA domain,
so that existing MPI codes could be ported with little effort.
On 10/23/06, Tony Ladd wrote:
A couple of comments regarding issues raised by this thread.
1) In my opinion NetPIPE is not such a great network benchmarking tool for
HPC applications. It measures timings based on the completion of the send
call on the transmitter, not the completion of the receive. Thus, if there is
a delay in the receive, the bandwidth NetPIPE reports can be higher than what
the receiver actually sees.
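One way to make the receive completion part of the measurement is to time a
full round trip, as in this minimal sketch (illustrative only, not NetPIPE's
actual code; the message size and repetition count are arbitrary):

/* pingpong.c - time a round trip so the clock stops only after the
 * receiver has the data; run with: mpirun -n 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NBYTES (1 << 20)   /* 1 MiB message, arbitrary */
#define REPS   100

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = malloc(NBYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, NBYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NBYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, NBYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, NBYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t = MPI_Wtime() - t0;
    if (rank == 0)   /* two messages cross the wire per iteration */
        printf("%.1f MB/s\n", 2.0 * REPS * NBYTES / t / 1e6);
    free(buf);
    MPI_Finalize();
    return 0;
}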
We manage to get 900+ Mbps on a Broadcom 570x chip. We run jumbo frames and
use a Force10 switch. This is also with Open MPI 1.0.2 (we have not tried
rebuilding NetPIPE with 1.1.2). We also see great results with NetPIPE (MPI)
on InfiniBand. Great work so far, guys.
What I think is happening is this:
The initial transfer rate you are seeing is the burst rate; after averaging
over a long time, your sustained transfer rate emerges. Like George said, you
should use a proven tool to measure your bandwidth. We use netperf, freeware
from HP.
Hi George,
Yes, it is duplex BW. The BW benchmark is a simple timing call around an
MPI_Alltoall call. Then you estimate the network traffic from the sending
buffer size and get the rate.
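In code, that kind of measurement might look like the following sketch (not
our actual benchmark; the per-peer buffer size is arbitrary). Note that in an
all-to-all every node sends and receives at the same time, which is why the
derived figure is duplex bandwidth:

/* alltoall_bw.c - time MPI_Alltoall and estimate per-node traffic. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int chunk = 1 << 20;          /* bytes per peer, arbitrary */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc((size_t)chunk * size);
    char *recvbuf = malloc((size_t)chunk * size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_Alltoall(sendbuf, chunk, MPI_BYTE,
                 recvbuf, chunk, MPI_BYTE, MPI_COMM_WORLD);
    double t = MPI_Wtime() - t0;

    if (rank == 0)   /* traffic leaving each node: chunk * (size - 1) */
        printf("~%.1f MB/s per node\n",
               (double)chunk * (size - 1) / t / 1e6);
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}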
Regards,
Jayanta
On Mon, 23 Oct 2006, George Bosilca wrote:
I don't know what your bandwidth tester looks like, but 140 MB/s is way too
much for a single GigE card, unless it's bidirectional bandwidth. Usually, on
a new-generation GigE card (Broadcom Corporation NetXtreme BCM5751 Gigabit
Ethernet PCI Express) with an AMD processor ...
Hello,
On 10/23/06, Jayanta Roy wrote:
Hi,
Some time back I posted doubts about using dual-gigabit support fully. See, I
get a ~140 MB/s full-duplex transfer rate in each of the following runs.
That's impressive, since it's _more_ than the theoretical limit of a single
GigE link (1 Gbit/s is only 125 MB/s in each direction).
Did you try channel bonding? If your OS is Linux, there are plenty of howtos
on the internet that will tell you how to set it up.
However, your CPU might be the bottleneck in this case. How much CPU
horsepower is left at 140 MB/s? If the CPU *is* the bottleneck, changing your
network setup will not help.
Hi,
Some time back I posted doubts about using dual-gigabit support fully. See, I
get a ~140 MB/s full-duplex transfer rate in each of the following runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
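To drive both links in a single run, Open MPI also accepts a comma-separated
interface list (this assumes both NICs are routable between all the hosts):
mpirun --mca btl_tcp_if_include eth0,eth1 -n 4 -bynode -hostfile host a.out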
Remember that the kernel uses a fair amount of CPU time in the case of TCP,
and the network portion of the transfer (where you would gain from
parallelism) is relatively small. One sees most of the advantages of network
parallelism, with respect to performance, when the on-host network processing
is cheap compared to the time spent on the wire.
Hi,
Between two nodes I have dual Gigabit Ethernet full-duplex links. I was
benchmarking with non-blocking MPI send and receive, but I am only getting a
speed corresponding to one Gigabit Ethernet full-duplex link (< 2 Gbps). I
have checked with ifconfig that this transfer is using both interfaces.
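The kind of non-blocking exchange described above might look like this sketch
(the message size is an assumption; with both directions in flight at once,
it is the duplex rate that gets measured):

/* nb_exchange.c - simultaneous non-blocking send and receive between
 * a pair of ranks; run with: mpirun -n 2 ./nb_exchange */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NBYTES (64 << 20)   /* 64 MiB, arbitrary */

int main(int argc, char **argv)
{
    int rank;
    MPI_Request req[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = rank ^ 1;                /* pairs ranks 0 <-> 1 */
    char *sbuf = malloc(NBYTES);
    char *rbuf = malloc(NBYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    MPI_Irecv(rbuf, NBYTES, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(sbuf, NBYTES, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    double t = MPI_Wtime() - t0;

    if (rank == 0)   /* NBYTES moved in each direction */
        printf("%.2f Gbps duplex\n", 2.0 * NBYTES * 8 / t / 1e9);
    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}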