On Wed, 29 Aug 2012, Atchley, Scott wrote:
I am benchmarking a sockets based application and I want a sanity check
on IPoIB performance expectations when using connected mode (65520 MTU).
I am using the tuning tips in Documentation/infiniband/ipoib.txt. The
machines have Mellanox QDR cards.
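For reference, the connected-mode setup that ipoib.txt describes comes down to
a couple of commands (ib0 is assumed here to be the IPoIB interface; run as root):

# echo connected > /sys/class/net/ib0/mode
# ifconfig ib0 mtu 65520
# cat /sys/class/net/ib0/mode        (should now report "connected")

The 65520-byte MTU is only available in connected mode; datagram mode is
limited by the IB link MTU (typically 2044 bytes for IPoIB).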
On Sep 5, 2012, at 11:51 AM, Christoph Lameter wrote:
On Wed, 29 Aug 2012, Atchley, Scott wrote:
I am benchmarking a sockets based application and I want a sanity check
on IPoIB performance expectations when using connected mode (65520 MTU).
I am using the tuning tips in Documentation/infiniband/ipoib.txt.
On 08/29/12 21:35, Atchley, Scott wrote:
Hi all,
I am benchmarking a sockets based application and I want a sanity check on
IPoIB performance expectations when using connected mode (65520 MTU).
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than connected mode. Do you want to check?
On 09/05/12 17:51, Christoph Lameter wrote:
PCI-E 2.0 should give you up to about 2.3 GBytes/sec with these
NICs. So there is likely something that the network layer does to you that
limits the bandwidth.
I think those are 8-lane PCI-E 2.0, so that would be 500 MB/sec x 8, i.e. about
4 GB/sec
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
On 08/29/12 21:35, Atchley, Scott wrote:
Hi all,
I am benchmarking a sockets based application and I want a sanity check on
IPoIB performance expectations when using connected mode (65520 MTU).
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than connected mode. Do you want to check?
On Wed, 5 Sep 2012, Atchley, Scott wrote:
# ethtool -k ib0
Offload parameters for ib0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: on
generic-receive-offload: off
There is no
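A quick way to see which offloads the IPoIB driver will actually accept is to
try switching them on and then re-reading the settings; in connected mode the
checksum and segmentation offloads typically cannot be enabled, since the HCA
only implements them for datagram mode:

# ethtool -K ib0 gro on
# ethtool -K ib0 tso on
# ethtool -k ib0        (re-check which settings actually took effect)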
On Sep 5, 2012, at 2:20 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
# ethtool -k ib0
Offload parameters for ib0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: on
generic-receive-offload: off
On 09/05/12 19:59, Atchley, Scott wrote:
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than connected mode. Do you want to check?
I have read that the latency is lower (better) but the bandwidth is lower.
Using
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any
40G Ethernet NICs, but I hope that they will get close to line rate.
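For single-stream numbers like the ones being compared here, netperf is a
reasonable sanity check (192.168.0.2 stands in for the IPoIB address of the
remote node):

$ netperf -H 192.168.0.2 -t TCP_STREAM -l 30     (bulk throughput)
$ netperf -H 192.168.0.2 -t TCP_RR -l 30         (request/response rate)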
On Wed, 5 Sep 2012, Atchley, Scott wrote:
These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of
ibv_devinfo is in my original post.
Hmmm... You are running an old kernel. What version of OFED do you use?
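If the OFED userspace is installed, its version and the IPoIB module actually
in use can be checked with something like:

$ ofed_info -s
$ modinfo ib_ipoib | head -3
$ uname -r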
On Sep 5, 2012, at 3:04 PM, Reeted wrote:
On 09/05/12 19:59, Atchley, Scott wrote:
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
I have read that with newer cards the datagram (unconnected) mode is
faster at IPoIB than connected mode. Do you want to check?
I have read that the latency is lower (better) but the bandwidth is lower.
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any
40G Ethernet NICs, but I hope that they will get close to line rate.
On Sep 5, 2012, at 3:13 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of
ibv_devinfo is in my original post.
Hmmm... You are running an old kernel. What version of OFED do you use?
Hah, if you think my kernel is old, you should see my userland (RHEL5.5). ;-)
On Wed, 5 Sep 2012, Atchley, Scott wrote:
With Myricom 10G NICs, for example, you just need one core and it can do
line rate with 1500 byte MTU. Do you count the stateless offloads as
band-aids? Or something else?
The stateless aids also have certain limitations. It's a grey zone if you
want
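One way to see whether a single core is the limit is to pin the benchmark to
one CPU and watch per-CPU utilization while it runs; the core number and the
address below are placeholders:

$ taskset -c 2 netperf -H 192.168.0.2 -t TCP_STREAM -l 30
$ mpstat -P ALL 1        (in another terminal: look for a core saturated in %soft/%sys)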
On 9/5/2012 3:48 PM, Atchley, Scott wrote:
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any
40G Ethernet NICs, but I hope that they will get close to line rate.
On Wed, 5 Sep 2012, Atchley, Scott wrote:
Hmmm... You are running an old kernel. What version of OFED do you
use?
Hah, if you think my kernel is old, you should see my userland
(RHEL5.5). ;-)
My condolences.
Does the version of OFED impact the kernel modules? I am using the
modules
On Sep 5, 2012, at 4:12 PM, Ezra Kissel wrote:
On 9/5/2012 3:48 PM, Atchley, Scott wrote:
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and
after that more and more band-aid comes into play.
Hi all,
I am benchmarking a sockets based application and I want a sanity check on
IPoIB performance expectations when using connected mode (65520 MTU). I am
using the tuning tips in Documentation/infiniband/ipoib.txt. The machines have
Mellanox QDR cards (see below for the verbose ibv_devinfo output).
= 8323.22 Mbit/sec
1000 iters in 0.13 seconds = 125.98 usec/iter
Is there something that I am not understanding, here? Is there any way
to make single-stream TCP IPoIB performance better than 4.5Gb/s on a DDR
network? Am I just not using the benchmarking tools correctly?
Thanks,
Tom
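One thing worth ruling out for a single TCP stream at these speeds is the
default socket buffer limits; the values below are only illustrative of the
usual large-window tuning, not a recommendation for this particular setup:

# sysctl -w net.core.rmem_max=16777216
# sysctl -w net.core.wmem_max=16777216
# sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
# sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"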
is supposed to improve IPoIB performance, but I'm not seeing as
much performance as I'd like.
Tom
On 04/12/2010 02:19 PM, Dave Olson wrote:
On Mon, 12 Apr 2010, Tom Ammon wrote:
| I'm trying to do some performance benchmarking of IPoIB on a DDR IB
| cluster, and I am having a hard time understanding
On Mon, 12 Apr 2010, Tom Ammon wrote:
| Thanks for the pointer. I thought it was running in connected mode, and
| looking at that variable that you mentioned confirms it:
| [r...@gateway3 ~]# ifconfig ib0
| ib0 Link encap:InfiniBand HWaddr
|
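Besides ifconfig, the IPoIB mode, MTU and the negotiated link rate can be read
directly; ib0 and the DDR expectation (4X width, 5.0 Gbps per lane) are
assumptions about this particular setup:

$ cat /sys/class/net/ib0/mode        (connected or datagram)
$ cat /sys/class/net/ib0/mtu         (65520 in connected mode)
$ ibv_devinfo | grep -E 'active_width|active_speed'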