Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 4:12 PM, Ezra Kissel wrote:
> On 9/5/2012 3:48 PM, Atchley, Scott wrote:
>> On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
>>> On Wed, 5 Sep 2012, Atchley, Scott wrote:
>>>> AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play.

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote:
> > Hmmm... You are running an old kernel. What version of OFED do you use?
>
> Hah, if you think my kernel is old, you should see my userland (RHEL5.5). ;-)

My condolences.

> Does the version of OFED impact the kernel modules? I am using the mod
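A minimal way to answer the "which OFED and which modules" question, assuming an OFED install that ships the ofed_info script (on a distro-stock stack only the modinfo and uname lines apply):

  # ofed_info -s                   (installed OFED release string)
  # modinfo ib_ipoib | head -n 5   (version of the IPoIB module on disk)
  # uname -r                       (running kernel the modules were built for)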

Re: IPoIB performance

2012-09-05 Thread Ezra Kissel
On 9/5/2012 3:48 PM, Atchley, Scott wrote:
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
On Wed, 5 Sep 2012, Atchley, Scott wrote:
AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play.
Hmm, many 10G Ethernet NICs can reach line rate. I

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote:
> With Myricom 10G NICs, for example, you just need one core and it can do line rate with 1500 byte MTU. Do you count the stateless offloads as band-aids? Or something else?

The stateless aids also have certain limitations. It's a grey zone if you want
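One way to check the "one core can do line rate" claim on a given NIC is to have netperf report CPU utilization along with throughput (a sketch; the host address is illustrative):

  # netperf -H 10.0.0.2 -t TCP_STREAM -l 30 -c -C

The -c/-C flags add local and remote CPU utilization and service demand (usec per KB) to the report, which shows how much of a core the transfer actually consumes.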

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 3:13 PM, Christoph Lameter wrote:
> On Wed, 5 Sep 2012, Atchley, Scott wrote:
>> These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of ibv_devinfo is in my original post.
>
> Hmmm... You are running an old kernel. What version of OFED do you use?

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 3:06 PM, Christoph Lameter wrote:
> On Wed, 5 Sep 2012, Atchley, Scott wrote:
>>> AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play.
>>
>> Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any 40G Ethernet NICs, but I hope that they will get close to line rate.

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 3:04 PM, Reeted wrote:
> On 09/05/12 19:59, Atchley, Scott wrote:
>> On Sep 5, 2012, at 1:50 PM, Reeted wrote:
>>> I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than connected mode. Do you want to check?
>>
>> I have read that the latency is lower (better) but the bandwidth is lower.
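For anyone who wants to run that comparison, the mode is switched per interface through sysfs as described in Documentation/infiniband/ipoib.txt (a sketch, assuming ib0; in datagram mode the IP MTU is capped by the IB link MTU minus the 4-byte IPoIB header, i.e. 2044 on a 2048-byte link):

  # cat /sys/class/net/ib0/mode      (prints datagram or connected)
  # echo datagram > /sys/class/net/ib0/mode
  # ip link set ib0 mtu 2044
  # echo connected > /sys/class/net/ib0/mode
  # ip link set ib0 mtu 65520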

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote:
> These are Mellanox QDR HCAs (board id is MT_0D90110009). The full output of ibv_devinfo is in my original post.

Hmmm... You are running an old kernel. What version of OFED do you use?
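For completeness, the board id, firmware, and negotiated link rate can be read directly on each host (a sketch, assuming the standard verbs and infiniband-diags utilities are installed):

  # ibv_devinfo      (fw_ver, board_id, port state and active MTU)
  # ibstat           (per-port Rate line, e.g. 40 for a QDR link)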

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote:
> > AFAICT the network stack is useful up to 1Gbps and after that more and more band-aid comes into play.
>
> Hmm, many 10G Ethernet NICs can reach line rate. I have not yet tested any 40G Ethernet NICs, but I hope that they will get close to line rate.

Re: IPoIB performance

2012-09-05 Thread Reeted
On 09/05/12 19:59, Atchley, Scott wrote:
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than connected mode. Do you want to check?
I have read that the latency is lower (better) but the bandwidth is lower. Using
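A quick way to quantify the latency-versus-bandwidth trade-off is to run the same test once in each mode with qperf (a sketch; the peer address is illustrative and qperf must be installed on both hosts):

  server# qperf
  client# qperf 10.0.0.2 tcp_lat tcp_bw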

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 2:20 PM, Christoph Lameter wrote:
> On Wed, 5 Sep 2012, Atchley, Scott wrote:
>> # ethtool -k ib0
>> Offload parameters for ib0:
>> rx-checksumming: off
>> tx-checksumming: off
>> scatter-gather: off
>> tcp segmentation offload: off
>> udp fragmentation offload: off
>> generic segmentation offload: on

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 5 Sep 2012, Atchley, Scott wrote:
> # ethtool -k ib0
> Offload parameters for ib0:
> rx-checksumming: off
> tx-checksumming: off
> scatter-gather: off
> tcp segmentation offload: off
> udp fragmentation offload: off
> generic segmentation offload: on
> generic-receive-offload: off
>
> Ther
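GSO and GRO are software features and can be toggled from userspace; the hardware offloads (checksum, TSO) only appear on IPoIB when the HCA driver supports them, and then typically only in datagram mode. A sketch of turning the software ones on, assuming ib0:

  # ethtool -K ib0 gso on
  # ethtool -K ib0 gro on
  # ethtool -k ib0      (re-check what actually took effect)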

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 1:50 PM, Reeted wrote:
> On 08/29/12 21:35, Atchley, Scott wrote:
>> Hi all,
>>
>> I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU).
>
> I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than connected mode.

Re: IPoIB performance

2012-09-05 Thread Reeted
On 09/05/12 17:51, Christoph Lameter wrote:
PCI-E on PCI 2.0 should give you up to about 2.3 Gbytes/sec with these nics. So there is likely something that the network layer does that limits the bandwidth.
I think those are 8 lane PCI-e 2.0, so that would be 500 MB/sec x 8, that's 4 GBytes/sec
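For reference, the back-of-the-envelope arithmetic behind those numbers (theoretical, before TLP and DMA protocol overhead):

  5 GT/s per lane x 8b/10b encoding = 4 Gbit/s = 500 MB/s of payload per lane
  500 MB/s x 8 lanes                = 4 GB/s theoretical for a PCIe 2.0 x8 slot

Real transfers land somewhat below that once PCIe packet headers and completion traffic are accounted for.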

Re: IPoIB performance

2012-09-05 Thread Reeted
On 08/29/12 21:35, Atchley, Scott wrote:
Hi all,

I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU).

I have read that with newer cards the datagram (unconnected) mode is faster at IPoIB than connected mode.

Re: IPoIB performance

2012-09-05 Thread Atchley, Scott
On Sep 5, 2012, at 11:51 AM, Christoph Lameter wrote:
> On Wed, 29 Aug 2012, Atchley, Scott wrote:
>> I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU). I am using the tuning tips in Documentation/infiniband/ipoib.txt.

Re: IPoIB performance

2012-09-05 Thread Christoph Lameter
On Wed, 29 Aug 2012, Atchley, Scott wrote:
> I am benchmarking a sockets based application and I want a sanity check on IPoIB performance expectations when using connected mode (65520 MTU). I am using the tuning tips in Documentation/infiniband/ipoib.txt. The machines have Mellanox QDR cards
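The tuning tips referred to are presumably the ipoib.txt note about enlarging the TCP socket buffers when running connected mode with a large MTU. A sketch of the usual knobs (the sysctl keys are standard; the values below are illustrative, not quoted from the thread or from ipoib.txt):

  # sysctl -w net.core.rmem_max=4194304
  # sysctl -w net.core.wmem_max=4194304
  # sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
  # sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"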

Re: IPoIB performance benchmarking

2010-04-12 Thread Dave Olson
On Mon, 12 Apr 2010, Tom Ammon wrote:
| Thanks for the pointer. I thought it was running in connected mode, and looking at that variable that you mentioned confirms it:
| [r...@gateway3 ~]# ifconfig ib0
| ib0   Link encap:InfiniBand  HWaddr 80:00:00:02:FE:80:00:00:00:00:00:00:00:00:00:0

Re: IPoIB performance benchmarking

2010-04-12 Thread Tom Ammon
Dave,

Thanks for the pointer. I thought it was running in connected mode, and looking at that variable that you mentioned confirms it:

[r...@gateway3 ~]# cat /sys/class/net/ib0/mode
connected

And the IP MTU shows up as:

[r...@gateway3 ~]# ifconfig ib0
ib0   Link encap:InfiniBand  HWaddr

Re: IPoIB performance benchmarking

2010-04-12 Thread Dave Olson
On Mon, 12 Apr 2010, Tom Ammon wrote:
| I'm trying to do some performance benchmarking of IPoIB on a DDR IB cluster, and I am having a hard time understanding what I am seeing.
|
| When I do a simple netperf, I get results like these:
|
| [r...@gateway3 ~]# netperf -H 192.168.23.252
| TCP STREAM
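When comparing modes or MTUs it helps to pin down the test length, message size, and socket buffers rather than relying on netperf's defaults. A sketch, reusing the address from the post (the sizes are illustrative):

  # netperf -H 192.168.23.252 -t TCP_STREAM -l 30 -- -m 65536 -s 1048576 -S 1048576

Here -l sets the test duration in seconds, and the test-specific -m, -s, and -S options set the send message size and the local and remote socket buffer sizes.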