Re: IPoIB performance

2012-09-05 Thread Reeted

On 08/29/12 21:35, Atchley, Scott wrote:

Hi all,

I am benchmarking a sockets-based application and I want a sanity check on 
IPoIB performance expectations when using connected mode (65520 MTU).


I have read that with newer cards datagram (unconnected) mode is faster for 
IPoIB than connected mode. Do you want to check?


What benchmark program are you using?
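
For comparison, I usually sanity-check raw sockets throughput with a trivial
TCP stream test along the lines of the sketch below before trusting a larger
application's numbers (netperf or iperf are the usual tools; the port and
buffer size here are arbitrary placeholders):

# Minimal TCP stream throughput probe (sketch only).  Run with no
# arguments on the receiving node, then "probe.py <server-ip>" on the
# sending node, using the IPoIB interface's IP address.
import socket, sys, time

PORT = 5001          # arbitrary placeholder port
BUF  = 1 << 20       # 1 MiB per send/recv call
SECS = 10            # length of the measurement run

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    s.listen(1)
    conn, _ = s.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(BUF)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print("%.2f Gb/s" % (total * 8 / secs / 1e9))

def client(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    payload = b"x" * BUF
    deadline = time.time() + SECS
    while time.time() < deadline:
        s.sendall(payload)
    s.close()

if __name__ == "__main__":
    server() if len(sys.argv) == 1 else client(sys.argv[1])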


Re: IPoIB performance

2012-09-05 Thread Reeted

On 09/05/12 17:51, Christoph Lameter wrote:

PCIe 2.0 should give you up to about 2.3 GBytes/sec with these NICs. So it is
likely something that the network layer does that limits the bandwidth.


I think those are 8-lane PCIe 2.0, so that would be 500 MB/sec x 8 = 4 
GBytes/sec. Or do you really mean there is almost 50% overhead?
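
Just to spell out the arithmetic behind that figure (assuming an x8 PCIe 2.0
slot with 8b/10b line encoding; the ~2.3 GBytes/sec above would then be what
is left after protocol and DMA overhead):

# PCIe 2.0 x8 back-of-the-envelope bandwidth, per direction (sketch).
gt_per_lane = 5.0           # PCIe 2.0 signalling rate: 5.0 GT/s per lane
encoding    = 8.0 / 10.0    # 8b/10b line encoding
lanes       = 8
raw_gbytes  = gt_per_lane * encoding * lanes / 8    # bits -> bytes
print("raw link bandwidth: %.1f GBytes/sec" % raw_gbytes)   # 4.0
# TLP headers, flow control and completion traffic eat into this, so the
# usable payload rate is noticeably below 4 GBytes/sec in practice.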



Re: IPoIB performance

2012-09-05 Thread Reeted

On 09/05/12 19:59, Atchley, Scott wrote:

On Sep 5, 2012, at 1:50 PM, Reeted wrote:



I have read that with newer cards datagram (unconnected) mode is faster for
IPoIB than connected mode. Do you want to check?

I have read that the latency is lower (better) but that the bandwidth is also lower.

Using datagram mode limits the MTU to 2044 and the throughput to ~3 Gb/s on 
these machines/cards. Connected mode at the same MTU performs roughly the same. 
The win in connected mode comes with larger MTUs. With a 9000 MTU, I see ~6 
Gb/s. Pushing the MTU to 65520 (the maximum for IPoIB), I can get ~13 Gb/s.
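
(For anyone wanting to reproduce those runs: both the mode switch and the MTU
are exposed through sysfs. A rough sketch, assuming the interface is called
ib0 and that it is run as root:)

# Switch an IPoIB interface between datagram and connected mode and
# raise the MTU (sketch; "ib0" is an assumed interface name, needs root).
IFACE = "ib0"

def get(attr):
    with open("/sys/class/net/%s/%s" % (IFACE, attr)) as f:
        return f.read().strip()

def put(attr, value):
    with open("/sys/class/net/%s/%s" % (IFACE, attr), "w") as f:
        f.write(value + "\n")

print("before: mode=%s mtu=%s" % (get("mode"), get("mtu")))
put("mode", "connected")   # or "datagram" (MTU is then capped at 2044)
put("mtu", "65520")        # 65520 is the connected-mode maximum
print("after:  mode=%s mtu=%s" % (get("mode"), get("mtu")))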



Have a look at an old thread on this ML by Sebastien Dugue, "IPoIB to 
Ethernet routing performance".
He had numbers much higher than yours on similar hardware, and was advised to 
use datagram mode to get hardware offloading and even higher speeds.
Keep me informed if you can fix this; I am interested but can't test 
InfiniBand myself right now.
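
If you want to see whether datagram mode is actually picking up the hardware
offloads (checksum, TSO/GSO) on your setup, the feature flags are visible via
ethtool; a rough sketch, again assuming the interface is ib0:

# Print the checksum/segmentation offload flags of the IPoIB interface
# (sketch; needs the ethtool binary and Python 3.7+ for capture_output).
import subprocess

out = subprocess.run(["ethtool", "-k", "ib0"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "checksum" in line or "segmentation" in line:
        print(line.strip())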



Re: [ewg] IPoIB to Ethernet routing performance

2010-12-28 Thread Reeted

On 12/28/2010 01:06 AM, Ali Ayoub wrote:

EoIB's primary use is not virtualization, although it can support it in
better ways than other ULPs.
FYI, today running a fully/para-virtualized driver in the guest OS is also
needed for IPoIB.
Only when a platform-virtualization solution is available will the guest OS
run the IB stack (for any ULP).


You and Richard seem to have good experience with InfiniBand in 
virtualized environments. May I ask one thing?
We were thinking about buying some Mellanox ConnectX-2 cards for use with 
SR-IOV (hardware virtualization for PCI bypass, supposedly supported by the 
ConnectX-2) under KVM (which also supposedly supports SR-IOV and PCI bypass).

Do you know whether this will work, in KVM or other hypervisors?
I asked on the KVM mailing list, but they have not tried this card (which is 
the only SR-IOV-capable card among the InfiniBand ones, so they have not 
tried InfiniBand at all).
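
(If it helps: on a sufficiently recent kernel you can at least check from
sysfs whether an adapter exposes SR-IOV virtual functions at all, before
involving the hypervisor. A rough sketch; the sriov_totalvfs attribute is
only present on newer kernels:)

# List PCI devices that advertise SR-IOV capability (sketch).
import glob, os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    path = os.path.join(dev, "sriov_totalvfs")
    if os.path.exists(path):
        with open(path) as f:
            vfs = f.read().strip()
        print("%s: up to %s VFs" % (os.path.basename(dev), vfs))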

We would be interested in both native InfiniBand and IPoIB support.
Thank you.