On Mon, 6 Dec 2010 12:08:43 -0000
"Richard Croucher" <rich...@informatix-sol.com> wrote:

> Unfortunately, the 4036E only has two 10G Ethernet ports, which will 
> ultimately limit the throughput.

  I'll need to look into this option.

> 
> The Mellanox BridgeX looks like a better hardware solution with 12x 10GE ports, 
> but when I tested it they could only provide vNIC functionality and would not 
> commit to adding an IPoIB gateway to their roadmap.

  Right, we did some evaluation on it and this was really a show stopper.

  Thanks,

  Sébastien.

> 
> Qlogic also offers the 12400 Gateway, which has 6x 10GE ports.  However, like 
> the Mellanox, I understand they only provide host vNIC support.
> 
> I'll leave it to representatives from Voltaire, Mellanox and Qlogic to update 
> us, particularly on support for an InfiniBand-to-Ethernet gateway for RoCEE.  
> This is needed so that RDMA sessions can be run between InfiniBand- and 
> RoCEE-connected hosts.  I don't believe this will work over any of today's 
> available products.
> 
> Richard
> 
> -----Original Message-----
> From: sebastien dugue [mailto:sebastien.du...@bull.net] 
> Sent: 06 December 2010 11:40
> To: Richard Croucher
> Cc: 'OF EWG'; 'linux-rdma'
> Subject: Re: [ewg] IPoIB to Ethernet routing performance
> 
> On Mon, 6 Dec 2010 10:49:58 -0000
> "Richard Croucher" <rich...@informatix-sol.com> wrote:
> 
> > You may be able to improve things by doing some OS tuning.
> 
> >   Right, I tried a few things concerning TCP/IP stack tuning, but nothing
> > really came out of it.
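For context, TCP/IP stack tuning at this level usually means raising the kernel socket-buffer and backlog limits. A sketch of typical knobs (illustrative values, not the exact ones used in the tests discussed here; apply as root):

```shell
# Illustrative Linux TCP tuning knobs; values are examples only.
sysctl -w net.core.rmem_max=16777216                # max socket receive buffer (bytes)
sysctl -w net.core.wmem_max=16777216                # max socket send buffer (bytes)
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"   # TCP receive buffer: min/default/max
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"   # TCP send buffer: min/default/max
sysctl -w net.core.netdev_max_backlog=250000        # ingress queue length per CPU
```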
> 
> >  All this data should stay in kernel mode, but there are lots of bottlenecks
> > in the TCP/IP stack that limit scalability.
> 
>   That may be my problem in fact.
> 
> >  The IPoIB code has not been optimized for this use case.
> 
> >   I don't think IPoIB is the bottleneck in this case, as I managed to feed
> > 2 IPoIB streams between the client and the router, yielding about 40 Gbits/s
> > of bandwidth.
> 
> > 
> > You don't mention what Server, kernel and OFED distro you are running.
> 
> >   Right, sorry. The router is one of our 4-socket Nehalem-EX boxes with 2 IOHs,
> > running OFED 1.5.2.
> 
> > 
> > The best performance is achieved using InfiniBand/Ethernet hardware gateways.
> > Most of these provide virtual Ethernet NICs to InfiniBand hosts, but the
> > Voltaire 4036E does provide an IPoIB-to-Ethernet gateway capability.  This is
> > FPGA based, so it provides much higher performance than you will achieve
> > using a standard server solution.
> 
> >   That may indeed be a solution. Are there any real-world figures out there
> > concerning the 4036E's performance?
> 
>   Thanks Richard,
> 
>   Sébastien.
> 
> 
> > 
> > -----Original Message-----
> > From: ewg-boun...@lists.openfabrics.org 
> > [mailto:ewg-boun...@lists.openfabrics.org] On Behalf Of sebastien dugue
> > Sent: 06 December 2010 10:25
> > To: OF EWG
> > Cc: linux-rdma
> > Subject: [ewg] IPoIB to Ethernet routing performance
> > 
> > 
> >   Hi,
> > 
> >   I know this might be off topic, but somebody may have already run into
> > the same problem.
> > 
> >   I'm trying to use a server as a router between an IB fabric and an
> > Ethernet network.
> > 
> >   The router is fitted with one ConnectX-2 QDR HCA and one dual-port
> > Myricom 10G Ethernet adapter.
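For reference, the router in such a setup does plain IPv4 forwarding between the two interfaces, presumably configured along these lines (a sketch; the interface names ib0/eth2 and the addresses are placeholders, not the actual configuration):

```shell
# Enable IPv4 forwarding between the IB and Ethernet sides of the router.
# ib0/eth2 and the subnets are placeholders for this particular setup.
sysctl -w net.ipv4.ip_forward=1
ip addr add 192.168.0.1/24 dev ib0     # example IPoIB subnet (client side)
ip addr add 192.168.1.1/24 dev eth2    # example 10G Ethernet subnet (server side)
```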
> > 
> >   I did some bandwidth measurements using iperf with the following setup:
> > 
> >   +---------+               +---------+               +---------+
> >   |         |               |         |   10G Eth     |         |
> >   |         |    QDR IB     |         +---------------+         |
> >   | client  +---------------+  Router |   10G Eth     |  Server |
> >   |         |               |         +---------------+         |
> >   |         |               |         |               |         |
> >   +---------+               +---------+               +---------+
> > 
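The measurements were taken with iperf; the runs were presumably of the usual form (a sketch with placeholder addresses and options, not the exact command lines used):

```shell
# On the server (Ethernet side): start an iperf TCP sink.
iperf -s

# On the client (IB side): stream TCP through the router to the server's
# Ethernet address. -t 60 runs for 60 s; -P 2 covers the two-stream case.
iperf -c 192.168.1.2 -t 60 -P 2
```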
> >   
> >   However, the routing performance is far from what I would have expected.
> > 
> >   Here are some numbers:
> > 
> >   - 1 IPoIB stream between client and router: 20 Gbits/sec
> > 
> >     Looks OK.
> > 
> >   - 2 Ethernet streams between router and server: 19.5 Gbits/sec
> > 
> >     Looks OK.
> > 
> >   - routing 1 IPoIB stream to 1 Ethernet stream from client to server: 9.8 
> > Gbits/sec
> > 
> >     We manage to saturate the Ethernet link, looks good so far.
> > 
> >   - routing 2 IPoIB streams to 2 Ethernet streams from client to server: 
> > 9.3 Gbits/sec
> > 
> >     Argh, even less than when routing a single stream. I would have expected
> >     a bit more than this.
> > 
> > 
> >   Has anybody ever tried routing between an IB fabric and an Ethernet
> > network and achieved sensible bandwidth figures?
> > 
> >   Are there any known limitations in what I'm trying to achieve?
> > 
> > 
> >   Thanks,
> > 
> >   Sébastien.
> > 
> > 
> > 
> > 
> > _______________________________________________
> > ewg mailing list
> > e...@lists.openfabrics.org
> > http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg
> > 
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
