IPoIB is far easier to use and does not carry the additional management
burden of vNICs.

With vNICs you have to manage the MAC address mapping to the Ethernet
gateway port.  In some situations, such as when multiple gateways are
used for resiliency, this can amount to a lot of separate vNICs on each
server to manage.  In a small configuration I had, we ended up with six
vNICs per server to manage; on a large configuration this additional
management would be a big burden.

My experience with IPoIB has always been very positive.  All my existing
socket programs have worked, even some esoteric ioctls I use for
multicast and buffer management.
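For instance, a plain multicast join like the sketch below runs
unchanged over an IPoIB interface; the group and interface addresses
are only illustrative, not taken from any real setup:

    /* Minimal sketch: join an IP multicast group over an IPoIB
     * interface, exactly as you would over Ethernet. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct ip_mreq mreq;

        memset(&mreq, 0, sizeof(mreq));
        /* Illustrative group address. */
        inet_pton(AF_INET, "239.1.1.1", &mreq.imr_multiaddr);
        /* Local address of the IPoIB interface (e.g. ib0). */
        inet_pton(AF_INET, "10.0.0.1", &mreq.imr_interface);

        if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0) {
            perror("IP_ADD_MEMBERSHIP");
            return 1;
        }
        printf("joined group\n");
        return 0;
    }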
Performance could always be better, but in my experience it's not great
for vNICs either; latency in particular was very disappointing when I
tested.
If you want high performance you have to avoid TCP/IP.
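In practice that means programming to native verbs rather than sockets.
As a very minimal sketch (just the starting point; a real program would
go on to create a PD, CQ and QPs), opening the first HCA with libibverbs
looks like:

    /* Minimal libibverbs sketch: enumerate and open the first HCA.
     * Build with -libverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        struct ibv_context *ctx;

        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }
        ctx = ibv_open_device(list[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open device\n");
            return 1;
        }
        printf("opened %s\n", ibv_get_device_name(list[0]));
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }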

-----Original Message-----
From: Jabe [mailto:jabe.chap...@shiftmail.org] 
Sent: 27 December 2010 11:51
To: richard.crouc...@informatix-sol.com
Cc: Richard Croucher; 'Ali Ayoub'; 'Christoph Lameter'; 'linux-rdma';
'sebastien dugue'; 'OF EWG'
Subject: Re: [ewg] IPoIB to Ethernet routing performance

On 12/26/2010 11:57 AM, Richard Croucher wrote:
> The vNIC driver only works when you have Ethernet/InfiniBand hardware
> gateways in your environment.  It is useful when you have external
> hosts to communicate with which do not have direct InfiniBand
> connectivity.
> IPoIB is still heavily used in these environments to provide TCP/IP
> connectivity within the InfiniBand fabric.
> The primary use case for vNICs is probably virtualization servers, so
> that individual guests can be presented with a virtual Ethernet NIC
> and do not need to load any InfiniBand drivers.  Only the hypervisor
> needs to have the InfiniBand software stack loaded.
> I've also applied vNICs in the Financial Services arena, for
> connectivity to external TCP/IP services, but there the IPoIB gateway
> function is arguably more useful.
>
> The whole vNIC arena is complicated by different, incompatible
> implementations from Qlogic and Mellanox.
>
> Richard


Richard, with your explanation I understand why vNIC / EoIB is used in 
the case you cite, but I don't understand why it is NOT used in the 
other cases (like Ali says).

I can *guess* it's probably because with a virtual Ethernet fabric you
have to run the whole IP stack in software, probably without even the
stateless offloads (so it would be a performance reason).  Is that the
reason?

Thank you
