Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread chetan bhasin
Sounds good. Thanks Ben for the response!



On Tue, Dec 10, 2019 at 5:00 PM Benoit Ganne (bganne) 
wrote:

> Hi,
>
> I have used the below CLIs to create rdma interfaces over Mellanox. Can you
> suggest what set of CLIs I should use so that packets from rdma will also
> have the mbuf fields set properly, so that we can write directly to KNI?
>
> You do not have to. Just create a KNI interface in VPP with the DPDK
> plugin and switch packets between the KNI and rdma interfaces.
> VPP never uses DPDK mbufs internally; when you get packets from/to DPDK in
> VPP there is a buffer metadata translation anyway. From our PoV this is no
> different from switching packets between a vhost interface and a DPDK
> hardware interface (e.g. a VIC).
>
> Best
> ben
>


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi,

> I have used the below CLIs to create rdma interfaces over Mellanox. Can you
> suggest what set of CLIs I should use so that packets from rdma will also
> have the mbuf fields set properly, so that we can write directly to KNI?

You do not have to. Just create a KNI interface in VPP with the DPDK plugin and
switch packets between the KNI and rdma interfaces.
VPP never uses DPDK mbufs internally; when you get packets from/to DPDK in VPP
there is a buffer metadata translation anyway. From our PoV this is no
different from switching packets between a vhost interface and a DPDK hardware
interface (e.g. a VIC).

Best
ben 
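
For illustration, a minimal VPP CLI sketch of the suggested setup, switching
packets between the rdma interface and a KNI interface, could look like the
lines below. The name kni0 is purely hypothetical; the actual interface name
depends on how the KNI interface is created and exposed in VPP.

set interface state device_9/0/0 up
set interface state kni0 up
set interface l2 xconnect device_9/0/0 kni0
set interface l2 xconnect kni0 device_9/0/0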


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-10 Thread chetan bhasin
Hi Damjan,

I have used the below CLIs to create rdma interfaces over Mellanox. Can you
suggest what set of CLIs I should use so that packets from rdma will also
have the mbuf fields set properly, so that we can write directly to KNI?

create interface rdma host-if ens2f0 name device_9/0/0
create interface rdma host-if ens2f1 name device_9/0/1

Thanks,
Chetan Bhasin

On Fri, Dec 6, 2019 at 9:32 PM Damjan Marion via Lists.Fd.Io  wrote:

>
>
> > On 6 Dec 2019, at 07:16, Prashant Upadhyaya 
> wrote:
> >
> > Hi,
> >
> > I use VPP with the DPDK driver for I/O with the NIC.
> > For high-speed switching of packets to and from the kernel, I use DPDK KNI
> > (the kernel module and user-space APIs provided by DPDK).
> > This works well because the vlib buffer is backed by a DPDK mbuf
> > (KNI uses DPDK mbufs).
> >
> > Now, if I choose to use a native VPP driver for I/O with the NIC, is
> > there a native equivalent in VPP to replace KNI as well? The native
> > equivalent should not lose out on performance compared to KNI, so I
> > believe the tap interface can be ruled out here.
> >
> > If I keep using DPDK KNI and a native non-DPDK VPP driver, then I fear I
> > would have to do a data copy between the vlib buffer and an mbuf, in
> > addition to doing all the DPDK pool maintenance etc. The copies would
> > surely be destructive for performance.
> >
> > So I believe the question is: in the presence of native drivers in VPP,
> > what is the high-speed equivalent of DPDK KNI?
>
> You can use dpdk and native drivers at the same time.
> How does KNI performance compare to a tap with the vhost-net backend?
>
>
> --
> Damjan
>


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-07 Thread Damjan Marion via Lists.Fd.Io


> On 7 Dec 2019, at 04:46, Prashant Upadhyaya  wrote:
> 
> On Fri, Dec 6, 2019 at 9:32 PM Damjan Marion  wrote:
>> 
>> 
>> 
>>> On 6 Dec 2019, at 07:16, Prashant Upadhyaya  wrote:
>>> 
>>> Hi,
>>> 
>>> I use VPP with the DPDK driver for I/O with the NIC.
>>> For high-speed switching of packets to and from the kernel, I use DPDK KNI
>>> (the kernel module and user-space APIs provided by DPDK).
>>> This works well because the vlib buffer is backed by a DPDK mbuf
>>> (KNI uses DPDK mbufs).
>>>
>>> Now, if I choose to use a native VPP driver for I/O with the NIC, is
>>> there a native equivalent in VPP to replace KNI as well? The native
>>> equivalent should not lose out on performance compared to KNI, so I
>>> believe the tap interface can be ruled out here.
>>>
>>> If I keep using DPDK KNI and a native non-DPDK VPP driver, then I fear I
>>> would have to do a data copy between the vlib buffer and an mbuf, in
>>> addition to doing all the DPDK pool maintenance etc. The copies would
>>> surely be destructive for performance.
>>>
>>> So I believe the question is: in the presence of native drivers in VPP,
>>> what is the high-speed equivalent of DPDK KNI?
>> 
>> You can use dpdk and native drivers at the same time.
>> How does KNI performance compare to a tap with the vhost-net backend?
>> 
>> 
>> --
>> Damjan
>> 
> 
> Thanks Damjan.
> If I use the native driver for the NIC, would the vlib buffer still be
> backed by a DPDK mbuf?

yes

> I don't know the performance difference between KNI and a tap with the
> vhost-net backend.
> I would need to poll the tap to pick up packets from the kernel side, and
> write into the tap to send packets to the kernel, in the VPP workers.
> I suppose that the copies from user space into kernel space and vice versa
> alone would make it more expensive than KNI, where just an exchange of
> mbufs takes place in both directions. Plus, I wonder whether system-call
> usage is a good idea at all in a worker which is also multiplexing the
> packet I/O with the NIC.

my question was about a vhost-net-backed tap interface, not a plain one ...

— 
Damjan
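
For context, VPP's native tap interface (tap v2) is the vhost-net-backed one
referred to above. A minimal CLI sketch for creating it, with illustrative
names and syntax that may vary between VPP releases, could be:

create tap id 0 host-if-name vpp-tap0
set interface state tap0 up

The kernel then sees a vpp-tap0 netdevice, while VPP handles the same
interface as tap0.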


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-07 Thread Prashant Upadhyaya
On Fri, Dec 6, 2019 at 9:32 PM Damjan Marion  wrote:
>
>
>
> > On 6 Dec 2019, at 07:16, Prashant Upadhyaya  wrote:
> >
> > Hi,
> >
> > I use VPP with the DPDK driver for I/O with the NIC.
> > For high-speed switching of packets to and from the kernel, I use DPDK KNI
> > (the kernel module and user-space APIs provided by DPDK).
> > This works well because the vlib buffer is backed by a DPDK mbuf
> > (KNI uses DPDK mbufs).
> >
> > Now, if I choose to use a native VPP driver for I/O with the NIC, is
> > there a native equivalent in VPP to replace KNI as well? The native
> > equivalent should not lose out on performance compared to KNI, so I
> > believe the tap interface can be ruled out here.
> >
> > If I keep using DPDK KNI and a native non-DPDK VPP driver, then I fear I
> > would have to do a data copy between the vlib buffer and an mbuf, in
> > addition to doing all the DPDK pool maintenance etc. The copies would
> > surely be destructive for performance.
> >
> > So I believe the question is: in the presence of native drivers in VPP,
> > what is the high-speed equivalent of DPDK KNI?
>
> You can use dpdk and native drivers at the same time.
> How does KNI performance compare to a tap with the vhost-net backend?
>
>
> --
> Damjan
>

Thanks Damjan.
If I use the native driver for the NIC, would the vlib buffer still be
backed by a DPDK mbuf?
I don't know the performance difference between KNI and a tap with the
vhost-net backend.
I would need to poll the tap to pick up packets from the kernel side, and
write into the tap to send packets to the kernel, in the VPP workers.
I suppose that the copies from user space into kernel space and vice versa
alone would make it more expensive than KNI, where just an exchange of
mbufs takes place in both directions. Plus, I wonder whether system-call
usage is a good idea at all in a worker which is also multiplexing the
packet I/O with the NIC.

Regards
-Prashant


Re: [vpp-dev] Regarding high speed I/O with kernel

2019-12-06 Thread Damjan Marion via Lists.Fd.Io


> On 6 Dec 2019, at 07:16, Prashant Upadhyaya  wrote:
> 
> Hi,
> 
> I use VPP with the DPDK driver for I/O with the NIC.
> For high-speed switching of packets to and from the kernel, I use DPDK KNI
> (the kernel module and user-space APIs provided by DPDK).
> This works well because the vlib buffer is backed by a DPDK mbuf
> (KNI uses DPDK mbufs).
> 
> Now, if I choose to use a native VPP driver for I/O with the NIC, is
> there a native equivalent in VPP to replace KNI as well? The native
> equivalent should not lose out on performance compared to KNI, so I
> believe the tap interface can be ruled out here.
> 
> If I keep using DPDK KNI and a native non-DPDK VPP driver, then I fear I
> would have to do a data copy between the vlib buffer and an mbuf, in
> addition to doing all the DPDK pool maintenance etc. The copies would
> surely be destructive for performance.
> 
> So I believe the question is: in the presence of native drivers in VPP,
> what is the high-speed equivalent of DPDK KNI?

You can use dpdk and native drivers at the same time.
How does KNI performance compare to a tap with the vhost-net backend?


-- 
Damjan
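
To illustrate running both side by side, a hedged startup.conf sketch could
keep the dpdk plugin enabled for one NIC while other ports are driven by the
native rdma driver; the PCI address below is only a placeholder:

plugins {
  plugin dpdk_plugin.so { enable }
}
dpdk {
  dev 0000:3b:00.0
}

The rdma interfaces would then still be created at runtime with
"create interface rdma host-if <netdev> name <name>", as shown earlier in
the thread.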
