Subject: Re: [vpp-dev] efficient use of DPDK

> Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
> conversion and tend to be faster than when used by DPDK. I suspect VPP is
> not the only project to report this extra cost.

It would be good to know other projects that report this.
"? Could you please elaborate what you
> meant by inline offload in VPP?
>
>
>
> Thanks,
>
> Nitin
>
>
>
> > -Original Message-
>
> > From: vpp-dev@lists.fd.io On Behalf Of Jerome
> Tollet
>
> > via Lists.Fd.
04/12/2019 16:29, Jerome Tollet (jtollet):
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
>
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at
> the cost of performance. (It's actually the opposite ie AVF driver)

I mean performance
Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
conversion and tend to be faster than when used by DPDK. I suspect VPP is not
the only project to report this extra cost.
Jerome

On 04/12/2019 15:43, Thomas Monjalon wrote:
Hi Thomas,
I strongly disagree with your conclusions from this discussion:
1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at
the cost of performance. (It's actually the opposite ie AVF driver)
2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
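[Editor's sketch] Jerome's claim that a native driver avoids a buffer-conversion pass can be illustrated with a toy model. All structs and field names below are invented for illustration; the real vlib_buffer_t, rte_mbuf, and NIC descriptors are far richer. The point is only the shape of the two Rx paths: DPDK fills an mbuf-style descriptor and the input node then translates it, while a native driver writes the framework's metadata directly.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, minimal stand-ins for the three formats involved. */
typedef struct { uint16_t len; uint16_t off; } nic_desc_t;        /* NIC Rx descriptor */
typedef struct { uint16_t data_len; uint16_t data_off; } mbuf_t;  /* DPDK-style header */
typedef struct { uint16_t current_length; uint16_t current_data; } vbuf_t; /* VPP-style header */

/* DPDK path: two metadata passes per packet. */
static void dpdk_rx (const nic_desc_t *d, mbuf_t *m, vbuf_t *b)
{
  m->data_len = d->len;            /* pass 1: descriptor -> mbuf (done by the PMD) */
  m->data_off = d->off;
  b->current_length = m->data_len; /* pass 2: mbuf -> framework buffer (the extra cost) */
  b->current_data   = m->data_off;
}

/* Native path: one pass, no intermediate format. */
static void native_rx (const nic_desc_t *d, vbuf_t *b)
{
  b->current_length = d->len;
  b->current_data   = d->off;
}
```

Both paths end with identical framework metadata; the DPDK path simply touches one extra struct per packet, which is what adds up at tens of millions of packets per second.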
03/12/2019 22:11, Jerome Tollet (jtollet):
> Thomas,
> I am afraid you may be missing the point. VPP is a framework where plugins
> are first class citizens. If a plugin requires leveraging offload (inline or
> lookaside), it is more than welcome to do it.
> There are multiple examples including
04/12/2019 15:25, Ole Troan:
> Thomas,
>
> > 2/ it confirms the VPP design choice of not being DPDK-dependent (at a
> > performance cost)
>
> Do you have any examples/features where a DPDK/offload solution would be
> performing better than VPP?
> Any numbers?
No, sorry, I am not benchmarking
Thomas,
> 2/ it confirms the VPP design choice of not being DPDK-dependent (at a
> performance cost)
Do you have any examples/features where a DPDK/offload solution would be
performing better than VPP?
Any numbers?
Best regards,
Ole
> On Dec 3, 2019, at 12:56 PM, Ole Troan wrote:
>
> If you don't want that, wouldn't you just build something with a Trident 4?
> ;-)
Or Tofino, if you want to go that direction. Even then, the amount of
packet-processing (especially the edge/exception conditions) can overwhelm a
Thomas,
I am afraid you may be missing the point. VPP is a framework where plugins are
first class citizens. If a plugin requires leveraging offload (inline or
lookaside), it is more than welcome to do it.
There are multiple examples including hw crypto accelerators
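[Editor's sketch] For readers following the inline/lookaside distinction Jerome draws: inline offload performs the operation (e.g. crypto) synchronously as the packet traverses the Tx/Rx path, while lookaside submits the packet to an accelerator and collects the result later from a completion queue. A toy model, with all names hypothetical and a XOR standing in for real crypto hardware:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t byte; int done; } pkt_t;

/* Stand-in for the offloaded operation; real hardware would do crypto. */
static void accel_op (pkt_t *p) { p->byte ^= 0xFF; p->done = 1; }

/* Inline offload: the operation happens synchronously in the Tx path. */
static void tx_inline (pkt_t *p) { accel_op (p); /* then transmit */ }

/* Lookaside offload: submit now, poll a completion queue later. */
#define QLEN 8
static pkt_t *cq[QLEN];
static size_t cq_prod, cq_cons;

static void lookaside_submit (pkt_t *p)
{
  accel_op (p);               /* models the accelerator working asynchronously */
  cq[cq_prod++ % QLEN] = p;
}

static pkt_t *lookaside_poll (void)
{
  if (cq_cons == cq_prod)
    return NULL;              /* nothing completed yet */
  return cq[cq_cons++ % QLEN];
}
```

The design trade-off being debated: inline keeps latency low and stays on the packet path, lookaside frees the CPU while the accelerator works but requires the framework (or a plugin) to manage asynchronous completions.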
Interesting discussion.
> Yes it is possible to use DPDK in VPP with degraded performance.
> If a user wants best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
>
> Anyway real performance benefits are in hardware device offloads
> which will be hard to
Thanks for bringing up the discussion
Hi Thomas!
Inline...
> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
>
> Hi all,
>
> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> Are there some benchmarks about the cost of converting, from one format
> to the other one, during Rx/Tx operations?
We are benchmarking
Hi all,
VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
Are there some benchmarks about the cost of converting, from one format
to the other one, during Rx/Tx operations?
I'm sure there would be some benefits of switching VPP to natively use
the DPDK mbuf allocated in mempools.
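[Editor's sketch] To make the question concrete: the cost being asked about is a per-packet metadata translation in the Rx/Tx nodes. A minimal sketch of what such a conversion loop looks like; the struct layouts and field names here are invented for illustration and do not match the real vlib_buffer_t or rte_mbuf definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Invented, minimal stand-ins for the two buffer headers. */
typedef struct { uint16_t data_len; uint16_t data_off; uint64_t ol_flags; } mbuf_t;
typedef struct { uint16_t current_length; int16_t current_data; uint32_t flags; } vbuf_t;

/* Rx-side conversion: this loop runs once per received packet, which is
 * the recurring per-packet cost the thread is debating. */
static void
rx_burst_convert (const mbuf_t *m, vbuf_t *b, int n)
{
  for (int i = 0; i < n; i++)
    {
      b[i].current_length = m[i].data_len;
      b[i].current_data   = (int16_t) m[i].data_off;
      b[i].flags          = (uint32_t) m[i].ol_flags; /* real flag translation is richer */
    }
}
```

Even a few copied fields per packet are measurable at high rates, which is why the thread weighs this loop against adopting one buffer format natively.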