Hi Nitin,
I am not necessarily speaking about inline IPsec. I was just saying that VPP 
gives you the choice of doing both inline and lookaside types of offload.
Here is a public example of inline acceleration: 
https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/wp/wp-01295-hcl-segment-routing-over-ipv6-acceleration-using-intel-fpga-programmable-acceleration-card-n3000.pdf
Jerome

On 04/12/2019 18:19, "Nitin Saxena" <nsax...@marvell.com> wrote:

    Hi Jerome,
    
    I have a query unrelated to the original thread.
    
    >> There are other examples (lookaside and inline)
    By inline do you mean "Inline IPsec"? Could you please elaborate on what
    you meant by inline offload in VPP?
    
    Thanks,
    Nitin
    
    > -----Original Message-----
    > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Jerome Tollet
    > via Lists.Fd.Io
    > Sent: Wednesday, December 4, 2019 9:00 PM
    > To: Thomas Monjalon <tho...@monjalon.net>; Damjan Marion
    > <dmar...@me.com>
    > Cc: vpp-dev@lists.fd.io
    > Subject: [EXT] Re: [vpp-dev] efficient use of DPDK
    > 
    > Hi Thomas,
    > I strongly disagree with your conclusions from this discussion:
    > 1) Yes, VPP made the choice of not being DPDK-dependent, BUT certainly not
    > at the cost of performance. (It's actually the opposite, e.g. the AVF driver)
    > 2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
    > offload based on Intel QAT cards (lookaside). There are other examples
    > (lookaside and inline)
    > 3) Plugins are free to use any sort of offload (and they do).
    > 
    > Jerome
    > 
    > On 04/12/2019 15:19, "vpp-dev@lists.fd.io on behalf of Thomas Monjalon"
    > <vpp-dev@lists.fd.io au nom de tho...@monjalon.net> wrote:
    > 
    >     03/12/2019 20:01, Damjan Marion:
    >     > On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
    >     > > 03/12/2019 13:12, Damjan Marion:
    >     > >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
    >     > >>> 03/12/2019 00:26, Damjan Marion:
    >     > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
    >     > >>>>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
    >     > >>>>> Are there some benchmarks about the cost of converting from one
    >     > >>>>> format to the other during Rx/Tx operations?
    >     > >>>>
    >     > >>>> We are benchmarking both the DPDK i40e PMD and the native VPP AVF
    >     > >>>> driver, and we are seeing significantly better performance with
    >     > >>>> native AVF.
    >     > >>>> If you take a look at [1] you will see that the DPDK i40e driver
    >     > >>>> provides 18.62 Mpps, and exactly the same test with the native AVF
    >     > >>>> driver gives us around 24.86 Mpps.
    >     > > [...]
    >     > >>>>
    >     > >>>>> So why not improve DPDK integration in VPP to make it faster?
    >     > >>>>
    >     > >>>> Yes, if we can get the freedom to use the parts of DPDK we want
    >     > >>>> instead of being forced to adopt the whole DPDK ecosystem.
    >     > >>>> For example, you cannot use DPDK drivers without using EAL,
    >     > >>>> mempool, rte_mbuf... rte_eal_init is a monster which I have long
    >     > >>>> been hoping would disappear...
    > 
    >     As stated below, I take this feedback, thanks.
    >     However it won't change VPP choice of not using rte_mbuf natively.
    > 
    >     [...]
    >     > >> At the moment we have good coverage of native drivers, and there is
    >     > >> still an option for people to use DPDK. It is now mainly up to driver
    >     > >> vendors to decide whether they are happy with the performance they
    >     > >> will get from the DPDK PMD or whether they want better...
    >     > >
    >     > > Yes, it is possible to use DPDK in VPP with degraded performance.
    >     > > If a user wants the best performance with VPP and a real NIC,
    >     > > a new driver must be implemented for VPP only.
    >     > >
    >     > > Anyway real performance benefits are in hardware device offloads
    >     > > which will be hard to implement in VPP native drivers.
    >     > > Support (investment) would be needed from vendors to make it
    > happen.
    >     > > About offloads, VPP is not using crypto or compression drivers
    >     > > that DPDK provides (plus regex coming).
    >     >
    >     > Nice marketing pitch for your company :)
    > 
    >     I guess you mean Mellanox has a good offloads offering.
    >     But my point is about the end of Moore's law,
    >     and the offload trending of most of device vendors.
    >     However I truly respect the choice of avoiding device offloads.
    > 
    >     > > VPP is a CPU-based packet processing software.
    >     > > If users want to leverage hardware device offloads,
    >     > > a truly DPDK-based software is required.
    >     > > If I understand well your replies, such software cannot be VPP.
    >     >
    >     > Yes, DPDK is the centre of the universe.
    > 
    >     DPDK is where most of networking devices are supported in userspace.
    >     That's all.
    > 
    > 
    >     > So Dear Thomas, I can continue this discussion forever, but that is not
    >     > something I'm going to do, as it has started to become a trolling contest.
    > 
    >     I agree
    > 
    >     > I can understand that you may be passionate about your project and that
    >     > you may think it is the greatest thing since sliced bread, but please
    >     > allow other people to have different opinions. Instead of giving lessons
    >     > to other people about what they should do, if you are interested in DPDK
    >     > being better consumed, please take the feedback provided to you. I assume
    >     > you are interested, as you showed up on this mailing list; if not, there
    >     > was no reason for starting this thread in the first place.
    > 
    >     Thank you for the feedback, this discussion was required:
    >     1/ it gives more motivation to improve the EAL API
    >     2/ it confirms the VPP design choice of not being DPDK-dependent (at a
    >     performance cost)
    >     3/ it confirms the VPP design choice of being focused on CPU-based
    >     processing
    > 
    > 
    > 
    
    

View/Reply Online (#14799): https://lists.fd.io/g/vpp-dev/message/14799