Inlined below.

On 05/12/2019 09:03, "vpp-dev@lists.fd.io on behalf of Honnappa Nagarahalli" <vpp-dev@lists.fd.io on behalf of honnappa.nagaraha...@arm.com> wrote:

    
    
    > -----Original Message-----
    > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Jerome Tollet via Lists.Fd.Io
    > Sent: Wednesday, December 4, 2019 9:33 AM
    > To: tho...@monjalon.net
    > Cc: vpp-dev@lists.fd.io
    > Subject: Re: [vpp-dev] efficient use of DPDK
    >
    > Actually, native drivers (like Mellanox or AVF) avoid the buffer
    > conversion and tend to be faster than when the same NIC is used through DPDK.
    > I suspect VPP is not the only project to report this extra cost.
    It would be good to know of other projects that report this extra cost;
    it would help support changes to DPDK.
[JT] I may be wrong, but I think there was a presentation about that last week
during the DPDK user conference in the US.
    
    > Jerome
    >
    > On 04/12/2019 15:43, "Thomas Monjalon" <tho...@monjalon.net> wrote:
    >
    >     03/12/2019 22:11, Jerome Tollet (jtollet):
    >     > Thomas,
    >     > I am afraid you may be missing the point. VPP is a framework where
    >     > plugins are first-class citizens. If a plugin requires leveraging
    >     > offload (inline or lookaside), it is more than welcome to do it.
    >     > There are multiple examples, including hw crypto accelerators
    >     > (https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).
    >
    >     OK I understand plugins are open.
    >     My point was about the efficiency of the plugins,
    >     given the need for buffer conversion.
    >     If some plugins are already efficient enough, great:
    >     it means there is no need to put effort into native VPP drivers.
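
For context on what the "buffer conversion" involves: below is a minimal sketch,
assuming the dpdk plugin keeps the rte_mbuf header and the vlib_buffer_t in the
same buffer, so that going from one view to the other is pointer arithmetic plus
a small metadata copy on Rx. The helper names and the fields copied here are
illustrative assumptions, not code taken from VPP.

  /* Sketch only: layout modelled on the VPP dpdk plugin, where each
   * buffer is laid out as [rte_mbuf][vlib_buffer_t][packet data].
   * Converting between the two views is pointer arithmetic plus a
   * small metadata copy. */
  #include <rte_mbuf.h>
  #include <vlib/vlib.h>

  static inline struct rte_mbuf *
  mbuf_from_vlib_buffer (vlib_buffer_t *b)
  {
    return ((struct rte_mbuf *) b) - 1;   /* mbuf header precedes vlib_buffer_t */
  }

  static inline vlib_buffer_t *
  vlib_buffer_from_mbuf (struct rte_mbuf *mb)
  {
    return (vlib_buffer_t *) (mb + 1);
  }

  /* On Rx, only a handful of fields need to move from the mbuf into the
   * vlib_buffer_t before the packet enters the graph. */
  static inline void
  copy_rx_metadata (struct rte_mbuf *mb, vlib_buffer_t *b)
  {
    b->current_data = mb->data_off - RTE_PKTMBUF_HEADROOM;
    b->current_length = mb->data_len;
    b->flags = 0;                         /* offload flags would be translated here */
  }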
    >
    >
    >     >     > On 03/12/2019 17:07, "vpp-dev@lists.fd.io on behalf of Thomas Monjalon" <vpp-dev@lists.fd.io on behalf of tho...@monjalon.net> wrote:
    >     >
    >     >     03/12/2019 13:12, Damjan Marion:
    >     >     > > On 3 Dec 2019, at 09:28, Thomas Monjalon <tho...@monjalon.net> wrote:
    >     >     > > 03/12/2019 00:26, Damjan Marion:
    >     >     > >>
    >     >     > >> Hi Thomas!
    >     >     > >>
    >     >     > >> Inline...
    >     >     > >>
    >     >     > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon <tho...@monjalon.net> wrote:
    >     >     > >>>
    >     >     > >>> Hi all,
    >     >     > >>>
    >     >     > >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
    >     >     > >>> Are there any benchmarks on the cost of converting from one
    >     >     > >>> format to the other during Rx/Tx operations?
    >     >     > >>
    >     >     > >> We are benchmarking both DPDK i40e PMD performance and native
    >     >     > >> VPP AVF driver performance, and we are seeing significantly better
    >     >     > >> performance with native AVF.
    >     >     > >> If you take a look at [1] you will see that the DPDK i40e driver
    >     >     > >> provides 18.62 Mpps, and exactly the same test with the native AVF
    >     >     > >> driver gives us around 24.86 Mpps.
    >     >     [...]
    >     >     > >>
    >     >     > >>> So why not improve DPDK integration in VPP to make it faster?
    >     >     > >>
    >     >     > >> Yes, if we can get the freedom to use the parts of DPDK we want
    >     >     > >> instead of being forced to adopt the whole DPDK ecosystem.
    >     >     > >> For example, you cannot use DPDK drivers without using EAL,
    >     >     > >> mempool, rte_mbuf... rte_eal_init is a monster which I have been
    >     >     > >> hoping would disappear for a long time...
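
To illustrate the dependency being described: a minimal sketch of what any
consumer of a DPDK PMD has to go through before receiving a single packet,
using standard EAL/mempool/ethdev calls (the port id, queue and pool sizes
below are arbitrary placeholders).

  /* Minimal DPDK port bring-up: EAL, a mempool and rte_mbuf are all
   * mandatory before the PMD can be used.  Sizes and the port id are
   * arbitrary placeholders. */
  #include <stdlib.h>
  #include <rte_eal.h>
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  int main (int argc, char **argv)
  {
    if (rte_eal_init (argc, argv) < 0)        /* hugepages, bus scan, lcores... */
      return EXIT_FAILURE;

    struct rte_mempool *mp =
      rte_pktmbuf_pool_create ("rx_pool", 8192, 256, 0,
                               RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id ());
    if (mp == NULL)
      return EXIT_FAILURE;

    uint16_t port = 0;                        /* first probed port, placeholder */
    struct rte_eth_conf conf = { 0 };
    rte_eth_dev_configure (port, 1, 1, &conf);     /* 1 rx queue, 1 tx queue */
    rte_eth_rx_queue_setup (port, 0, 1024,
                            rte_eth_dev_socket_id (port), NULL, mp);
    rte_eth_tx_queue_setup (port, 0, 1024,
                            rte_eth_dev_socket_id (port), NULL);
    rte_eth_dev_start (port);

    struct rte_mbuf *burst[32];
    uint16_t n = rte_eth_rx_burst (port, 0, burst, 32);  /* packets arrive as rte_mbufs */
    for (uint16_t i = 0; i < n; i++)
      rte_pktmbuf_free (burst[i]);
    return 0;
  }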
    >     >     > >
    >     >     > > You could help to improve these parts of DPDK,
    >     >     > > instead of spending time trying to implement a few drivers.
    >     >     > > Then VPP would benefit from a rich driver ecosystem.
    >     >     >
    >     >     > Thank you for letting me know what would be a better use of my time.
    >     >
    >     >     "You" was referring to VPP developers.
    >     >     I think some other Cisco developers are also contributing to VPP.
    >     >
    >     >     > At the moment we have good coverage of native drivers, and there is
    >     >     > still an option for people to use dpdk. It is now mainly up to driver
    >     >     > vendors to decide whether they are happy with the performance they will
    >     >     > get from the dpdk pmd or they want better...
    >     >
    >     >     Yes, it is possible to use DPDK in VPP with degraded performance.
    >     >     If a user wants the best performance with VPP and a real NIC,
    >     >     a new driver must be implemented for VPP only.
    >     >
    >     >     Anyway, the real performance benefits are in hardware device offloads,
    >     >     which will be hard to implement in VPP native drivers.
    >     >     Support (investment) would be needed from vendors to make it happen.
    >     >     About offloads, VPP is not using the crypto or compression drivers
    >     >     that DPDK provides (plus regex coming).
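
For reference, the DPDK side of those crypto offloads is exposed through the
cryptodev API; a minimal sketch of merely enumerating such devices, using
standard rte_cryptodev calls run after rte_eal_init() (no VPP integration
implied).

  /* Enumerate DPDK crypto devices after rte_eal_init() has run.
   * Discovery step only, not a complete offload datapath. */
  #include <stdio.h>
  #include <rte_cryptodev.h>

  static void list_crypto_devices (void)
  {
    uint8_t nb = rte_cryptodev_count ();
    for (uint8_t id = 0; id < nb; id++)
      {
        struct rte_cryptodev_info info;
        rte_cryptodev_info_get (id, &info);
        printf ("cryptodev %u: driver %s, max queue pairs %u\n",
                (unsigned) id, info.driver_name, info.max_nb_queue_pairs);
      }
  }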
    >     >
    >     >     VPP is CPU-based packet processing software.
    >     >     If users want to leverage hardware device offloads,
    >     >     a truly DPDK-based software is required.
    >     >     If I understand your replies correctly, such software cannot be VPP.
