03/12/2019 00:26, Damjan Marion:
> 
> Hi Thomas!
> 
> Inline...
> 
> > On 2 Dec 2019, at 23:35, Thomas Monjalon <tho...@monjalon.net> wrote:
> > 
> > Hi all,
> > 
> > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > Are there some benchmarks about the cost of converting from one format
> > to the other during Rx/Tx operations?
> 
> We are benchmarking both the DPDK i40e PMD and the native VPP AVF driver, 
> and we are seeing significantly better performance with the native AVF driver.
> If you take a look at [1] you will see that the DPDK i40e driver provides 18.62 
> Mpps, while exactly the same test with the native AVF driver gives us around 
> 24.86 Mpps.

Why not compare with the DPDK AVF PMD?

> Thanks to the native AVF driver and the new buffer management code, we managed 
> to go below 100 clocks per packet for the whole IPv4 routing base test. 
> 
> My understanding is that the performance difference is caused by 4 factors, but I 
> cannot support each of them with numbers as I have never conducted detailed testing.
> 
> - less work done in driver code, as we have the freedom to cherry-pick only the 
> data we need, while a DPDK PMD needs to be universal

For info, offloads are disabled by default now in DPDK.

> - no cost of metadata conversion (rte_mbuf -> vlib_buffer_t)
> 
> - less pressure on the cache (we touch 2 cachelines less per packet with the 
> native driver); this is especially observable on smaller devices with less cache
> 
> - faster buffer management code
> 
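
On the metadata-conversion point above, the copy looks roughly like the sketch
below. This is not VPP's actual code: the field names come from the public
rte_mbuf and vlib_buffer_t definitions, it assumes the rte_mbuf header sits
directly in front of the vlib_buffer_t in the same buffer (as VPP's dpdk plugin
arranges it), and it covers the single-segment case only.

    #include <rte_mbuf.h>
    #include <vlib/vlib.h>

    /* Rough sketch only, not VPP's conversion code: the kind of per-packet
     * metadata copy implied when handing a vlib_buffer_t to a DPDK TX
     * function.  Single-segment case, no offload flags. */
    static inline struct rte_mbuf *
    vlib_buffer_to_mbuf_sketch (vlib_buffer_t * b)
    {
      /* assumes the rte_mbuf header precedes the vlib_buffer_t */
      struct rte_mbuf *mb = ((struct rte_mbuf *) b) - 1;

      /* re-derive the DPDK view of where the data starts and how long it is */
      mb->data_off = (uint16_t) ((uint8_t *) vlib_buffer_get_current (b) -
                                 (uint8_t *) mb->buf_addr);
      mb->data_len = b->current_length;
      mb->pkt_len = b->current_length;
      mb->nb_segs = 1;
      return mb;
    }
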
> 
> > I'm sure there would be some benefits of switching VPP to natively use
> > the DPDK mbuf allocated in mempools.
> 
> I don't agree with this statement; we have our own buffer management code and we 
> are not interested in using DPDK mempools. There are many use cases where we 
> don't need DPDK, and we want VPP not to be dependent on DPDK code.
> 
> > What would be the drawbacks?
> 
> 
> > Last time I asked this question, the answer was about compatibility with
> > other driver backends, especially ODP. What happened?
> > DPDK drivers are still the only external drivers used by VPP?
> 
> No, we still use DPDK drivers in many cases, but we also 
> have a lot of native drivers in VPP these days:
> 
> - intel AVF
> - virtio
> - vmxnet3
> - rdma (for mlx4, mlx5 and other RDMA-capable cards), with direct verbs for mlx5 
> a work in progress
> - tap with virtio backend
> - memif
> - Marvell pp2
> - (af_xdp - work in progress)
> 
> > When using DPDK, more than 40 networking drivers are available:
> >     https://core.dpdk.org/supported/
> > After 4 years of Open Source VPP, there are less than 10 native drivers:
> >     - virtual drivers: virtio, vmxnet3, af_packet, netmap, memif
> >     - hardware drivers: ixge, avf, pp2
> > And if we look at the ixge driver, we can read:
> > "
> >     This driver is not intended for production use and it is unsupported.
> >     It is provided for educational use only.
> >     Please use supported DPDK driver instead.
> > "
> 
> yep, the ixgbe driver has not been maintained for a long time...
> 
> > So why not improve the DPDK integration in VPP to make it faster?
> 
> Yes, if we can get the freedom to use the parts of DPDK we want instead of being 
> forced to adopt the whole DPDK ecosystem.
> For example, you cannot use DPDK drivers without using the EAL, mempool, 
> rte_mbuf... rte_eal_init is a monster which I have been hoping would disappear 
> for a long time...
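
To illustrate the coupling described above: even an application that only wants
a PMD has to initialize the EAL and create an rte_mbuf mempool before it can
configure a port. A minimal sketch, with arbitrary pool sizes and port id, and
error handling reduced to early returns:

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Minimal bring-up an application is forced into before it can touch a
     * single descriptor: EAL, then an rte_mbuf mempool, then the ethdev API.
     * Pool sizes and port id 0 are arbitrary illustration values. */
    static int
    dpdk_bringup_sketch (int argc, char **argv)
    {
      if (rte_eal_init (argc, argv) < 0)        /* the whole EAL comes along */
        return -1;

      struct rte_mempool *mp =
        rte_pktmbuf_pool_create ("pool0", 8192, 256, 0,
                                 RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id ());
      if (mp == NULL)
        return -1;

      struct rte_eth_conf conf = { 0 };
      if (rte_eth_dev_configure (0, 1, 1, &conf) < 0 ||
          rte_eth_rx_queue_setup (0, 0, 512, rte_socket_id (), NULL, mp) < 0 ||
          rte_eth_tx_queue_setup (0, 0, 512, rte_socket_id (), NULL) < 0)
        return -1;

      return rte_eth_dev_start (0);
    }
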

You could help to improve these parts of DPDK,
instead of spending time trying to implement a few drivers.
Then VPP would benefit from a rich driver ecosystem.


> A good example of what would be a good fit for us is the rdma-core library; it 
> allows you to program the NIC and fetch packets from it in a much more 
> lightweight way, and if you really want a super-fast datapath, there is the 
> direct verbs interface which gives you access to the TX/RX rings directly.
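
For comparison with the EAL sketch above, rdma-core bring-up is indeed much
smaller. A hedged sketch with libibverbs (device selection and most error
handling trimmed; this is not the VPP rdma plugin code):

    #include <infiniband/verbs.h>

    /* Open the first RDMA device and create a completion queue; this is the
     * scale of setup rdma-core asks for.  Illustrative only. */
    static struct ibv_cq *
    rdma_open_sketch (void)
    {
      int n = 0;
      struct ibv_device **devs = ibv_get_device_list (&n);
      if (devs == NULL || n == 0)
        return NULL;

      struct ibv_context *ctx = ibv_open_device (devs[0]);
      ibv_free_device_list (devs);
      if (ctx == NULL)
        return NULL;

      /* 512-entry completion queue, no completion channel */
      return ibv_create_cq (ctx, 512, NULL, NULL, 0);
    }
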
> 
> > The DPDK mbuf has dynamic fields now; they allow registering metadata on 
> > demand.
> > And it is still possible to statically reserve some extra space for
> > application-specific metadata in each packet.
> 
> I don't see this as a huge benefit; you still need to call rte_eal_init, you 
> still need to use DPDK mempools. Basically it still requires adoption of the 
> whole DPDK ecosystem, which we don't want...
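
For reference, registering a dynamic field (added in DPDK 19.11) looks roughly
like the sketch below; the field name and the uint64_t payload are invented for
illustration:

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Register 8 bytes of application metadata in the mbuf at runtime.
     * "example_app_meta" and the uint64_t payload are made-up examples. */
    static int app_meta_offset = -1;

    static int
    register_app_metadata (void)
    {
      static const struct rte_mbuf_dynfield desc = {
        .name = "example_app_meta",
        .size = sizeof (uint64_t),
        .align = __alignof__ (uint64_t),
      };
      app_meta_offset = rte_mbuf_dynfield_register (&desc);
      return app_meta_offset;          /* -1 if no room is left in the mbuf */
    }

    /* Datapath access goes through the returned offset. */
    static inline uint64_t *
    app_metadata (struct rte_mbuf *m)
    {
      return RTE_MBUF_DYNFIELD (m, app_meta_offset, uint64_t *);
    }
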
> 
> 
> > Other improvements, like meson packaging usable with pkg-config,
> > were done during the last years and may deserve to be considered.
> 
> I'm aware of that, but I was not able to find a good justification to invest 
> time in changing the existing scripting to move to meson. As VPP developers 
> typically don't need to compile DPDK very frequently, the current solution is 
> simply good enough...


