On Wed, Apr 17, 2019 at 9:47 AM Ben Pfaff <b...@ovn.org> wrote:
>
> On Wed, Apr 17, 2019 at 10:09:53AM +0200, Eelco Chaudron wrote:
> > On 16 Apr 2019, at 21:55, Ben Pfaff wrote:
> > > AF_XDP is a faster way to access the existing kernel devices.  If we
> > > take that point of view, then it would be ideal if AF_XDP were
> > > automatically used when it was available, instead of adding a new
> > > network device type.  Is there a reason that this point of view is
> > > wrong?  That is, when AF_XDP is available, is there a reason not to
> > > use it?
> >
> > This needs support by all the ingress and egress ports in the system,
> > and currently there is no API to check this.
>
> Do you mean for performance or for some other reason?  I would suspect
> that, if AF_XDP were not available, then everything would still work OK
> via AF_PACKET, just slower.
>
> > There are also features like traffic shaping that will not work. Maybe
> > it would be worth adding a table for AF_XDP in
> > http://docs.openvswitch.org/en/latest/faq/releases/
>
> AF_XDP is comparable to DPDK/userspace, not to the Linux kernel
> datapath.
>
> The table currently conflates the userspace datapath with the DPDK
> network device.  I believe that the only entry there that depends on
> the DPDK network device is the one for policing.  It could be replaced
> by a [*] with a note like this:
>
>     YES - for DPDK network devices.
>     NO - for system or AF_XDP network devices.
>
> > > You said that your goal for the next version is to improve
> > > performance and add optimizations.  Do you think that is important
> > > before we merge the series?  We can continue to improve performance
> > > after it is merged.
> >
> > The previous patch was rather unstable and I could not get it running
> > with the PVP test without crashing. I think this patchset should get
> > some proper testing and reviews by others, especially for all the
> > features being marked as supported in the above-mentioned table.
>
> If it's unstable, we should fix that before adding it in.
Agree.  My first goal is to make sure people can at least run:

  $ make check-afxdp

This uses a virtual device (veth with XDP in skb mode) to run the
various OVS test cases.  The performance will be bad, but it verifies
correctness.

Regards,
William

> However, the bar is lower for new features that don't break existing
> features, especially optional ones and ones that can easily be
> removed if they don't work out in the end.  DPDK support was
> considered "experimental" for a long time; it's possible that AF_XDP
> would be in the same boat for a while.
>
> > > If we set performance aside, do you have a reason to want to wait
> > > to merge this?  (I wasn't able to easily apply this series to
> > > current master, so it'll need at least a rebase before we apply
> > > it.  And I have only skimmed it, not fully reviewed it.)
> >
> > Other than the items above, do we really need another datapath?
>
> It's less than a new datapath.  It's a new network device
> implementation.
>
> > With this, we use two or more cores for processing packets. If we
> > poll two physical ports it could be 300%, which is a typical use
> > case with bonding. What about multiple queue support, does it work?
> > Both in kernel and DPDK mode we use multiple queues to distribute
> > the load; with this scenario does it double the number of CPUs used?
> > Can we use the poll() mode as explained here,
> > https://linuxplumbersconf.org/event/2/contributions/99/, and how
> > will it work with multiple queues/pmd threads? What about latency
> > tests: is it worse or better than kernel/DPDK? Also, with the AF_XDP
> > datapath there is no way to leverage hardware offload, like DPDK and
> > TC can. And then there is the part that it only works on the most
> > recent kernels.
>
> These are good questions.  William will have some of the answers.
>
> > To me, looking at this, I would say it's far from being ready to be
> > merged into OVS.
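[Editor's note: the veth skb-mode arrangement William mentions can be
sketched roughly as below. The namespace, device, and object-file names
are illustrative assumptions, not taken from the patch series; generic
(skb-mode) XDP is used because it works on veth without driver support.]

```shell
# Hypothetical sketch of a veth-based AF_XDP test topology
# (names at_ns0, p0, afxdp-p0 are made up for illustration).
ip netns add at_ns0
ip link add p0 type veth peer name afxdp-p0
ip link set p0 netns at_ns0
ip link set dev afxdp-p0 up
ip netns exec at_ns0 ip link set dev p0 up

# Generic/skb-mode XDP attach would then look like this; it needs a
# compiled XDP object (xdp_dummy.o here is an assumed placeholder):
#   ip link set dev afxdp-p0 xdpgeneric obj xdp_dummy.o sec xdp
```

These commands need root and a reasonably recent kernel; the test suite
automates an equivalent setup.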
> > However, if others decide to go ahead, I think it should be
> > disabled, not compiled in by default.
>
> Yes, that seems reasonable to me.
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev