On 17 Apr 2019, at 10:09, Eelco Chaudron wrote:

On 16 Apr 2019, at 21:55, Ben Pfaff wrote:

On Mon, Apr 01, 2019 at 03:46:48PM -0700, William Tu wrote:
This patch series introduces AF_XDP support for OVS netdev.
AF_XDP is a new address family that works together with eBPF.
In short, a socket in the AF_XDP family can receive and send
packets via an eBPF/XDP program attached to the netdev.
For more details about AF_XDP, please see the Linux kernel's
Documentation/networking/af_xdp.rst
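
(For readers who have not used the API, here is a minimal sketch of
opening and binding an AF_XDP socket. UMEM registration and ring setup
via setsockopt() are omitted for brevity, so the bind() as written
would be rejected by a real kernel; "eth0" and queue 0 are
placeholders.)

    #include <linux/if_xdp.h>   /* struct sockaddr_xdp */
    #include <net/if.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Open a raw socket in the AF_XDP address family. */
        int fd = socket(AF_XDP, SOCK_RAW, 0);
        if (fd < 0) {
            perror("socket(AF_XDP)");
            return 1;
        }

        /* A real program must first register a UMEM area and configure
         * the fill/completion and rx/tx rings via setsockopt() before
         * bind() will succeed; that setup is omitted here. */
        struct sockaddr_xdp sxdp = {
            .sxdp_family = AF_XDP,
            .sxdp_ifindex = if_nametoindex("eth0"), /* placeholder netdev */
            .sxdp_queue_id = 0,                     /* placeholder queue */
        };
        if (bind(fd, (struct sockaddr *) &sxdp, sizeof sxdp) < 0) {
            perror("bind");
            return 1;
        }
        return 0;
    }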

I'm glad to see some more revisions of this series!

I'm planning on reviewing and testing this patch; I'll try to start this week, or else when I get back from PTO.

AF_XDP is a faster way to access the existing kernel devices.  If we
take that point of view, then it would be ideal if AF_XDP were
automatically used when it was available, instead of adding a new
network device type.  Is there a reason that this point of view is
wrong? That is, when AF_XDP is available, is there a reason not to use
it?

This would need AF_XDP support on all the ingress and egress ports in the system, and currently there is no API to check for that.

There are also features, like traffic shaping, that will not work. Maybe it would be worth adding an entry for AF_XDP to the feature table at http://docs.openvswitch.org/en/latest/faq/releases/

You said that your goal for the next version is to improve performance and add optimizations. Do you think that is important before we merge the series? We can continue to improve performance after it is merged.

The previous patch was rather unstable, and I could not get it through the PVP test without crashing. I think this patchset should get some proper testing and reviews by others, especially for all the features marked as supported in the above-mentioned table.

If we set performance aside, do you have a reason to want to wait to
merge this?  (I wasn't able to easily apply this series to current
master, so it'll need at least a rebase before we apply it. And I have
only skimmed it, not fully reviewed it.)

Other than the items above, do we really need another datapath? Some specific concerns:

- With this we use two or more cores for processing packets, and if we poll two physical ports it could be 300%, which is a typical use case with bonding.
- What about multiple-queue support, does it work? In both kernel and DPDK mode we use multiple queues to distribute the load; in this scenario, does it double the number of CPUs used?
- Can we use the poll() mode as explained in https://linuxplumbersconf.org/event/2/contributions/99/, and how will it work with multiple queues/PMD threads? (A sketch of what I mean follows below this list.)
- What about latency tests, is latency worse or better than with the kernel or DPDK datapaths?
- With the AF_XDP datapath there is no way to leverage hardware offload, like DPDK and TC can.
- And then there is the fact that it only works on the most recent kernels.
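
For reference, a minimal sketch of the poll() mode I mean above. It
assumes xsk_fd is an AF_XDP socket that has already been bound with its
UMEM and rings configured; the actual ring handling is only indicated
in comments:

    #include <poll.h>

    /* Poll-driven receive loop on an already-configured AF_XDP socket,
     * as an alternative to a busy-polling PMD thread. */
    static void rx_loop(int xsk_fd)
    {
        struct pollfd pfd = { .fd = xsk_fd, .events = POLLIN };

        for (;;) {
            /* Sleep until the kernel signals ready rx descriptors,
             * instead of spinning a core at 100%. */
            if (poll(&pfd, 1, -1) <= 0) {
                continue;
            }
            /* Drain the rx ring and replenish the fill ring here. */
        }
    }

The point is that a blocking poll() lets the thread yield the CPU when
traffic is low, at some latency cost compared to busy polling.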

One other thing that popped into my head: how will this work together with DPDK enabled on the same system?

Looking at this, I would say it's far from ready to be merged into OVS. However, if others decide to go ahead, I think it should be disabled and not compiled in by default.

It might make sense to squash all of these into a single patch.  I am
not sure that they are really distinct conceptually.
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
