On 17 Apr 2019, at 19:09, William Tu wrote:

On Wed, Apr 17, 2019 at 1:09 AM Eelco Chaudron <echau...@redhat.com> wrote:



On 16 Apr 2019, at 21:55, Ben Pfaff wrote:

On Mon, Apr 01, 2019 at 03:46:48PM -0700, William Tu wrote:
The patch series introduces AF_XDP support for the OVS netdev.
AF_XDP is a new address family that works together with eBPF.
In short, a socket of the AF_XDP family can receive and send
packets through an eBPF/XDP program attached to the netdev.
For more details about AF_XDP, please see the Linux kernel's
Documentation/networking/af_xdp.rst.
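
To give a concrete idea, adding an existing kernel device as an AF_XDP port would look roughly like this (a tentative sketch; the "afxdp" type name used here is illustrative and may still change):

  # Attach eth0 to br0 through the new AF_XDP netdev type.
  ovs-vsctl add-port br0 eth0 -- set interface eth0 type="afxdp"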

I'm glad to see some more revisions of this series!

I’m planning to review and test this patch; I’ll try to start
this week, or otherwise when I get back from PTO.

AF_XDP is a faster way to access the existing kernel devices.  If we
take that point of view, then it would be ideal if AF_XDP were
automatically used when it was available, instead of adding a new
network device type.  Is there a reason that this point of view is
wrong?  That is, when AF_XDP is available, is there a reason not to
use it?

This needs to be supported by all the ingress and egress ports in the system,
and currently there is no API to check for this.

Not necessarily all ports.
On an OVS switch, you can have some ports that support AF_XDP,
while other ports are of other types, e.g. DPDK vhost or tap.

But I’m wondering how you would deal with ports that do not support this at the driver level. Will you fall back to SKB mode, and will you report this (it’s interesting to know from a performance perspective)?
I guess I just need to look at your code :)


There are also features, like traffic shaping, that will not work. Maybe
it would be worth adding a table for AF_XDP to
http://docs.openvswitch.org/en/latest/faq/releases/

Right, when using AF_XDP, we don't have QoS support.
If people want to do rate limiting on an AF_XDP port, another
way is to use OpenFlow meter actions.
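
For example, something along these lines should work (a rough sketch; br0 and the port numbers are just illustrative, and the meter itself is not specific to AF_XDP):

  # Meter 1 drops traffic above 1000 kbps.
  ovs-ofctl -O OpenFlow13 add-meter br0 'meter=1,kbps,band=type=drop,rate=1000'
  # Send packets from port 1 through meter 1, then do normal forwarding.
  ovs-ofctl -O OpenFlow13 add-flow br0 'in_port=1,actions=meter:1,normal'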

That, for me, was the only thing that stood out, but I just want to make sure no other things are abstracted away in the DPDK APIs…

I guess you could use the DPDK meter framework to support the same QoS as DPDK ports; the only thing is that you would also need DPDK enabled.


You said that your goal for the next version is to improve performance and add optimizations. Do you think that is important before we merge
the series?  We can continue to improve performance after it is
merged.

The previous patch was rather unstable and I could not get it running
with the PVP test without crashing. I think this patchset should get
some proper testing and reviews by others, especially for all the
features marked as supported in the above-mentioned table.


Yes, Tim has been helping a lot to test this and I have a couple of
new fixes. I will incorporate them into the next version.

Cool, I’ll talk to Tim offline. In addition, copy me on the next patch and I’ll check it out.
Do you have a time frame, so I can do the review based on that revision?

If we set performance aside, do you have a reason to want to wait to
merge this?  (I wasn't able to easily apply this series to current
master, so it'll need at least a rebase before we apply it.  And I
have only skimmed it, not fully reviewed it.)

Other than the items above, do we really need another datapath? With

This is using the same datapath, the userspace datapath, as OVS-DPDK.
So we don't introduce another datapath; we introduce a new netdev type.

My fault, I was not referring to the OVS data path definition ;)

this, we use two or more cores for processing packets. If we poll two
physical ports it could be 300% CPU, which is a typical use case with
bonding. What about multi-queue support, does it work? Both in kernel

Yes, this patchset only allows one PMD and one queue.
I'm adding the multiqueue support.
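
For what it's worth, for DPDK ports multiple receive queues are requested like this today; whether the AF_XDP netdev will reuse the same option is still to be decided:

  # DPDK-style request for two receive queues on a port.
  ovs-vsctl set Interface eth0 options:n_rxq=2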

We need some alignment here on how we add threads for PMDs, XDP vs. DPDK. If there are not enough cores for both, the system will not start (EMERGENCY exit). And the user might also want to control which cores run DPDK and which run XDP.
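
For reference, today the PMD cores are selected with a single mask (a sketch; whether the AF_XDP PMD threads will honor the same mask is exactly the question here):

  # Pin PMD threads to cores 1 and 2 (mask 0x6).
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6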

and DPDK mode we use multiple queues to distribute the load; with this
scenario, does it double the number of CPUs used? Can we use the poll()
mode as explained here,
https://linuxplumbersconf.org/event/2/contributions/99/, and how will it
work with multiple queues/PMD threads? What about latency tests, is
it worse or better than kernel/DPDK? Also, with the AF_XDP datapath,
there is no way to leverage hardware offload, like with DPDK and TC. And
then there is the part that it only works on the most recent kernels.

You have lots of good points here.
My experiments show that it's slower than DPDK, but much faster than
kernel.

Looking forward to your improvement patch, as for me it’s about 10x slower than the kernel with a single queue (see other email).


To me, looking at this, I would say it’s far from ready to be
merged into OVS. However, if others decide to go ahead, I think it should
be disabled and not compiled in by default.

I agree. This should be an experimental feature, and we're adding something like
  # ./configure --enable-afxdp
so it is not compiled in by default.

Thanks
William
