On Thu, Feb 4, 2021 at 3:17 PM Gregory Rose <gvrose8...@gmail.com> wrote:
>
>
>
> On 2/3/2021 1:21 PM, William Tu wrote:
> > Mellanox cards have a different XSK design: they require users to create
> > dedicated queues for XSK. Unlike Intel NICs, which load the XDP program
> > on all queues, mlx5 loads the XDP program only on a subset of its queues.
> >
> > When OVS uses AF_XDP with mlx5, it doesn't replace the existing RX and TX
> > queues in the channel with XSK RX and XSK TX queues, but it creates an
> > additional pair of queues for XSK in that channel. To distinguish
> > regular and XSK queues, mlx5 uses a different range of qids.
> > That means, if the card has 24 queues, queues 0..11 correspond to
> > regular queues, and queues 12..23 are XSK queues.
> > In this case, we should attach the netdev-afxdp with 'start-qid=12'.
> >
> > I tested using Mellanox Connect-X 6Dx, by setting 'start-qid=1', and:
> >    $ ethtool -L enp2s0f0np0 combined 1
> >    # queue 0 is for non-XDP traffic, queue 1 is for XSK
> >    $ ethtool -N enp2s0f0np0 flow-type udp4 action 1
> > note: we additionally need to add a flow-redirect rule to steer traffic to queue 1
>
> Seems awfully hardware dependent.  Is this just for Mellanox or does
> it have general usefulness?
>
It is just Mellanox's design, which requires pre-configuring the flow director.
I only have cards from Intel and Mellanox, so I don't know about other vendors.
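For reference, the test setup above can be sketched as the following commands. The interface name, bridge name, and port name are placeholders from my test; the `options:start-qid` syntax assumes the option added by this patch, and the flow-steering rule matches all UDP/IPv4 traffic (narrow it with `src-ip`/`dst-port` as needed):

```shell
# Placeholder interface name from the test setup above.
IFACE=enp2s0f0np0

# With 1 combined channel, mlx5 exposes queue 0 for regular
# (non-XDP) traffic and queue 1 as the XSK queue.
ethtool -L "$IFACE" combined 1

# Flow-redirect rule: steer UDP/IPv4 traffic to queue 1 (the XSK queue).
ethtool -N "$IFACE" flow-type udp4 action 1

# Attach the port with netdev-afxdp, starting at the XSK qid range
# (bridge/port names are illustrative; start-qid is the patch's option).
ovs-vsctl add-port br0 p0 -- set interface p0 type=afxdp \
    options:start-qid=1
```

With 12 channels instead of 1, the same pattern applies with regular queues 0..11, XSK queues 12..23, and `options:start-qid=12`.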

Thanks,
William
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
