Hi Ciara,

Thank you for your quick response and useful tips.
Steering the rx flow is a good idea; I will test it later.

Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration. The
performance is much worse than with the MLX5 PMD, nearly a 2/3 drop... total
traffic is 3 Gbps.
I also checked some statistics: they show drops on XDP receive and on the
app-internal transfer. It seems the XDP receive and send paths take extra time,
since there is no difference on the application side between the two tests
(DPDK vs. XDP).
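
For reference, the single-queue run uses roughly the same command line as
before, just with queue_count=1 (a sketch only; "pipeline_app" stands in for my
actual application and its options are omitted):

  ./pipeline_app --no-pci \
      --vdev net_af_xdp0,iface=ens12,queue_count=1 \
      --vdev net_af_xdp1,iface=ens13,queue_count=1 \
      -- <app options>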

Is any extra configuration required for the AF_XDP PMD?
Should the AF_XDP PMD deliver performance similar to the MLX5 PMD at traffic
rates under 10 Gbps?

Br,
Christian
________________________________
From: Loftus, Ciara <ciara.lof...@intel.com>
Sent: 4 November 2021 10:19
To: Hong Christian <hongguoc...@hotmail.com>
Cc: users@dpdk.org <users@dpdk.org>; xiaolong...@intel.com 
<xiaolong...@intel.com>
Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration

>
> Hello DPDK users,
>
> Sorry to disturb.
>
> I am currently testing the net_af_xdp device.
> But I found that device configuration always fails if I configure my rx
> queue count != tx queue count.
> In my project, I use pipeline mode, which requires 1 rx queue and several tx
> queues.
>
> Example:
> I run my app with the parameters: "--no-pci --vdev
> net_af_xdp0,iface=ens12,queue_count=2 --vdev
> net_af_xdp1,iface=ens13,queue_count=2"
> When I configure 1 rx and 2 tx queues, setup fails with: "Port0
> dev_configure = -22"
>
> After checking some XDP docs, I found that rx and tx queues are always
> bound together in pairs, each pair connected to a fill ring and a completion
> ring.
> But I would still like to confirm this with you. Could you please share your
> comments?
> Thanks in advance.

Hi Christian,

Thanks for your question. Yes, at the moment this configuration is forbidden 
for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
However, maybe this is an unnecessary restriction of the PMD. It is indeed 
possible to create a socket with either only an rxq or only a txq. I will add 
looking into the feasibility of enabling this in the PMD to my backlog.
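
For reference, a minimal sketch of that pairing with the libbpf xsk API
(illustrative only, not the PMD code; error handling and umem setup omitted):

  #include <bpf/xsk.h>

  struct xsk_queue_pair {
          struct xsk_ring_cons rx;   /* RX descriptor ring */
          struct xsk_ring_prod tx;   /* TX descriptor ring */
          struct xsk_socket *xsk;
  };

  static int create_pair(struct xsk_queue_pair *q, const char *ifname,
                         __u32 queue_id, struct xsk_umem *umem)
  {
          /* One socket per netdev queue id: the rx and tx rings are handed
           * over together here. Passing NULL instead of &q->rx or &q->tx
           * would create an rx-only or tx-only socket, which is what lifting
           * the restriction in the PMD would rely on. */
          return xsk_socket__create(&q->xsk, ifname, queue_id, umem,
                                    &q->rx, &q->tx, NULL);
  }
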
In the meantime, one workaround you could try would be to create an equal number 
of rxqs and txqs, but steer all traffic to the first rxq using some NIC 
filtering, e.g. tc.
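
For example (illustrative only; this uses ethtool rather than tc, and exact
support depends on the NIC/driver):

  # Restrict the RSS indirection table so all hashed traffic lands on rx queue 0
  ethtool -X ens12 equal 1

  # Or direct specific flows to queue 0 with an ntuple filter
  # (udp4 / dst-port 5000 is just an example match)
  ethtool -K ens12 ntuple on
  ethtool -N ens12 flow-type udp4 dst-port 5000 action 0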

Thanks,
Ciara

>
> Br,
> Christian
