> 
> Hi Ciara,
> 
> Thank you for your quick response and useful tips.
> That's a good idea to change the rx flow; I will test it later.
> 
> Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration. The
> performance is much worse than the MLX5 PMD, nearly a 2/3 drop... total traffic
> is 3Gbps.
> I also checked some statistics: they show drops on XDP receive and on the app's
> internal transfer. It seems the XDP receive and send paths take time, since
> there is no difference on the app side between the two tests (dpdk/xdp).
> 
> Is any extra configuration required for the AF_XDP PMD?
> Should the AF_XDP PMD have similar performance to the driver-specific PMD below
> 10Gbps?

Hi Christian,

You're welcome. I have some suggestions for improving the performance; I've 
sketched some example commands for each one after the list.
1. Preferred busy polling
If you are willing to upgrade your kernel to >=5.11 and your DPDK to v21.05 you 
can avail of the preferred busy polling feature. Info on the benefits can be 
found here: http://mails.dpdk.org/archives/dev/2021-March/201172.html
Essentially it should improve performance for the single core use case 
(driver and application on the same core).
2. IRQ pinning
If you are not using the preferred busy polling feature, I suggest pinning the 
IRQ for your driver to a dedicated core that is not busy with other tasks, e.g. 
the application. For most devices you can find IRQ info in /proc/interrupts, and 
you can change the pinning by modifying /proc/irq/<irq_number>/smp_affinity.
3. Queue configuration
Make sure you are using all queues on the device. Check the output of ethtool 
-l <iface> and either set the PMD queue_count to equal the number of queues, or 
reduce the number of queues using ethtool -L <iface> combined N.
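
For the preferred busy polling case, a rough sketch of what I mean (assuming 
kernel >= 5.11, DPDK v21.05 and an interface named ens12 as in your command 
line; adjust the names and values for your setup, and double-check the devarg 
names against the AF_XDP PMD docs):

  # defer hard IRQs and flush GRO on a timeout so busy polling can take over
  echo 2 | sudo tee /sys/class/net/ens12/napi_defer_hard_irqs
  echo 200000 | sudo tee /sys/class/net/ens12/gro_flush_timeout

  # the busy_budget devarg tunes the busy polling batch size (0 disables it)
  --no-pci --vdev net_af_xdp0,iface=ens12,queue_count=1,busy_budget=64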
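
For the IRQ pinning, something along these lines (ens12, IRQ 125 and core 2 are 
just example values; IRQ naming differs between drivers):

  # find the IRQ number(s) used by the interface's queue(s)
  grep ens12 /proc/interrupts

  # pin e.g. IRQ 125 to core 2 (bitmask 0x4)
  echo 4 | sudo tee /proc/irq/125/smp_affinity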
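
And for the queue configuration, for example to run with a single queue:

  # check how many queues the device currently exposes
  ethtool -l ens12

  # reduce to 1 combined queue so it matches queue_count=1 on the vdev
  sudo ethtool -L ens12 combined 1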

I can't confirm whether the performance should reach that of the 
driver-specific PMD, but hopefully some of the above helps get some of the 
way there.

Thanks,
Ciara

> 
> Br,
> Christian
> ________________________________________
> From: Loftus, Ciara <ciara.lof...@intel.com>
> Sent: 4 November 2021 10:19
> To: Hong Christian <hongguoc...@hotmail.com>
> Cc: users@dpdk.org; xiaolong...@intel.com
> Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue
> configuration
> 
> >
> > Hello DPDK users,
> >
> > Sorry to disturb.
> >
> > I am currently testing the net_af_xdp device.
> > But I found that device configuration always fails if I configure rx queue
> > count != tx queue count.
> > In my project I use pipeline mode, and require 1 rx and several tx queues.
> >
> > Example:
> > I run my app with the parameters: "--no-pci --vdev
> > net_af_xdp0,iface=ens12,queue_count=2 --vdev
> > net_af_xdp1,iface=ens13,queue_count=2"
> > When I configure 1 rx and 2 tx queues, setup fails with the print: "Port0
> > dev_configure = -22"
> >
> > After checking some XDP docs, I found that the rx and tx queues are always
> > bound together, connected to the fill and completion rings.
> > But I still want to confirm this with you. Could you please share your
> > comments?
> > Thanks in advance.
> 
> Hi Christian,
> 
> Thanks for your question. Yes, at the moment this configuration is forbidden
> for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.
> However maybe this is an unnecessary restriction of the PMD. It is indeed
> possible to create a socket with either one rxq or one txq. I will add looking
> into the feasibility of enabling this in the PMD to my backlog.
> In the meantime, one workaround you could try would be to create an equal
> number of rxqs and txqs but steer all traffic to the first rxq using some NIC
> filtering, e.g. tc.
> 
> Thanks,
> Ciara
> 
> >
> > Br,
> > Christian
