Thanks a lot. I didn't notice that the email addresses were incorrect in the
original message.

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnow...@nvidia.com>
> Sent: Monday, April 28, 2025 3:01 PM
> To: Bing Zhao <bi...@nvidia.com>; Slava Ovsiienko
> <viachesl...@nvidia.com>; 韩康康 <961373...@qq.com>; Ori Kam
> <or...@nvidia.com>
> Cc: users@dpdk.org
> Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets"
> under low bandwidth conditions.
> 
> Adding the correct mail addresses for the users mailing list and for Ori Kam.
> 
> > From: Bing Zhao <bi...@nvidia.com>
> > Sent: Monday, April 28, 2025 9:00 AM
> > To: Slava Ovsiienko <viachesl...@nvidia.com>; 韩康康 <961373...@qq.com>;
> > Dariusz Sosnowski <dsosnow...@nvidia.com>; orika <or...@nivida.com>
> > Cc: users <us...@dpdk.com>
> > Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets"
> under low bandwidth conditions.
> >
> > Hi Kangkang,
> >
> > Please take the DPDK performance reports and the configurations under the
> > DOC link on dpdk.org as a reference.
> > They show how to configure the HW and how to run testpmd / l2fwd for
> > SCP / ZPL testing. You can also search for the DPDK testing white paper
> > from CTC (the China Telecom DPDK test white paper).
> > The docs explain the NUMA / PCIe / lcore affinity / queue depth settings.
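> >
> > As a concrete starting point for the NUMA / lcore affinity part, these are
> > illustrative commands only (the PCI address is the one from your log;
> > adjust as needed):
> >
> >     # NUMA node the ConnectX-5 port is attached to
> >     cat /sys/bus/pci/devices/0000:a1:00.1/numa_node
> >     # CPUs per NUMA node; pick the testpmd -l cores from the same node
> >     lscpu | grep "NUMA node"
> >
> > Keeping the forwarding cores on the same NUMA node as the PCIe slot avoids
> > cross-node traffic, which the docs above also explain.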
> >
> > From: Slava Ovsiienko <viachesl...@nvidia.com>
> > Sent: Monday, April 28, 2025 1:57 PM
> > To: 韩康康 <961373...@qq.com>; Dariusz Sosnowski
> > <dsosnow...@nvidia.com>; Bing Zhao <bi...@nvidia.com>;
> > orika <or...@nivida.com>
> > Cc: users <us...@dpdk.com>
> > Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets"
> under low bandwidth conditions.
> >
> > Hi,
> >
> > First of all, please increase the number of queues and forwarding cores.
> > For a 100 Gbps link, 4 to 8 queues/cores are usually needed.
> > You can also try combinations of multiple queues per core (2..4 queues
> > handled by one core).
> > In addition, the offload options "inline" and MPRQ may help to reach wire
> > speed for small packets.
> > Please see the NVIDIA/Mellanox performance reports
> > (https://core.dpdk.org/perf-reports/) for the details.
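> >
> > For example, a possible invocation along these lines (only a sketch; please
> > double-check the exact mlx5 devargs names in the PMD guide for your DPDK
> > version):
> >
> >     dpdk-testpmd -l 96-104 -n 4 -a 0000:a1:00.1,mprq_en=1 -- -i \
> >         --rxq=8 --txq=8 --nb-cores=8 --txd=8192 --rxd=8192 \
> >         --burst=64 --mbcache=512 --rss-udp
> >
> > Here 8 Rx/Tx queues are spread over 8 forwarding cores and MPRQ is enabled
> > via the device argument; the Tx inline thresholds (txq_inline_* devargs)
> > can be tuned the same way.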
> >
> > With best regards,
> > Slava
> >
> > From: 韩康康 <961373...@qq.com>
> > Sent: Friday, April 25, 2025 1:36 PM
> > To: Dariusz Sosnowski <dsosnow...@nvidia.com>; Slava Ovsiienko
> > <viachesl...@nvidia.com>; Bing Zhao <bi...@nvidia.com>;
> > orika <or...@nivida.com>
> > Cc: users <us...@dpdk.com>
> > Subject: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets"
> under low bandwidth conditions.
> >
> > Hi all,
> > I am using dpdk-testpmd and a packet generator to test bandwidth according
> > to RFC 2544.
> > However, I observed that the CX5 reports rx_phy_discard_packets even at low
> > bandwidth, resulting in abnormal measurements under zero-packet-loss
> > conditions.
> >
> > dpdk version: dpdk-21.11.5
> > ethtool -i enp161s0f1np1:
> >     driver: mlx5_core
> >     version: 5.8-6.0.4
> >     firmware-version: 16.32.1010
> > Hardware: AMD EPYC 7742 64-core Processor
> > dpdk-testpmd command line:
> > dpdk-testpmd -l 96-111 -n 4 -a 0000:a1:00.1 -- -i --rxq=1 --txq=1 \
> >     --txd=8192 --rxd=8192 --nb-cores=1 --burst=128 -a --mbcache=512 --rss-udp
> >
> > Test result:
> > frame size (bytes)    offered load (%)    packet loss rate
> > 128                   15.69               0.000211
> > 256                   14.148              0.0004
> > 512                   14.148              0.00008
> > 1518                  14.92               0.00099
> >
> > I'd like to ask: is this an issue with the CX5 NIC? How can I debug it to
> > eliminate the packet drops?
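> >
> > For context, the discard counter shows up in the port extended statistics;
> > from the testpmd prompt it can be checked with, for example:
> >
> >     testpmd> clear port xstats all
> >     testpmd> show port xstats 0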
> >
> > 韩康康
> > 961373...@qq.com
> 
