Adding the correct mail addresses for the users mailing list and for Ori Kam.

> From: Bing Zhao <[email protected]>
> Sent: Monday, April 28, 2025 9:00 AM
> To: Slava Ovsiienko <[email protected]>; 韩康康 <[email protected]>; Dariusz 
> Sosnowski <[email protected]>; orika <[email protected]>
> Cc: users <[email protected]>
> Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" 
> under low bandwidth conditions.
>
> Hi Kangkang,
>
> Please refer to the DPDK performance reports and the configurations under the
> Doc link on dpdk.org.
> They show how to configure the HW and how to run testpmd / l2fwd for SCP / ZPL
> testing. You can also search for the DPDK testing white paper from CTC
> (China Telecom DPDK Test White Paper, 中国电信DPDK测试白皮书).
> The NUMA / PCIe / lcore affinity / queue depth settings are all explained in
> those docs.
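>
> As a quick check of the NUMA / lcore affinity (the sysfs path below uses the
> PCI address from your test; adjust as needed):
>
>     # NUMA node hosting the NIC
>     cat /sys/bus/pci/devices/0000:a1:00.1/numa_node
>     # CPU-to-NUMA-node layout, to pick lcores local to that node
>     lscpu | grep -i NUMA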
>
> From: Slava Ovsiienko <[email protected]>
> Sent: Monday, April 28, 2025 1:57 PM
> To: 韩康康 <[email protected]>; Dariusz Sosnowski
> <[email protected]>; Bing Zhao <[email protected]>; orika
> <[email protected]>
> Cc: users <[email protected]>
> Subject: RE: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets"
> under low bandwidth conditions.
>
> Hi,
>
> First of all, please increase the number of queues and handling cores. For a
> 100 Gbps link, 4 to 8 queues/cores are usually needed.
> Also, you can try combinations of multiple queues per core (2..4 queues
> handled by one core).
> The "inline" and MPRQ offload options might also be useful to reach wire
> speed with small packets.
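>
> For example, a rough sketch (the devargs values here are illustrative only
> and should be tuned per the mlx5 PMD guide):
>
>     # 1 main lcore + 8 forwarding lcores, 8 Rx/Tx queue pairs,
>     # Multi-Packet RQ enabled, Tx inlining capped at 128 bytes
>     dpdk-testpmd -l 96-104 -n 4 \
>         -a 0000:a1:00.1,mprq_en=1,txq_inline_max=128 \
>         -- -i --rxq=8 --txq=8 --rxd=4096 --txd=4096 --nb-cores=8 --burst=64
>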
> Please see the Nvidia/Mellanox performance reports 
> (https://core.dpdk.org/perf-reports/) for the details.
>
> With best regards,
> Slava
>
> From: 韩康康 <[email protected]>
> Sent: Friday, April 25, 2025 1:36 PM
> To: Dariusz Sosnowski <[email protected]>; Slava Ovsiienko
> <[email protected]>; Bing Zhao <[email protected]>; orika
> <[email protected]>
> Cc: users <[email protected]>
> Subject: net/mlx5: mellanox cx5 experiences "rx_phy_discard_packets" under
> low bandwidth conditions.
>
> Hi all,
> I am using dpdk-testpmd and a packet generator to test bandwidth according to
> RFC2544.
> However, I observed that the CX5 reports rx_phy_discard_packets even at low
> bandwidth, resulting in abnormal measurements under zero-packet-loss
> conditions.
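>
> For reference, the counter shows up both in the testpmd extended statistics
> and in the kernel driver statistics, e.g.:
>
>     testpmd> show port xstats 0
>     ethtool -S enp161s0f1np1 | grep -i discard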
>
> dpdk version: dpdk-21.11.5
> ethtool -i enp161s0f1np1:
>     driver: mlx5_core
>     version: 5.8-6.0.4
>     firmware-version: 16.32.1010
> Hardware: AMD EPYC 7742 64-core Processor
> dpdk-testpmd:
> dpdk-testpmd -l 96-111 -n 4 -a 0000:a1:00.1 -- -i --rxq=1  --txq=1 --txd=8192 
> --rxd=8192 --nb-cores=1  --burst=128 -a --mbcache=512  --rss-udp
> test result:
> frame size (bytes)    offered load (%)    packet loss rate
> 128                   15.69               0.000211
> 256                   14.148              0.0004
> 512                   14.148              0.00008
> 1518                  14.92               0.00099
>
> I'd like to ask: is this an issue with the CX5 NIC? How can I debug it to
> eliminate the packet drops?
>
> 韩康康
> [email protected]
