Hi Tony,

I apologize for the late response.
> From: Tony Hart <tony.h...@domainhart.com>
> Sent: Saturday, October 12, 2024 17:09
> To: users@dpdk.org
> Subject: mlx5: imissed versus prio0_buf_discards
>
> I have a simple DPDK app that receives packets via RSS from a CX7 (400G).
> The app uses 16 queues across 16 cores. What I see is dropped packets even
> at only 50Mpps.
>
> Looking at rte_eth_port_xstats() I see rx_prio0_buf_discard_packets matches
> the number of packets dropped, however the imissed counter (from
> rte_eth_port_stats) is 0. Indeed, when I look at the rx_queue depths from
> each thread in the app, they barely reach 30 entries (I'm using the default
> number of queue descs).
>
> What is the difference between the rx_prio0_buf_discards and imissed counters,
> and why would rx_prio0_buf_discards increase but not imissed?

Both counters measure packet drops, but at different levels:

- imissed - Measures drops caused by a lack of free descriptors in the Rx queue. This indicates that SW cannot keep up with the current packet rate.
- rx_prio0_buf_discards - Measures drops caused by a lack of free space in the NIC's Rx buffer. This indicates that HW cannot keep up with the current packet rate.

(A small sketch of reading both counters from the application is in the P.S. below, in case it helps.)

What kind of traffic are you generating?
What kind of flow tables and rules do you create?
In your application, do you see that packets are roughly equally distributed across all 16 Rx queues?

> many thanks,
> tony
>
> fyi: this is using DPDK 24.07 and the HWS RTE Flow API to set up the RSS flow.
> Firmware is 28.41

Best regards,
Dariusz Sosnowski
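
P.S. A minimal sketch of how both counters can be read from the application, assuming the port is already started. The xstat name "rx_prio0_buf_discard_packets" is taken from your mail; please verify it against the rte_eth_xstats_get_names() output on your setup, since exact names can differ between PMD/FW versions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_ethdev.h>

/*
 * Print both drop counters for a port:
 *  - imissed: SW-level drops, Rx queue ran out of free descriptors.
 *  - rx_prio0_buf_discard_packets: HW-level drops, NIC Rx buffer full.
 * Error handling is minimal; this is only a sketch.
 */
static void
print_drop_counters(uint16_t port_id)
{
	struct rte_eth_stats stats;
	uint64_t id;
	uint64_t value;

	/* imissed is part of the basic port stats. */
	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("imissed: %" PRIu64 "\n", stats.imissed);

	/* Buffer discards are exposed as an extended stat; look it up by name. */
	if (rte_eth_xstats_get_id_by_name(port_id,
			"rx_prio0_buf_discard_packets", &id) == 0 &&
	    rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
		printf("rx_prio0_buf_discard_packets: %" PRIu64 "\n", value);
}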