On Mon, Apr 28, 2025 at 07:55:32PM +0000, Brian Vazquez wrote:
> Add a helper function to correctly handle the lockless
> synchronization when the sender needs to block. The paradigm is
>
>     if (no_resources()) {
>         stop_queue();
>         barrier();
>         if (!no_resources())
>             restart_queue();
>     }
>
> netif_subqueue_maybe_stop already handles the paradigm correctly, but
> the code split the check for resources into three parts: the first one
> (descriptors) followed the protocol, but the other two (completions and
> tx_buf) only did the first part and so were race prone.
>
> Luckily the netif_subqueue_maybe_stop macro already allows you to use a
> function to evaluate the start/stop conditions, so the fix only requires
> the right helper function to evaluate all the conditions at once.
>
> The patch removes idpf_tx_maybe_stop_common since it is no longer needed
> and instead adjusts the conditions separately for singleq and splitq.
>
> Note that idpf_rx_buf_hw_update doesn't need to check for resources
> since that will be covered in idpf_tx_splitq_frame.
Should the above read idpf_tx_buf_hw_update() rather than
idpf_rx_buf_hw_update()?

If so, I see that this is true when idpf_tx_buf_hw_update() is called
from idpf_tx_singleq_frame(). But is a check required in the case where
idpf_tx_buf_hw_update() is called by idpf_tx_singleq_map()?

> To reproduce:
>
> Reduce the threshold for pending completions to increase the chances of
> hitting this pause by changing your kernel:
>
> drivers/net/ethernet/intel/idpf/idpf_txrx.h
>
> -#define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq)	((txcq)->desc_count >> 1)
> +#define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq)	((txcq)->desc_count >> 4)
>
> Use pktgen to force the host to push small pkts very aggressively:
>
> ./pktgen_sample02_multiqueue.sh -i eth1 -s 100 -6 -d $IP -m $MAC \
> -p 10000-10000 -t 16 -n 0 -v -x -c 64
>
> Fixes: 6818c4d5b3c2 ("idpf: add splitq start_xmit")
> Signed-off-by: Josh Hay <joshua.a....@intel.com>
> Signed-off-by: Brian Vazquez <bria...@google.com>
> Signed-off-by: Luigi Rizzo <lri...@google.com>

...
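For readers following the stop/recheck paradigm quoted above, here is a
minimal sketch of what the changelog describes. The struct my_txq and
txq_has_resources() names are hypothetical placeholders, not the idpf
helpers added by the patch; netif_stop_subqueue()/netif_start_subqueue()
and smp_mb() are the generic kernel primitives.

	/* Sketch only: my_txq and txq_has_resources() are placeholders.
	 * txq_has_resources() must evaluate descriptors, completions and
	 * tx_buf together, so the re-check after the barrier covers every
	 * resource that a concurrent completion may have freed.
	 */
	static void example_maybe_stop(struct my_txq *q)
	{
		if (!txq_has_resources(q)) {
			netif_stop_subqueue(q->netdev, q->idx);
			smp_mb();	/* order the stop vs. the re-check */
			if (txq_has_resources(q))
				netif_start_subqueue(q->netdev, q->idx);
		}
	}

If only the descriptor count were re-checked after the barrier, a
completion that freed the missing completion slot or tx_buf between the
two checks could be missed and the queue would stay stopped, which is
the race the changelog describes for the split checks.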