+ Dan Carpenter

On Tue, May 28, 2024 at 03:48:46PM +0200, Alexander Lobakin wrote:
> idpf uses Page Pool for data buffers with hardcoded buffer lengths of
> 4k for "classic" buffers and 2k for "short" ones. This is not flexible
> and does not ensure optimal memory usage. Why would you need 4k buffers
> when the MTU is 1500?
> Use libeth for the data buffers and don't hardcode any buffer sizes. Let
> them be calculated from the MTU for "classics" and then divide the
> truesize by 2 for "short" ones. The memory usage is now greatly reduced,
> and 2 buffer queues start to make sense: for frames <= 1024, you'll
> recycle (and resync) a page only after 4 HW writes rather than 2.
> 
> Signed-off-by: Alexander Lobakin <aleksander.loba...@intel.com>
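
For what it's worth, the arithmetic above works out neatly for the
default MTU (my numbers, assuming the usual power-of-two truesize
rounding, so double-check me):

        1500-byte MTU + overhead  -> 2048-byte truesize ("classic")
        2048 / 2                  -> 1024-byte "short" buffers
        4096-byte page / 1024     -> 4 HW writes before recycling

which matches the "4 HW writes rather than 2" above and halves the old
hardcoded 4k per "classic" buffer.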

...

> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c

...

Hi Alexander,

The code above the hunk below, starting at line 3321, is:

                if (unlikely(!hdr_len && !skb)) {
                        hdr_len = idpf_rx_hsplit_wa(hdr, rx_buf, pkt_len);
                        pkt_len -= hdr_len;
                        u64_stats_update_begin(&rxq->stats_sync);
                        u64_stats_inc(&rxq->q_stats.hsplit_buf_ovf);
                        u64_stats_update_end(&rxq->stats_sync);
                }
                if (libeth_rx_sync_for_cpu(hdr, hdr_len)) {
                        skb = idpf_rx_build_skb(hdr, hdr_len);
                        if (!skb)
                                break;
                        u64_stats_update_begin(&rxq->stats_sync);
                        u64_stats_inc(&rxq->q_stats.hsplit_pkts);
                        u64_stats_update_end(&rxq->stats_sync);
                }
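
If I read it right, on the first descriptor of a frame this is the
only place skb gets assigned before the payload path, i.e. condensed
down (my paraphrase, not the literal code):

        /* skb is still NULL here unless a header skb was built */
        if (libeth_rx_sync_for_cpu(hdr, hdr_len))
                skb = idpf_rx_build_skb(hdr, hdr_len);
        /* skb stays NULL when hdr_len ends up 0, so the sync
         * helper returns false */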

> @@ -3413,24 +3340,24 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
>               hdr->page = NULL;
>  
>  payload:
> -             if (pkt_len) {
> -                     idpf_rx_sync_for_cpu(rx_buf, pkt_len);
> -                     if (skb)
> -                             idpf_rx_add_frag(rx_buf, skb, pkt_len);
> -                     else
> -                             skb = idpf_rx_construct_skb(rxq, rx_buf,
> -                                                         pkt_len);
> -             } else {
> -                     idpf_rx_put_page(rx_buf);
> -             }
> +             if (!libeth_rx_sync_for_cpu(rx_buf, pkt_len))
> +                     goto skip_data;
> +
> +             if (skb)
> +                     idpf_rx_add_frag(rx_buf, skb, pkt_len);
> +             else
> +                     skb = idpf_rx_build_skb(rx_buf, pkt_len);
>  
>               /* exit if we failed to retrieve a buffer */
>               if (!skb)
>                       break;
>  
> -             idpf_rx_post_buf_refill(refillq, buf_id);
> +skip_data:
> +             rx_buf->page = NULL;
>  
> +             idpf_rx_post_buf_refill(refillq, buf_id);
>               IDPF_RX_BUMP_NTC(rxq, ntc);
> +
>               /* skip if it is non EOP desc */
>               if (!idpf_rx_splitq_is_eop(rx_desc))
>                       continue;

The code following this hunk, ending at line 3372, looks like this:

                /* pad skb if needed (to make valid ethernet frame) */
                if (eth_skb_pad(skb)) {
                        skb = NULL;
                        continue;
                }
                /* probably a little skewed due to removing CRC */
                total_rx_bytes += skb->len;

Smatch warns that:
.../idpf_txrx.c:3372 idpf_rx_splitq_clean() error: we previously assumed 'skb' 
could be null (see line 3321)

I think, though I am not sure, that this is because Smatch sees that
skb can still be NULL at the point where the new "goto skip_data;" is
taken above: nothing on that path assigns it before the EOP check.
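
Concretely, the sequence I have in mind is (again, my reading, so I
may well be wrong):

        /* no header skb was built above, so skb is still NULL */
        if (!libeth_rx_sync_for_cpu(rx_buf, pkt_len))  /* pkt_len == 0 */
                goto skip_data;
        ...
skip_data:
        /* refill, bump ntc, then idpf_rx_splitq_is_eop() is true,
         * so we fall through to:
         */
        eth_skb_pad(skb);               /* <-- NULL dereference? */
        total_rx_bytes += skb->len;     /* line 3372 */

eth_skb_pad() reads skb->len via skb_put_padto(), so if that path is
reachable we would oops there even before line 3372.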

Could you look into this?

...
