On Thu, 26 Mar 2026 at 19:32, Stephen Hemminger
<[email protected]> wrote:
>
> On Thu, 26 Mar 2026 13:46:11 +0100
> David Marchand <[email protected]> wrote:
>
> > In case no rxq has been set up (like when starting testpmd with no mempool
> > drivers), a crash happens in tap_dev_close:
> >
> > Thread 1 "dpdk-testpmd" received signal SIGSEGV, Segmentation fault.
> > 0x00007ffff7fad68b in tap_dev_close (dev=dev@entry=0x4c4a80
> >       <rte_eth_devices@INTERNAL>) at ../drivers/net/tap/rte_eth_tap.c:1111
> > 1111                  struct rx_queue *rxq = dev->data->rx_queues[i];
> >
> > (gdb) p dev->data->rx_queues
> > $4 = (void **) 0x0
> >
> > Fixes: 23e2387b49a1 ("net/tap: allocate queue structures dynamically")
> >
> > Signed-off-by: David Marchand <[email protected]>
> > ---
>
> It looked ok, but then the AI review spotted a couple of issues...
>
>
> Two issues:
>
> The txq variable is declared as struct rx_queue * but should be
> struct tx_queue *. Works by accident since it comes from a void *
> array and is only NULL-tested and passed to rte_free(), but the
> type is wrong.

Ah yes, will send a v2 quickly.

>
> Pre-existing: the loop runs to RTE_PMD_TAP_MAX_QUEUES but the
> rx_queues/tx_queues arrays are allocated with nb_rx_queues /
> nb_tx_queues entries by ethdev. If dev_configure() was called
> with fewer queues, the arrays are non-NULL but the access is
> out-of-bounds. Since these lines are being reworked anyway, worth
> adding a bounds check against nb_rx_queues/nb_tx_queues. The
> tap_queue_close() call is fine -- process_private fds are sized
> to RTE_PMD_TAP_MAX_QUEUES.

Indeed, and that makes the fix even simpler.


>
> Also missing Cc: [email protected] for a crash fix.

No, this is a fix for a 26.03 regression.


-- 
David Marchand
