On Tue, 24 Mar 2026 11:13:04 -0700 Stanislav Fomichev wrote:
> > > +         netif_addr_lock_bh(dev);
> > > +
> > > +         err = __hw_addr_list_snapshot(&uc_snap, &dev->uc,
> > > +                                       dev->addr_len);
> > > +         if (!err)
> > > +                 err = __hw_addr_list_snapshot(&uc_ref, &dev->uc,
> > > +                                               dev->addr_len);
> > > +         if (!err)
> > > +                 err = __hw_addr_list_snapshot(&mc_snap, &dev->mc,
> > > +                                               dev->addr_len);
> > > +         if (!err)
> > > +                 err = __hw_addr_list_snapshot(&mc_ref, &dev->mc,
> > > +                                               dev->addr_len);  
> > 
> > This doesn't get slow with a few thousands of addresses?  
> 
> I can add a kunit benchmark and attach the output? Although I'm not sure
> where to go from there. The alternative to this is allocating an array of
> entries. I started with that initially, but __hw_addr_sync_dev wants to
> kfree the individual entries, so I decided not to add separate helpers
> to manage the snapshots.

Let's see what the benchmark says. Hopefully it's fast enough and 
we don't have to worry. Is keeping these lists around between the
invocations of the work tricky?

> > Can we give the work a reference on the netdev (at init time) and
> > cancel + release it here instead of flushing / waiting?  
> 
> Not sure why cancel+release, maybe you're thinking about the unregister
> path? This is rtnl_unlock -> netdev_run_todo -> __rtnl_unlock + some
> extras.
> 
> And the flush is here to plumb the addresses down to the real devices
> before we return to the callers, mostly because of failures like the
> following in the tests:
> 
> # TEST: team cleanup mode lacp                                        [FAIL]
> #       macvlan unicast address not found on a slave
> 
> Can you explain a bit more on the suggestion?

Oh, I thought it was here for unregister! Feels like it'd be cleaner to
add the flush in dev_*c_add() and friends. How hard would it be to
identify the callers in atomic context?

> > >   /* Wait for rcu callbacks to finish before next phase */
> > >   if (!list_empty(&list))
> > >           rcu_barrier();
> > > @@ -12099,6 +12173,7 @@ struct net_device *alloc_netdev_mqs(int 
> > > sizeof_priv, const char *name,
> > >  #endif
> > >  
> > >   mutex_init(&dev->lock);
> > > + INIT_WORK(&dev->rx_mode_work, dev_rx_mode_work);
> > >  
> > >   dev->priv_flags = IFF_XMIT_DST_RELEASE | IFF_XMIT_DST_RELEASE_PERM;
> > >   setup(dev);
> > > @@ -12203,6 +12278,8 @@ void free_netdev(struct net_device *dev)
> > >  
> > >   kfree(rcu_dereference_protected(dev->ingress_queue, 1));
> > >  
> > > + cancel_work_sync(&dev->rx_mode_work);  
> > 
> > Should never happen so maybe wrap it in a WARN ?  
> 
> Or maybe just flush_workqueue here as well? To signal the intent that
> we're mostly waiting for the wq entry to become unused so we can kfree it?
> 
