On Mon, 27 May 2024 12:42:43 +0200, Paolo Abeni <pab...@redhat.com> wrote:
> On Thu, 2024-05-23 at 15:46 +0800, Heng Qi wrote:
> > This reverts commit 4d4ac2ececd3c42a08dd32a6e3a4aaf25f7efe44.
> > 
> > When the following snippet is run, lockdep will report a deadlock[1].
> > 
> >   /* Acquire all queues dim_locks */
> >   for (i = 0; i < vi->max_queue_pairs; i++)
> >           mutex_lock(&vi->rq[i].dim_lock);
> > 
> > There's no deadlock here because the vq locks are always taken
> > in the same order, but lockdep can not figure it out, and we
> > can not make each lock a separate class because there can be more
> > than MAX_LOCKDEP_SUBCLASSES of vqs.
> > 
> > However, dropping the lock is harmless:
> >   1. If dim is enabled, modifications made by dim worker to coalescing
> >      params may cause the user's query results to be dirty data.
> 
> It looks like the above can confuse the user-space/admin?

Maybe, but we don't seem to guarantee this anyway: the global query
interface (.get_coalesce) cannot guarantee correct results once DIM
or .get_per_queue_coalesce is in the picture:

1. DIM has been around for a long time (it modifies the per-queue
   parameters), but many NICs only expose an interface for querying
   the global parameters.
2. Some NICs do provide the .get_per_queue_coalesce interface, but it
   is not synchronized with DIM.

So I think this is acceptable.

> 
> Have you considered instead re-factoring
> virtnet_send_rx_notf_coal_cmds() to avoid acquiring all the mutex in
> sequence?

Perhaps one option is to stop traversing and updating each queue's
parameters from the global settings interface, so we never need to
hold all the dim_locks at once.

Thanks.

> 
> Thanks!
> 
> Paolo
> 
