On Tue, Feb 08, 2022 at 11:13:41AM +0000, Oleksandr Andrushchenko wrote:
> 
> 
> On 08.02.22 12:50, Roger Pau Monné wrote:
> > On Tue, Feb 08, 2022 at 07:35:34AM +0000, Oleksandr Andrushchenko wrote:
> >> 5. You name it
> >> ==============================================================
> >>
> >>   From all the above I would recommend we go with option 2, which seems to
> >> reliably solve ABBA and does not bring the cons of the other approaches.
> > 6. per-domain rwlock + per-device vpci lock
> >
> > Introduce a vpci_header_write_lock(start, {end, size}) helper: it
> > returns whether a range requires the per-domain lock in write mode.
> > This will only return true if the range overlaps the ROM BAR or the
> > command register.
> >
> > In vpci_{read,write}:
> >
> > if ( vpci_header_write_lock(...) )
> >      /* Gain exclusive access to all of the domain's pdevs' vpci. */
> >      write_lock(d->vpci);
> > else
> > {
> >      read_lock(d->vpci);
> >      spin_lock(vpci->lock);
> > }
> > ...
> >
> > The vpci assign/deassign functions would need to be modified to take
> > the per-domain rwlock in write mode. The MSI-X table MMIO handler will
> > also need to take the per-domain vpci lock in read mode.
> Ok, so it seems you are in favor of this implementation, and I have
> no objection either. The only limitation we should be aware of is
> that once a path has acquired the read lock, no write-path operations
> are possible from within it.
> vpci_process_pending will acquire the write lock though, as it can
> lead to vpci_remove_device on its error path.
> 
> So, I am going to implement pdev->vpci->lock + d->vpci_lock

I think it's the least uncertain option.

As said, if you want to investigate whether you can successfully move
the checking into vpci_process_pending, that would also be fine with
me, but I cannot guarantee it will work. OTOH I think the per-domain
rwlock + per-device spinlock is quite likely to solve our issues.

Thanks, Roger.
