On 1/24/24 00:00, Stewart Hildebrand wrote:
> On 1/23/24 10:07, Roger Pau Monné wrote:
>> On Tue, Jan 23, 2024 at 03:32:12PM +0100, Jan Beulich wrote:
>>> On 15.01.2024 20:43, Stewart Hildebrand wrote:
>>>> @@ -2888,6 +2888,8 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
>>>>  {
>>>>      int irq, pirq, ret;
>>>>  
>>>> +    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));
>>>
>>> If either lock is sufficient to hold here, ...
>>>
>>>> --- a/xen/arch/x86/physdev.c
>>>> +++ b/xen/arch/x86/physdev.c
>>>> @@ -123,7 +123,9 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
>>>>  
>>>>      case MAP_PIRQ_TYPE_MSI:
>>>>      case MAP_PIRQ_TYPE_MULTI_MSI:
>>>> +        pcidevs_lock();
>>>>          ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
>>>> +        pcidevs_unlock();
>>>>          break;
>>>
>>> ... why is it the global lock that's being acquired here?
>>>
>>
>> IIRC (Stewart can further comment) this is done holding the pcidevs
>> lock to keep the path unmodified, as there's no need to hold the
>> per-domain rwlock.
>>
> 
> Although allocate_and_map_msi_pirq() was itself acquiring the global 
> pcidevs_lock() before this patch, we could just as well use 
> read_lock(&d->pci_lock) here instead now. It seems like a good optimization 
> to make, so if there aren't any objections I'll change it to 
> read_lock(&d->pci_lock).
> 

Actually, I take this back. As mentioned in the cover letter of this series,
and as has been reiterated in recent discussions, the goal is to keep
existing (non-vPCI) code paths as unmodified as possible. So I'll keep the
pcidevs_lock() here.
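
For reference, a rough sketch of what the read_lock(&d->pci_lock)
alternative would have looked like in the same physdev.c hunk (just for
comparison, not what I'm proposing now):

    case MAP_PIRQ_TYPE_MSI:
    case MAP_PIRQ_TYPE_MULTI_MSI:
        /* Take the per-domain rwlock for reading instead of the global lock. */
        read_lock(&d->pci_lock);
        ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
        read_unlock(&d->pci_lock);
        break;

That would still satisfy the rw_is_locked(&d->pci_lock) half of the new
ASSERT in allocate_and_map_msi_pirq(), but it modifies a non-vPCI path,
which is exactly what this series is trying to avoid.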
