On 26.07.2023 03:17, Volodymyr Babchuk wrote:
> Roger Pau Monné <roger....@citrix.com> writes:
>> On Thu, Jul 20, 2023 at 12:32:31AM +0000, Volodymyr Babchuk wrote:
>>> --- a/xen/arch/x86/hvm/vmsi.c
>>> +++ b/xen/arch/x86/hvm/vmsi.c
>>> @@ -895,6 +895,8 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>>>  {
>>>      unsigned int i;
>>>  
>>> +    ASSERT(rw_is_locked(&msix->pdev->domain->pci_lock));
>>> +
>>>      for ( i = 0; i < msix->max_entries; i++ )
>>>      {
>>>          const struct vpci_msix_entry *entry = &msix->entries[i];
>>> @@ -913,7 +915,9 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>>>              struct pci_dev *pdev = msix->pdev;
>>>  
>>>              spin_unlock(&msix->pdev->vpci->lock);
>>> +            read_unlock(&pdev->domain->pci_lock);
>>>              process_pending_softirqs();
>>> +            read_lock(&pdev->domain->pci_lock);
>>
>> This should be a read_trylock(), much like the spin_trylock() below.
> 
> vpci_dump_msi() expects that vpci_msix_arch_print() will return holding
> this lock. I can rework both functions, of course. But then we will be
> in a situation where the calling code needs to know the exact behavior
> of vpci_dump_msi() with respect to locks...

Your reply sounds as if you hadn't seen my earlier suggestion on this
matter (making the behavior match for the second lock that is now
involved as well).
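For illustration only, a minimal sketch of what that could look like in
the hunk above, reusing the -EBUSY bail-out convention that the existing
spin_trylock() on the vpci lock already follows (untested, exact error
handling up to the actual patch):

            spin_unlock(&msix->pdev->vpci->lock);
            read_unlock(&pdev->domain->pci_lock);
            process_pending_softirqs();
            /*
             * Re-acquire both locks; on failure return -EBUSY with
             * neither lock held, matching the vpci lock behavior, so
             * the caller (vpci_dump_msi()) has a single convention to
             * deal with.
             */
            if ( !read_trylock(&pdev->domain->pci_lock) )
                return -EBUSY;
            if ( !spin_trylock(&pdev->vpci->lock) )
            {
                read_unlock(&pdev->domain->pci_lock);
                return -EBUSY;
            }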

Jan
