Prior extension of these functions to enable per-device quarantine page tables didn't add any locking there, but merely left in place what had been there before. But the locking really is unnecessary here: We're running with pcidevs_lock held (i.e. multiple invocations of the same function [or of their teardown equivalents] are impossible, and hence there are no "local" races), while all consumers of the data being populated here can't race anyway, since they run strictly sequentially afterwards. See also the comment in struct arch_pci_dev.
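For illustration only (not part of the patch), below is a minimal stand-alone C model of the serialization argument above: the outer mutex stands in for pcidevs_lock, and because every caller of the init hook holds it, the hook itself can populate per-device data without any inner lock. All names here are made up for the sketch.

/*
 * Stand-alone sketch (not Xen code) of the locking argument: callers of
 * the per-device init hook already hold an outer lock (pcidevs_lock in
 * Xen, modelled as a pthread mutex), so the hook needs no inner lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pcidevs_lock = PTHREAD_MUTEX_INITIALIZER;

struct pci_dev {
    int leaf_mfn;                     /* stands in for pdev->arch.leaf_mfn */
};

/* Plays the role of amd_iommu_quarantine_init() / its VT-d counterpart. */
static void quarantine_init(struct pci_dev *pdev)
{
    /*
     * No inner lock: the caller holds pcidevs_lock, so two invocations
     * for the same device can't overlap, and consumers of leaf_mfn only
     * run after this returns.
     */
    pdev->leaf_mfn = 42;              /* populate per-device data */
}

static void *caller(void *arg)
{
    struct pci_dev *pdev = arg;

    pthread_mutex_lock(&pcidevs_lock);    /* outer serialization */
    quarantine_init(pdev);
    pthread_mutex_unlock(&pcidevs_lock);

    return NULL;
}

int main(void)
{
    struct pci_dev dev = { 0 };
    pthread_t t1, t2;

    pthread_create(&t1, NULL, caller, &dev);
    pthread_create(&t2, NULL, caller, &dev);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("leaf_mfn = %d\n", dev.leaf_mfn);
    return 0;
}

Built with e.g. "gcc -pthread", the two callers serialize on the outer lock, mirroring how the hypervisor serializes invocations of the quarantine_init() hooks.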
Signed-off-by: Jan Beulich <jbeul...@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -699,15 +699,11 @@ int cf_check amd_iommu_quarantine_init(s
         union amd_iommu_pte *root;
         struct page_info *pgs[IOMMU_MAX_PT_LEVELS] = {};
 
-        spin_lock(&hd->arch.mapping_lock);
-
         root = __map_domain_page(pdev->arch.amd.root_table);
         rc = fill_qpt(root, level - 1, pgs);
         unmap_domain_page(root);
 
         pdev->arch.leaf_mfn = page_to_mfn(pgs[0]);
-
-        spin_unlock(&hd->arch.mapping_lock);
     }
 
     page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list);
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -3054,15 +3054,11 @@ static int cf_check intel_iommu_quaranti
         struct dma_pte *root;
         struct page_info *pgs[6] = {};
 
-        spin_lock(&hd->arch.mapping_lock);
-
         root = map_vtd_domain_page(pdev->arch.vtd.pgd_maddr);
         rc = fill_qpt(root, level - 1, pgs);
         unmap_vtd_domain_page(root);
 
         pdev->arch.leaf_mfn = page_to_mfn(pgs[0]);
-
-        spin_unlock(&hd->arch.mapping_lock);
     }
 
     page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list);
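For reference, a hypothetical, much-simplified model of the fill_qpt() call pattern both hunks retain: one shared page per page-table level, with pgs[0] ending up as the leaf page that gets recorded in pdev->arch.leaf_mfn. Types, the allocator, and the entry encoding are simplified stand-ins, not the actual Xen implementation.

/* Hypothetical, simplified model -- not the actual Xen fill_qpt(). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PTES_PER_TABLE 512
#define MAX_LEVELS     6

typedef uint64_t pte_t;

/* Stand-in for Xen's zeroing page-table allocator. */
static pte_t *alloc_table(void)
{
    return calloc(PTES_PER_TABLE, sizeof(pte_t));
}

/*
 * Fill every empty slot of the table @this, whose entries reference
 * level-@level pages.  One shared page per level (tracked in pgs[])
 * keeps the whole hierarchy at O(levels) allocations; pgs[0] is the
 * leaf page the real code records in pdev->arch.leaf_mfn.
 */
static int fill_qpt(pte_t *this, unsigned int level, pte_t *pgs[MAX_LEVELS])
{
    unsigned int i;

    if ( !pgs[level] )
    {
        pgs[level] = alloc_table();
        if ( !pgs[level] )
            return -1;

        /* Non-leaf shared pages are themselves filled, one level down. */
        if ( level && fill_qpt(pgs[level], level - 1, pgs) )
            return -1;
    }

    for ( i = 0; i < PTES_PER_TABLE; ++i )
        if ( !this[i] )
            this[i] = (pte_t)(uintptr_t)pgs[level]; /* mark slot "present" */

    return 0;
}

int main(void)
{
    pte_t *pgs[MAX_LEVELS] = {};
    pte_t *root = alloc_table();

    if ( !root || fill_qpt(root, 3, pgs) )
        return 1;

    printf("leaf page: %p\n", (void *)pgs[0]);
    return 0;
}

Nothing in this model takes a lock, matching the patch: the single caller (serialized by pcidevs_lock in the hypervisor) is the only writer, and readers of the populated data only run afterwards.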