On 25.11.2021 12:02, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>
>
> When a vPCI is removed for a PCI device it is possible that we have
> scheduled delayed work for map/unmap operations for that device.
> For example, the following scenario illustrates the problem:
>
> pci_physdev_op
>     pci_add_device
>         init_bars -> modify_bars -> defer_map ->
>             raise_softirq(SCHEDULE_SOFTIRQ)
>     iommu_add_device <- FAILS
>     vpci_remove_device -> xfree(pdev->vpci)
>
> leave_hypervisor_to_guest
>     vpci_process_pending: v->vpci.mem != NULL; v->vpci.pdev->vpci == NULL
>
> For the hardware domain we continue execution, as the worst that
> could happen is that MMIO mappings are left in place when the
> device has been deassigned.
>
> For unprivileged domains that get a failure in the middle of a vPCI
> {un}map operation we need to destroy them, as we don't know in which
> state the p2m is. This can only happen in vpci_process_pending for
> DomUs, as they won't be allowed to call pci_add_device.
>
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushche...@epam.com>
>
> ---
> Cc: Roger Pau Monné <roger....@citrix.com>
> ---
> Since v4:
>  - crash guest domain if map/unmap operation didn't succeed
>  - re-work vpci cancel work to cancel work on all vCPUs
>  - use new locking scheme with pdev->vpci_lock
> New in v4
>
> Fixes: 86dbcf6e30cb ("vpci: cancel pending map/unmap on vpci removal")
What is this about?

Jan