On 20/06/2023 04:04, Duan, Zhenzhong wrote:
>> -----Original Message-----
>> From: Avihai Horon <avih...@nvidia.com>
>> Sent: Monday, June 19, 2023 7:14 PM
> ...
>>> a/hw/vfio/migration.c b/hw/vfio/migration.c
>>> index 6b58dddb8859..bc51aa765cb8 100644
>>> --- a/hw/vfio/migration.c
>>> +++ b/hw/vfio/migration.c
>>> @@ -632,42 +632,41 @@ int64_t vfio_mig_bytes_transferred(void)
>>>      return bytes_transferred;
>>>  }
>>>
>>> -int vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
>>> +bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
>>>  {
>>> -    int ret = -ENOTSUP;
>>> +    int ret;
>>>
>>> -    if (!vbasedev->enable_migration) {
>>> +    if (!vbasedev->enable_migration || vfio_migration_init(vbasedev)) {
>>> +        error_setg(&vbasedev->migration_blocker,
>>> +                   "VFIO device doesn't support migration");
>>>          goto add_blocker;
>>>      }
>>>
>>> -    ret = vfio_migration_init(vbasedev);
>>> -    if (ret) {
>>> +    if (vfio_block_multiple_devices_migration(errp)) {
>>> +        error_setg(&vbasedev->migration_blocker,
>>> +                   "Migration is currently not supported with multiple "
>>> +                   "VFIO devices");
>>>          goto add_blocker;
>>>      }
>>
>> Here you are tying the multiple devices blocker to a specific device.
>> This could be problematic:
>> If you add VFIO device #1 and then device #2, the blocker will be added
>> to device #2. If you then remove device #1, migration will still be
>> blocked although it shouldn't be.
>>
>> I think we should keep it as a global blocker and not a per-device blocker.
>
> Thanks for pointing that out, you are right, it seems I need to restore
> the multiple devices part of the code.
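To illustrate what I mean by keeping it global, something roughly like the
below would work (untested sketch; vfio_multiple_devices_migration_is_supported()
is only an illustrative helper that would check the number of realized VFIO
devices, and I'm assuming the current migrate_add_blocker()/migrate_del_blocker()
API). The blocker is installed once and dropped again when only one device is
left, regardless of which device triggered it:

/* Untested sketch, helper name is illustrative */
static Error *multiple_devices_migration_blocker;

int vfio_block_multiple_devices_migration(Error **errp)
{
    int ret;

    /* Nothing to do if the blocker is already installed or not needed */
    if (multiple_devices_migration_blocker ||
        vfio_multiple_devices_migration_is_supported()) {
        return 0;
    }

    error_setg(&multiple_devices_migration_blocker,
               "Migration is currently not supported with multiple "
               "VFIO devices");
    ret = migrate_add_blocker(multiple_devices_migration_blocker, errp);
    if (ret < 0) {
        error_free(multiple_devices_migration_blocker);
        multiple_devices_migration_blocker = NULL;
    }

    return ret;
}

void vfio_unblock_multiple_devices_migration(void)
{
    /* Drop the blocker once a single VFIO device (or none) is left */
    if (!multiple_devices_migration_blocker ||
        vfio_multiple_devices_migration_is_supported()) {
        return;
    }

    migrate_del_blocker(multiple_devices_migration_blocker);
    error_free(multiple_devices_migration_blocker);
    multiple_devices_migration_blocker = NULL;
}

That way the blocker's lifetime is tied to the overall device count rather
than to whichever device happened to be realized last.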
It's the same for the vIOMMU migration blocker. You could have a machine with default_bus_bypass_iommu=on, add device #1 with the bypass_iommu=off attribute under a pxb PCI port, and then add device #2 with bypass_iommu=on. The blocker is added because of device #1, but migration will remain blocked even after you remove device #1.
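So, like the multiple-devices case, the vIOMMU blocker should be global and
refcounted, so it is dropped once the last device sitting behind a vIOMMU goes
away. Rough, untested sketch (vfio_device_is_behind_viommu() is only a
placeholder for the real address-space check):

/* Untested sketch, helper name is a placeholder */
static unsigned int viommu_device_count;
static Error *giommu_migration_blocker;

int vfio_block_giommu_migration(VFIODevice *vbasedev, Error **errp)
{
    int ret;

    /* Only the first device behind a vIOMMU installs the blocker */
    if (!vfio_device_is_behind_viommu(vbasedev) || viommu_device_count++) {
        return 0;
    }

    error_setg(&giommu_migration_blocker,
               "Migration is currently not supported with vIOMMU enabled");
    ret = migrate_add_blocker(giommu_migration_blocker, errp);
    if (ret < 0) {
        error_free(giommu_migration_blocker);
        giommu_migration_blocker = NULL;
        viommu_device_count--;
    }

    return ret;
}

void vfio_unblock_giommu_migration(VFIODevice *vbasedev)
{
    /* Remove the blocker only when the last such device goes away */
    if (!vfio_device_is_behind_viommu(vbasedev) || --viommu_device_count) {
        return;
    }

    migrate_del_blocker(giommu_migration_blocker);
    error_free(giommu_migration_blocker);
    giommu_migration_blocker = NULL;
}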