On Mon, Apr 10, 2023 at 11:17 AM Longpeng (Mike, Cloud Infrastructure Service Product Dept.) <longpe...@huawei.com> wrote:
>
>
> On 2023/4/10 10:14, Jason Wang wrote:
> > On Wed, Apr 5, 2023 at 7:38 PM Eugenio Perez Martin <epere...@redhat.com> wrote:
> >>
> >> Hi!
> >>
> >> As mentioned in the last upstream virtio-networking meeting, one of
> >> the factors that adds the most downtime to migration is the handling
> >> of the guest memory (pin, map, etc.). At this moment this handling is
> >> bound to the virtio life cycle (DRIVER_OK, RESET). In that sense, the
> >> destination device waits until all the guest memory / state has been
> >> migrated before it starts pinning all the memory.
> >>
> >> The proposal is to bind it to the char device life cycle (open vs
> >> close) instead, so all the guest memory can stay pinned for the whole
> >> guest / qemu lifecycle.
> >>
> >> This has two main problems:
> >> * At this moment the reset semantics force the vdpa device to unmap
> >> all the memory, so this change needs a vhost-vdpa feature flag.
> >
> > Is this true? I didn't find any code that unmaps the memory in
> > vhost_vdpa_set_status().
> >
>
> It could depend on the vendor driver; for example, vdpasim does
> something like that:
>
> vhost_vdpa_set_status -> vdpa_reset -> vdpasim_reset ->
> vdpasim_do_reset -> vhost_iotlb_reset
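For reference, a simplified sketch of that path as I read the upstream
code of this era (locking, error handling and unrelated steps are
trimmed, so treat it as an illustration rather than the exact source):

	/* drivers/vhost/vhost_vdpa.c: userspace writing status 0 via
	 * VHOST_VDPA_SET_STATUS triggers a full device reset.
	 */
	static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp)
	{
		...
		if (status == 0)
			vdpa_reset(vdpa);	/* calls the parent driver's ->reset() */
		...
	}

	/* drivers/vdpa/vdpa_sim/vdpa_sim.c: the simulator's reset drops
	 * every IOTLB entry, i.e. all guest memory mappings have to be
	 * re-added (and, on a real device, re-pinned) after each reset.
	 */
	static void vdpasim_do_reset(struct vdpasim *vdpasim)
	{
		...
		for (i = 0; i < vdpasim->dev_attr.nas; i++)
			vhost_iotlb_reset(&vdpasim->iommu[i]);
		...
	}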
This looks like a bug. Or I wonder whether any userspace depends on this
behaviour; if so, we really need a new flag then (a rough sketch of what
that could look like is at the end of this mail).

Thanks

>
> > Thanks
> >
> >> * This may increase the initialization time. Maybe we can delay it
> >> if qemu is not the destination of a LM. Anyway, I think this should
> >> be done as an optimization on top.
> >>
> >> Any ideas or comments in this regard?
> >>
> >> Thanks!
> >>
>
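To make the flag idea concrete: if some userspace does rely on reset
clearing the mappings, keeping them alive would have to be opt-in. A
rough sketch of one way that could look; the flag name, its value, and
both helpers below are made up purely for illustration:

	/* Hypothetical new backend feature bit: when userspace has acked
	 * it, a device reset must keep the IOTLB mappings (and the pinned
	 * pages behind them) alive across the reset.
	 */
	#define VHOST_BACKEND_F_IOTLB_PERSIST	0x6

	/* Sketch of gating the vhost-vdpa reset path on the new bit.
	 * vhost_vdpa_backend_has_feature() stands in for however the
	 * acked backend features are looked up; vhost_vdpa_iotlb_unmap_all()
	 * stands in for the unmap + unpin of every mapping.
	 */
	static void vhost_vdpa_do_reset(struct vhost_vdpa *v)
	{
		vdpa_reset(v->vdpa);

		/* Default to today's behaviour so existing userspace that
		 * relies on reset clearing the IOTLB is unaffected.
		 */
		if (!vhost_vdpa_backend_has_feature(v, VHOST_BACKEND_F_IOTLB_PERSIST))
			vhost_vdpa_iotlb_unmap_all(v);
	}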