On Tue, Apr 11, 2023 at 8:34 PM Eugenio Perez Martin <epere...@redhat.com> wrote:
>
> On Wed, Apr 5, 2023 at 1:37 PM Eugenio Perez Martin
> <epere...@redhat.com> wrote:
> >
> > Hi!
> >
> > As mentioned in the last upstream virtio-networking meeting, one of
> > the factors that add the most downtime to migration is the handling
> > of the guest memory (pin, map, etc). At the moment this handling is
> > bound to the virtio life cycle (DRIVER_OK, RESET). In that sense, the
> > destination device waits until all the guest memory / state has been
> > migrated before it starts pinning all the memory.
> >
> > The proposal is to bind it to the char device life cycle (open vs
> > close) instead, so all the guest memory can stay pinned for the whole
> > guest / qemu lifecycle.
> >
> > This has two main problems:
> > * At the moment the reset semantics force the vdpa device to unmap
> > all the memory, so this change needs a vhost-vdpa feature flag.
> > * This may increase the initialization time. Maybe we can delay it if
> > qemu is not the destination of a LM. Anyway, I think this should be
> > done as an optimization on top.
> >
>
> Expanding on this, we could reduce the pinning even more now that the
> vring supports VA [1] with the emulated CVQ.
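
For context on the map path being discussed above: today qemu sends an
IOTLB update for each guest memory region when the device is started,
and the kernel pins the backing pages while handling that message. A
minimal sketch of that path, based on the uapi in <linux/vhost_types.h>
(it assumes the VHOST_BACKEND_F_IOTLB_MSG_V2 backend feature has been
negotiated; the fd, addresses and error handling are placeholders):

/*
 * Sketch of the existing map path: qemu writes a VHOST_IOTLB_UPDATE
 * message to the vhost-vdpa char device, and the kernel pins the pages
 * backing [uaddr, uaddr + size) while servicing the write().
 */
#include <stdint.h>
#include <unistd.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>

static int vdpa_map_region(int vdpa_fd, uint64_t iova, uint64_t size,
                           uint64_t uaddr)
{
        struct vhost_msg_v2 msg = {
                .type = VHOST_IOTLB_MSG_V2,
                .iotlb = {
                        .iova  = iova,  /* GPA when no vIOMMU is used */
                        .size  = size,
                        .uaddr = uaddr, /* qemu VA backing the region */
                        .perm  = VHOST_ACCESS_RW,
                        .type  = VHOST_IOTLB_UPDATE,
                },
        };

        /* The pinning happens in the kernel while this write is handled. */
        if (write(vdpa_fd, &msg, sizeof(msg)) != sizeof(msg))
                return -1;
        return 0;
}

This is the step the proposal would move from DRIVER_OK to the char
device open, and that the VA path would let CVQ skip entirely.
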
Note that VA for hardware means the device needs to support page faults
through either PRI or a vendor-specific interface.

>
> Something like:
> - Add a VHOST_VRING_GROUP_CAN_USE_VA ioctl to check whether a given VQ
> group has that capability. Passthrough devices with an emulated CVQ
> would return false for the dataplane group and true for the control
> vq group.
> - If that is true, qemu does not need to map and translate addresses
> for CVQ, but can directly provide VAs for the buffers. This avoids
> pinning, translations, etc in this case.

For CVQ yes, but we only avoid the pinning for CVQ, not for the other
virtqueues (a rough sketch of how qemu could probe this per group is at
the end of this mail).

Thanks

>
> Thanks!
>
> [1] https://lore.kernel.org/virtualization/20230404131326.44403-2-sgarz...@redhat.com/
>
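
For completeness, here is a sketch of how the qemu-side check could
look. This is purely hypothetical: VHOST_VRING_GROUP_CAN_USE_VA is only
a proposal in this thread, so the ioctl number and argument layout
below (mirroring VHOST_VDPA_GET_VRING_GROUP, i.e. struct
vhost_vring_state with the group in .index and the answer in .num) are
assumptions, not uapi.

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#ifndef VHOST_VRING_GROUP_CAN_USE_VA
/* Placeholder: the proposed ioctl does not exist yet, number is made up. */
#define VHOST_VRING_GROUP_CAN_USE_VA \
        _IOWR(VHOST_VIRTIO, 0x7f, struct vhost_vring_state)
#endif

/* Ask whether buffers for VQs in @group may be passed by (qemu) VA. */
static bool vdpa_group_can_use_va(int vdpa_fd, uint32_t group)
{
        struct vhost_vring_state s = { .index = group };

        if (ioctl(vdpa_fd, VHOST_VRING_GROUP_CAN_USE_VA, &s))
                return false;   /* old kernel: fall back to IOVA + pinning */

        return s.num != 0;
}

With something like that, qemu would keep the IOTLB map/pin path for
the dataplane group(s) but skip it for the CVQ group and place its own
VAs directly in the CVQ descriptors, which matches the "CVQ only"
caveat above.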