On Wed, Apr 19, 2023 at 06:12:05PM +0100, Daniel P. Berrangé wrote:
> On Tue, Apr 18, 2023 at 03:26:45PM -0400, Peter Xu wrote:
> > On Tue, Apr 18, 2023 at 05:58:44PM +0100, Daniel P. Berrangé wrote:
> > > Libvirt has multiple APIs where it currently uses its migrate-to-file
> > > approach
> > > 
> > >   * virDomainManagedSave()
> > > 
> > >     This saves VM state to an libvirt managed file, stops the VM, and the
> > >     file state is auto-restored on next request to start the VM, and the
> > >     file deleted. The VM CPUs are stopped during both save + restore
> > >     phase
> > > 
> > >   * virDomainSave/virDomainRestore
> > > 
> > >     The former saves VM state to a file specified by the mgmt app/user.
> > >     A later call to virDomainRestore starts the VM using that saved
> > >     state. The mgmt app / user can delete the file state, or re-use
> > >     it many times as they desire. The VM CPUs are stopped during both
> > >     save + restore phase
> > > 
> > >   * virDomainSnapshotXXX
> > > 
> > >     This family of APIs takes snapshots of the VM disks, optionally
> > >     also including the full VM state to a separate file. The snapshots
> > >     can later be restored. The VM CPUs remain running during the
> > >     save phase, but are stopped during restore phase
> > 
> > For this one IMHO it'll be good if Libvirt can consider leveraging the new
> > background-snapshot capability (QEMU 6.0+, so not very new..).  Or is there
> > perhaps any reason why a generic migrate:fd approach is better?
> 
> I'm not sure I fully understand the implications of 'background-snapshot' ?
> 
> Based on what the QAPI comment says, it sounds potentially interesting,
> as conceptually it would be nicer to have the memory / state snapshot
> represent the VM at the point where we started the snapshot operation,
> rather than where we finished the snapshot operation.
> 
> It would not solve the performance problems that the work in this thread
> was intended to address though.  With large VMs (100's of GB of RAM),
> saving all the RAM state to disk takes a very long time, regardless of
> whether the VM vCPUs are paused or running.

I think it solves the performance problem by copying each guest page
only once, even while the guest is running.

Unlike almost all other "migrate" use cases, background snapshot does
not use the generic dirty tracking at all (for KVM, that's
get-dirty-log).  Instead it uses userfaultfd write-protection, so that
when the snapshot is started all guest pages are write-protected once.

Then, when a page is first written, the guest cannot proceed until the
snapshot copy of that page has been taken.  Once a guest page is
unprotected, any further writes to it run at full speed, because
follow-up writes no longer matter for the snapshot.
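The invariant can be shown with a toy model (plain Python, no real
userfaultfd; the class and method names here are purely illustrative):
pages start "protected", the first write to a page copies its old
contents into the snapshot and unprotects it, so each page is copied at
most once and the snapshot reflects the state at snapshot start.

```python
# Toy model of background-snapshot's copy-before-write scheme.
# Real QEMU uses userfaultfd write-protection; here "protection" is
# just a set of page indices, purely to illustrate the invariant:
# each guest page is copied into the snapshot at most once.

class ToyBackgroundSnapshot:
    def __init__(self, guest_ram):
        self.guest_ram = list(guest_ram)   # live, mutable guest memory
        self.snapshot = {}                 # page index -> saved contents
        # At snapshot start, every page is write-protected in one pass.
        self.write_protected = set(range(len(guest_ram)))

    def guest_write(self, page, value):
        # First write to a protected page: copy the old contents into
        # the snapshot (the "wr-protect fault" path), then unprotect.
        if page in self.write_protected:
            self.snapshot[page] = self.guest_ram[page]
            self.write_protected.discard(page)
        # Follow-up writes take this fast path: no copy, full speed.
        self.guest_ram[page] = value

    def background_copy(self):
        # The snapshot thread walks the remaining protected pages and
        # copies them out; pages already copied via faults are skipped.
        for page in sorted(self.write_protected):
            self.snapshot[page] = self.guest_ram[page]
        self.write_protected.clear()
        return dict(self.snapshot)

snap = ToyBackgroundSnapshot(["a", "b", "c", "d"])
snap.guest_write(1, "B")    # fault path: old "b" is copied first
snap.guest_write(1, "BB")   # fast path: no further copy
result = snap.background_copy()
print(result)               # snapshot shows state at snapshot start
```

Note the snapshot ends up holding "b" for page 1 even though the guest
kept writing to it, which is exactly the point of copy-before-write.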

AFAICT, this guarantees the best possible efficiency for creating a
snapshot while the VM is running.  I sincerely think Libvirt should
have someone investigate whether virDomainSnapshotXXX() can be
implemented with this capability rather than the default migration.

I actually thought the Libvirt support was already there.  It must be
that someone posted Libvirt patches but they never landed for some
reason.

> 
> Currently when doing this libvirt has a "libvirt_iohelper" process
> that we use so that we can do writes with O_DIRECT set. This avoids
> thrashing the host OS's  I/O buffers/cache, and thus negatively
> impacting performance of anything else on the host doing I/O. This
> can't take advantage of multifd though, and even if extended to do
> so, it still imposes extra data copies during the save/restore paths.
> 
> So to speed up the above 3 libvirt APIs, we want QEMU to be able to
> directly save/restore mem/vmstate to files, with parallization and
> O_DIRECT.
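(For reference, the O_DIRECT handling described above can be sketched
as follows.  This is a minimal illustration, not libvirt's actual
iohelper code, and the filename is arbitrary.  O_DIRECT requires
block-aligned buffers and lengths, and some filesystems such as tmpfs
reject it outright, so the sketch pads the payload and falls back to
buffered I/O when the flag is unsupported.)

```python
import mmap
import os

ALIGN = 4096  # typical block/page alignment required by O_DIRECT

def write_aligned(path, payload):
    """Write payload to path, preferring O_DIRECT, padded to ALIGN."""
    # O_DIRECT needs an aligned buffer and an aligned length; an
    # anonymous mmap is page-aligned, so copy the payload into one.
    padded = (len(payload) + ALIGN - 1) // ALIGN * ALIGN
    buf = mmap.mmap(-1, padded)
    buf[:len(payload)] = payload

    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    direct = getattr(os, "O_DIRECT", 0)  # Linux-only flag
    try:
        fd = os.open(path, flags | direct)
    except OSError:
        # e.g. EINVAL on filesystems without O_DIRECT support
        fd = os.open(path, flags)
    try:
        os.write(fd, buf)
    finally:
        os.close(fd)
        buf.close()
    return padded

written = write_aligned("state.bin", b"vm state blob")
print(written)  # 4096: the payload padded up to one aligned block
```

The fallback mirrors what applications generally have to do in
practice, since O_DIRECT support is filesystem-dependent.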

Here, IIUC, the question above can be really important: whether the
existing virDomainSnapshotXXX() can (and should) be implemented with
"background-snapshot", because that is the only one of the three use
cases that needs to support live migration.

If virDomainSnapshotXXX() can be implemented differently, I think it
will be much easier to have both virDomainManagedSave() and
virDomainSave() trigger a migration command that stops the VM first,
by whatever means.

It's probably fine if we still want CAP_FIXED_RAM as a new capability
describing the file property (so that libvirt knows the iohelper is no
longer needed); it can support live migration even if it shouldn't
really be used that way.  But then we could probably have another
CAP_SUSPEND that gives QEMU a hint, so QEMU can be smart about this
non-live migration.

It's just that, AFAIU, CAP_FIXED_RAM should always be set together
with CAP_SUSPEND, because a migration to fixed ram must be a suspend;
otherwise one should just use virDomainSnapshotXXX() (i.e. a live
snapshot).
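On the QMP wire, the capability pair above could look something like
this (a hypothetical sketch: migrate-set-capabilities is the existing
QMP command, but "fixed-ram" and "suspend" are placeholder names from
this discussion, not committed QEMU capability names):

```json
{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "fixed-ram", "state": true },
      { "capability": "suspend",   "state": true } ] } }
```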

Thanks,

-- 
Peter Xu

