On Friday, September 19, 2025 23:24 CEST, Fabiano Rosas <[email protected]> wrote:

> "Marco Cavenati" <[email protected]> writes:
> 
> > Hello Fabiano,
> >
> > On Thursday, April 10, 2025 21:52 CEST, Fabiano Rosas <[email protected]> 
> > wrote:
> >
> >> Marco Cavenati <[email protected]> writes:
> >> 
> >> > Enable the use of the mapped-ram migration feature with savevm/loadvm
> >> > snapshots by adding the QIO_CHANNEL_FEATURE_SEEKABLE feature to
> >> > QIOChannelBlock. Implement io_preadv and io_pwritev methods to provide
> >> > positioned I/O capabilities that don't modify the channel's position
> >> > pointer.
> >> 
> >> We'll need to add the infrastructure to reject multifd and direct-io
> >> before this. The rest of the capabilities should not affect mapped-ram,
> >> so it's fine (for now) if we don't honor them.
> >
> > Do you have any status update on this infrastructure you mentioned?
> >
> 
> I'm doing the work suggested by Daniel of passing migration
> configuration options via the commands themselves. When that is ready we
> can include savevm and have it only accept mapped-ram and clear all
> other options.
> 
> But don't worry about that, propose your changes and I'll make sure to
> have *something* ready before it merges. I don't see an issue with
> merging this single patch, for instance:
> https://lore.kernel.org/r/[email protected]

Perfect!

> >> What about zero page handling? Mapped-ram doesn't send zero pages
> >> because the file will always have zeroes in it and the migration
> >> destination is guaranteed to not have been running previously. I believe
> >> loading a snapshot in a VM that's already been running would leave stale
> >> data in the guest's memory.
> >
> > About the zero handling I'd like to hear your opinion about this idea I
> > proposed a while back:
> > The scenarios where zeroing is not required (incoming migration and
> > -loadvm) share a common characteristic: the VM has not yet run in the
> > current QEMU process.
> > To avoid splitting read_ramblock_mapped_ram(), could we implement
> > a check to determine if the VM has ever run and decide whether to zero
> > the memory based on that? Maybe using RunState?
> >
> 
> We could just as well add some flag to load_snapshot() since we know
> which invocations are guaranteed to happen with clean memory.
> 
> But if you can use existing code for that it would be great. Adding a
> global guest_has_ever_run flag, not so much. What's the MachineInitPhase
> before -loadvm?

MachineInitPhase is set to PHASE_MACHINE_READY during ram_load() for
both -loadvm and HMP loadvm, so unfortunately that isn’t an option.

RunState during ram_load() is:
- RUN_STATE_INMIGRATE for -incoming,
- RUN_STATE_PRELAUNCH for -loadvm,
- RUN_STATE_RESTORE_VM for HMP loadvm.
But I’m not sure how reliable it would be to depend on these states to
infer that the RAM is still zero.
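
To make the idea concrete, here is a rough, untested sketch of the check
I have in mind (the helper name is made up, and the header path may be
"sysemu/runstate.h" on older trees):

    #include "qemu/osdep.h"
    #include "system/runstate.h"    /* runstate_check() */

    /*
     * Hypothetical helper: returns true if the guest has never run in
     * this QEMU process, i.e. the RAM blocks are still zero-filled and
     * read_ramblock_mapped_ram() could skip clearing the pages that are
     * not present in the file bitmap.
     */
    static bool mapped_ram_is_clean(void)
    {
        /*
         * HMP/QMP loadvm shows up as RUN_STATE_RESTORE_VM; there the
         * guest may already have dirtied memory, so zeroing is needed.
         */
        return runstate_check(RUN_STATE_INMIGRATE) ||  /* -incoming */
               runstate_check(RUN_STATE_PRELAUNCH);    /* -loadvm   */
    }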

As for using a flag, I don’t see an obvious way to pass one down through
load_snapshot -> qemu_loadvm_state -> ... -> read_ramblock_mapped_ram.
Do you already have something in mind?
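The least invasive thing I can think of would be to avoid threading a
parameter through the whole call chain and instead stash the flag in
MigrationIncomingState, roughly like this (the field name and placement
are made up, purely to illustrate):

    /* migration/migration.h (illustrative field, does not exist today) */
    struct MigrationIncomingState {
        ...
        /* Loading side is known to start from never-run, zeroed RAM */
        bool ram_is_clean;
    };

    /* migration/savevm.c: load_snapshot(), before qemu_loadvm_state() */
    migration_incoming_get_current()->ram_is_clean =
        !runstate_check(RUN_STATE_RESTORE_VM);

    /* migration/ram.c: read_ramblock_mapped_ram() */
    if (!migration_incoming_get_current()->ram_is_clean) {
        /* zero the pages whose bit is clear in the file bitmap */
    }

That would keep read_ramblock_mapped_ram() in one piece and avoid
touching the qemu_loadvm_state() call chain, but I may be missing a
better place for it.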

Thank you
Marco

