"Marco Cavenati" <[email protected]> writes:

> Hello Fabiano,
>
> On Thursday, April 10, 2025 21:52 CEST, Fabiano Rosas <[email protected]> wrote:
>
>> Marco Cavenati <[email protected]> writes:
>> 
>> > Enable the use of the mapped-ram migration feature with savevm/loadvm
>> > snapshots by adding the QIO_CHANNEL_FEATURE_SEEKABLE feature to
>> > QIOChannelBlock. Implement io_preadv and io_pwritev methods to provide
>> > positioned I/O capabilities that don't modify the channel's position
>> > pointer.
>> 
>> We'll need to add the infrastructure to reject multifd and direct-io
>> before this. The rest of the capabilities should not affect mapped-ram,
>> so it's fine (for now) if we don't honor them.
>
> Do you have any status update on this infrastructure you mentioned?
>

I'm working on Daniel's suggestion of passing migration configuration
options via the commands themselves. When that is ready, we can include
savevm and have it accept only mapped-ram and clear all other options.

But don't worry about that; propose your changes and I'll make sure to
have *something* ready before they merge. I don't see an issue with
merging this single patch, for instance:
https://lore.kernel.org/r/[email protected]
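
For the record, the check I have in mind is small. Something along
these lines, as a sketch only; whether we gate on the existing
migrate_multifd()/migrate_direct_io() helpers or on the new per-command
options is exactly what's still in flux:

/*
 * Sketch of a validation hook for save_snapshot(); assumes the
 * migrate_multifd()/migrate_direct_io() helpers from
 * migration/options.c are the right things to gate on.
 */
static bool snapshot_validate_migration_options(Error **errp)
{
    if (migrate_multifd()) {
        error_setg(errp, "snapshot: multifd is not supported");
        return false;
    }
    if (migrate_direct_io()) {
        error_setg(errp, "snapshot: direct-io is not supported");
        return false;
    }
    return true;
}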

>> What about zero page handling? Mapped-ram doesn't send zero pages
>> because the file will always have zeroes in it and the migration
>> destination is guaranteed to not have been running previously. I believe
>> loading a snapshot in a VM that's already been running would leave stale
>> data in the guest's memory.
>
> About the zero-page handling, I'd like to hear your opinion on this
> idea I proposed a while back:
> The scenarios where zeroing is not required (incoming migration and
> -loadvm) share a common characteristic: the VM has not yet run in the
> current QEMU process.
> To avoid splitting read_ramblock_mapped_ram(), could we implement
> a check to determine if the VM has ever run and decide whether to zero
> the memory based on that? Maybe using RunState?
>

We could just as well add some flag to load_snapshot() since we know
which invocations are guaranteed to happen with clean memory.

But if you can use existing code for that, it would be great. Adding a
global guest_has_ever_run flag, not so much. What's the MachineInitPhase
before -loadvm?
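
For what it's worth, if RunState turns out to be usable, I'd expect the
predicate to be tiny. Untested sketch; it assumes -loadvm still runs in
RUN_STATE_PRELAUNCH and the incoming side in RUN_STATE_INMIGRATE, both
of which need double-checking:

/*
 * Sketch: true if a mapped-ram load must zero-fill guest memory.
 * Assumption: the VM has never run in this QEMU process iff we are
 * still in prelaunch (-loadvm) or inmigrate (incoming migration).
 */
static bool mapped_ram_needs_zero_fill(void)
{
    return !runstate_check(RUN_STATE_PRELAUNCH) &&
           !runstate_check(RUN_STATE_INMIGRATE);
}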

> Then we can add something like this to read_ramblock_mapped_ram()
> ...
> clear_bit_idx = 0;
> for (...) {
>     // Zero pages
>     if (guest_has_ever_run()) {
>         unread = TARGET_PAGE_SIZE * (set_bit_idx - clear_bit_idx);
>         offset = clear_bit_idx << TARGET_PAGE_BITS;
>         host = host_from_ram_block_offset(block, offset);
>         if (!host) {...}
>         ram_handle_zero(host, unread);
>     }
>     // Non-zero pages
>     clear_bit_idx = find_next_zero_bit(bitmap, num_pages, set_bit_idx + 1);
> ...
> (Plus trailing zero pages handling)
>

The suggestion of splitting read_ramblock_mapped_ram() was to spare you
from having to touch this code. =) But since you already know how to do
it, then yes, let's add the change inline.
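
Just so we're looking at the same shape, here's roughly how I'd picture
the inline version, trailing zero pages included. Sketch only;
needs_zero_fill() stands in for whichever predicate we settle on above:

/*
 * Sketch of the zero-filling logic inline in read_ramblock_mapped_ram().
 * The loop structure matches the existing code; needs_zero_fill() is a
 * placeholder for the predicate discussed above.
 */
unsigned long set_bit_idx, clear_bit_idx = 0;

for (set_bit_idx = find_first_bit(bitmap, num_pages);
     set_bit_idx < num_pages;
     set_bit_idx = find_next_bit(bitmap, num_pages, clear_bit_idx + 1)) {

    if (needs_zero_fill() && set_bit_idx > clear_bit_idx) {
        /* pages in [clear_bit_idx, set_bit_idx) are zero in the file */
        size_t unread = TARGET_PAGE_SIZE * (set_bit_idx - clear_bit_idx);
        ram_addr_t offset = (ram_addr_t)clear_bit_idx << TARGET_PAGE_BITS;
        void *host = host_from_ram_block_offset(block, offset);
        if (!host) {
            /* fail like the existing non-zero path does (omitted here) */
        }
        ram_handle_zero(host, unread);
    }

    clear_bit_idx = find_next_zero_bit(bitmap, num_pages, set_bit_idx + 1);

    /* existing code: read the non-zero run [set_bit_idx, clear_bit_idx) */
}

/* trailing zero pages after the last non-zero run */
if (needs_zero_fill() && clear_bit_idx < num_pages) {
    /* same zero-fill as above, for [clear_bit_idx, num_pages) */
}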

> Thank you :)
> Marco
