Hello Fabiano,

On Thursday, April 10, 2025 21:52 CEST, Fabiano Rosas <[email protected]> wrote:
> Marco Cavenati <[email protected]> writes:
>
> > Enable the use of the mapped-ram migration feature with savevm/loadvm
> > snapshots by adding the QIO_CHANNEL_FEATURE_SEEKABLE feature to
> > QIOChannelBlock. Implement io_preadv and io_pwritev methods to provide
> > positioned I/O capabilities that don't modify the channel's position
> > pointer.
>
> We'll need to add the infrastructure to reject multifd and direct-io
> before this. The rest of the capabilities should not affect mapped-ram,
> so it's fine (for now) if we don't honor them.

Do you have any status update on the infrastructure you mentioned?

> What about zero page handling? Mapped-ram doesn't send zero pages
> because the file will always have zeroes in it and the migration
> destination is guaranteed to not have been running previously. I believe
> loading a snapshot in a VM that's already been running would leave stale
> data in the guest's memory.

Regarding the zero-page handling, I'd like to hear your opinion on an idea
I proposed a while back:

The scenarios where zeroing is not required (incoming migration and -loadvm)
share a common characteristic: the VM has not yet run in the current QEMU
process. To avoid splitting read_ramblock_mapped_ram(), could we implement a
check that determines whether the VM has ever run in this process, and decide
whether to zero the memory based on that? Maybe using RunState?

Then we could add something like this to read_ramblock_mapped_ram():

    ...
    clear_bit_idx = 0;
    for (...) {
        // Zero pages
        if (guest_has_ever_run()) {
            unread = TARGET_PAGE_SIZE * (set_bit_idx - clear_bit_idx);
            offset = clear_bit_idx << TARGET_PAGE_BITS;
            host = host_from_ram_block_offset(block, offset);
            if (!host) {
                ...
            }
            ram_handle_zero(host, unread);
        }
        // Non-zero pages
        clear_bit_idx = find_next_zero_bit(bitmap, num_pages, set_bit_idx + 1);
        ...
    }

(Plus trailing zero-page handling.)

Thank you :)
Marco
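P.S. To sanity-check the loop shape sketched above, here is a self-contained
toy model (not QEMU code): stand-in helpers replace QEMU's find_next_bit()/
find_next_zero_bit(), a plain boolean replaces the hypothetical
guest_has_ever_run(), and instead of calling ram_handle_zero() it just counts
the pages that would be zeroed, including the trailing run of clear bits:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-ins for QEMU's bitmap helpers (one byte per page bit here,
 * purely for illustration). Both return `size` when no bit is found. */
static long toy_find_next_bit(const unsigned char *map, long size, long start)
{
    for (long i = start; i < size; i++) {
        if (map[i]) {
            return i;
        }
    }
    return size;
}

static long toy_find_next_zero_bit(const unsigned char *map, long size,
                                   long start)
{
    for (long i = start; i < size; i++) {
        if (!map[i]) {
            return i;
        }
    }
    return size;
}

/*
 * Walk runs of set bits (pages present in the file). When zeroing is
 * needed, the gap of clear bits preceding each run is the range of zero
 * pages; return how many pages would be zeroed in total.
 */
static long zero_pages_needed(const unsigned char *bitmap, long num_pages,
                              bool guest_has_ever_run)
{
    long zeroed = 0;
    long clear_bit_idx = 0;
    long set_bit_idx = toy_find_next_bit(bitmap, num_pages, 0);

    while (set_bit_idx < num_pages) {
        /* Zero pages: the clear-bit gap before this run of set bits. */
        if (guest_has_ever_run) {
            zeroed += set_bit_idx - clear_bit_idx;
        }
        /* Non-zero pages: skip past the run of set bits. */
        clear_bit_idx = toy_find_next_zero_bit(bitmap, num_pages,
                                               set_bit_idx + 1);
        set_bit_idx = toy_find_next_bit(bitmap, num_pages, clear_bit_idx);
    }
    /* Trailing zero pages after the last run of set bits. */
    if (guest_has_ever_run) {
        zeroed += num_pages - clear_bit_idx;
    }
    return zeroed;
}
```

For a bitmap like {0,1,1,0,0,1,0,1}, pages 0, 3, 4, and 6 are the zero pages,
so zero_pages_needed() returns 4 when zeroing is required and 0 otherwise;
an all-clear bitmap zeroes every page, matching the "VM has already run"
snapshot-load case discussed above.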
