Hello!

> ok, your problem here is that you modify ram. Could you take a look at
> how vhost manage this? It is done at migration_bitmap_sync(), and it
> just marks the pages that are dirty.
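If I read this right, vhost registers a MemoryListener whose log_sync callback is invoked from migration_bitmap_sync() and feeds the device's dirty pages back into the core memory API. A rough sketch of that pattern as I imagine it (I have not dug into hw/virtio/vhost.c myself; the my_* names below are made up):

    #include "exec/memory.h"

    /* my_device_page_is_dirty() is a stand-in for whatever private
     * dirty log the device keeps. */
    extern bool my_device_page_is_dirty(hwaddr addr);

    static void my_log_sync(MemoryListener *listener,
                            MemoryRegionSection *section)
    {
        hwaddr off;

        /* For every page the device touched, set the dirty bit in the
         * core memory API so the migration bitmap picks it up. */
        for (off = 0; off < int128_get64(section->size);
             off += TARGET_PAGE_SIZE) {
            if (my_device_page_is_dirty(section->offset_within_region + off)) {
                memory_region_set_dirty(section->mr,
                                        section->offset_within_region + off,
                                        TARGET_PAGE_SIZE);
            }
        }
    }

    static MemoryListener my_listener = {
        .log_sync = my_log_sync,
    };

    /* Registered once at device setup: */
    /* memory_listener_register(&my_listener, &address_space_memory); */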
Hm, interesting... I see it hooks into memory_region_sync_dirty_bitmap(). Sorry if this is a lame question, but I do not know the whole code, and it will be much faster for you to explain it to me than for me to dig into it myself: at what moment is it called during migration?

To help you understand what is needed here: the ITS is a thing that can be implemented in-kernel by KVM, and that is exactly what I am working on. In my implementation I add an ioctl which is called after the CPUs are stopped. It flushes the internal caches of the vITS to RAM, and this happens inside the kernel. I guess dirty-state tracking works correctly in this case, because the memory gets modified by the kernel, and from QEMU's point of view that is the same as memory being modified by the guest. Therefore I do not need to touch the dirty bitmaps myself; I only need to tell the kernel to actually write out the data.

If we want to make this iterative, we need this ioctl anyway. It could be modified inside the kernel to update only those parts of RAM whose data has changed since the last flush. The semantics would stay the same: it is just an ioctl telling the virtual device to store its data in RAM. A rough sketch of the QEMU side is in the P.S. below.

Ah, and again, these memory listeners are not prioritized either. I guess I could use them, but I would need a guarantee that my listener is called before KVMMemoryListener, which picks up the changes.

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia
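P.S. For illustration, here is roughly how I imagine the QEMU side of the flush. KVM_DEV_ARM_ITS_CTRL_GRP and KVM_DEV_ARM_ITS_FLUSH are placeholder names I made up, not an existing kernel ABI; only kvm_device_ioctl() and KVM_SET_DEVICE_ATTR are real.

    #include <linux/kvm.h>
    #include "sysemu/kvm.h"

    /* Placeholder values only; there is no such ABI yet. */
    #define KVM_DEV_ARM_ITS_CTRL_GRP  0
    #define KVM_DEV_ARM_ITS_FLUSH     0

    static int vits_flush_to_ram(int its_dev_fd)
    {
        struct kvm_device_attr attr = {
            .group = KVM_DEV_ARM_ITS_CTRL_GRP,   /* placeholder */
            .attr  = KVM_DEV_ARM_ITS_FLUSH,      /* placeholder */
        };

        /*
         * Called after the CPUs are stopped. The kernel writes the
         * vITS cache contents into guest RAM; those writes dirty the
         * pages exactly as if the guest had made them, so the
         * migration bitmap picks them up without extra work in QEMU.
         */
        return kvm_device_ioctl(its_dev_fd, KVM_SET_DEVICE_ATTR, &attr);
    }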