* Yang Hongyang (yan...@cn.fujitsu.com) wrote:
> The ram cache was initially the same as the PVM's memory. At each
> checkpoint, we cache the PVM's dirty memory into the ram cache (so
> that the ram cache is always the same as the PVM's memory at every
> checkpoint), then flush the cached memory to the SVM after we have
> received all of the PVM's dirty memory (only memory that was dirty
> on both the PVM and the SVM since the last checkpoint needs to be
> flushed).
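To check I've got the model right, here's a rough sketch of the
cache/flush scheme as I read it - none of these names come from the
patch (ram_cache, dirty_bitmap, svm_ram and both helpers are made up),
and I'm reading "dirty on both PVM and SVM" as a single bitmap that
collects pages dirtied on either side:

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE      4096UL
    #define BITS_PER_LONG  (sizeof(unsigned long) * 8)

    /* Checkpoint receive: an incoming PVM page goes into the cache,
     * not into host memory, and its page is marked dirty so the
     * flush below will push it to the SVM. (Hypothetical sketch,
     * not the patch's actual code.) */
    static void cache_incoming_page(uint8_t *ram_cache,
                                    unsigned long *dirty_bitmap,
                                    unsigned long page,
                                    const uint8_t *data)
    {
        memcpy(ram_cache + page * PAGE_SIZE, data, PAGE_SIZE);
        dirty_bitmap[page / BITS_PER_LONG] |=
            1UL << (page % BITS_PER_LONG);
    }

    /* Once the whole checkpoint has arrived: copy every page whose
     * cache and SVM copies may differ (dirtied on either side since
     * the last checkpoint) from the cache into the SVM's memory. */
    static void flush_cached_pages(uint8_t *svm_ram,
                                   const uint8_t *ram_cache,
                                   unsigned long *dirty_bitmap,
                                   unsigned long nr_pages)
    {
        for (unsigned long page = 0; page < nr_pages; page++) {
            unsigned long mask = 1UL << (page % BITS_PER_LONG);
            if (dirty_bitmap[page / BITS_PER_LONG] & mask) {
                memcpy(svm_ram + page * PAGE_SIZE,
                       ram_cache + page * PAGE_SIZE, PAGE_SIZE);
                dirty_bitmap[page / BITS_PER_LONG] &= ~mask;
            }
        }
    }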
(Typo: 'r' on the end of the title)

I think I understand the need for the cache: to be able to restore
pages that the SVM has modified but the PVM hadn't. However, if I
understand the change here (to host_from_stream_offset), the SVM will
load the snapshot into the ram_cache rather than directly into host
memory - why is this necessary?

If the SVM's CPU is stopped at this point, couldn't it load snapshot
pages directly into host memory, clearing those pages in the SVM's
bitmap, so that the only pages that then get copied in flush_cache are
the pages that the SVM modified but the PVM *didn't* include in the
snapshot? (Rough sketch at the end of this mail.)

I can see that you would need to do it the way you've done it if the
snapshot load could fail (at the same time the PVM failed), and thus
the old SVM state would be the surviving state; but how could it fail
at this point, given that the whole stream is already in the
colo-buffer?

> +static void ram_flush_cache(void);
>  static int ram_load(QEMUFile *f, void *opaque, int version_id)
>  {
>      ram_addr_t addr;
>      int flags, ret = 0;
>      static uint64_t seq_iter;
> +    bool need_flush = false;

Probably better as 'ram_cache_needs_flush'.

Dave
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
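PS: to make the direct-load alternative above concrete, a rough sketch
(again, every name here is made up, and it assumes the SVM's CPU is
stopped so nothing else touches guest RAM or the bitmap while the
checkpoint is applied):

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE      4096UL
    #define BITS_PER_LONG  (sizeof(unsigned long) * 8)

    /* Load an incoming PVM page straight into host memory and clear
     * the SVM's dirty bit for it: the page now matches the PVM, so
     * the later flush can skip it. (Hypothetical sketch.) */
    static void load_incoming_page(uint8_t *svm_ram,
                                   unsigned long *svm_dirty,
                                   unsigned long page,
                                   const uint8_t *data)
    {
        memcpy(svm_ram + page * PAGE_SIZE, data, PAGE_SIZE);
        svm_dirty[page / BITS_PER_LONG] &=
            ~(1UL << (page % BITS_PER_LONG));
    }

    /* flush_cache then only has to restore, from the cache, the pages
     * the SVM modified but the PVM did *not* include in the snapshot
     * (the cache still holds their last-checkpoint contents). */
    static void flush_cache(uint8_t *svm_ram, const uint8_t *ram_cache,
                            unsigned long *svm_dirty,
                            unsigned long nr_pages)
    {
        for (unsigned long page = 0; page < nr_pages; page++) {
            unsigned long mask = 1UL << (page % BITS_PER_LONG);
            if (svm_dirty[page / BITS_PER_LONG] & mask) {
                memcpy(svm_ram + page * PAGE_SIZE,
                       ram_cache + page * PAGE_SIZE, PAGE_SIZE);
                svm_dirty[page / BITS_PER_LONG] &= ~mask;
            }
        }
    }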