* Zhang Chen (zhangc...@gmail.com) wrote:
> From: zhanghailiang <zhang.zhanghaili...@huawei.com>
>
> There's no need to flush all of the VM's RAM from the cache; only
> flush the pages dirtied since the last checkpoint.
>
> Signed-off-by: Li Zhijian <lizhij...@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangc...@gmail.com>
> Signed-off-by: zhanghailiang <zhang.zhanghaili...@huawei.com>
Yes, I think that's right (although I wonder if it can actually be merged in with the loop directly below it).

Reviewed-by: Dr. David Alan Gilbert <dgilb...@redhat.com>

> ---
>  migration/ram.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 4235a8f24d..21027c5b4d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2786,6 +2786,7 @@ int colo_init_ram_cache(void)
>      }
>      ram_state = g_new0(RAMState, 1);
>      ram_state->migration_dirty_pages = 0;
> +    memory_global_dirty_log_start();
>
>      return 0;
>
> @@ -2806,10 +2807,12 @@ void colo_release_ram_cache(void)
>  {
>      RAMBlock *block;
>
> +    memory_global_dirty_log_stop();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          g_free(block->bmap);
>          block->bmap = NULL;
>      }
> +
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>          if (block->colo_cache) {
> @@ -3042,6 +3045,15 @@ static void colo_flush_ram_cache(void)
>      void *src_host;
>      unsigned long offset = 0;
>
> +    memory_global_dirty_log_sync();
> +    qemu_mutex_lock(&ram_state->bitmap_mutex);
> +    rcu_read_lock();
> +    RAMBLOCK_FOREACH(block) {
> +        migration_bitmap_sync_range(ram_state, block, 0, block->used_length);
> +    }
> +    rcu_read_unlock();
> +    qemu_mutex_unlock(&ram_state->bitmap_mutex);
> +
>      trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
>      rcu_read_lock();
>      block = QLIST_FIRST_RCU(&ram_list.blocks);
> --
> 2.17.0

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK