Lai Jiangshan <jiangshan...@gmail.com> wrote:

Hi
First of all, I like the patchset a lot, but I would prefer to split it up
to find "possible" bugs along the way, especially in postcopy, but not
only there.

[very nice description of the patch]

Nothing to say about the QMP and shared memory detection; it looks
correct to me.

> diff --git a/migration/ram.c b/migration/ram.c
> index 815bc0e..880972d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -605,6 +605,28 @@ static void migration_bitmap_sync_init(void)
>      num_dirty_pages_period = 0;
>      xbzrle_cache_miss_prev = 0;
>      iterations_prev = 0;
> +    migration_dirty_pages = 0;
> +}
> +
> +static void migration_bitmap_init(unsigned long *bitmap)
> +{
> +    RAMBlock *block;
> +
> +    bitmap_clear(bitmap, 0, last_ram_offset() >> TARGET_PAGE_BITS);
> +    rcu_read_lock();
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
> +            bitmap_set(bitmap, block->offset >> TARGET_PAGE_BITS,
> +                       block->used_length >> TARGET_PAGE_BITS);
> +
> +            /*
> +             * Count the total number of pages used by ram blocks not including
> +             * any gaps due to alignment or unplugs.
> +             */
> +            migration_dirty_pages += block->used_length >> TARGET_PAGE_BITS;
> +        }
> +    }
> +    rcu_read_unlock();
>  }

We could split this function out in a separate patch. I haven't fully
checked whether we care about taking the RCU lock here. What I am more
interested in is knowing what happens when we don't set
migration_dirty_pages to the full number of "possible" memory pages.
While we are here, should we also check for ROM regions?
BTW, couldn't we use:

int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
{
    RAMBlock *block;
    int ret = 0;

    rcu_read_lock();
    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
        ret = func(block->idstr, block->host, block->offset,
                   block->used_length, opaque);
        if (ret) {
            break;
        }
    }
    rcu_read_unlock();
    return ret;
}

> 
>  static void migration_bitmap_sync(void)
> @@ -631,7 +653,9 @@ static void migration_bitmap_sync(void)
>      qemu_mutex_lock(&migration_bitmap_mutex);
>      rcu_read_lock();
>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> -        migration_bitmap_sync_range(block->offset, block->used_length);
> +        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
> +            migration_bitmap_sync_range(block->offset, block->used_length);
> +        }
>      }
>      rcu_read_unlock();
>      qemu_mutex_unlock(&migration_bitmap_mutex);

Oops, another place where we were not using qemu_ram_foreach_block :p

> @@ -1926,19 +1950,14 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
>      migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
>      migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
> -    bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
> +    migration_bitmap_init(migration_bitmap_rcu->bmap);
> 
>      if (migrate_postcopy_ram()) {
>          migration_bitmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
> -        bitmap_set(migration_bitmap_rcu->unsentmap, 0, ram_bitmap_pages);
> +        bitmap_copy(migration_bitmap_rcu->unsentmap,
> +                    migration_bitmap_rcu->bmap, ram_bitmap_pages);
>      }

I think that if we go this route, we should move the whole if inside
migration_bitmap_init?

> 
> -    /*
> -     * Count the total number of pages used by ram blocks not including any
> -     * gaps due to alignment or unplugs.
> -     */
> -    migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> -
>      memory_global_dirty_log_start();
>      migration_bitmap_sync();
>      qemu_mutex_unlock_ramlist();

As said, I am very happy with the patch.
And it got much simpler than I would have expected.

Thanks, Juan.