Yang Hongyang <yanghongy...@huawei.com> wrote:
> On 2017/3/24 4:45, Juan Quintela wrote:
>> We change the meaning of start to be the offset from the beginning of
>> the block.
>>
>> @@ -701,7 +701,7 @@ static void migration_bitmap_sync(RAMState *rs)
>>      qemu_mutex_lock(&rs->bitmap_mutex);
>>      rcu_read_lock();
>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>> -        migration_bitmap_sync_range(rs, block->offset, block->used_length);
>> +        migration_bitmap_sync_range(rs, block, 0, block->used_length);
>
> Since the RAMBlock is now passed to bitmap_sync, could we remove the
> 'block->used_length' parameter as well?
Hi,

Good catch. I had removed it, and then realized that I want to
synchronize parts of the bitmap, not the whole one. That part of the
series is still not done.

Right now we do something like this (I have simplified a lot of
details):

    while (true) {
        foreach (block)
            bitmap_sync(block)
        foreach (page)
            if (dirty(page))
                page_send(page)
    }

If you have several terabytes of RAM, that is too inefficient, because
by the time we get to page_send(page), it is possible that the page is
already dirty again, and we have to send it twice. So the idea is to
change it to something like:

    while (true) {
        foreach (block)
            bitmap_sync(block)
        foreach (block)
            foreach (64 pages)
                bitmap_sync(64 pages)
                foreach (page of the 64)
                    if (dirty(page))
                        page_send(page)
    }

Here 64 is a magic number; I still have to test what a good value is.
Basically it should be a multiple of sizeof(long) and a
multiple or divisor of the host page size.

The reason for changing the loop to iterate per block is that we can
then easily keep the bitmaps at host page size granularity, instead of
having to keep them at target page size.

Thanks for the review.

Later, Juan.