On Tue, 05/27 16:54, chai wen wrote:
> If we want to track dirty blocks using dirty_maps on a BlockDriverState
> when doing live block-migration, its corresponding 'BlkMigDevState' should be
> added to block_mig_state.bmds_list first for subsequent processing.
> Otherwise set_dirty_tracking will do nothing on an empty list rather than
> allocating dirty_bitmaps for the devices.
> 
> And what's worse, bdrv_get_dirty_count will access
> bmds->dirty_maps directly, so there could be a segfault for the reasons
> above.
> 
> Signed-off-by: chai wen <chaiw.f...@cn.fujitsu.com>
> ---
>  block-migration.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/block-migration.c b/block-migration.c
> index 56951e0..43203aa 100644
> --- a/block-migration.c
> +++ b/block-migration.c
> @@ -626,6 +626,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
>              block_mig_state.submitted, block_mig_state.transferred);
>  
>      qemu_mutex_lock_iothread();
> +    init_blk_migration(f);

Thanks for spotting this!

I reversed the order of init_blk_migration and set_dirty_tracking in commit
b8afb520e (block: Handle error of bdrv_getlength in bdrv_create_dirty_bitmap)
incorrectly, thinking that this way no clean-up would be needed if
set_dirty_tracking fails.

But looking at savevm.c:qemu_savevm_state() we can see that
qemu_savevm_state_cancel() will do the clean-up automatically, so this fix is
valid.

Reviewed-by: Fam Zheng <f...@redhat.com>

>  
>      /* start track dirty blocks */
>      ret = set_dirty_tracking();
> @@ -635,7 +636,6 @@ static int block_save_setup(QEMUFile *f, void *opaque)
>          return ret;
>      }
>  
> -    init_blk_migration(f);
>  
>      qemu_mutex_unlock_iothread();
>  
> -- 
> 1.7.1
> 
> 
