Hi,

Sorry for forgetting to cc the maintainers.

I hit this issue while doing a live migration test; simple steps to reproduce:

master:
qemu -enable-kvm -smp 1 -m 512 -drive file=/data1/src.img,if=virtio \
     -net none -monitor stdio -vnc 0:2

slave:
qemu -enable-kvm -smp 1 -m 512 -drive file=/data2/dest.img,if=virtio \
     -net none -monitor stdio -vnc 0:3 -incoming tcp:127.0.0.1:3333
Then, running the migration command "migrate -b tcp:127.0.0.1:3333" in the
master's monitor makes QEMU segfault. After reading the block migration
code, I think there is something wrong with the setup sequence in
block_save_setup().

thanks
chai wen

On 05/27/2014 04:54 PM, chai wen wrote:
> If we want to track dirty blocks using dirty_maps on a BlockDriverState
> when doing live block-migration, its corresponding 'BlkMigDevState' should
> be added to block_mig_state.bmds_list first for subsequent processing.
> Otherwise set_dirty_tracking will just walk an empty list instead of
> allocating dirty_bitmaps for the devices.
>
> And what's worse, bdrv_get_dirty_count accesses bmds->dirty_maps directly,
> so this can end in a segfault for the reason above.
>
> Signed-off-by: chai wen <chaiw.f...@cn.fujitsu.com>
> ---
>  block-migration.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/block-migration.c b/block-migration.c
> index 56951e0..43203aa 100644
> --- a/block-migration.c
> +++ b/block-migration.c
> @@ -626,6 +626,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
>              block_mig_state.submitted, block_mig_state.transferred);
>
>      qemu_mutex_lock_iothread();
> +    init_blk_migration(f);
>
>      /* start track dirty blocks */
>      ret = set_dirty_tracking();
> @@ -635,7 +636,6 @@ static int block_save_setup(QEMUFile *f, void *opaque)
>          return ret;
>      }
>
> -    init_blk_migration(f);
>
>      qemu_mutex_unlock_iothread();

--
Regards
Chai Wen
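
Just to make the failure mode concrete, here is a small stand-alone C sketch
of the ordering problem described in the patch. It is not QEMU code; the names
(bmds_list, setup_dirty_tracking, add_device, get_dirty_count) are only loosely
modelled on block-migration.c and everything is simplified, but it shows why
setting up dirty tracking before the device is registered leaves a NULL bitmap
behind:

/*
 * Minimal stand-alone sketch (not QEMU code) of the ordering bug:
 * if the per-device state is only added to the list *after* dirty
 * tracking is set up, its bitmap pointer stays NULL and the first
 * dereference segfaults.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct DirtyBitmap { long dirty_count; } DirtyBitmap;

typedef struct BlkDev {
    const char *name;
    DirtyBitmap *dirty_bitmap;   /* allocated by setup_dirty_tracking() */
    struct BlkDev *next;
} BlkDev;

static BlkDev *bmds_list;        /* stands in for block_mig_state.bmds_list */

/* Allocate a dirty bitmap for every device currently on the list. */
static void setup_dirty_tracking(void)
{
    for (BlkDev *d = bmds_list; d; d = d->next) {
        d->dirty_bitmap = calloc(1, sizeof(*d->dirty_bitmap));
    }
}

/* Register a device, i.e. what init_blk_migration() does per drive. */
static void add_device(BlkDev *d)
{
    d->next = bmds_list;
    bmds_list = d;
}

/* Reads the bitmap unconditionally, like the bdrv_get_dirty_count() users. */
static long get_dirty_count(BlkDev *d)
{
    return d->dirty_bitmap->dirty_count;   /* NULL deref if tracking was set
                                              up before the device was added */
}

int main(void)
{
    BlkDev vda = { .name = "vda" };

    /* Buggy order: tracking first, device registration second. */
    setup_dirty_tracking();      /* list is still empty -> allocates nothing */
    add_device(&vda);            /* too late, vda.dirty_bitmap == NULL */

    printf("dirty: %ld\n", get_dirty_count(&vda));   /* crashes here */
    return 0;
}

Run as-is, this crashes on the get_dirty_count() call; swapping the two calls
in main(), which is what the patch does for init_blk_migration() and
set_dirty_tracking(), makes it print 0 instead.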