On 18/10/2012 09:29, Juan Quintela wrote:
> v3:
> 
> This is work in progress on top of the previous migration series just sent.
> 
> - Introduce a thread for migration instead of using a timer and callback
> - remove the writing to the fd from under the iothread lock
> - make the writes synchronous
> - introduce a new pending method that returns how many bytes are pending
>   for one live save section
> - the last patch just adds printfs to show where the time is being spent
>   in the migration completion phase.
>   (yes, it pollutes all uses of stop on the monitor)
> 
> So far I have found that we spend a lot of time in bdrv_flush_all().
> It can take from 1ms to 600ms (yes, that is not a typo).  That dwarfs
> the default migration downtime (30ms).
> 
> Stop all vcpus:
> 
> - it works now (after the changes to qemu_cpu_is_vcpu in the previous
>   series); the caveat is that the time bdrv_flush_all() takes is
>   "unpredictable".  Any silver bullets?

You could reuse the "block" live migration item.  In block_save_pending,
start a bdrv_aio_flush() on all block devices that have already
completed the previous one.
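
Here is an untested sketch of the idea; the FlushState bookkeeping and
the flush_cb/restart_flushes helpers are invented for illustration and
would really live in the block migration state:

/* Invented bookkeeping: one entry per block device, so that a new
 * bdrv_aio_flush() is only issued after the previous one completes. */
typedef struct FlushState {
    BlockDriverState *bs;
    bool in_flight;
} FlushState;

static void flush_cb(void *opaque, int ret)
{
    FlushState *s = opaque;

    s->in_flight = false;    /* previous flush has completed */
}

/* Call from block_save_pending() on every iteration of the live phase. */
static void restart_flushes(FlushState *states, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        if (!states[i].in_flight) {
            states[i].in_flight = true;
            bdrv_aio_flush(states[i].bs, flush_cb, &states[i]);
        }
    }
}

This way data keeps being flushed while the guest still runs, and the
final bdrv_flush_all() at stop time should find much less dirty data to
write out.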

But that's not a regression introduced by the migration thread, is it?

Paolo
