On Tue, Sep 16, 2025 at 06:32:59PM -0300, Fabiano Rosas wrote:
> Peter Xu <[email protected]> writes:
> 
> > [this is an early RFC, not for merge, but to collect initial feedbacks]
> >
> > Background
> > ==========
> >
> > Nowadays, live migration heavily depends on threads. For example, most
> > of the major features used in live migration today (multifd, postcopy,
> > mapped-ram, vfio, etc.) all work with threads internally.
> >
> > But still, from time to time, we'll see some coroutines floating around the
> > migration context.  The major one is precopy's loadvm, which is internally
> > a coroutine.  It is still a critical path that any live migration depends 
> > on.
> >
> 
> I always wanted to be an archaeologist:
> 
> https://lists.gnu.org/archive/html/qemu-devel//2012-08/msg01136.html
> 
> I was expecting to find some complicated chain of events leading to the
> choice of using a coroutine, but no.

I actually hadn't seen that before..  I'll add this link into that major
patch's commit message, to make future archaeology work easier.

> 
> > A mixture of using both coroutines and threads is prone to issues.  Some
> > examples are commit e65cec5e5d ("migration/ram: Yield periodically
> > to the main loop") and commit 7afbdada7e ("migration/postcopy: ensure
> > preempt channel is ready before loading states").
> >
> > Overview
> > ========
> >
> > This series tries to move migration further into the thread-based model, by
> > allowing the loadvm process to happen in a thread rather than in the main
> > thread with a coroutine.
> >
> > Luckily, since the qio channel code is always ready for both cases, IO
> > paths should all be fine.
> >
> > Note that loadvm for postcopy already happens in a separate ram load
> > thread.  However, RAM is the simple case here: even though it has its
> > own challenges (atomically updating the pgtables), that complexity lies
> > in the kernel.
> >
> > For precopy, loadvm has quite a few operations that need the BQL.  The
> > problem is that we can't take the BQL for the whole loadvm process,
> > because that would block the main thread from executing (e.g. QMP would
> > hang).  Here, the finer-grained we can make the BQL sections, the
> > better.  This series so far chose somewhere in the middle, taking the
> > BQL mainly in these two places:
> >
> >   - CPU synchronizations
> >   - Device START/FULL sections
> >
> > After this series is applied, most of the remaining loadvm path will run
> > without the BQL.  There is a more detailed discussion / todo in the
> > commit message of the patch "migration: Thread-ify precopy vmstate load
> > process" explaining how to further split the BQL critical sections.
> >
> > I tried to split the patches into smaller ones where possible, but it's
> > still quite challenging, so there's one major patch that does the work.
> >
> > After the series is applied, the only leftover pieces in migration/ that
> > still use a coroutine are the snapshot save/load/delete jobs.
> >
> 
> Which are then fine because the work itself runs on the main loop,
> right? So the bottom-half scheduling could be left as a coroutine.

Correct, iochannel works for both cases.

For coroutines, it can still register the fd and yield like before, which
is what snapshot save/load do.  Live loadvm used to do the same, but after
moving to a thread it will start to use qio_channel_wait() instead.
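
To illustrate, below is roughly the pattern the generic channel code
follows when a read would block (a simplified sketch modeled on
io/channel.c, not the verbatim code; the helper name is made up):

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "io/channel.h"

    /*
     * Simplified sketch: how one read path can serve both a coroutine
     * caller (yield and let the fd handler resume us) and a plain
     * thread (block until the fd is readable).  Helper name is
     * hypothetical.
     */
    static ssize_t channel_read_block(QIOChannel *ioc, char *buf,
                                      size_t len, Error **errp)
    {
        ssize_t ret;

        while ((ret = qio_channel_read(ioc, buf, len, errp)) ==
               QIO_CHANNEL_ERR_BLOCK) {
            if (qemu_in_coroutine()) {
                /* snapshot save/load: yield, fd handler resumes us */
                qio_channel_yield(ioc, G_IO_IN);
            } else {
                /* migration incoming thread: sleep until readable */
                qio_channel_wait(ioc, G_IO_IN);
            }
        }
        return ret;
    }

Either way the caller just sees a read that eventually completes, which is
why the loadvm code itself doesn't need to care whether it runs in a
coroutine or a thread.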

I think we could also switch the live migration incoming side back to
blocking mode after making it a thread; blocking directly in recvmsg()
might be slightly more efficient than returning and polling.  But that is
trivial compared to the "moving to thread" change, and it can be done
later on top.
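
If we go that way, it should be little more than flipping the channel when
the incoming thread is set up.  A hypothetical sketch (the function name is
made up, not part of this series):

    #include "qemu/osdep.h"
    #include "qemu/error-report.h"
    #include "io/channel.h"

    /*
     * Hypothetical follow-up: with loadvm in its own thread, switch the
     * incoming channel to blocking mode so reads sleep in recvmsg()
     * instead of returning EAGAIN and polling.
     */
    static void incoming_thread_set_blocking(QIOChannel *ioc)
    {
        Error *err = NULL;

        if (qio_channel_set_blocking(ioc, true, &err) < 0) {
            error_report_err(err);
        }
    }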

> 
> > Tests
> > =====
> >
> > Default CI passes.
> >
> > RDMA unit tests pass as usual. I also tried out cancellation / failure
> > tests over RDMA channels, making sure nothing is stuck.
> >
> > I also roughly measured how long it takes to run the whole 80+ migration
> > qtest suite, and saw no measurable difference before / after this series.
> >
> > Risks
> > =====
> >
> > This series has the risk of breaking things.  I would be surprised if it
> > didn't..
> >
> > I confess I didn't test anything on COLO; I only went by code
> > observation and analysis.  COLO maintainers: could you add some unit
> > tests to QEMU's qtests?
> >
> > The current way of taking the BQL during FULL section load may cause
> > issues: when the IO is unstable we could be waiting for IO (in the new
> > migration incoming thread) with the BQL held.  This is a low-probability
> > scenario, though, as it only happens when the network stalls while the
> > device states are being flushed.  Still, it is possible.  One solution
> > is to further break down the BQL critical sections into smaller ones, as
> > mentioned in the TODO.
> >
> > Anything is more than welcome: suggestions, questions, objections, tests..
> >
> > Todo
> > ====
> >
> > - Test COLO?
> > - Finer grained BQL breakdown
> > - More..
> >
> > Thanks,
> >
> > Peter Xu (9):
> >   migration/vfio: Remove BQL implication in
> >     vfio_multifd_switchover_start()
> >   migration/rdma: Fix wrong context in qio_channel_rdma_shutdown()
> >   migration/rdma: Allow qemu_rdma_wait_comp_channel work with thread
> >   migration/rdma: Change io_create_watch() to return immediately
> >   migration: Thread-ify precopy vmstate load process
> >   migration/rdma: Remove coroutine path in qemu_rdma_wait_comp_channel
> >   migration/postcopy: Remove workaround on wait preempt channel
> >   migration/ram: Remove workaround on ram yield during load
> >   migration/rdma: Remove rdma_cm_poll_handler
> >
> >  include/migration/colo.h    |   6 +-
> >  migration/migration.h       |  52 +++++++--
> >  migration/savevm.h          |   5 +-
> >  hw/vfio/migration-multifd.c |   9 +-
> >  migration/channel.c         |   7 +-
> >  migration/colo-stubs.c      |   2 +-
> >  migration/colo.c            |  23 +---
> >  migration/migration.c       |  62 ++++++++---
> >  migration/ram.c             |  13 +--
> >  migration/rdma.c            | 206 ++++++++----------------------------
> >  migration/savevm.c          |  85 +++++++--------
> >  migration/trace-events      |   4 +-
> >  12 files changed, 196 insertions(+), 278 deletions(-)
> 

-- 
Peter Xu

