On Fri, Mar 29, 2024 at 08:54:07AM +0000, Wang, Wei W wrote:
> On Friday, March 29, 2024 11:32 AM, Wang, Lei4 wrote:
> > When using the post-copy preemption feature to perform post-copy live
> > migration, the below scenario could lead to a deadlock and the migration
> > will never finish:
> > 
> >  - The source side connect()s the preemption channel in postcopy_start().
> >  - The source and destination side TCP stacks finish the 3-way handshake,
> >    thus the connection is successful.
> >  - The destination side main thread is handling the loading of the bulk RAM
> >    pages, thus it doesn't start to handle the pending connection event in
> >    the event loop and doesn't post the semaphore postcopy_qemufile_dst_done
> >    for the preemption thread.
> >  - The source side sends non-iterative device states, such as the virtio
> >    states.
> >  - The destination main thread starts to receive the virtio states; this
> >    process may lead to a page fault (e.g., virtio_load()->vring_avail_idx()
> >    may trigger a page fault since the avail ring page may not have been
> >    received yet).

Ouch.  Yeah I think this part got overlooked when working on the preempt
channel.

> >  - The page request is sent back to the source side. Source sends the page
> >    content to the destination side preemption thread.
> >  - Since the event has not arrived and the semaphore
> >    postcopy_qemufile_dst_done is not posted, the preemption thread on the
> >    destination side is blocked and cannot handle receiving the page.
> >  - The QEMU main load thread on the destination side is stuck at the page
> >    fault, and cannot yield and handle the connect() event for the
> >    preemption channel to unblock the preemption thread.
> >  - The postcopy will be stuck there forever since this is a deadlock.
> > 
> > The key point to reproduce this bug is that the source side is sending
> > pages at a rate faster than the destination can handle them; otherwise, the
> > qemu_get_be64() in ram_load_precopy() will have a chance to yield since at
> > that time there is no pending data in the buffer to read, which makes this
> > bug harder to reproduce.

How hard is this to reproduce?

I'm wondering whether this should be 9.0 material or 9.1.  It's pretty late
for 9.0, but we can still discuss.

> > 
> > Fix this by yielding the load coroutine when receiving
> > MIG_CMD_POSTCOPY_LISTEN so the main event loop can handle the
> > connection event before loading the non-iterative device state to avoid the
> > deadlock condition.
> > 
> > Signed-off-by: Lei Wang <lei4.w...@intel.com>
> 
> This seems to be a regression caused by this commit:
> 737840e2c6ea (migration: Use the number of transferred bytes directly)
> 
> Adding qemu_fflush back to migration_rate_exceeded() or ram_save_iterate
> seems to work (might not be a good fix though).
> 
> > ---
> >  migration/savevm.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index e386c5267f..8fd4dc92f2 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -2445,6 +2445,11 @@ static int loadvm_process_command(QEMUFile *f)
> >          return loadvm_postcopy_handle_advise(mis, len);
> > 
> >      case MIG_CMD_POSTCOPY_LISTEN:
> > +        if (migrate_postcopy_preempt() && qemu_in_coroutine()) {
> > +            aio_co_schedule(qemu_get_current_aio_context(),
> > +                            qemu_coroutine_self());
> > +            qemu_coroutine_yield();
> > +        }
> 
> The above could be moved to loadvm_postcopy_handle_listen().

I'm not 100% sure such a thing (whether here or moved into it, which does
look cleaner) would work for us.

The problem is I still don't see an ordering enforced between (1) the
accept() happening, and (2) receiving the LISTEN cmd here.  What happens if
the accept() request has not yet arrived when we reach LISTEN?  Or is it
always guaranteed that the accept(fd) will be polled here?

For example, the source QEMU (whether pre-7.2 or later) will always set up
the preempt channel asynchronously, so IIUC the connect() can happen after
sending the whole chunk of packed data, which should include this LISTEN.  I
think it means this is not guaranteed to work 100%; it may only further
reduce the possibility of hitting the race.
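
To illustrate the concern, here is a rough sketch of mine (the helper names
are made up, not actual code from either side); nothing on the source forces
the TCP connect to complete before the packed data is flushed:

    /* source side, heavily simplified */
    start_preempt_channel_async(done_cb);  /* qio task; connect() finishes later */
    send_packaged_device_state(fb);        /* includes MIG_CMD_POSTCOPY_LISTEN */
    /*
     * The LISTEN cmd can hit the wire before the preempt channel ever
     * connects, so a single yield on the destination may find no accept()
     * event pending yet.
     */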

One proper fix that I can think of is moving the sem_wait(&done) into the
main thread too, so we wait for the sem _before_ reading the packed data and
there's no chance of a fault.  However I don't think sem_wait() will be
smart enough to yield when in a coroutine..  In the long run I think we
should really make migration loadvm do its work in a separate thread rather
than the main thread.  I think it means we have one more example to be
listed in this todo, so that's preferred..

https://wiki.qemu.org/ToDo/LiveMigration#Create_a_thread_for_migration_destination
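
The core of that workaround, as a standalone sketch (reusing the existing
postcopy_qemufile_dst_done semaphore; this is essentially what the draft
below does), is a timed wait that keeps bouncing back to the main loop so
the pending accept() can be serviced:

    /* destination side, running in the loadvm coroutine */
    while (qemu_sem_timedwait(&mis->postcopy_qemufile_dst_done, 100)) {
        /* Not posted yet: give the main loop a chance to run the accept() */
        aio_co_schedule(qemu_get_current_aio_context(), qemu_coroutine_self());
        qemu_coroutine_yield();
    }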

I attached such a draft patch below, but I'm not sure it'll work.  Let me
know what both of you think about it.

> 
> Another option is to follow the old way (i.e. pre_7_2) and do
> postcopy_preempt_setup in migrate_fd_connect. This can save the above
> overhead of switching to the main thread during the downtime. It seems
> Peter's previous patch already solved the channel disordering issue. Let's
> see Peter's and others' opinions.

IIUC we still need that pre_7_2 stuff and have to keep the postponed
connect() to make sure the ordering is done properly.  Wei, could you
elaborate on the patch you mentioned?  Maybe I missed some spots.

You raised a good point that this may introduce higher downtime.  Did you
or Lei try to measure how large it is?  If it is too high, we may need to
think of another solution, e.g., waiting for the channel connection before
the vm stop happens.
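
If we go that route, a very rough sketch (the semaphore name and its
placement are assumptions of mine, not taken from the current tree):

    /* source side, inside postcopy_start(), before stopping the VM */
    if (migrate_postcopy_preempt()) {
        /*
         * Wait for the preempt channel to be established while the guest
         * is still running, so the wait is not accounted into downtime.
         * Assumes the channel-creation qio task posts a semaphore such as
         * postcopy_qemufile_src_sem.
         */
        qemu_sem_wait(&ms->postcopy_qemufile_src_sem);
    }
    /* ... only then stop the VM and send the packed device state ... */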

Thanks,

> 
> >          return loadvm_postcopy_handle_listen(mis);
> > 
> 
> >      case MIG_CMD_POSTCOPY_RUN:
> > --
> > 2.39.3
> 

===8<===
diff --git a/migration/migration.c b/migration/migration.c
index 696762bc64..bacd1328cf 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2593,6 +2593,12 @@ static int postcopy_start(MigrationState *ms, Error **errp)
     /*
      * Make sure the receiver can get incoming pages before we send the rest
      * of the state
+     *
+     * When preempt mode is enabled, this must be done after we initiate
+     * the preempt channel, as destination QEMU will wait for the channel
+     * when processing the LISTEN request.  Currently it may not matter
+     * much if we always create the channel asynchronously with a qio
+     * task, but we need to keep this in mind.
      */
     qemu_savevm_send_postcopy_listen(fb);
 
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index eccff499cb..4f26a89ac9 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -1254,6 +1254,26 @@ int postcopy_ram_incoming_setup(MigrationIncomingState *mis)
     }
 
     if (migrate_postcopy_preempt()) {
+        /*
+         * The preempt channel is established in asynchronous way.  Wait
+         * for its completion.
+         */
+        while (qemu_sem_timedwait(&mis->postcopy_qemufile_dst_done, 100)) {
+            /*
+             * Note that to make sure the main thread can still schedule an
+             * accept() request we need to proactively yield for the main
+             * loop to run for some duration (100ms in this case), which is
+             * pretty ugly.
+             *
+             * TODO: we should do this in a separate thread to load the VM
+             * rather than in the main thread, just like the source side.
+             */
+            if (qemu_in_coroutine()) {
+                aio_co_schedule(qemu_get_current_aio_context(),
+                                qemu_coroutine_self());
+                qemu_coroutine_yield();
+            }
+        }
         /*
          * This thread needs to be created after the temp pages because
          * it'll fetch RAM_CHANNEL_POSTCOPY PostcopyTmpPage immediately.
@@ -1743,12 +1763,6 @@ void *postcopy_preempt_thread(void *opaque)
 
     qemu_sem_post(&mis->thread_sync_sem);
 
-    /*
-     * The preempt channel is established in asynchronous way.  Wait
-     * for its completion.
-     */
-    qemu_sem_wait(&mis->postcopy_qemufile_dst_done);
-
     /* Sending RAM_SAVE_FLAG_EOS to terminate this thread */
     qemu_mutex_lock(&mis->postcopy_prio_thread_mutex);
     while (preempt_thread_should_run(mis)) {
-- 
2.44.0


-- 
Peter Xu

