On Tue, Jan 20, 2026 at 07:04:09PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu ([email protected]) wrote:
> > On Tue, Jan 20, 2026 at 12:48:47PM +0100, Lukas Straub wrote:
> > > On Mon, 19 Jan 2026 17:33:25 -0500
> > > Peter Xu <[email protected]> wrote:
> > > 
> > > > On Sat, Jan 17, 2026 at 08:49:13PM +0100, Lukas Straub wrote:
> > > > > On Thu, 15 Jan 2026 18:38:51 -0500
> > > > > Peter Xu <[email protected]> wrote:
> > > > >   
> > > > > > On Thu, Jan 15, 2026 at 10:59:47PM +0000, Dr. David Alan Gilbert 
> > > > > > wrote:  
> > > > > > > * Peter Xu ([email protected]) wrote:    
> > > > > > > > On Thu, Jan 15, 2026 at 10:49:29PM +0100, Lukas Straub wrote:   
> > > > > > > >  
> > > > > > > > > Nack.
> > > > > > > > > 
> > > > > > > > > This code has users, as explained in my other email:
> > > > > > > > > https://lore.kernel.org/qemu-devel/20260115224516.7f0309ba@penguin/T/#mc99839451d6841366619c4ec0d5af5264e2f6464
> > > > > > > > >     
> > > > > > > > 
> > > > > > > > Please then rework that series and consider including the
> > > > > > > > following (I believe I pointed out a long time ago somewhere..):
> > > > > > > >     
> > > > > > >     
> > > > > > > > - Some form of justification of why multifd needs to be enabled
> > > > > > > >   for COLO.  For example, in your cluster deployment, using multifd
> > > > > > > >   can improve XXX by YYY.  Please describe the use case and
> > > > > > > >   improvements.
> > > > > > > 
> > > > > > > That one is pretty easy; since COLO is regularly taking
> > > > > > > snapshots, the faster the snapshotting, the less overhead there is.
> > > > > > 
> > > > > > Thanks for chiming in, Dave.  I can explain why I want to request
> > > > > > some numbers.
> > > > > > 
> > > > > > Firstly, numbers normally prove it's used in a real system.  It's
> > > > > > at least being used and seriously tested.
> > > > > > 
> > > > > > Secondly, per my very limited understanding of COLO... the two VMs
> > > > > > in most
> > > > > > cases should be in-sync state already when both sides generate the 
> > > > > > same
> > > > > > network packets.
> > > > > > 
> > > > > > Another sync (where multifd can start to take effect) is only 
> > > > > > needed when
> > > > > > there are packet misalignments, but IIUC it should be rare.  I
> > > > > > don't know
> > > > > > how rare it is, it would be good if Lukas could introduce some of 
> > > > > > those
> > > > > > numbers in his deployment to help us understand COLO better if 
> > > > > > we'll need
> > > > > > to keep it.  
> > > > > 
> > > > > It really depends on the workload and if you want to tune for
> > > > > throughput or latency.  
> > > > 
> > > > Thanks for all the answers from all of you.
> > > > 
> > > > If we decide to keep COLO, looks like I'll have no choice but to
> > > > understand it
> > > > better, as long as it still has anything to do with migration..  I'll 
> > > > leave
> > > > some more questions / comments at the end.
> > > > 
> > > > > 
> > > > > You need to do a checkpoint eventually and the more time passes 
> > > > > between
> > > > > checkpoints the more dirty memory you have to transfer during the
> > > > > checkpoint.
> > > > > 
> > > > > Also keep in mind that the guest is stopped during checkpoints. 
> > > > > Because
> > > > > even if we continue running the guest, we can not release the 
> > > > > mismatched
> > > > > packets since that would expose a state of the guest to the outside
> > > > > world that is not yet replicated to the secondary.  
> > > > 
> > > > Yes this makes sense.  However it is also the very confusing part of 
> > > > COLO.
> > > > 
> > > > When I said "I was expecting migration to not be the hot path", one 
> > > > reason
> > > > is I believe COLO checkpoint (or say, when migration happens) will
> > > > introduce a larger downtime than normal migration, because this process
> > > > transfers RAM with both VMs stopped.
> > > > 
> > > > You helped explain why that large downtime is needed, thanks.  However
> > > > then it means either (1) packet misalignment, or (2) the periodic timer
> > > > firing, either of which will kick off a checkpoint..
> > > 
> > > Yes, it could be optimized so we don't stop the guest for the periodic
> > > checkpoints.
> > 
> > Likely we must stop it at least to savevm on non-rams.  But I get your
> > point.  Yes, I think it might be a good idea to try to keep in sync even
> > without an explicit checkpoint request, almost like what live precopy does
> > with RAM to shrink the downtime.
> > 
> > > 
> > > > 
> > > > I don't know if COLO services care about such relatively large downtime,
> > > > especially since it does not happen once, but periodically, every tens of
> > > > seconds at least (assuming that with periodic checkpoints packet
> > > > misalignment is rare).
> > > > 
> > > 
> > > If you want to tune for latency you go for something like a 500ms
> > > checkpoint interval.
> > > 
> > > 
> > > The alternative way to do fault tolerance is micro checkpointing where
> > > only the primary guest runs while you buffer all sent packets. Then
> > > every checkpoint you transfer all ram and device state to the secondary
> > > and only then release all network packets.
> > > So in this approach every packet is delayed by checkpoint interval +
> > > checkpoint downtime and you use a checkpoint interval of like 30-100ms.
> > > 
> > > Obviously, COLO is a much better approach because only a few packets
> > > observe a delay.
> > > 
> > > > > 
> > > > > So the migration performance is actually the most important part in
> > > > > COLO to keep the checkpoints as short as possible.  
> > > > 
> > > > IIUC when a heartbeat is lost on the PVM _during_ a sync checkpoint,
> > > > the SVM can only roll back to the last checkpoint.  Would this be good
> > > > enough in reality?  It means if there's a TCP transaction then it may
> > > > break anyway.  x-checkpoint-delay / periodic checkpoints definitely make
> > > > this more likely to happen.
> > > 
> > > We only release the mismatched packets after the ram and device state
> > > is fully sent to the secondary. Because then the secondary is in the
> > > state that generated these mismatched packets and can take over.
> > 
> > My question was more about how COLO failover works (or works at all?) if a
> > failure happens exactly during checkpointing (aka, migration happening).
> > 
> > First of all, if the failure happens on SVM, IIUC it's not a problem,
> > because PVM has all the latest data.
> > 
> > The problem lies more in the case where the failure happened in PVM. In
> > this case, SVM only contains the previous checkpoint results, maybe plus
> > something on top of that snapshot, as SVM kept running after the previous
> > checkpoint.
> > 
> > So the failure can happen at different spots:
> > 
> >   (1) Failure happens _before_ applying the new checkpoint, that is, while
> >       receiving the checkpoint from src and for example the PVM host is
> >       down, channel shutdown.
> > 
> >       This one looks "okay", IIUC what will happen is SVM will keep running
> >       but then as I described above it only contains the previous version
> >       of the PVM snapshot, plus something SVM updated which may not match
> >       with PVM's data:
> > 
> >            (1.a) if checkpoint triggered because of x-checkpoint-delay,
> >            lower risk, possibly still in sync with src
> > 
> >            (1.b) if checkpoint triggered by colo-compare notification of
> >            packet misalignment, I believe this may cause service
> >            interruptions and it means SVM will not be able to completely
> 
> No, that's ok - the colo-compare mismatch triggers the need for a checkpoint;
> but if the PVM dies during the creation of that checkpoint, it's the same as
> if the PVM had never started making the checkpoint; the SVM just takes over.
> But the important thing is that the packet that caused the miscompare can't
> be released until after the hosts are in sync again.
> 
> > 
> >   (2) Failure happens _after_ applying the new checkpoint, but _before_ the
> >       whole checkpoint is applied.
> > 
> >       To be explicit, consider qemu_load_device_state() when the process of
> >       colo_incoming_process_checkpoint() failed.  It means SVM applied
> >       part of PVM's checkpoint, which I think should mean the SVM is
> >       completely corrupted.
> 
> As long as the SVM has got the entire checkpoint, then it *can* apply it all
> and carry on from that point.

Does it mean we assert() that qemu_load_device_state() will always succeed
for COLO syncs?

Logically post_load() can invoke anything and I'm not sure if something can
start to fail, but I confess I don't know an existing device that can
trigger it.

Lukas told me something was broken though with the pc machine type, because
post_load() is not re-entrant.  I think a failure might be possible when
post_load() depends on some device state (that the guest driver can change
between two checkpoint loads), but that's still only theoretical.  So maybe
we can indeed assert it here.

> 
> > Here either (1.b) or (2) seems fatal to me for the whole high-level design.
> > Periodic syncs with x-checkpoint-delay make this easier to happen, hence a
> > larger window for critical failures.  That's also why I find it confusing
> > that COLO prefers more checkpoints - while it helps sync things up, it
> > enlarges the high-risk window and the overall overhead.
> 
> No, there should be no point at which a failure leaves the SVM without a 
> checkpoint
> that it can apply to take over.
> 
> > > > > I have quite a few more performance and cleanup patches on my hands,
> > > > > for example to transfer dirty memory between checkpoints.
> > > > >   
> > > > > > 
> > > > > > IIUC, the critical path of COLO shouldn't be migration on its own?  
> > > > > > It
> > > > > > should be when heartbeat gets lost; that normally should happen 
> > > > > > when two
> > > > > > VMs are in sync.  In this path, I don't see how multifd helps..  
> > > > > > because
> > > > > > there's no migration happening, only the src recording what has 
> > > > > > changed.
> > > > > > Hence I think some numbers with a description of the measurements
> > > > > > may help us understand how important multifd is to COLO.
> > > > > > 
> > > > > > Supporting multifd will cause new COLO functions to be injected into
> > > > > > core migration code paths (even if not much..).  I want to make sure
> > > > > > such (new) complexity is justified.  I also want to avoid introducing
> > > > > > a feature only because "we have XXX, then let's support XXX in COLO
> > > > > > too, maybe some day it'll be useful".
> > > > > 
> > > > > What COLO needs from migration at the low level:
> > > > > 
> > > > > Primary/Outgoing side:
> > > > > 
> > > > > Not much actually, we just need a way to incrementally send the
> > > > > dirtied memory and the full device state.
> > > > > Also, we ensure that migration never actually finishes since we will
> > > > > never do a switchover. For example we never set
> > > > > RAMState::last_stage with COLO.
> > > > > 
> > > > > Secondary/Incoming side:
> > > > > 
> > > > > colo cache:
> > > > > Since the secondary always needs to be ready to take over (even during
> > > > > checkpointing), we can not write the received ram pages directly to
> > > > > the guest ram to prevent having half of the old and half of the new
> > > > > contents.
> > > > > So we redirect the received ram pages to the colo cache. This is
> > > > > basically a mirror of the primary side ram.
> > > > > It also simplifies the primary side since from its point of view it's
> > > > > just a normal migration target.  So the primary side doesn't have to
> > > > > care about dirtied pages on the secondary, for example.
> > > > > 
> > > > > Dirty Bitmap:
> > > > > With COLO we also need a dirty bitmap on the incoming side to track
> > > > > 1. pages dirtied by the secondary guest
> > > > > 2. pages dirtied by the primary guest (incoming ram pages)
> > > > > In the last step during the checkpointing, this bitmap is then used
> > > > > to overwrite the guest ram with the colo cache so the secondary guest
> > > > > is in sync with the primary guest.
> > > > > 
> > > > > All this individually is very little code as you can see from my
> > > > > multifd patch. Just something to keep in mind I guess.
> > > > > 
> > > > > 
> > > > > At the high level we have the COLO framework outgoing and incoming
> > > > > threads which just tell the migration code to:
> > > > > Send all ram pages (qemu_savevm_live_state()) on the outgoing side
> > > > > paired with a qemu_loadvm_state_main on the incoming side.
> > > > > Send the device state (qemu_save_device_state()) paired with writing
> > > > > that stream to a buffer on the incoming side.
> > > > > And finally flushing the colo cache and loading the device state on the
> > > > > incoming side.
> > > > > 
> > > > > And of course we coordinate with the colo block replication and
> > > > > colo-compare.  
> > > > 
> > > > Thank you.  Maybe you should generalize some of the explanations and put
> > > > them into docs/devel/migration/ somewhere.  I think many of them are not
> > > > mentioned in the doc on how COLO works internally.
> > > > 
> > > > Let me ask some more questions while I'm reading COLO today:
> > > > 
> > > > - For each of the checkpoint (colo_do_checkpoint_transaction()), COLO 
> > > > will
> > > >   do the following:
> > > > 
> > > >     bql_lock()
> > > >     vm_stop_force_state(RUN_STATE_COLO)     # stop vm
> > > >     bql_unlock()
> > > > 
> > > >     ...
> > > >   
> > > >     bql_lock()
> > > >     qemu_save_device_state()                # into a temp buffer fb
> > > >     bql_unlock()
> > > > 
> > > >     ...
> > > > 
> > > >     qemu_savevm_state_complete_precopy()    # send RAM, directly to the wire
> > > >     qemu_put_buffer(fb)                     # push temp buffer fb to the wire
> > > > 
> > > >     ...
> > > > 
> > > >     bql_lock()
> > > >     vm_start()                              # start vm
> > > >     bql_unlock()
> > > > 
> > > >   A few questions that I didn't ask previously:
> > > > 
> > > >   - If VM is stopped anyway, why putting the device states into a temp
> > > >     buffer, instead of using what we already have for precopy phase, or
> > > >     just push everything directly to the wire?
> > > 
> > > Actually we only do that to get the size of the device state and send
> > > the size out-of-band, since we can not use qemu_load_device_state()
> > > directly on the secondary side and look for the in-band EOF.
> > 
> > I also don't understand why the size is needed..
> > 
> > Currently the streaming protocol for COLO is:
> > 
> >   - ...
> >   - COLO_MESSAGE_VMSTATE_SEND
> >   - RAM data
> >   - EOF
> >   - COLO_MESSAGE_VMSTATE_SIZE
> >   - non-RAM data
> >   - EOF
> > 
> > My question is about, why can't we do this instead?
> > 
> >   - ...
> >   - COLO_MESSAGE_VMSTATE_SEND
> >   - RAM data

[1]

> >   - non-RAM data
> >   - EOF
> > 
> > If the VM is stopped during the whole process anyway..
> > 
> > Here RAM/non-RAM data all are vmstates, and logically can also be loaded in
> > one shot of a vmstate load loop.
> 
> You might be able to; in that case you would have to stream the 
> entire thing into a buffer on the secondary rather than applying the
> RAM updates to the colo cache.

I thought the colo cache already provides such buffering when receiving at [1]
above?  Then we need to flush the colo cache (including scanning the SVM bitmap
and only flushing those pages in the colo cache) like before.

If something went wrong (e.g. channel broken while receiving the non-RAM
device states), SVM can simply drop the whole colo cache as the latest
checkpoint isn't complete.
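
To make the failure semantics concrete, here is a rough sketch in Python pseudocode (not QEMU code; all names here are invented for illustration) of what the SVM-side receive loop could look like under the simplified single-stream protocol: RAM pages are buffered in the colo cache, and guest RAM is only touched once the whole checkpoint (RAM + non-RAM device state + EOF) has arrived intact.

```python
# Sketch only: hypothetical SVM-side load loop, with invented names.
# RAM pages first land in the colo cache; guest RAM is updated in one
# flush at EOF, so a mid-stream failure leaves the guest untouched.

class ChannelError(Exception):
    """Stands in for a broken migration channel."""

def load_checkpoint(records, guest_ram):
    cache = {}            # colo cache: pages received in this checkpoint
    device_state = None   # non-RAM vmstate, applied last
    for kind, payload in records:
        if kind == "ram":
            addr, page = payload
            cache[addr] = page        # buffer; do not touch guest RAM yet
        elif kind == "device":
            device_state = payload
        elif kind == "eof":
            # Checkpoint complete: flush the cache into guest RAM.
            guest_ram.update(cache)
            return device_state
        elif kind == "error":
            raise ChannelError(payload)
    raise ChannelError("truncated stream")   # stream ended without EOF

def svm_receive(records, guest_ram):
    """Drop the whole checkpoint if anything breaks mid-stream."""
    try:
        return load_checkpoint(records, guest_ram)
    except ChannelError:
        return None   # guest RAM untouched; SVM keeps running on old state
```

With this shape, a channel failure at any point before the final EOF leaves guest RAM exactly as it was after the previous checkpoint, which is the "drop the whole colo cache" behavior described above.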

> 
> > 
> > > 
> > > > 
> > > >   - The operation above frequently releases the BQL; why is that
> > > >     needed?  What happens if (within the window where the BQL is
> > > >     released) someone invoked the QMP command "cont", causing the VM to
> > > >     start?  Would COLO be broken by it?  Should we take the BQL for the
> > > >     whole process to avoid it?
> > > 
> > > We need to release the BQL because block replication on the secondary 
> > > side and
> > > colo-compare and netfilters on the primary side need the main loop to
> > > make progress.
> > 
> > Do we need it to make progress before vm_start(), though?  If we take BQL
> > once and release it once only after vm_start(), would it work?
> > 
> > I didn't see anything being checked in colo_do_checkpoint_transaction(),
> > after vm_stop() + replication_do_checkpoint_all(), and before vm_start()..
> > 
> > > 
> > > Issuing a cont during a checkpoint will probably break it, yes.
> > 
> > Feel free to send a patch if you think it's a concern.  It's OK to me even
> > without one, if mgmt has full control of it, so I'll leave it to you to
> > decide as I'm not a COLO user after all.
> > 
> > > 
> > > > 
> > > > - Does colo_cache have a limit, or should we expect the SVM to consume
> > > >   double the guest RAM size?  I didn't see where colo_cache will be
> > > >   released during each sync (e.g. after colo_flush_ram_cache).  I am
> > > >   expecting that over time the SVM will have most of the pages touched,
> > > >   and then the colo_cache can consume the same as guest mem on SVM.
> > > 
> > > Yes, the secondary side consumes twice the guest ram size. That is one
> > > disadvantage of this approach.
> > > 
> > > I guess we could do some kind of copy on write mapping for the
> > > secondary guest ram. But even then it's hard to make the ram overhead
> > > bounded in size.
> > 
> > It's ok, though this also sounds like something worth documenting; it's
> > very high-level knowledge a user should know when considering COLO as an
> > HA solution.
> 
> The thought of using userfaultfd-write had floated around at some point
> as a way to optimise this.

It's an interesting idea.  Yes, it looks workable, but as Lukas said, it
still looks unbounded.

One idea to provide a strict bound:

  - admin sets a proper buffer size to limit the extra pages to remember on
    the SVM; it should be much smaller than total guest mem, but the admin
    should make sure that in 99.99% of cases it won't hit the limit with a
    proper x-checkpoint-delay,

  - if the limit is hit, both VMs need to pause (initiated by the SVM), and
    the SVM needs to explicitly request a checkpoint from the src,

  - the VMs can only start again after the two VMs sync again
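
A minimal sketch of this bound, in Python pseudocode with invented names (an illustration of the idea only, not a proposed implementation):

```python
# Sketch only: the SVM tracks the extra pages dirtied since the last
# checkpoint; once a configured limit is hit it pauses and explicitly
# requests a checkpoint, resuming only after the two VMs sync again.

class SecondaryVM:
    def __init__(self, limit):
        self.limit = limit            # admin-chosen bound, << guest mem size
        self.extra_pages = set()      # pages dirtied since last checkpoint
        self.paused = False
        self.checkpoints_requested = 0

    def dirty_page(self, addr):
        self.extra_pages.add(addr)
        if len(self.extra_pages) >= self.limit and not self.paused:
            # Limit hit: pause and ask the primary for a checkpoint.
            self.paused = True
            self.checkpoints_requested += 1

    def checkpoint_done(self):
        # Both VMs are in sync again: forget the tracked pages and resume.
        self.extra_pages.clear()
        self.paused = False
```

With a proper x-checkpoint-delay the limit should almost never trigger, so the pause stays a rare fallback rather than part of the normal cycle.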

Thanks,

-- 
Peter Xu

