On Tue, Jan 20, 2026 at 06:05:15PM +0000, Daniel P. Berrangé wrote:
> On Tue, Jan 20, 2026 at 12:13:58PM -0500, Peter Xu wrote:
> > On Sat, Jan 17, 2026 at 03:09:11PM +0100, Lukas Straub wrote:
> > > Like in the normal ram_load() path, put the received pages into the
> > > colo cache and mark the pages in the bitmap so that they will be
> > > flushed to the guest later.
> > > 
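For anyone new to COLO, the scheme described above boils down to: stage
each incoming page in a cache, mark it in a bitmap, and flush only the
marked pages into guest RAM at the checkpoint.  A self-contained sketch
(names are illustrative only; the real code lives in migration/ram.c):

#include <limits.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NR_PAGES  1024
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static uint8_t guest_ram[NR_PAGES][PAGE_SIZE];
static uint8_t colo_cache[NR_PAGES][PAGE_SIZE];
static unsigned long dirty_bitmap[NR_PAGES / BITS_PER_LONG];

/* Receive path (what each multifd channel would do per page):
 * stage the payload and remember which page changed. */
static void recv_page(size_t page, const uint8_t *data)
{
    memcpy(colo_cache[page], data, PAGE_SIZE);
    dirty_bitmap[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

/* Checkpoint commit: copy only the marked pages into guest RAM,
 * clearing each mark for the next round. */
static void flush_cache(void)
{
    for (size_t page = 0; page < NR_PAGES; page++) {
        unsigned long mask = 1UL << (page % BITS_PER_LONG);
        if (dirty_bitmap[page / BITS_PER_LONG] & mask) {
            memcpy(guest_ram[page], colo_cache[page], PAGE_SIZE);
            dirty_bitmap[page / BITS_PER_LONG] &= ~mask;
        }
    }
}

The bitmap is what keeps the checkpoint pause short: the flush touches
only pages that actually changed since the last checkpoint.
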
> > > Multifd with COLO is useful to reduce the VM pause time during
> > > checkpointing for latency-sensitive workloads. In such workloads the
> > > worst-case latency is especially important.
> > > 
> > > Also, multifd migration is the preferred way to do migration nowadays,
> > > and this allows the use of multifd compression with COLO.
> > > 
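For reference, the setup being discussed is driven by the usual QMP
knobs; modulo version drift, something like the following enables COLO
plus four multifd channels (matching the "Multifd-4" results below),
with compression optional:

{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "multifd", "state": true },
      { "capability": "x-colo",  "state": true } ] } }
{ "execute": "migrate-set-parameters",
  "arguments": { "multifd-channels": 4,
                 "multifd-compression": "zstd" } }
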
> > > Benchmark:
> > > Cluster nodes
> > >  - Intel Xeon E5-2630 v3
> > >  - 48 GB RAM
> > >  - 10G Ethernet
> > > Guest
> > >  - Windows Server 2016
> > >  - 6 GB RAM
> > >  - 4 cores
> > > Workload
> > >  - Upload a file to the guest with SMB to simulate moderate
> > >    memory dirtying
> > >  - Measure the memory transfer time portion of each checkpoint
> > >  - 600ms COLO checkpoint interval
> > > 
> > > Results
> > > Plain
> > >  idle mean: 4.50ms 99per: 10.33ms
> > >  load mean: 24.30ms 99per: 78.05ms
> > > Multifd-4
> > >  idle mean: 6.48ms 99per: 10.41ms
> > >  load mean: 14.12ms 99per: 31.27ms
> > 
> > Thanks for the numbers.  They're persuasive, at least at first glance.
> > 
> > That said, one major question is that multifd should only help with
> > throughput when the CPU is the bottleneck on the sending side; in your
> > case it's a 10Gbps NIC.  Normally any decent CPU should be able to push
> > close to 10Gbps even without multifd.
> 
> That assumes the CPUs used by migration are otherwise idle though. If the
> host is busy running guest workloads, only small timeslices may be available
> for use by migration threads. Using multifd would better utilize what's
> available if multiple host CPUs have partial availability.

Hmm, I'm not sure that was the case when the test above was run.  I rarely
see a host's CPUs being completely occupied.  Say, on a 16-core system that
would mean ~1600% CPU utilization.

I think that's because when a host is going to be hosting VMs, we should
normally have some CPU resources reserved for host housekeeping.
Otherwise I'm not sure how to guarantee general availability of the
host... and IIUC it may also affect the guest.

Here, IMHO as long as there's >100% CPU resource free on the host (e.g. out
of 1600% on a 16-core system), enabling multifd or not shouldn't matter
much when the NIC is 10Gbps.

An old but decent processor should be able to push 10~15Gbps, and a new
processor should be able to push ~25Gbps or more, with 100% of a single
CPU.
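
Back-of-envelope, assuming ~5GB/s of per-core copy bandwidth (a
conservative figure for this class of hardware): 10Gbps / 8 = 1.25GB/s of
payload, so a single sender thread still has roughly 4x headroom even
after zero-page scanning and syscall overhead.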

That's because the scheduler will place whichever threads exist (either the
single migration thread, or the multifd threads) onto whichever cores are
still free (or at least have free cycles).

When all CPUs are occupied, IMHO multifd shouldn't help much either...
maybe having more than one thread makes it easier to get scheduled (hence
more time slices from the scheduler), but I believe that's not the major
use case for multifd... it should really be when there are plenty of CPU
resources.

Thanks,

-- 
Peter Xu

