On Thu, Mar 30, 2023 at 03:20:14PM +0100, Daniel P. Berrangé wrote:
> On Mon, Mar 27, 2023 at 01:15:18PM -0300, Leonardo Bras wrote:
> > Since the introduction of multifd, it's possible to perform a multifd
> > migration and finish it using postcopy.
> > 
> > A bug introduced by yank (fixed on cfc3bcf373) was previously preventing
> > a successful use of this migration scenario, and now it should be
> > working in most cases.
> > 
> > But since there is not enough testing/support nor any reported users for
> > this scenario, we should disable this combination before it may cause any
> > problems for users.
> 
> Clearly we don't have enough testing, but multifd+postcopy looks
> like a clearly useful scenario that we should be supporting.
> 
> Every post-copy starts with at least one pre-copy iteration, and
> using multifd for that will be important for big VMs where single
> threaded pre-copy is going to be CPU bound.  The greater amount we
> can transfer in the pre-copy phase, the fewer page faults / latency
> spikes postcopy is going to see.
If we're using the 1-round precopy + postcopy approach, the amount of
memory transferred will be the same, i.e. the guest mem size.  Multifd
will make the round shorter, so there's more chance of getting fewer
re-dirtied pages during the iteration, but that effect is limited.  E.g.:

  - For a very idle guest, finishing the 1st round in 1min or 3min may
    not bring a large difference, because most of the pages will be
    constant anyway, or,

  - For a very busy guest, probably a similar amount of pages will be
    dirtied no matter whether the round takes 1min or 3min.  Multifd
    will bring a benefit here, but the busier the guest, the smaller
    the effect.

> In terms of migration usage, my personal recommendation to mgmt
> apps would be that they should always enable the post-copy feature
> when starting a migration.  Even if they expect to try to get it to
> complete using exclusively pre-copy in the common case, it's useful
> to have the post-copy capability flag enabled, as a get out of jail
> free card.  I.e. if migration ends up getting stuck in non-convergence,
> or they have a sudden need to urgently complete the migration, it is
> good to be able to flip to post-copy mode.

I fully agree.  It should only need to stay disabled when the host is
not capable, e.g., the dest host may not have the privilege to initiate
userfaultfd (quite likely, since QEMU postcopy requires kernel fault
traps).  The recent introduction of /dev/userfaultfd should make that
even less likely to happen, but it'll still require that (1) the admin
has adjusted the permissions of the devnode and qemu's ownership so
that qemu is on the white list, and (2) the kernel is new enough to
have /dev/userfaultfd.

> I'd suggest that we instead add a multifd+postcopy test case to
> migration-test.c and tackle any bugs it exposes.  By blocking it
> unconditionally we ensure no one will exercise it to expose any
> further bugs.

That's doable.  But then we'd better also figure out how to distinguish
the two use cases below when both features are enabled:
  a. Enable multifd in the precopy phase only, then switch to postcopy
     (currently mostly working but buggy; I think Juan can provide more
     information here; at least we need to rework the multifd flush when
     switching, and keep testing to make sure there's nothing else
     missing).

  b. Enable multifd in both the precopy and postcopy phases (currently
     definitely not supported).

So that the mgmt app will be aware of whether multifd will be enabled
in postcopy or not.  Currently we can't identify it.  I assume we can
say that by default "multifd+postcopy" means a) above, but we need to
document it, and when b) is wanted and implemented someday, we'll need
some other flag/cap for it.

-- 
Peter Xu
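As a toy illustration of the re-dirtying argument above (all figures are
made-up assumptions, not measurements of any real guest):

```python
# Toy model: shortening the first precopy round reduces how much memory
# is re-dirtied while the round is in flight, but the effect shrinks as
# the guest gets busier.  GUEST_MEM_GB, the dirty rates, and the round
# durations are all illustrative assumptions.

GUEST_MEM_GB = 64  # assumed guest RAM; the 1st round moves all of it

def redirtied_gb(dirty_rate_gb_s, round_secs):
    """GB re-dirtied during the first round, crudely capped at the RAM
    size (you cannot re-dirty more than the whole guest memory)."""
    return min(dirty_rate_gb_s * round_secs, GUEST_MEM_GB)

# Assume single-threaded precopy takes 3min for the round, multifd 1min.
for label, rate in [("idle guest", 0.01), ("busy guest", 2.0)]:
    slow = redirtied_gb(rate, 180)  # 3min round (single-threaded)
    fast = redirtied_gb(rate, 60)   # 1min round (multifd)
    print(f"{label}: re-dirtied {slow:.1f} GB vs {fast:.1f} GB")
```

With these numbers the idle guest sees a small absolute difference
either way, while the busy guest saturates in both cases, matching the
point that a shorter round helps less the busier the guest is.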
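The "enable postcopy up front, flip only when needed" flow discussed
above can be sketched as a QMP sequence.  The command names
(migrate-set-capabilities, migrate, migrate-start-postcopy) and the
capability names (postcopy-ram, multifd) are real QMP identifiers; the
destination URI is an illustrative assumption, and sending the messages
over the QMP socket is left out:

```python
# Sketch of the QMP commands a mgmt app could issue: enable the
# postcopy-ram and multifd capabilities before starting, run a normal
# precopy migration, and only flip to postcopy mode if it fails to
# converge or needs to finish urgently.
import json

def qmp_cmd(name, **args):
    """Build a QMP command as a JSON string (transport not shown)."""
    msg = {"execute": name}
    if args:
        msg["arguments"] = args
    return json.dumps(msg)

# 1. Enable both capabilities up front, before the migration starts.
caps = qmp_cmd("migrate-set-capabilities", capabilities=[
    {"capability": "postcopy-ram", "state": True},
    {"capability": "multifd", "state": True},
])

# 2. Kick off the (pre)copy migration; the URI is an assumption.
start = qmp_cmd("migrate", uri="tcp:dst-host:4444")

# 3. The "get out of jail free card": switch the ongoing migration
#    into postcopy mode if precopy is not converging.
flip = qmp_cmd("migrate-start-postcopy")
```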