On Tue, Sep 20, 2022 at 08:47:20PM -0400, Peter Xu wrote:
> On Tue, Sep 20, 2022 at 06:52:27PM -0400, Peter Xu wrote:
> > With the new code to send pages in the rp-return thread, there's little
> > reason to keep the old code that maintains the preempt state in the
> > migration thread, because the new way should always be faster.
> >
> > Then, if we'll always send pages in the rp-return thread anyway, we don't
> > need that logic to maintain preempt state anymore, because now we
> > serialize things using the mutex directly instead of using those fields.
> >
> > It's unfortunate to have carried that code for such a short period, but it
> > was still a necessary intermediate step until we noticed the next
> > bottleneck on the migration thread.  The best we can do now is drop the
> > unnecessary code, now that the new code is stable, to reduce the burden.
> > It's actually a good thing, because the new "send pages in the rp-return
> > thread" model is (IMHO) even cleaner and performs better.
> >
> > Remove the old code that was responsible for maintaining preempt states,
> > and meanwhile also remove the x-postcopy-preempt-break-huge parameter,
> > because with concurrent sender threads we don't really need to break huge
> > pages anymore.
> >
> > Signed-off-by: Peter Xu <pet...@redhat.com>
> > ---
> >  migration/migration.c |   2 -
> >  migration/ram.c       | 258 +-----------------------------------------
> >  2 files changed, 3 insertions(+), 257 deletions(-)
> >
> > diff --git a/migration/migration.c b/migration/migration.c
> > index fae8fd378b..698fd94591 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -4399,8 +4399,6 @@ static Property migration_properties[] = {
> >      DEFINE_PROP_SIZE("announce-step", MigrationState,
> >                       parameters.announce_step,
> >                       DEFAULT_MIGRATE_ANNOUNCE_STEP),
> > -    DEFINE_PROP_BOOL("x-postcopy-preempt-break-huge", MigrationState,
> > -                     postcopy_preempt_break_huge, true),
> 
> Forgot to drop the variable altogether:
> 
> diff --git a/migration/migration.h b/migration/migration.h
> index cdad8aceaa..ae4ffd3454 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -340,13 +340,6 @@ struct MigrationState {
>      bool send_configuration;
>      /* Whether we send section footer during migration */
>      bool send_section_footer;
> -    /*
> -     * Whether we allow break sending huge pages when postcopy preempt is
> -     * enabled.  When disabled, we won't interrupt precopy within sending a
> -     * host huge page, which is the old behavior of vanilla postcopy.
> -     * NOTE: this parameter is ignored if postcopy preempt is not enabled.
> -     */
> -    bool postcopy_preempt_break_huge;
> 
>      /* Needed by postcopy-pause state */
>      QemuSemaphore postcopy_pause_sem;
> 
> Will squash this in in next version.
Two more variables to drop, as attached.

-- 
Peter Xu
>From b3308e34398e21c19bd36ec21aae9c7f9f623d75 Mon Sep 17 00:00:00 2001
From: Peter Xu <pet...@redhat.com>
Date: Wed, 21 Sep 2022 09:51:55 -0400
Subject: [PATCH] fixup! migration: Remove old preempt code around state
 maintainance
Content-type: text/plain

Signed-off-by: Peter Xu <pet...@redhat.com>
---
 migration/ram.c | 33 ---------------------------------
 1 file changed, 33 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 03bf2324ab..2599eee070 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -97,28 +97,6 @@ struct PageSearchStatus {
     unsigned long page;
     /* Set once we wrap around */
     bool complete_round;
-    /*
-     * [POSTCOPY-ONLY] Whether current page is explicitly requested by
-     * postcopy.  When set, the request is "urgent" because the dest QEMU
-     * threads are waiting for us.
-     */
-    bool postcopy_requested;
-    /*
-     * [POSTCOPY-ONLY] The target channel to use to send current page.
-     *
-     * Note: This may _not_ match with the value in postcopy_requested
-     * above.  Let's imagine the case where the postcopy request is exactly
-     * the page that we're sending in progress during precopy.  In this case
-     * we'll have postcopy_requested set to true but the target channel
-     * will be the precopy channel (so that we don't split brain on that
-     * specific page since the precopy channel already contains partial of
-     * that page data).
-     *
-     * Besides that specific use case, postcopy_target_channel should
-     * always be equal to postcopy_requested, because by default we send
-     * postcopy pages via postcopy preempt channel.
-     */
-    bool postcopy_target_channel;
     /* Whether we're sending a host page */
     bool host_page_sending;
     /* The start/end of current host page.  Invalid if host_page_sending==false */
@@ -1573,13 +1551,6 @@ retry:
  */
 static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
 {
-    /*
-     * This is not a postcopy requested page, mark it "not urgent", and use
-     * precopy channel to send it.
-     */
-    pss->postcopy_requested = false;
-    pss->postcopy_target_channel = RAM_CHANNEL_PRECOPY;
-
     /* Update pss->page for the next dirty bit in ramblock */
     pss_find_next_dirty(pss);

@@ -2091,9 +2062,6 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
          * really rare.
          */
         pss->complete_round = false;
-        /* Mark it an urgent request, meanwhile using POSTCOPY channel */
-        pss->postcopy_requested = true;
-        pss->postcopy_target_channel = RAM_CHANNEL_POSTCOPY;
     }

     return !!block;
@@ -2190,7 +2158,6 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
      * we should be the only one who operates on the qemufile
      */
     pss->pss_channel = migrate_get_current()->postcopy_qemufile_src;
-    pss->postcopy_requested = true;
     assert(pss->pss_channel);

     /*
-- 
2.32.0