RE: [PATCH 2/3] COLO: Migrate dirty pages during the gap of checkpointing
> -----Original Message-----
> From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com]
> Sent: Thursday, February 20, 2020 2:51 AM
> To: Zhanghailiang
> Cc: qemu-devel@nongnu.org; quint...@redhat.com; chen.zh...@intel.com;
> daniel...@qnap.com
> Subject: Re: [PATCH 2/3] COLO: Migrate dirty pages during the gap of
> checkpointing
>
> * Hailiang Zhang (zhang.zhanghaili...@huawei.com) wrote:
> > We can migrate some dirty pages during the gap of checkpointing.
> > This way, we can reduce the amount of RAM migrated during the
> > checkpoint.
> >
> > Signed-off-by: Hailiang Zhang
> > ---
> >  migration/colo.c       | 69 +++---
> >  migration/migration.h  |  1 +
> >  migration/trace-events |  1 +
> >  qapi/migration.json    |  4 ++-
> >  4 files changed, 70 insertions(+), 5 deletions(-)
> >
> > diff --git a/migration/colo.c b/migration/colo.c
> > index 93c5a452fb..d30c6bc4ad 100644
> > --- a/migration/colo.c
> > +++ b/migration/colo.c
> > @@ -46,6 +46,13 @@ static COLOMode last_colo_mode;
> >
> >  #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
> >
> > +#define DEFAULT_RAM_PENDING_CHECK 1000
> > +
> > +/* should be calculated by bandwidth and max downtime ? */
> > +#define THRESHOLD_PENDING_SIZE (100 * 1024 * 1024UL)
>
> Turn both of these magic constants into parameters.
>

Good idea, will do this in later patches.

> > +static int checkpoint_request;
> > +
> >  bool migration_in_colo_state(void)
> >  {
> >      MigrationState *s = migrate_get_current();
> > @@ -516,6 +523,20 @@ static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
> >      colo_checkpoint_notify(data);
> >  }
> >
> > +static bool colo_need_migrate_ram_background(MigrationState *s)
> > +{
> > +    uint64_t pending_size, pend_pre, pend_compat, pend_post;
> > +    int64_t max_size = THRESHOLD_PENDING_SIZE;
> > +
> > +    qemu_savevm_state_pending(s->to_dst_file, max_size, &pend_pre,
> > +                              &pend_compat, &pend_post);
> > +    pending_size = pend_pre + pend_compat + pend_post;
> > +
> > +    trace_colo_need_migrate_ram_background(pending_size);
> > +    return (pending_size >= max_size);
> > +}
> > +
> > +
> >  static void colo_process_checkpoint(MigrationState *s)
> >  {
> >      QIOChannelBuffer *bioc;
> > @@ -571,6 +592,8 @@ static void colo_process_checkpoint(MigrationState *s)
> >
> >      timer_mod(s->colo_delay_timer,
> >                current_time + s->parameters.x_checkpoint_delay);
> > +    timer_mod(s->pending_ram_check_timer,
> > +              current_time + DEFAULT_RAM_PENDING_CHECK);
>
> What happens if the iterate takes a while and this triggers in the
> middle of the iterate?
>

It will trigger another iterate after this one has finished.

> >      while (s->state == MIGRATION_STATUS_COLO) {
> >          if (failover_get_state() != FAILOVER_STATUS_NONE) {
> > @@ -583,10 +606,25 @@ static void colo_process_checkpoint(MigrationState *s)
> >          if (s->state != MIGRATION_STATUS_COLO) {
> >              goto out;
> >          }
> > -        ret = colo_do_checkpoint_transaction(s, bioc, fb);
> > -        if (ret < 0) {
> > -            goto out;
> > -        }
> > +        if (atomic_xchg(&checkpoint_request, 0)) {
> > +            /* start a colo checkpoint */
> > +            ret = colo_do_checkpoint_transaction(s, bioc, fb);
> > +            if (ret < 0) {
> > +                goto out;
> > +            }
> > +        } else {
> > +            if (colo_need_migrate_ram_background(s)) {
> > +                colo_send_message(s->to_dst_file,
> > +                                  COLO_MESSAGE_MIGRATE_RAM_BACKGROUND,
> > +                                  &local_err);
> > +                if (local_err) {
> > +                    goto out;
> > +                }
> > +
> > +                qemu_savevm_state_iterate(s->to_dst_file, false);
> > +                qemu_put_byte(s->to_dst_file, QEMU_VM_EOF);
>
> Maybe you should do a qemu_file_get_error(..) at this point to check
> it's OK.
>

Agreed, we should check it.
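A minimal sketch of that check, assuming a stream failure should take the same out: path as the other errors in colo_process_checkpoint():

                qemu_savevm_state_iterate(s->to_dst_file, false);
                qemu_put_byte(s->to_dst_file, QEMU_VM_EOF);

                /* the iterate wrote a burst of pages; verify none of
                 * the writes failed before sleeping until the next
                 * wakeup */
                ret = qemu_file_get_error(s->to_dst_file);
                if (ret < 0) {
                    goto out;
                }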
> > +            }
> > +        }
> >      }
> >
> >  out:
> > @@ -626,6 +664,8 @@ out:
> >      colo_compare_unregister_notifier(&packets_compare_notifier);
> >      timer_del(s->colo_delay_timer);
> >      timer_free(s->colo_delay_timer);
> > +    timer_del(s->pending_ram_check_timer);
> > +    timer_free(s->pending_ram_check_timer);
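For the parameter conversion promised above, one possible shape, modelled on the existing x_checkpoint_delay; the parameter names x_ram_pending_check and x_ram_pending_threshold are hypothetical and do not exist in the tree:

    static bool colo_need_migrate_ram_background(MigrationState *s)
    {
        uint64_t pending_size, pend_pre, pend_compat, pend_post;
        /* hypothetical tunable replacing THRESHOLD_PENDING_SIZE */
        int64_t max_size = s->parameters.x_ram_pending_threshold;

        qemu_savevm_state_pending(s->to_dst_file, max_size, &pend_pre,
                                  &pend_compat, &pend_post);
        pending_size = pend_pre + pend_compat + pend_post;

        trace_colo_need_migrate_ram_background(pending_size);
        return (pending_size >= max_size);
    }

    /* in colo_process_checkpoint(), replacing DEFAULT_RAM_PENDING_CHECK: */
    timer_mod(s->pending_ram_check_timer,
              current_time + s->parameters.x_ram_pending_check);

Both parameters would also need the usual qapi/migration.json and parameter-checking plumbing, which this sketch omits.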
Re: [PATCH 2/3] COLO: Migrate dirty pages during the gap of checkpointing
* Hailiang Zhang (zhang.zhanghaili...@huawei.com) wrote:
> We can migrate some dirty pages during the gap of checkpointing.
> This way, we can reduce the amount of RAM migrated during the
> checkpoint.
>
> Signed-off-by: Hailiang Zhang
> ---
>  migration/colo.c       | 69 +++---
>  migration/migration.h  |  1 +
>  migration/trace-events |  1 +
>  qapi/migration.json    |  4 ++-
>  4 files changed, 70 insertions(+), 5 deletions(-)
>
> diff --git a/migration/colo.c b/migration/colo.c
> index 93c5a452fb..d30c6bc4ad 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -46,6 +46,13 @@ static COLOMode last_colo_mode;
>
>  #define COLO_BUFFER_BASE_SIZE (4 * 1024 * 1024)
>
> +#define DEFAULT_RAM_PENDING_CHECK 1000
> +
> +/* should be calculated by bandwidth and max downtime ? */
> +#define THRESHOLD_PENDING_SIZE (100 * 1024 * 1024UL)

Turn both of these magic constants into parameters.

> +static int checkpoint_request;
> +
>  bool migration_in_colo_state(void)
>  {
>      MigrationState *s = migrate_get_current();
> @@ -516,6 +523,20 @@ static void colo_compare_notify_checkpoint(Notifier *notifier, void *data)
>      colo_checkpoint_notify(data);
>  }
>
> +static bool colo_need_migrate_ram_background(MigrationState *s)
> +{
> +    uint64_t pending_size, pend_pre, pend_compat, pend_post;
> +    int64_t max_size = THRESHOLD_PENDING_SIZE;
> +
> +    qemu_savevm_state_pending(s->to_dst_file, max_size, &pend_pre,
> +                              &pend_compat, &pend_post);
> +    pending_size = pend_pre + pend_compat + pend_post;
> +
> +    trace_colo_need_migrate_ram_background(pending_size);
> +    return (pending_size >= max_size);
> +}
> +
> +
>  static void colo_process_checkpoint(MigrationState *s)
>  {
>      QIOChannelBuffer *bioc;
> @@ -571,6 +592,8 @@ static void colo_process_checkpoint(MigrationState *s)
>
>      timer_mod(s->colo_delay_timer,
>                current_time + s->parameters.x_checkpoint_delay);
> +    timer_mod(s->pending_ram_check_timer,
> +              current_time + DEFAULT_RAM_PENDING_CHECK);

What happens if the iterate takes a while and this triggers in the
middle of the iterate?

>      while (s->state == MIGRATION_STATUS_COLO) {
>          if (failover_get_state() != FAILOVER_STATUS_NONE) {
> @@ -583,10 +606,25 @@ static void colo_process_checkpoint(MigrationState *s)
>          if (s->state != MIGRATION_STATUS_COLO) {
>              goto out;
>          }
> -        ret = colo_do_checkpoint_transaction(s, bioc, fb);
> -        if (ret < 0) {
> -            goto out;
> -        }
> +        if (atomic_xchg(&checkpoint_request, 0)) {
> +            /* start a colo checkpoint */
> +            ret = colo_do_checkpoint_transaction(s, bioc, fb);
> +            if (ret < 0) {
> +                goto out;
> +            }
> +        } else {
> +            if (colo_need_migrate_ram_background(s)) {
> +                colo_send_message(s->to_dst_file,
> +                                  COLO_MESSAGE_MIGRATE_RAM_BACKGROUND,
> +                                  &local_err);
> +                if (local_err) {
> +                    goto out;
> +                }
> +
> +                qemu_savevm_state_iterate(s->to_dst_file, false);
> +                qemu_put_byte(s->to_dst_file, QEMU_VM_EOF);

Maybe you should do a qemu_file_get_error(..) at this point to check
it's OK.

> +            }
> +        }
>      }
>
>  out:
> @@ -626,6 +664,8 @@ out:
>      colo_compare_unregister_notifier(&packets_compare_notifier);
>      timer_del(s->colo_delay_timer);
>      timer_free(s->colo_delay_timer);
> +    timer_del(s->pending_ram_check_timer);
> +    timer_free(s->pending_ram_check_timer);
>      qemu_sem_destroy(&s->colo_checkpoint_sem);
>
>      /*
> @@ -643,6 +683,7 @@ void colo_checkpoint_notify(void *opaque)
>      MigrationState *s = opaque;
>      int64_t next_notify_time;
>
> +    atomic_inc(&checkpoint_request);

Can you explain what you've changed about this atomic in this patch?
I don't quite see what you're doing.
>      qemu_sem_post(&s->colo_checkpoint_sem);
>      s->colo_checkpoint_time = qemu_clock_get_ms(QEMU_CLOCK_HOST);
>      next_notify_time = s->colo_checkpoint_time +
> @@ -650,6 +691,19 @@ void colo_checkpoint_notify(void *opaque)
>                         s->parameters.x_checkpoint_delay;
>      timer_mod(s->colo_delay_timer, next_notify_time);
>  }
>
> +static void colo_pending_ram_check_notify(void *opaque)
> +{
> +    int64_t next_notify_time;
> +    MigrationState *s = opaque;
> +
> +    if (migration_in_colo_state()) {
> +        next_notify_time = DEFAULT_RAM_PENDING_CHECK +
> +                           qemu_clock_get_ms(QEMU_CLOCK_HOST);
> +        timer_mod(s->pending_ram_check_timer, next_notify_time);
> +        qemu_sem_post(&s->colo_checkpoint_sem);
> +    }
> +}
> +
>  void migrate_start_colo_process(MigrationState *s)
>  {
>      qemu_mutex_unlock_iothread();
> @@ -657,6 +711,8 @@ void migrate_start_colo_process(MigrationState *s)
>      s->colo_delay_timer
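Reading the two hunks together, the counter appears to exist to tell apart the two producers that now post the same semaphore; a condensed view of the pattern, with the surrounding code elided:

    /* colo_checkpoint_notify(): a checkpoint trigger raises the flag
     * before waking the main loop */
    atomic_inc(&checkpoint_request);
    qemu_sem_post(&s->colo_checkpoint_sem);

    /* colo_pending_ram_check_notify(): wakes the loop without the flag */
    qemu_sem_post(&s->colo_checkpoint_sem);

    /* main loop: atomic_xchg reads and clears the flag in one step, so
     * a zero result means the wakeup came from the pending-RAM timer */
    if (atomic_xchg(&checkpoint_request, 0)) {
        /* run a full checkpoint transaction */
    } else {
        /* consider migrating dirty pages in the background */
    }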
Re: [PATCH 2/3] COLO: Migrate dirty pages during the gap of checkpointing
On 2/16/20 7:20 PM, Hailiang Zhang wrote:
> We can migrate some dirty pages during the gap of checkpointing.
> This way, we can reduce the amount of RAM migrated during the
> checkpoint.
>
> Signed-off-by: Hailiang Zhang
> ---

> +++ b/qapi/migration.json
> @@ -977,12 +977,14 @@
>  #
>  # @vmstate-loaded: VM's state has been loaded by SVM.
>  #
> +# @migrate-ram-background: Send some dirty pages during the gap of COLO checkpoint
> +#

Missing a '(since 5.0)' marker.

>  # Since: 2.8
>  ##
>  { 'enum': 'COLOMessage',
>    'data': [ 'checkpoint-ready', 'checkpoint-request', 'checkpoint-reply',
>              'vmstate-send', 'vmstate-size', 'vmstate-received',
> -            'vmstate-loaded' ] }
> +            'vmstate-loaded', 'migrate-ram-background' ] }

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
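With the marker Eric asks for, the documentation block would presumably end up as:

    # @vmstate-loaded: VM's state has been loaded by SVM.
    #
    # @migrate-ram-background: Send some dirty pages during the gap of COLO
    #                          checkpoint (since 5.0)
    #
    # Since: 2.8
    ##
    { 'enum': 'COLOMessage',
      'data': [ 'checkpoint-ready', 'checkpoint-request', 'checkpoint-reply',
                'vmstate-send', 'vmstate-size', 'vmstate-received',
                'vmstate-loaded', 'migrate-ram-background' ] }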