On Thu,  9 May 2013 12:43:20 -0700
Chegu Vinod <chegu_vi...@hp.com> wrote:

>  If a user chooses to turn on the auto-converge migration capability,
>  these changes detect the lack of convergence and throttle down the
>  guest, i.e. force the VCPUs out of the guest for some duration
>  and let the migration thread catch up and help the migration converge.
> 
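For readers following along: the premise is that migration stops
converging when the guest dirties pages faster than the migration
thread can transfer them. A rough sketch of such a check, with
made-up names (the series' actual throttling_needed() is defined
elsewhere in the patch):

/* Illustrative only, not the patch's throttling_needed():
 * convergence is lost when the guest dirties more memory per
 * iteration than the migration thread managed to transfer. */
static bool convergence_lost(uint64_t bytes_dirtied,
                             uint64_t bytes_transferred)
{
    return bytes_dirtied > bytes_transferred;
}
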
[...]
> +
> +static void mig_delay_vcpu(void)
> +{
> +    qemu_mutex_unlock_iothread();
> +    g_usleep(50*1000);
> +    qemu_mutex_lock_iothread();
> +}
> +
> +/* Stub used for getting the vcpu out of the VM and into qemu via
> +   run_on_cpu(). */
> +static void mig_kick_cpu(void *opq)
> +{
> +    mig_delay_vcpu();
> +    return;
> +}
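
As a mental model of why mig_delay_vcpu() has to drop the iothread
lock around its 50 ms sleep: a work item queued with
async_run_on_cpu() is later run by the target vCPU thread while it
holds that lock. Roughly (a simplified model, not the actual cpus.c
implementation):

/* Simplified model of a vCPU thread draining its work queue
 * (not the real cpus.c code). Each item runs in the vCPU thread
 * with the iothread lock held, so a work function that sleeps,
 * like mig_kick_cpu() above, must drop the lock first or it
 * stalls everything else that needs the lock. */
static void flush_queued_work_model(CPUState *cpu)
{
    while (cpu->queued_work_first) {
        struct qemu_work_item *wi = cpu->queued_work_first;
        cpu->queued_work_first = wi->next;
        wi->func(wi->data);
        g_free(wi); /* async items are freed after they run */
    }
}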
> +
> +/* To reduce the dirty rate, explicitly disallow the VCPUs from spending
> +   much time in the VM. The migration thread will try to catch up.
> +   The workload will experience a performance drop.
> +*/
> +void migration_throttle_down(void)
> +{
> +    if (throttling_needed()) {
> +        CPUArchState *penv = first_cpu;
> +        while (penv) {
> +            qemu_mutex_lock_iothread();
Locking it here and then unlocking it inside of the queued work
doesn't look nice.
What exactly are you protecting with this lock?
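If it's only the CPU list traversal, taking the lock once around the
whole loop would at least read better. Something like this (untested
sketch, assuming async_run_on_cpu() needs the lock at all):

    if (throttling_needed()) {
        CPUArchState *penv;

        qemu_mutex_lock_iothread();
        for (penv = first_cpu; penv; penv = penv->next_cpu) {
            async_run_on_cpu(ENV_GET_CPU(penv), mig_kick_cpu, NULL);
        }
        qemu_mutex_unlock_iothread();
    }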


> +            async_run_on_cpu(ENV_GET_CPU(penv), mig_kick_cpu, NULL);
> +            qemu_mutex_unlock_iothread();
> +            penv = penv->next_cpu;
> +        }
> +    }
> +}
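
For anyone reading this out of context, a hedged sketch of where such
a throttle would be driven from, namely the migration thread's
iteration loop. All helper names here are hypothetical; the actual
call site is elsewhere in this series:

/* Hypothetical call site, for illustration only. */
static void ram_iteration_model(void)
{
    while (more_ram_to_send()) {   /* hypothetical helper */
        send_some_dirty_pages();   /* hypothetical helper */
        migration_throttle_down(); /* slow the guest if not converging */
    }
}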

-- 
Regards,
  Igor
