Il 10/05/2013 01:00, Chegu Vinod ha scritto:
> On 5/9/2013 1:24 PM, Igor Mammedov wrote:
>> On Thu,  9 May 2013 12:43:20 -0700
>> Chegu Vinod <chegu_vi...@hp.com> wrote:
>>
>>>   If a user chooses to turn on the auto-converge migration capability
>>>   these changes detect the lack of convergence and throttle down the
>>>   guest. i.e. force the VCPUs out of the guest for some duration
>>>   and let the migration thread catchup and help converge.
>>>
>> [...]
>>> +
>>> +static void mig_delay_vcpu(void)
>>> +{
>>> +    qemu_mutex_unlock_iothread();
>>> +    g_usleep(50*1000);
>>> +    qemu_mutex_lock_iothread();
>>> +}
>>> +
>>> +/* Stub used for getting the vcpu out of VM and into qemu via
>>> +   run_on_cpu()*/
>>> +static void mig_kick_cpu(void *opq)
>>> +{
>>> +    mig_delay_vcpu();
>>> +    return;
>>> +}
>>> +
>>> +/* To reduce the dirty rate explicitly disallow the VCPUs from spending
>>> +   much time in the VM. The migration thread will try to catchup.
>>> +   Workload will experience a performance drop.
>>> +*/
>>> +void migration_throttle_down(void)
>>> +{
>>> +    if (throttling_needed()) {
>>> +        CPUArchState *penv = first_cpu;
>>> +        while (penv) {
>>> +            qemu_mutex_lock_iothread();
>> Locking it here and the unlocking it inside of queued work doesn't
>> look nice.
> Yes...but see below.

Actually, no. :)  It looks strange, but it is correct and perfectly fine.

The queued work runs in a completely different thread: run_on_cpu work
items execute under the BQL, so mig_delay_vcpu has to unlock it before
sleeping and re-take it before returning.

On the other hand, migration_throttle_down runs in the migration thread,
outside the BQL.  It needs to take the lock because the first_cpu list
can change through CPU hotplug at any time.  qemu_for_each_cpu would
need the BQL for the same reason.

Paolo
