On 20.02.2012 10:29, Zhi Yong Wu wrote:
> On Mon, Feb 20, 2012 at 5:26 PM, Kevin Wolf <kw...@redhat.com> wrote:
>> On 20.02.2012 05:50, zwu.ker...@gmail.com wrote:
>>> From: Zhi Yong Wu <wu...@linux.vnet.ibm.com>
>>>
>>> If a guest has multiple disks, each with I/O throttling enabled 
>>> individually, some requests may still be sitting in the throttled 
>>> queues when draining is performed, so we need to restart them first.
>>>
>>> Moreover, when only one disk needs to be drained, e.g. on hot 
>>> unplug, requests queued in another disk's throttled queue should 
>>> not be affected.
>>>
>>> Signed-off-by: Zhi Yong Wu <wu...@linux.vnet.ibm.com>
>>> ---
>>>  block.c     |   29 ++++++++++++++++++++++-------
>>>  block_int.h |    1 +
>>>  2 files changed, 23 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/block.c b/block.c
>>> index ae297bb..f78df78 100644
>>> --- a/block.c
>>> +++ b/block.c
>>> @@ -853,25 +853,40 @@ void bdrv_close_all(void)
>>>      }
>>>  }
>>>
>>> -/*
>>> - * Wait for pending requests to complete across all BlockDriverStates
>>> - *
>>> - * This function does not flush data to disk, use bdrv_flush_all() for that
>>> - * after calling this function.
>>> - */
>>> -void bdrv_drain_all(void)
>>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>>  {
>>>      BlockDriverState *bs;
>>>
>>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>>> +        if (throttled_bs && throttled_bs != bs) {
>>> +            continue;
>>> +        }
>>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>>> +    }
>>> +
>>>      qemu_aio_flush();
>>
>> Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
>> the real bug that should be fixed.
> Do you mean why qemu_aio_flush() doesn't invoke those lines of code
> above it? As I said in the commit message: when a VM has multiple
> disks with I/O throttling enabled and only one disk needs to be
> flushed, only that disk's throttled requests need to be drained; the
> other disks' throttled requests must be left untouched.

So this is an optimisation rather than a fix? Would the right
optimisation be a qemu_aio_flush_disk() that flushes the requests for a
single disk?

Kevin
