On Mon, Sep 28, 2015 at 12:04 PM, Alberto Garcia <be...@igalia.com> wrote:
> On Mon 28 Sep 2015 02:18:33 AM CEST, Fam Zheng <f...@redhat.com> wrote:
>
>>> > Can this be abused? If I have a guest running in a cloud where the
>>> > cloud provider has put severe throttling limits on me, but lets me
>>> > hotplug to my heart's content, couldn't I just repeatedly
>>> > plug/unplug the disk to get around the throttling (every time I
>>> > unplug, all writes flush at full speed, then I immediately replug
>>> > to start batching up a new set of writes).  In other words,
>>> > shouldn't the draining still be throttled, to prevent my abuse?
>>>
>>> I didn't think about this case, and I don't know how practical this
>>> is, but note that bdrv_drain() (which is already at the beginning of
>>> bdrv_close()) flushes the I/O queue, explicitly bypassing the limits,
>>> so other cases where a user can trigger a bdrv_drain() would also be
>>> vulnerable to this.
>>
>> Yes, the issue is pre-existing. This patch only reordered things
>> inside bdrv_close() so it's no worse.
>>
>> But indeed there is this vulnerability; maybe we should throttle the
>> queue in all cases?
>
> I would like to see a test case with numbers that show how much you can
> actually bypass the I/O limits.
>
> Berto
>
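
Before the numbers, a minimal self-contained sketch to make the quoted
mechanism concrete (purely illustrative, not QEMU code; all names are
made up): a token-bucket throttle whose drain path submits queued
requests without consuming tokens, which is the behaviour attributed
above to bdrv_drain().

#include <stdio.h>

#define QUEUE_MAX 64

typedef struct Throttle {
    int tokens;              /* write IOPS budget left this period */
    int queue[QUEUE_MAX];    /* queued request ids */
    int queued;
} Throttle;

static void submit(int req_id)
{
    printf("submitting request %d\n", req_id);
}

/* Normal path: requests beyond the budget are queued. */
static void throttled_write(Throttle *t, int req_id)
{
    if (t->tokens > 0) {
        t->tokens--;
        submit(req_id);
    } else if (t->queued < QUEUE_MAX) {
        t->queue[t->queued++] = req_id;
    }
}

/* Drain path: flushes the queue at full speed, ignoring tokens.
 * This is the hole: unplug triggers a drain, so a guest that
 * repeatedly plugs and unplugs effectively writes unthrottled. */
static void drain(Throttle *t)
{
    for (int i = 0; i < t->queued; i++) {
        submit(t->queue[i]);     /* no token check here */
    }
    t->queued = 0;
}

int main(void)
{
    Throttle t = { .tokens = 2, .queued = 0 };

    for (int i = 0; i < 6; i++) {
        throttled_write(&t, i);  /* 2 go through, 4 are queued */
    }
    drain(&t);                   /* the 4 queued writes bypass the limit */
    return 0;
}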

For a real-world case seen in the wild, consider a write-heavy log or
database transaction log (xlog). As an example, the attached picture
shows an actual IOPS measurement for a test sample that has been
throttled to 70 write IOPS. The application behind it is exim4,
delivering messages at a rate of about 20/s. Databases can also break
the QEMU write IOPS limits, but only under more specific conditions,
and I think those would be hard to reproduce. Breaking through the
limit should be possible with an advertised/set queue depth (qd) > 1.
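
Fam's suggestion upthread ("throttle the queue in all cases") would, in
the toy model from the sketch above, look roughly like the following;
wait_for_refill() is a hypothetical stand-in for blocking until the
bucket is replenished.

/* Hypothetical refill: a real implementation would block until the
 * throttle timer replenishes the bucket; here we just pretend one
 * throttle period elapsed. */
static void wait_for_refill(Throttle *t)
{
    t->tokens = 2;
}

/* A drain that respects the budget: queued requests still consume
 * tokens, so an unplug can no longer dump them at full speed. */
static void throttled_drain(Throttle *t)
{
    for (int i = 0; i < t->queued; i++) {
        if (t->tokens == 0) {
            wait_for_refill(t);
        }
        t->tokens--;
        submit(t->queue[i]);
    }
    t->queued = 0;
}

In real QEMU the change would have to live in the code path that
restarts the throttled request queue during drain; the above is only
meant to show the shape of the fix.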
