On 01.03.21 at 11:59, Kevin Wolf wrote:
> On 26.02.2021 at 13:33, Peter Lieven wrote:
>> On 26.02.21 at 10:27, Alberto Garcia wrote:
>>> On Thu 25 Feb 2021 06:34:48 PM CET, Peter Lieven <p...@kamp.de> wrote:
>>>> I was wondering if there is a way to check from outside (qmp etc.) if
>>>> a throttled block device has exceeded the iops_max_length seconds of
>>>> time bursting up to iops_max and is now hard limited to the iops limit
>>>> that is supplied?
>>>>
>>>> Would it also be a good idea to extend the accounting to account for
>>>> requests that had to wait before being sent out to the backend
>>>> device?
>>> No, there's no such interface as far as I'm aware. I think one problem
>>> is that throttling is now done using a filter, which can be inserted
>>> anywhere in the node graph, while accounting is done at the BlockBackend
>>> level.
>>>
>>> We don't even have a query-block-throttle function. I actually started
>>> to write one six years ago but it was never finished.
>>
>> A quick idea that came to my mind was to add an option to emit a QMP
>> event if the burst_bucket is exhausted and hard limits are enforced.
>
> Do you actually need to do something every time that it's exceeded, so
> QEMU needs to be the active part sending out an event, or is it
> something that you need to check in specific places and could reasonably
> query on demand?
>
> For the latter, my idea would have been adding a new read-only QOM
> property to the throttle group object that exposes how much is still
> left. When it becomes 0, the hard limits are enforced.
>
>> There seems to be something wrong in the throttling code anyway.
>> Throttling always causes additional I/O latency, even if the actual
>> iops rate is far below the limits and even further below the burst
>> limits. I will dig into this.
>>
>> My wishlist:
>>
>> - have a possibility to query the throttling state.
>> - have counters for the number of delayed ops and for how long they
>>   were delayed.
>> - have counters for unthrottled <= 4k request performance for a
>>   backend storage device.
>>
>> The latter two seem non-trivial, as you mentioned.
>
> Do you need the information per throttle node or per throttle group? For
> the latter, the same QOM property approach would work.
Hi Kevin,

per throttle-group information would be sufficient. So you would expose
the level of the bucket, and additionally a counter for throttled vs.
total ops and the total delay?

While we are talking about throttling: I still do not understand the
following part of throttle_compute_wait() in util/throttle.c:

    if (!bkt->max) {
        /* If bkt->max is 0 we still want to allow short bursts of I/O
         * from the guest, otherwise every other request will be throttled
         * and performance will suffer considerably. */
        bucket_size = (double) bkt->avg / 10;
        burst_bucket_size = 0;
    } else {
        /* If we have a burst limit then we have to wait until all I/O
         * at burst rate has finished before throttling to bkt->avg */
        bucket_size = bkt->max * bkt->burst_length;
        burst_bucket_size = (double) bkt->max / 10;
    }

Why is burst_bucket_size = bkt->max / 10? From what I understand it
should be bkt->max. Otherwise we compare the "extra" against a tenth of
the bucket capacity and schedule a timer where it is not necessary.

What am I missing here?

Peter