Hi!

> The problem is that we don't know the max bandwidth a disk can provide for a
> specific workload, which depends on the device and the IO pattern. The
> bandwidth estimated by patch 1 will always be inaccurate unless the disk is
> already at max bandwidth. To solve this, we always over-estimate the
> bandwidth. With an over-estimated bandwidth, the workload dispatches more
> IO, the estimated bandwidth becomes higher, and it dispatches even more IO.
> The loop runs until we enter a stable state, in which the disk gets max
> bandwidth. This 'slightly adjust and run into a stable state' is the core
> algorithm the patch series uses. We also use it to detect inactive cgroups.
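The feedback loop described above can be sketched as a toy simulation; the
names, the 1.2x over-estimation factor, and the convergence model are my own
illustration, not taken from the patch series:

```python
# Toy model of the "over-estimate and converge" loop: each round the
# workload may dispatch up to an over-estimated budget, the disk caps
# delivery at its (unknown to us) real max bandwidth, and the measured
# throughput becomes the next estimate.

DISK_MAX_BW = 100.0   # true max bandwidth for this workload (not known to the estimator)
OVERESTIMATE = 1.2    # always over-estimate so throughput can keep growing

def converge(initial_estimate, rounds=50):
    est = initial_estimate
    for _ in range(rounds):
        # workload dispatches IO up to the over-estimated budget
        dispatched = est * OVERESTIMATE
        # the disk delivers at most its real max bandwidth
        measured = min(dispatched, DISK_MAX_BW)
        # the achieved throughput feeds back as the new estimate
        est = measured
    return est

print(converge(10.0))  # climbs 10 -> 12 -> 14.4 -> ... and settles at 100.0
```

In this model the estimate grows geometrically until it hits the disk's real
limit, which is the stable state; the questions below are about what happens
when that limit itself moves with the workload.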

Ok, so you want to reach a steady state, but what if the workload varies
a lot?

Let's say random writes for ten minutes, then a linear write.

Will the linear write be severely throttled because of the previous
seeks?

Can a task get bigger bandwidth by doing some additional (useless)
work?

Like "I do bigger reads in the random read phase, so that I'm not
throttled that badly when I do the linear read"?
                                                                Pavel
