> Scale the writeback cache per backing device, proportional to its
> writeout speed.
>
> akpm sayeth:
> > Which problem are we trying to solve here?  afaik our two uppermost
> > problems are:
> >
> > a) Heavy write to queue A causes a light writer to queue B to block
> >    for a long time in balance_dirty_pages().  Even if the devices have
> >    the same speed.
>
> This one; especially when the speeds are not the same: the "my usb stick
> makes my computer suck" problem.  But even at similar speeds, the
> separation of devices should avoid blocking dev B while dev A is being
> throttled.
>
> The writeout speed is measured dynamically, so when a device doesn't
> have anything to write out for a while, its writeback cache share goes
> to 0.
>
> Conversely, a device that is just starting up will initially act almost
> synchronously, but will quickly build up a 'fair' share of the writeback
> cache.
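For my own understanding, here is a toy userspace model of the
proportional-share idea described above.  This is not the actual patch;
struct bdi_model, bdi_share() and recent_writeout are names made up for
illustration.  The point is just that each device gets a slice of the
global dirty threshold proportional to its recently completed writeout,
so an idle device's slice shrinks to nothing while a busy one grows into
its share.

/*
 * Toy model of the proportional writeback-cache split (made-up names,
 * not the actual kernel code): each device's share of the global dirty
 * threshold is proportional to its recently completed writeout.
 */
#include <stdio.h>

struct bdi_model {
	const char *name;
	unsigned long recent_writeout;	/* pages completed in the last period */
};

/* Split dirty_thresh among all devices, proportional to recent writeout. */
static unsigned long bdi_share(const struct bdi_model *bdi,
			       const struct bdi_model *all, int nr,
			       unsigned long dirty_thresh)
{
	unsigned long total = 0;
	int i;

	for (i = 0; i < nr; i++)
		total += all[i].recent_writeout;
	if (!total)
		return 0;	/* nothing written lately anywhere */
	return dirty_thresh * bdi->recent_writeout / total;
}

int main(void)
{
	struct bdi_model bdis[] = {
		{ "fast-disk", 10000 },	/* busy, fast device */
		{ "usb-stick",   100 },	/* slow device */
		{ "idle-disk",     0 },	/* wrote nothing recently */
	};
	unsigned long dirty_thresh = 40000;	/* global dirty limit, in pages */
	int i;

	for (i = 0; i < 3; i++)
		printf("%-10s gets %5lu of %lu pages\n", bdis[i].name,
		       bdi_share(&bdis[i], bdis, 3, dirty_thresh),
		       dirty_thresh);
	return 0;
}

With these numbers the fast disk ends up with ~39600 of the 40000 pages,
the usb stick with ~400, and the idle disk with 0, which matches the
"its writeback cache size goes to 0" behaviour described above.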
I'm worried about two things:

1) If the per-bdi threshold becomes smaller than the granularity of the
   per-bdi stats (the per-CPU counters can collectively be off by roughly
   the number of CPUs times the per-CPU batch size), then things will
   break.  Shouldn't there be some sanity checking for the calculated
   threshold?

2) The loop is sleeping in congestion_wait(WRITE), which seems wrong.  It
   may well be that none of the queues are congested, so it will sleep
   the full 0.1 seconds; by that time the queue may have gone idle and is
   just sitting there doing nothing.

   Maybe there should be a per-bdi waitq that is woken up whenever the
   per-bdi stats are updated (a rough userspace sketch of what I mean is
   appended below the sign-off).

Thanks,
Miklos
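For (2), here is a rough userspace sketch of the waitq idea, using
pthreads instead of kernel primitives.  All of the names (struct
bdi_waitq, bdi_stats_updated(), bdi_wait_for_progress()) are invented
for illustration; this is not a patch, just the shape of what I have in
mind: the throttled writer sleeps with a timeout but is woken early
whenever the per-bdi stats change.

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct bdi_waitq {
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	unsigned long   stat_version;	/* bumped whenever per-bdi stats change */
};

/* Writeout completion path: per-bdi stats changed, wake throttled writers. */
static void bdi_stats_updated(struct bdi_waitq *wq)
{
	pthread_mutex_lock(&wq->lock);
	wq->stat_version++;
	pthread_cond_broadcast(&wq->cond);
	pthread_mutex_unlock(&wq->lock);
}

/* Throttling loop: wait for a stats update, but give up after timeout_ms. */
static void bdi_wait_for_progress(struct bdi_waitq *wq, long timeout_ms)
{
	struct timespec ts;
	unsigned long seen;

	clock_gettime(CLOCK_REALTIME, &ts);
	ts.tv_sec  += timeout_ms / 1000;
	ts.tv_nsec += (timeout_ms % 1000) * 1000000L;
	if (ts.tv_nsec >= 1000000000L) {
		ts.tv_sec++;
		ts.tv_nsec -= 1000000000L;
	}

	pthread_mutex_lock(&wq->lock);
	seen = wq->stat_version;
	/*
	 * Wake early on a stats update; pthread_cond_timedwait() returns
	 * nonzero (ETIMEDOUT) once the timeout expires.
	 */
	while (wq->stat_version == seen &&
	       pthread_cond_timedwait(&wq->cond, &wq->lock, &ts) == 0)
		;
	pthread_mutex_unlock(&wq->lock);
}

static struct bdi_waitq wq = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.cond = PTHREAD_COND_INITIALIZER,
};

static void *writeout_thread(void *arg)
{
	(void)arg;
	usleep(10 * 1000);		/* pretend some writeout completed */
	bdi_stats_updated(&wq);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writeout_thread, NULL);
	bdi_wait_for_progress(&wq, 100);	/* returns after ~10ms, not 100ms */
	pthread_join(t, NULL);
	printf("woken by stats update, stat_version=%lu\n", wq.stat_version);
	return 0;
}

In the kernel this would presumably be a wait_queue_head_t in struct
backing_dev_info, with a wake_up() from wherever the per-bdi counters
are updated, and the throttling loop doing a timed wait on it instead of
the unconditional congestion_wait().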