On Mon, Nov 26, 2018 at 02:25:52PM +0200, Nikolay Borisov wrote:
> 
> 
> On 21.11.18 г. 21:03 ч., Josef Bacik wrote:
> > With the introduction of the per-inode block_rsv it became possible
> > to have really large reservation requests made because of data
> > fragmentation. Since the ticket stuff assumed that we'd always have
> > relatively small reservation requests, it just killed all tickets if
> > we were unable to satisfy the current request. However this is
> > generally not the case anymore. So fix this logic to instead see if
> > we had a ticket that we were able to give some reservation to, and
> > if we were, continue the flushing loop again. Likewise we make the
> > tickets use the space_info_add_old_bytes() method of returning what
> > reservation they did receive, in hopes that it could satisfy
> > reservations down the line.
> 
> The logic of the patch can be summarised as follows:
> 
> If no progress is made for a ticket, then start failing all tickets
> until the first one that has progress made on its reservation
> (inclusive). In this case that first ticket will be failed, but at
> least its space will be reused via space_info_add_old_bytes.
> 
> Frankly this seems really arbitrary.
It's not though. The tickets are in order of who requested the
reservation. Because we will backfill reservations for things like
hugely fragmented files or large amounts of delayed refs, we can have
spikes where we're trying to reserve 100MB of metadata space. We may
fill 50MB of that before we run out of space. So we can't satisfy that
reservation, but the small 100KB reservations that are waiting to be
serviced can be satisfied and they can run.

The alternative is that those small reservations get ENOSPC, and then
the user can turn around and touch a file with no problem, because it's
a small reservation and there was room for it. This patch enables
better behavior for the user. Thanks,

Josef
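
[Editorial note: to make the ordering argument concrete, below is a
minimal userspace sketch of the behaviour described above. It is not
the btrfs implementation: only the idea of handing partially granted
space back to the pool (space_info_add_old_bytes() in the patch) comes
from this thread; the ticket struct, the add_old_bytes() helper and the
numbers are made up for illustration. A large, half-filled ticket is
failed and its bytes recycled so the small tickets queued behind it can
still succeed.]

/*
 * Illustrative model of servicing tickets in request order.
 * NOT the btrfs code; names other than the space_info_add_old_bytes
 * idea are hypothetical.
 */
#include <stdio.h>

struct ticket {
	unsigned long long wanted;	/* bytes the reservation asked for */
	unsigned long long granted;	/* bytes handed out so far */
};

/* Stand-in for space_info_add_old_bytes(): return space to the pool. */
static void add_old_bytes(unsigned long long *free_space,
			  unsigned long long bytes)
{
	*free_space += bytes;
}

/*
 * Walk the tickets in the order they were requested.  A ticket that
 * cannot be fully satisfied is failed, but any partial grant goes back
 * into the pool so the (typically much smaller) tickets behind it can
 * still be satisfied instead of all being failed with ENOSPC.
 */
static void service_tickets(struct ticket *tickets, int nr,
			    unsigned long long free_space)
{
	for (int i = 0; i < nr; i++) {
		struct ticket *t = &tickets[i];
		unsigned long long missing = t->wanted - t->granted;

		if (missing <= free_space) {
			free_space -= missing;
			t->granted = t->wanted;
			printf("ticket %d: satisfied (%llu bytes)\n",
			       i, t->wanted);
			continue;
		}

		/* Partial progress: fail the ticket but recycle its bytes. */
		if (t->granted) {
			add_old_bytes(&free_space, t->granted);
			t->granted = 0;
		}
		printf("ticket %d: failed with ENOSPC (wanted %llu)\n",
		       i, t->wanted);
	}
}

int main(void)
{
	/* One huge backfill-style request followed by small ones. */
	struct ticket tickets[] = {
		{ .wanted = 100 << 20, .granted = 50 << 20 }, /* ~100MB, half filled */
		{ .wanted = 100 << 10 },                      /* ~100KB */
		{ .wanted = 100 << 10 },
	};

	/* Only a little space left beyond what was already handed out. */
	service_tickets(tickets, 3, 1 << 20);
	return 0;
}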