On Tue, Mar 15, 2016 at 10:23 PM, Nazar Mokrynskyi <na...@mokrynskyi.com> wrote:
> Sounds like a really good idea!
>
> I'll try to implement it in my backup tool, but it might take some time to
> see real benefit from it (or no benefit:)).

There is a catch: I'm not sure how much testing deleting 100
subvolumes at once actually gets. It should work, but I haven't looked
at xfstests to see how much of this path is covered, so it's possible
you'd effectively be the one testing it. Be ready for that.

Also, since it's batched, consider doing it at night when the
filesystem isn't otherwise in use. The cleaner task will always slow
things down, because it has to decrement reference counts, work out
what can actually be deleted, and then update the metadata to reflect
that. You could also add a 5 minute delay after the subvolume deletes
and then issue sysrq+w; if there are significantly blocked tasks, the
logs will then have extra debug info.
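
If it helps, here's a rough Python sketch of what I mean. The
snapshot paths and the 300 second delay are just placeholders for
your setup, and writing 'w' to /proc/sysrq-trigger assumes sysrq is
enabled (kernel.sysrq) and that you'll read the blocked-task dump
out of dmesg or the journal afterwards:

#!/usr/bin/env python3
# Rough sketch: batch-delete subvolumes, wait, then dump blocked tasks.
# The paths below are placeholders -- adjust for your snapshot layout.
import subprocess
import time

snapshots = [
    "/mnt/pool/.snapshots/root-2016-03-01",
    "/mnt/pool/.snapshots/root-2016-03-02",
    # ... the rest of the batch
]

for snap in snapshots:
    # Each delete just unlinks the subvolume; the real work happens
    # later in the cleaner thread.
    subprocess.run(["btrfs", "subvolume", "delete", snap], check=True)

# Give the cleaner time to start churning before sampling.
time.sleep(300)

# Equivalent of sysrq+w: dump blocked (uninterruptible) tasks to the
# kernel log. Needs root and sysrq enabled.
with open("/proc/sysrq-trigger", "w") as f:
    f.write("w")
# Then check dmesg or the journal for the blocked-task traces.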

Another idea is to graph the normal write pattern (e.g. with
seekwatcher) and see how it changes when even a single subvolume is
deleted (the current every-15-minutes method). That would give an
idea of how significantly the cleaner task affects your particular
workload. The results might support batching not only to avoid
fragmentation by freeing larger contiguous space, but also to avoid
the daytime IOPS hit from a possibly too-aggressive cleaner task.
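
Roughly, something like the sketch below: capture a block trace
around one delete and graph it. The device, snapshot path and trace
duration are placeholders, and I'm going from memory on seekwatcher's
-t/-o options, so check its help before relying on this:

#!/usr/bin/env python3
# Rough sketch: trace block IO around a single subvolume delete and
# graph it with seekwatcher. Paths/device are placeholders.
import subprocess
import time

dev = "/dev/sdb1"                          # device backing the filesystem
snap = "/mnt/pool/.snapshots/root-old"     # one snapshot to delete

# Start blktrace in the background, writing files named delete-trace.*
trace = subprocess.Popen(["blktrace", "-d", dev, "-o", "delete-trace"])

subprocess.run(["btrfs", "subvolume", "delete", snap], check=True)
time.sleep(600)     # let the cleaner do its work while the trace runs
trace.terminate()
trace.wait()

# Turn the trace into a seek/throughput graph (seekwatcher is Chris
# Mason's tool; as I recall -t names the trace and -o the output image).
subprocess.run(["seekwatcher", "-t", "delete-trace", "-o", "delete.png"],
               check=True)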


-- 
Chris Murphy