On Thu, Sep 12, 2019 at 1:18 PM <webmas...@zedlx.com> wrote:
>
> It is normal and common for a defrag operation to use some disk space
> while it is running. I estimate that a reasonable limit would be to
> use up to 1% of the total partition size. So, if a partition size is 100
> GB, the defrag can use 1 GB. Let's call this "defrag operation space".

In the simplest case, a file with no shared extents, the minimum free
space should be set to the potential maximum rewrite of the file, i.e.
100% of the file size. Since Btrfs is COW, the entire operation must
either succeed or fail; there's no possibility of an ambiguous
in-between state, and this applies to defragment as well.

So if you're defragging a 10GiB file, you need 10GiB minimum free
space to COW those extents to a new, mostly contiguous, set of extents,
and then some extra free space to COW the metadata to point to these
new extents. Once that change is committed to stable media, the
stale data and metadata extents can be released.
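Roughly, the pre-check per file looks something like this (untested
sketch; it shells out to 'btrfs filesystem defragment' per file, and the
5% metadata headroom is just an illustrative guess, not a number btrfs
documents):

import os
import shutil
import subprocess

def defrag_if_space(path, metadata_headroom=0.05):
    """Defragment one file only if the filesystem has enough free space
    to COW the whole file plus some headroom for the metadata rewrite."""
    file_size = os.stat(path).st_size
    free = shutil.disk_usage(os.path.dirname(path) or ".").free
    needed = int(file_size * (1 + metadata_headroom))
    if free < needed:
        print(f"skipping {path}: need ~{needed} bytes free, have {free}")
        return False
    # The defragment rewrites the file's extents (COW); the old extents
    # are only released after the new ones are committed.
    subprocess.run(["btrfs", "filesystem", "defragment", path], check=True)
    return True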

And this process is subject to hitting an ENOSPC condition. That's
really what'll tell you whether you have enough space; otherwise the
setup time for a complete recursive defragment of the volume is going
to be really long, and it has some chance of reporting back that
defragment isn't possible even though most of it could be done.
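In other words, rather than trying to precompute free space for a whole
tree, you can just attempt each file and skip the ones that hit ENOSPC.
Something like the following sketch (matching the error text from the
CLI is a rough heuristic, not a stable interface):

import os
import subprocess

def defrag_tree(root):
    """Walk a directory tree, defragmenting file by file. A file that
    fails for lack of space is recorded and skipped; the rest still
    get defragmented."""
    skipped = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            result = subprocess.run(
                ["btrfs", "filesystem", "defragment", path],
                capture_output=True, text=True)
            if result.returncode != 0 and "No space left" in result.stderr:
                skipped.append(path)  # ENOSPC: not enough room to COW this one
    return skipped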

Arguably the defragmenting strategy should differ depending on whether
the nossd or ssd mount option is in effect. Massive fragmentation on an
SSD does impact latency, but there are no locality concerns, so as long
as the file is defragmented into ~32MiB extents, I think it's fine.
Perhaps the ideal would be erase-block-sized extents? Whereas on a hard
drive, locality matters as well.
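One way to act on that distinction is to pick the target extent size
('-t' to 'btrfs filesystem defragment') from the rotational flag the
kernel exposes for the device. Sketch below; the 32M value follows the
guess above, and 256M for spinning disks is just an arbitrary
placeholder:

import subprocess

def target_extent_size(block_device_name):
    """Pick a defrag target extent size based on the kernel's
    rotational flag for the device (1 = spinning disk, 0 = SSD)."""
    with open(f"/sys/block/{block_device_name}/queue/rotational") as f:
        rotational = f.read().strip() == "1"
    # ~32MiB extents are probably enough on SSD (no locality concerns);
    # on a hard drive, larger contiguous extents help locality.
    return "256M" if rotational else "32M"

def defrag_with_target(path, block_device_name):
    size = target_extent_size(block_device_name)
    subprocess.run(
        ["btrfs", "filesystem", "defragment", "-t", size, path],
        check=True)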


-- 
Chris Murphy
