On Thu, Sep 12, 2019 at 08:26:04PM -0400, General Zed wrote:
> Quoting Zygo Blaxell <ce3g8...@umail.furryterror.org>:
> > On Thu, Sep 12, 2019 at 06:57:26PM -0400, General Zed wrote:
> > > At worst, it just has to completely write out "all metadata", all
> > > the way up to the super. It needs to be done just once, because
> > > what's the point of writing it 10 times over? Then, the super is
> > > updated as the final commit.
> > This is kind of a silly discussion. The biggest extent possible on
> > btrfs is 128MB, and the incremental gains of forcing 128MB extents
> > to be consecutive are negligible. If you're defragging a 10GB file,
> > you're just going to end up doing 80 separate defrag operations.
> Ok, then the max extent is 128 MB; that's fine. Someone here
> previously said that it is 2 GB, so he fed me disinformation (in
> order to further his false argument).
If the 128MB limit is removed, you then hit the block group size limit,
which is some number of GB from 1 to 10 depending on number of disks
available and raid profile selection (the striping raid profiles cap
block group sizes at 10 disks, and single/raid1 profiles always use 1GB
block groups regardless of disk count). So 2GB is _also_ a valid extent
size limit, just not the first limit that is relevant for defrag.
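
To put rough numbers on that, here's a minimal sketch (hypothetical
helper, not actual btrfs source; it just encodes the limits described
above):

    /* Hypothetical sketch of the block group size limits described
     * above; not actual btrfs source code. */
    #include <stdio.h>

    #define SZ_1G (1024ULL * 1024 * 1024)

    /* Striping profiles use roughly one 1GB stripe per disk, capped
     * at 10 disks; single/raid1 profiles always use 1GB block groups. */
    static unsigned long long block_group_cap(int num_disks, int striped)
    {
            if (!striped)
                    return SZ_1G;
            if (num_disks > 10)
                    num_disks = 10;
            return num_disks * SZ_1G;
    }

    int main(void)
    {
            printf("raid0, 4 disks:   %lluGB\n",
                   block_group_cap(4, 1) / SZ_1G);  /* 4GB */
            printf("raid0, 16 disks:  %lluGB\n",
                   block_group_cap(16, 1) / SZ_1G); /* 10GB */
            printf("raid1, any disks: %lluGB\n",
                   block_group_cap(2, 0) / SZ_1G);  /* 1GB */
            return 0;
    }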
A lot of people get confused by 'filefrag -v' output, which coalesces
physically adjacent but distinct extents. So if you use that tool,
it can _seem_ like there is a 2.5GB extent in a file, but it is really
20 distinct 128MB extents that start and end at adjacent addresses.
You can see the true structure in 'btrfs ins dump-tree' output.
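
Here's a toy model of that coalescing (illustrative C, not filefrag's
actual code), using the 2.5GB example:

    /* Toy model of 'filefrag -v' coalescing: physically adjacent
     * on-disk extents collapse into one reported extent.  Not
     * filefrag's actual implementation. */
    #include <stdio.h>

    struct extent { unsigned long long start, len; };

    int main(void)
    {
            enum { N = 20 };
            const unsigned long long MB128 = 128ULL * 1024 * 1024;
            struct extent ext[N];

            /* 20 distinct 128MB extents laid out back to back */
            for (int i = 0; i < N; i++) {
                    ext[i].start = i * MB128;
                    ext[i].len = MB128;
            }

            /* merge any extent starting where the previous one ends */
            int reported = 1;
            unsigned long long run = ext[0].len;
            for (int i = 1; i < N; i++) {
                    if (ext[i].start == ext[i-1].start + ext[i-1].len) {
                            run += ext[i].len;
                    } else {
                            reported++;
                            run = ext[i].len;
                    }
            }
            /* prints: 20 extents coalesce into 1 run(s), last 2560MB */
            printf("%d extents coalesce into %d run(s), last %lluMB\n",
                   N, reported, run / (1024 * 1024));
            return 0;
    }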
That also brings up another reason why 10GB defrags are absurd on btrfs:
extent addresses are virtual. There's no guarantee that a pair of extents
that meet at a block group boundary are physically adjacent, and after
operations like RAID array reorganization or free space defragmentation,
they are typically quite far apart physically.
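
To make "virtual" concrete: a made-up chunk map (the real one lives in
the btrfs chunk tree; all numbers here are invented) in which two
logically adjacent block groups sit nowhere near each other on disk:

    /* Invented logical->physical map, e.g. after a balance or RAID
     * reshape; illustrates why logically adjacent block groups need
     * not be physically adjacent. */
    #include <stdio.h>

    struct chunk_map { unsigned long long logical, physical, len; };

    int main(void)
    {
            const unsigned long long GB = 1024ULL * 1024 * 1024;
            struct chunk_map map[] = {
                    { 10 * GB,  3 * GB, GB }, /* block group A */
                    { 11 * GB, 57 * GB, GB }, /* logically next, 54GB away */
            };
            for (int i = 0; i < 2; i++)
                    printf("logical %lluGB -> physical %lluGB\n",
                           map[i].logical / GB, map[i].physical / GB);
            return 0;
    }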
> I never said that I would force extents larger than 128 MB.
>
> If you are defragging a 10 GB file, you'll likely have to do it in 10
> steps, because the defrag is usually allowed to use only a limited
> amount of disk space while in operation. That has nothing to do with
> the extent size.
Defrag is literally manipulating the extent size. Fragments and extents
are the same thing in btrfs.
Currently a 10GB defragment will work in 80 steps, but doesn't necessarily
commit metadata updates after each step, so more than 128MB of temporary
space may be used (especially if your disks are fast and empty,
and you start just after the end of the previous commit interval).
There are some opportunities to coalesce metadata updates, occupying up
to an (arbitrary) limit of 512MB of RAM (or when memory pressure forces
a flush, whichever comes first), but exploiting those opportunities
requires more space for uncommitted data.
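
As a sketch of that batching (structure and names are mine, not btrfs
internals): 80 steps of 128MB under a 512MB cap coalesce into about
20 commits:

    /* Hypothetical model of coalescing metadata commits during a
     * 10GB defrag: flush at an arbitrary 512MB cap or on memory
     * pressure, whichever comes first. */
    #include <stdio.h>
    #include <stdbool.h>

    #define MB (1024ULL * 1024)
    #define STEP_BYTES (128 * MB)   /* max btrfs extent size */
    #define BATCH_CAP  (512 * MB)   /* arbitrary coalescing limit */

    static bool memory_pressure(void)
    {
            return false;           /* stub: no pressure in this demo */
    }

    int main(void)
    {
            unsigned long long uncommitted = 0;
            int commits = 0;

            for (int step = 1; step <= 80; step++) { /* 80 * 128MB = 10GB */
                    uncommitted += STEP_BYTES;
                    if (uncommitted >= BATCH_CAP || memory_pressure()) {
                            commits++;  /* replaced extents now freeable */
                            uncommitted = 0;
                    }
            }
            if (uncommitted)
                    commits++;
            printf("80 steps -> %d commits\n", commits);  /* 20 */
            return 0;
    }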
If the filesystem starts to get low on space during a defrag, it can
inject commits to force metadata updates to happen more often, which
reduces the amount of temporary space needed (we can't delete the original
fragmented extents until their replacement extent is committed); however,
if the filesystem is so low on space that you're worried about running
out during a defrag, then you probably don't have big enough contiguous
free areas to relocate data into anyway, i.e. the defrag is just going to
push data from one fragmented location to a different fragmented location,
or bail out with "sorry, can't defrag that."
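
Continuing the sketch above (the cutoff is invented for illustration),
the low-space case just shrinks the effective batch cap to a single
128MB step, so each replaced extent becomes freeable at the very next
commit:

    /* Hypothetical low-space policy: commit after every step when
     * free space is tight, minimizing temporary space. */
    #include <stdio.h>

    #define MB (1024ULL * 1024)

    static unsigned long long effective_cap(unsigned long long free_bytes)
    {
            if (free_bytes < 2048 * MB)  /* invented low-space cutoff */
                    return 128 * MB;     /* commit after every step */
            return 512 * MB;             /* normal coalescing cap */
    }

    int main(void)
    {
            printf("roomy fs: cap %lluMB\n",
                   effective_cap(100000 * MB) / MB);  /* 512MB */
            printf("tight fs: cap %lluMB\n",
                   effective_cap(1024 * MB) / MB);    /* 128MB */
            return 0;
    }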