On Fri, Feb 19, 2021 at 7:16 PM Qu Wenruo <quwenruo.bt...@gmx.com> wrote:
> On 2021/2/20 10:47 AM, Erik Jensen wrote:
> > Given that it sounds like the issue is the metadata address space, and
> > given that I surely don't actually have 16TiB of metadata on a 24TiB
> > file system (indeed, Metadata, RAID1: total=30.00GiB, used=28.91GiB),
> > is there any way I could compact the metadata offsets into the lower
> > 16TiB of the virtual metadata inode? Perhaps that could be something
> > balance could be taught to do? (Obviously, the initial run of such a
> > balance would have to be performed using a 64-bit system.)
>
> Unfortunately, no.
>
> Btrfs relies on increasing bytenr in the logical address space for
> things like balance, thus we can't relocate chunks to smaller bytenr.

That's… unfortunate. How much of the code relies on the assumption
that bytenrs are monotonically increasing?

Brainstorming some ideas, is compacting the address space something
that could be done offline? E.g., maybe some two-pass process: first
something balance-like that bumps all of the metadata up to a compact
region of address space, starting at a new 16TiB boundary, and then a
follow-up pass that just strips the top bits off?
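
Just to illustrate the arithmetic of that second pass (not real btrfs
code; the names are mine, and I'm assuming pass one has already packed
every metadata chunk into a single 16TiB-aligned window):

/*
 * Illustrative sketch only. 16TiB = 2^44 comes from a 32-bit pgoff_t
 * times 4KiB pages: 2^32 * 2^12 bytes of addressable page cache.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_16T      (1ULL << 44)    /* 16TiB */
#define BYTENR_MASK (SZ_16T - 1)    /* low 44 bits of a bytenr */

/* Pass two: drop the high bits so the bytenr lands in [0, 16TiB). */
static uint64_t strip_top_bits(uint64_t bytenr)
{
    return bytenr & BYTENR_MASK;
}

int main(void)
{
    /* A bytenr somewhere above 16TiB, e.g. after years of balances. */
    uint64_t old_bytenr = (3 * SZ_16T) + (123ULL << 30);

    printf("old: %llu -> new: %llu\n",
           (unsigned long long)old_bytenr,
           (unsigned long long)strip_top_bits(old_bytenr));
    return 0;
}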

Or maybe once all of the bytenrs are brought within 16TiB of each
other by balance, btrfs could just keep track of an offset that needs
to be applied when mapping page cache indexes?
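
A minimal sketch of what I mean, assuming a per-filesystem base offset
recorded once balance has brought everything within 16TiB (the function
and names here are made up, just to show the index stays 32-bit-safe):

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12               /* 4KiB pages */
#define SZ_16T     (1ULL << 44)

static uint32_t bytenr_to_page_index(uint64_t bytenr, uint64_t metadata_base)
{
    uint64_t index = (bytenr - metadata_base) >> PAGE_SHIFT;

    /* Holds as long as balance keeps the spread under 16TiB. */
    assert(index < (1ULL << 32));
    return (uint32_t)index;
}

int main(void)
{
    uint64_t base = 5 * SZ_16T;     /* wherever the lowest chunk sits */

    /* A bytenr 1GiB above the base maps to a small, 32-bit index. */
    return bytenr_to_page_index(base + (1ULL << 30), base) == (1U << 18) ? 0 : 1;
}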

Or maybe btrfs could use multiple virtual inodes on 32-bit systems,
one for each 16TiB block of address space with metadata in it? If this
were to ever grow to need more than a handful of virtual inodes, it
seems like a balance *would* actually help in this case by compacting
the metadata higher in the address space, allowing the virtual inodes
for lower in the address space to be dropped.
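
Roughly, the mapping could split the bytenr into a window number (which
virtual inode) and an offset within that window (again, invented names,
purely to show the idea):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT   12
#define WINDOW_SHIFT 44             /* 16TiB per virtual inode */
#define WINDOW_MASK  ((1ULL << WINDOW_SHIFT) - 1)

struct meta_mapping {
    uint64_t inode_nr;   /* index into a table of virtual btree inodes */
    uint64_t page_index; /* fits in 32 bits by construction */
};

static struct meta_mapping map_bytenr(uint64_t bytenr)
{
    struct meta_mapping m = {
        .inode_nr   = bytenr >> WINDOW_SHIFT,
        .page_index = (bytenr & WINDOW_MASK) >> PAGE_SHIFT,
    };
    return m;
}

int main(void)
{
    struct meta_mapping m = map_bytenr((2ULL << 44) + (1ULL << 20));

    printf("virtual inode %llu, page %llu\n",
           (unsigned long long)m.inode_nr,
           (unsigned long long)m.page_index);
    return 0;
}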

Or maybe btrfs could just not use the page cache for the metadata
inode once the offset exceeds 16TiB, and only cache at the block
layer? This would surely hurt performance, but at least the filesystem
could still be accessed.
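
As a loose user-space analogy: a plain pread() on the member device
still goes through the block device's own page cache (indexed by
physical offset), just not the metadata inode's. The device path and
offset below are made up, and the logical-to-physical chunk mapping is
hand-waved entirely:

#define _FILE_OFFSET_BITS 64    /* 64-bit offsets even on 32-bit */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    char buf[16384];                 /* one 16KiB metadata node */
    uint64_t physical = 1ULL << 20;  /* pretend chunk mapping said 1MiB */
    int fd = open("/dev/sdb1", O_RDONLY);  /* hypothetical member device */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (pread(fd, buf, sizeof(buf), (off_t)physical) != (ssize_t)sizeof(buf)) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("read one tree block from physical offset %llu\n",
           (unsigned long long)physical);
    close(fd);
    return 0;
}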

Given that this issue appears to be not due to the size of the
filesystem, but merely how much I've used it, having the only solution
be to copy all of the data off, reformat the drives, and then restore
every time filesystem usage exceeds a certain threshold is… not very
satisfying.

Finally, I've never done kernel dev before, but I do have some C
experience, so if there's a solution that seems reasonable and would
likely be accepted if implemented, but is unlikely to get implemented
given the low priority of supporting 32-bit systems, let me know and
maybe I can carve out some time to give it a try.
