On 2019-06-18 14:57, Hugo Mills wrote:
On Tue, Jun 18, 2019 at 02:50:34PM -0400, Austin S. Hemmelgarn wrote:
On 2019-06-18 14:45, Hugo Mills wrote:
On Tue, Jun 18, 2019 at 08:26:32PM +0200, Stéphane Lesimple wrote:
I've been a btrfs user for quite a number of years now, but it seems
I need the wisdom of the btrfs gurus on this one!

I have a 5-hdd btrfs raid1 setup with 4x3T+1x10T drives.
A few days ago, I replaced one of the 3T drives with a new 10T, running
btrfs replace and then resizing the FS to use all the available space
of the new device.
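
For reference, the commands were along these lines (devid 2 and the
/mnt mount point are examples, not my exact values):

    # replace the old 3T (devid 2 here) with the new 10T; the
    # replacement drive keeps the same devid
    btrfs replace start 2 /dev/sdX /mnt
    # then grow the new device to its full size
    btrfs filesystem resize 2:max /mnt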

The filesystem was 90% full before I expanded it so, as expected,
most of the space on the new device wasn't actually allocatable in
raid1, as very little free space was left on the 4 other devs.
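
To see how the unallocatable space is spread across devices,
something like this works (/mnt again being an example):

    # per-device allocated vs. unallocated space
    btrfs device usage /mnt
    # filesystem-wide view, including the raid1 free-space estimate
    btrfs filesystem usage /mnt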

Of course the solution is to run a balance, but as the filesystem is
now quite big, I'd like to avoid running a full rebalance. That would
be quite I/O intensive, would run for several days, and would put
unnecessary stress on the drives. It also seems excessive, as in
theory only a few TB would need to be moved: if I'm correct, it would
be enough to move one copy of just enough block groups to the new
device so that the sum of the free space on the 4 preexisting devices
at least equals the free space on the new device, i.e. moving ~7TB
instead of ~22TB. I don't need a perfectly balanced FS, I just want
all the space to be allocatable.

I tried using the -ddevid filter, but it only instructs btrfs to work
on the block groups allocated on said device; as it happens, that
tends to move data between the 4 preexisting devices and doesn't fix
my problem. A full balance with -dlimit=100 did no better.
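
Concretely, the attempts looked roughly like this (filter values and
mount point are examples):

    # only consider block groups that have a stripe on devid 2
    btrfs balance start -ddevid=2 /mnt
    # full data balance, but stop after 100 block groups
    btrfs balance start -dlimit=100 /mnt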

    -dlimit=100 will only move 100 GiB of data (i.e. 200 GiB of raw
space with raid1), so it'll be a pretty limited change. You'll need
to use a larger number than that if you want a significant visible
effect.
Last I checked, that's not how the limit filter works.  AFAIUI, it's
an upper limit on how full a chunk can be to be considered for the
balance operation.  So, balancing with only `-dlimit=100` should
actually balance all data chunks (but only data chunks, because you
haven't asked for metadata balancing).

    That's usage, not limit. limit is simply counting the number of
block groups to move.
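
    To make the distinction concrete (mount point as an example):

    # usage filter: only balance data block groups at most 50% full
    btrfs balance start -dusage=50 /mnt
    # limit filter: balance at most 100 data block groups,
    # regardless of how full they are
    btrfs balance start -dlimit=100 /mnt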

Realized that I got the two mixed up right after I hit send.

That said, given the size of the FS, it's not unlikely that it will
move more than 100 GiB of data (pre-replication), as the FS itself is
getting into the range where chunk sizes start to scale up.
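
A way to watch how much actually moves while the balance runs (mount
point as an example):

    # progress of the running balance operation
    btrfs balance status /mnt
    # how much data has landed on each device so far
    btrfs device usage /mnt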
