June 18, 2019 9:42 PM, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
> On 2019-06-18 15:37, Stéphane Lesimple wrote:
>> June 18, 2019 9:06 PM, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
>>> On 2019-06-18 14:26, Stéphane Lesimple wrote:
>>>> [...]
>>>> I don't need to have a perfectly balanced FS, I just want all the space
>>>> to be allocatable.
>>>> I tried using the -ddevid option, but it only instructs btrfs to work on
>>>> the block groups allocated on said device; as it happens, it tends to
>>>> move data between the 4 preexisting devices and doesn't fix my problem.
>>>> A full balance with -dlimit=100 did no better.
>>>> Is there a way to ask the block group allocator to prefer writing to a
>>>> specific device during a balance? Something like -ddestdevid=N? This
>>>> would just be a hint to the allocator and the usual constraints would
>>>> always apply (and prevail over the hint when needed).
>>>> Or is there any obvious solution I'm completely missing?
>>>
>>> Based on what you've said, you may actually not have enough free space
>>> that can be allocated to balance things properly.
>>>
>>> When a chunk gets balanced, you need to have enough space to create a new
>>> instance of that type of chunk before the old one is removed. As such, if
>>> you can't allocate new chunks at all, you can't balance those chunks
>>> either.
>>>
>>> So, that brings up the question of how to deal with your situation.
>>>
>>> The first thing I would do is multiple compaction passes using the
>>> `usage` filter. Start with:
>>>
>>> btrfs balance -dusage=0 -musage=0 /wherever
>>>
>>> That will clear out any empty chunks which haven't been removed (there
>>> shouldn't be any if you're on a recent kernel, but it's good practice
>>> anyway). After that, repeat the same command, but with a value of 10
>>> instead of 0, and then keep repeating in increments of 10 up until 50.
>>> Doing this will clean up chunks that are more than half empty (making
>>> multiple passes like this is a bit more reliable, and in some cases also
>>> more efficient), which should free up enough space for balance to work
>>> with (as well as probably moving most of the block groups it touches to
>>> use the new disk).
>>
>> Fair point, I do run some balances with -dusage=20 from time to time; the
>> current state of the FS is actually as follows:
>>
>> btrfs d u /tank | grep Unallocated:
>> Unallocated:   57.45GiB
>> Unallocated:    4.58TiB   <= new 10T
>> Unallocated:   16.03GiB
>> Unallocated:   63.49GiB
>> Unallocated:   69.52GiB
>>
>> As you can see, I was able to move some data to the new 10T drive in the
>> last few days, mainly by trial and error with several -ddevid and -dlimit
>> parameters. As of now I still have 4.38T that are unallocatable, out of
>> the 4.58T that are unallocated on the new drive. I was looking for a
>> better solution than just running a full balance (with or without
>> -ddevid=old10T) by asking btrfs to balance data to the new drive, but it
>> seems there's no way to instruct btrfs to do that.
>>
>> I think I'll still run a -dusage pass before doing the full balance
>> indeed, it can't hurt.
>
> I would specifically make a point to go all the way up to `-dusage=50` on
> that pass though. It will, of course, take longer than a run with
> `-dusage=20` would, but it will also do a much better job.
>
> That said, it looks like you should have more than enough space for
> balance to be doing its job correctly here, so I suspect you may have a
> lot of partially full chunks around and the balance is repacking into
> those instead of allocating new chunks.
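
Noted, I'll go all the way up to 50 this time. Rather than running each pass
by hand, I'll probably just loop over the usage values; something along these
lines is what I have in mind (untested sketch, using the `btrfs balance start`
syntax and my /tank mount point):

for u in 0 10 20 30 40 50; do
    # each pass only rewrites block groups with usage below $u percent
    # (0 being the special case of completely empty ones),
    # so the early passes should be quick
    btrfs balance start -dusage=$u -musage=$u /tank
done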

> Regardless though, I suspect that just doing a balance pass with the devid
> filter and only balancing chunks that are on the old 10TB disk, as Hugo
> suggested, is probably going to get you the best results proportionate to
> the time it takes.

About the chunks, that's entirely possible. I'll run the usage passes up to
-dusage=50 as sketched above, then launch the devid-filtered balance.
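
For that final pass I'm thinking of something like the following (the devid
below is just a placeholder; the real one comes from
`btrfs filesystem show /tank`):

btrfs balance start -ddevid=<devid of the old 10T> /tank

Thanks!

--
Stéphane.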