On 2018/10/28 1:42 AM, Marc MERLIN wrote:
> On Wed, Oct 24, 2018 at 01:07:25PM +0800, Qu Wenruo wrote:
>>> saruman:/mnt/btrfs_pool1# btrfs balance start -musage=80 -v .
>>> Dumping filters: flags 0x6, state 0x0, force is off
>>>   METADATA (flags 0x2): balancing, usage=80
>>>   SYSTEM (flags 0x2): balancing, usage=80
>>> Done, had to relocate 5 out of 202 chunks
>>> saruman:/mnt/btrfs_pool1# btrfs fi show .
>>> Label: 'btrfs_pool1'  uuid: fda628bc-1ca4-49c5-91c2-4260fe967a23
>>>         Total devices 1 FS bytes used 188.24GiB
>>>         devid    1 size 228.67GiB used 203.54GiB path /dev/mapper/pool1
>>>
>>> and it's back to 15GB :-/
>>>
>>> How can I get 188.24 and 203.54 to converge further? Where is all
>>> that space gone?
>>
>> Your original chunks are already pretty compact, so there's really no
>> need to do an extra balance.
>>
>> You may get some extra space by doing a full balance (no usage=
>> filter), but that's really not worth it in my opinion.
>>
>> Maybe you could try defrag to free some space wasted by CoW instead?
>> (If you're not using many snapshots.)
>
> Thanks for the reply.
>
> So right now, I have:
> saruman:~# btrfs fi show /mnt/btrfs_pool1/
> Label: 'btrfs_pool1'  uuid: fda628bc-1ca4-49c5-91c2-4260fe967a23
>         Total devices 1 FS bytes used 188.25GiB
>         devid    1 size 228.67GiB used 203.54GiB path /dev/mapper/pool1

The fs is over 50G, so your metadata chunks are allocated in 1GiB units.

> saruman:~# btrfs fi df /mnt/btrfs_pool1/
> Data, single: total=192.48GiB, used=184.87GiB

Your data usage is over 96%, so your data chunks are already pretty
compact. In theory you could shrink the allocation down to the minimal
data usage of 185G, but I think any new data write would then just cause
new data chunks to be allocated again.

To reclaim that 7.5G, you need to use the -dusage filter rather than
your -musage filter, and the usage= parameter may need to be pretty low.
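Something like this should do it (a sketch only; usage=20 is just a
starting threshold, not a recommendation for your fs):

  saruman:/mnt/btrfs_pool1# btrfs balance start -dusage=20 -v .

The usage filter only relocates chunks that are filled below the given
percentage, so if it reports "had to relocate 0 out of N chunks", raise
the number gradually; the higher it gets, the more data the balance has
to rewrite. If your btrfs-progs is recent enough, "btrfs fi usage
/mnt/btrfs_pool1/" also shows the allocated-vs-used numbers in one
place, which makes it easier to see where that 15G sits.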
> System, DUP: total=32.00MiB, used=48.00KiB
> Metadata, DUP: total=5.50GiB, used=3.38GiB

Metadata looks sparser than data, but considering your metadata chunks
are 1GiB each and CoW happens much more frequently on metadata, it's not
that easy to reclaim more space there. And even if you succeeded in
relocating all of these metadata chunks, you would reclaim at most 2~4G.

> GlobalReserve, single: total=512.00MiB, used=0.00B
>
> I've been using btrfs for a long time now but I've never had a
> filesystem where I had 15GB apparently unusable (7%) after a balance.

You really don't need to worry; that "15G" is not unusable. It will most
likely be used by data, and from your "fi df" output your data:metadata
ratio is over 10, so it should be completely fine.

> I can't drop all the snapshots since at least two are used for btrfs
> send/receive backups.
> However, if I delete more snapshots, and do a full balance, you think
> it'll free up more space?

No. You're already too worried about a non-existent problem. Your fs
looks pretty healthy.

Thanks,
Qu

> I can try a defrag next, but since I have COW for snapshots, it's not
> going to help much, correct?
>
> Thanks,
> Marc
>
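P.S. On the defrag question quoted above: yes, that suspicion is right.
Defrag breaks the extent sharing with snapshots, so on a heavily
snapshotted fs it can actually increase overall space usage rather than
free it. If you still want to try it on paths that aren't snapshotted,
the recursive form is (illustration only; run it per directory as
needed):

  saruman:~# btrfs filesystem defragment -r /mnt/btrfs_pool1/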