On 24.10.2018 03:36, Marc MERLIN wrote:
> Normally, btrfs fi show will show lost space when your trees aren't
> balanced.
> Balance usually reclaims that space, or most of it.
> In this case, not so much.
> 
> kernel 4.17.6:
> 
> saruman:/mnt/btrfs_pool1# btrfs fi show .
> Label: 'btrfs_pool1'  uuid: fda628bc-1ca4-49c5-91c2-4260fe967a23
>       Total devices 1 FS bytes used 186.89GiB
>       devid    1 size 228.67GiB used 207.60GiB path /dev/mapper/pool1
> 
> Ok, I have a 21GiB gap between space used by the FS and space allocated at the block layer.
> 
> saruman:/mnt/btrfs_pool1# btrfs balance start -dusage=40 -v .
> Dumping filters: flags 0x1, state 0x0, force is off
>   DATA (flags 0x2): balancing, usage=40
> Done, had to relocate 1 out of 210 chunks
> saruman:/mnt/btrfs_pool1# btrfs balance start -musage=60 -v .
> Dumping filters: flags 0x6, state 0x0, force is off
>   METADATA (flags 0x2): balancing, usage=60
>   SYSTEM (flags 0x2): balancing, usage=60
> Done, had to relocate 4 out of 209 chunks
> saruman:/mnt/btrfs_pool1# btrfs fi show .
> Label: 'btrfs_pool1'  uuid: fda628bc-1ca4-49c5-91c2-4260fe967a23
>       Total devices 1 FS bytes used 186.91GiB
>       devid    1 size 228.67GiB used 205.60GiB path /dev/mapper/pool1
> 
> That didn't help much; the delta is now 19GiB.
> 
> saruman:/mnt/btrfs_pool1# btrfs balance start -dusage=80 -v .
> Dumping filters: flags 0x1, state 0x0, force is off
>   DATA (flags 0x2): balancing, usage=80
> Done, had to relocate 8 out of 207 chunks
> saruman:/mnt/btrfs_pool1# btrfs fi show .
> Label: 'btrfs_pool1'  uuid: fda628bc-1ca4-49c5-91c2-4260fe967a23
>       Total devices 1 FS bytes used 187.03GiB
>       devid    1 size 228.67GiB used 201.54GiB path /dev/mapper/pool1
> 
> Ok, now the delta is 14GiB.
> 
> saruman:/mnt/btrfs_pool1# btrfs balance start -musage=80 -v .
> Dumping filters: flags 0x6, state 0x0, force is off
>   METADATA (flags 0x2): balancing, usage=80
>   SYSTEM (flags 0x2): balancing, usage=80
> Done, had to relocate 5 out of 202 chunks
> saruman:/mnt/btrfs_pool1# btrfs fi show .
> Label: 'btrfs_pool1'  uuid: fda628bc-1ca4-49c5-91c2-4260fe967a23
>       Total devices 1 FS bytes used 188.24GiB
>       devid    1 size 228.67GiB used 203.54GiB path /dev/mapper/pool1
> 
> and it's back up to 15GiB :-/
> 
> How can I get 188.24 and 203.54 to converge further? Where has all
> that space gone?
> 

Most likely this is due to partially used extents, which have been
explained more than once on this list. When a large extent is partially
overwritten, the extent is not physically split - it remains allocated
in full, but only part of it is still referenced. Balance does not
change that (at least that is my understanding) - it relocates extents,
but here the fragmentation is internal to the extents themselves.
Defragmentation should rewrite the files, but if you have snapshots it
is unclear whether there will be any gain.
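
To make that concrete, here is a minimal, untested sketch of how such a
partially referenced extent comes into existence; the path and sizes
are made up, and it assumes an uncompressed, datacow file:

#!/usr/bin/python3
# Write one big extent, fsync it, then CoW-overwrite a slice in the middle.
# The file ends up referencing a new small extent plus most of the old big
# one, but the old extent stays allocated in full until nothing (including
# snapshots) references any part of it.
import os

path = '/mnt/btrfs_pool1/frag-demo'                # hypothetical test file
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.pwrite(fd, b'a' * (64 * 1024 * 1024), 0)        # one 64MiB write -> large extent(s)
os.fsync(fd)                                       # force the allocation now
os.pwrite(fd, b'b' * (4 * 1024 * 1024), 16 * 1024 * 1024)  # overwrite 4MiB in the middle
os.fsync(fd)
os.close(fd)
# filefrag -v on the file now shows the split mapping, while the bytes that
# were overwritten in the original extent remain allocated on disk.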

I wonder if there is any tool that can compute physical vs. logical
space consumption (i.e. how much of each extent is actually
referenced). It should be possible using python-btrfs, but probably
time-consuming, since it needs to walk every extent.
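
Something like the following could be a starting point - a rough,
untested sketch that assumes the FileSystem.chunks() / block_group() /
extents() interface used by the python-btrfs example scripts. It only
walks the data extents and totals their allocated bytes; the expensive
part, resolving each extent's back references to see how much of it is
still referenced, is left as a TODO:

#!/usr/bin/python3
# Sketch: walk every data extent with python-btrfs and report allocated
# bytes per block group. Computing the referenced (logical) bytes would
# additionally need a backref/file-extent lookup per extent.
import sys
import btrfs

BLOCK_GROUP_DATA = 1 << 0   # BTRFS_BLOCK_GROUP_DATA flag bit


def main(path):
    fs = btrfs.FileSystem(path)
    total_allocated = 0
    for chunk in fs.chunks():
        if not chunk.type & BLOCK_GROUP_DATA:
            continue
        bg = fs.block_group(chunk.vaddr, chunk.length)
        allocated = 0
        nr_extents = 0
        for extent in fs.extents(chunk.vaddr, chunk.vaddr + chunk.length - 1):
            allocated += extent.length
            nr_extents += 1
            # TODO: follow this extent's data backrefs and sum the byte
            # ranges that are still referenced to get the logical usage.
        total_allocated += allocated
        print("block group %d: %d extents, %.2f GiB allocated, "
              "%.2f GiB used per block group item"
              % (bg.vaddr, nr_extents, allocated / 2**30, bg.used / 2**30))
    print("total allocated in data extents: %.2f GiB" % (total_allocated / 2**30))


if __name__ == '__main__':
    main(sys.argv[1] if len(sys.argv) > 1 else '/')

Even without the backref part it shows where the allocated data bytes
sit; the per-extent reference walk is what would make it slow on a
filesystem with many snapshots.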
