On 2018-10-27 04:19 PM, Marc MERLIN wrote:

> Thanks for confirming. Because I always have snapshots for btrfs
> send/receive, defrag will duplicate as you say, but once the older
> snapshots get freed up, the duplicate blocks should go away, correct?
> 
> Back to usage, thanks for pointing out that command:
> saruman:/mnt/btrfs_pool1# btrfs fi usage .
> Overall:
>     Device size:                 228.67GiB
>     Device allocated:            203.54GiB
>     Device unallocated:           25.13GiB
>     Device missing:                  0.00B
>     Used:                        192.01GiB
>     Free (estimated):             32.44GiB      (min: 19.88GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   2.00
>     Global reserve:              512.00MiB      (used: 0.00B)
> 
> Data,single: Size:192.48GiB, Used:185.16GiB
>    /dev/mapper/pool1   192.48GiB
> 
> Metadata,DUP: Size:5.50GiB, Used:3.42GiB
>    /dev/mapper/pool1    11.00GiB
> 
> System,DUP: Size:32.00MiB, Used:48.00KiB
>    /dev/mapper/pool1    64.00MiB
> 
> Unallocated:
>    /dev/mapper/pool1    25.13GiB
> 
> 
> I'm still seeing that I'm using 192GB, but 203GB allocated.
> Do I have 25GB usable:
>     Device unallocated:                 25.13GiB
> 
> Or 35GB usable?
>     Device size:               228.67GiB
>       -
>     Used:                        192.01GiB
>       = 36GB ?    
> 


The answer is somewhere between the two.  (BTRFS's own estimate of
32.44GiB free is probably as close as you'll get to a prediction.)

So you have 7.32GB of space that is allocated for data but still free
(192.48GiB allocated minus 185.16GiB used), plus 25GB of completely
unallocated disk space.  However, as you add more data, or create more
snapshots and cause metadata duplication, some of that 25GB will be
allocated for metadata.  Remember that metadata is DUP: the 3.42GiB of
metadata you are using now actually occupies 6.84GiB of disk space, out
of the 11GiB allocated.
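
For what it's worth, you can reproduce both of those estimates from the
numbers above (this is just my reading of how the estimate is computed,
not something I've verified in the btrfs code):

    unused-but-allocated data:  192.48GiB - 185.16GiB   =  7.32GiB
    free (estimated):             7.32GiB +  25.13GiB   = 32.45GiB  (~32.44GiB)
    free (min), assuming all
    unallocated goes to DUP
    metadata:                     7.32GiB + 25.13GiB/2  = 19.89GiB  (~19.88GiB)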

You want to be careful that unallocated space doesn't run out.  If the
filesystem runs out of usable space for metadata, it can be tricky to
get yourself out of that corner.  That is why a large discrepancy
between Data Size and Used would be a concern: if those 25GB were
allocated to data chunks, you would get out-of-space errors even though
the 25GB was still unused.
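
If you ever do get into that corner (or just want to hand
mostly-empty data chunks back to the unallocated pool), a filtered
balance is the usual tool; something like:

    btrfs balance start -dusage=50 /mnt/btrfs_pool1

only rewrites data chunks that are at most 50% full, so it is far
cheaper than a full balance.  (The exact usage threshold is up to you;
this is just a sketch.)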

On that note, you seem to have a rather high metadata-to-data ratio
(at least, compared to my limited experience).  Are you using noatime
on your filesystems?  Without it, snapshots will end up causing
duplicated metadata whenever atime updates.
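
For reference, enabling it would look something like this in
/etc/fstab (the device and mount point here are just taken from your
output above, adjust to taste):

    /dev/mapper/pool1  /mnt/btrfs_pool1  btrfs  noatime  0  0

or non-persistently with "mount -o remount,noatime /mnt/btrfs_pool1".
With noatime, merely reading a file no longer updates its inode, so
read activity on a snapshotted subvolume doesn't force metadata pages
to be CoWed away from the snapshot's copies.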



