>>> Now, the weird part for me is exclusive data count:
>>>
>>> # btrfs sub sh ./snapshot-171125
>>> [...]
>>>         Subvolume ID:           388
>>> # btrfs fi du -s ./snapshot-171125 
>>>      Total   Exclusive  Set shared  Filename
>>>   21.50GiB    63.35MiB    20.77GiB  snapshot-171125
>>>
>>> How is that possible? This doesn't even remotely relate to 7.15 GiB
>>> from qgroup. Roughly the same amount differs in total: 28.75-21.50=7.25 GiB.
>>> And the same happens with other snapshots, much more exclusive data
>>> shown in qgroup than actually found in files. So if not files, where
>>> is that space wasted? Metadata?
>>
>>    Personally, I'd trust qgroups' output about as far as I could spit
>> Belgium(*).
> 
> Well, there is something wrong here, as after removing the .ccache
> directories inside all the snapshots the 'excl' values decreased
> ...except for the last snapshot (the list below is short by ~40 snapshots
> that have 2 GB excl in total):
> 
> qgroupid         rfer         excl 
> --------         ----         ---- 
> 0/260        12.25GiB      3.22GiB    from 170712 - first snapshot
> 0/312        17.54GiB      4.56GiB    from 170811
> 0/366        25.59GiB      2.44GiB    from 171028
> 0/370        23.27GiB     59.46MiB    from 171118 - prev snapshot
> 0/388        21.69GiB      7.16GiB    from 171125 - last snapshot
> 0/291        24.29GiB      9.77GiB    default subvolume

You may need to manually sync the filesystem (trigger a transaction
commit) to update the qgroup accounting.
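
For example, a minimal illustration (assuming the filesystem is mounted at /):

# btrfs filesystem sync /
# btrfs qgroup show /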
> 
> 
> [~/test/snapshot-171125]#  du -sh .
> 15G     .
> 
> 
> After changing back to ro I tested how much data really has changed
> between the previous and last snapshot:
> 
> [~/test]#  btrfs send -p snapshot-171118 snapshot-171125 | pv > /dev/null
> At subvol snapshot-171125
> 74.2MiB 0:00:32 [2.28MiB/s]
> 
> This means there can't be 7 GiB of exclusive data in the last snapshot.

As mentioned before, sync the filesystem before checking the qgroup
numbers, or use the --sync option along with qgroup show.
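
For example (mount point assumed to be /):

# btrfs qgroup show --sync /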

> 
> Well, even btrfs send -p snapshot-170712 snapshot-171125 | pv > /dev/null
> 5.68GiB 0:03:23 [28.6MiB/s]
> 
> I've created a new snapshot right now to compare it with 171125:
> 75.5MiB 0:00:43 [1.73MiB/s]
> 
> 
> OK, I could even compare all the snapshots in sequence:
> 
> # for i in snapshot-17*; do btrfs prop set "$i" ro true; done
> # p=''; for i in snapshot-17*; do [ -n "$p" ] && btrfs send -p "$p" "$i" | pv > /dev/null; p="$i"; done
>  1.7GiB 0:00:15 [ 114MiB/s]
> 1.03GiB 0:00:38 [27.2MiB/s]
>  155MiB 0:00:08 [19.1MiB/s]
> 1.08GiB 0:00:47 [23.3MiB/s]
>  294MiB 0:00:29 [ 9.9MiB/s]
>  324MiB 0:00:42 [7.69MiB/s]
> 82.8MiB 0:00:06 [12.7MiB/s]
> 64.3MiB 0:00:05 [11.6MiB/s]
>  137MiB 0:00:07 [19.3MiB/s]
> 85.3MiB 0:00:13 [6.18MiB/s]
> 62.8MiB 0:00:19 [3.21MiB/s]
>  132MiB 0:00:42 [3.15MiB/s]
>  102MiB 0:00:42 [2.42MiB/s]
>  197MiB 0:00:50 [3.91MiB/s]
>  321MiB 0:01:01 [5.21MiB/s]
>  229MiB 0:00:18 [12.3MiB/s]
>  109MiB 0:00:11 [ 9.7MiB/s]
>  139MiB 0:00:14 [9.32MiB/s]
>  573MiB 0:00:35 [15.9MiB/s]
> 64.1MiB 0:00:30 [2.11MiB/s]
>  172MiB 0:00:11 [14.9MiB/s]
> 98.9MiB 0:00:07 [14.1MiB/s]
>   54MiB 0:00:08 [6.17MiB/s]
> 78.6MiB 0:00:02 [32.1MiB/s]
> 15.1MiB 0:00:01 [12.5MiB/s]
> 20.6MiB 0:00:00 [  23MiB/s]
> 20.3MiB 0:00:00 [  23MiB/s]
>  110MiB 0:00:14 [7.39MiB/s]
> 62.6MiB 0:00:11 [5.67MiB/s]
> 65.7MiB 0:00:08 [7.58MiB/s]
>  731MiB 0:00:42 [  17MiB/s]
> 73.7MiB 0:00:29 [ 2.5MiB/s]
>  322MiB 0:00:53 [6.04MiB/s]
>  105MiB 0:00:35 [2.95MiB/s]
> 95.2MiB 0:00:36 [2.58MiB/s]
> 74.2MiB 0:00:30 [2.43MiB/s]
> 75.5MiB 0:00:46 [1.61MiB/s]
> 
> That is 9.3 GB of total diffs between all the snapshots I have.
> Adding the 15 GB of the initial snapshot makes about 25 GB used,
> while df reports roughly twice that amount, far too much for overhead:
> /dev/sda2        64G   52G   11G  84% /
> 
> 
> # btrfs quota enable /
> # btrfs qgroup show /
> WARNING: quota disabled, qgroup data may be out of date
> [...]
> # btrfs quota enable /                - for the second time!
> # btrfs qgroup show /
> WARNING: qgroup data inconsistent, rescan recommended

Please wait for the rescan to finish, or the numbers will not be correct.
(Although they will only be lower than the actual occupied space.)
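
For example (assuming / is the mount point), you can start a rescan and wait
for it to finish before reading the numbers:

# btrfs quota rescan -w /
# btrfs qgroup show /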

It's highly recommended to read btrfs-quota(8) and btrfs-qgroup(8) to
make sure you understand all the limitations.

> [...]
> 0/428        15.96GiB     19.23MiB    newly created (now) snapshot
> 
> 
> 
> Assuming the qgroups output is bogus and the space isn't physically
> occupied (which is consistent with the btrfs fi du output and my
> expectation), the question remains: why is that bogus excl subtracted
> from the available space reported by df or btrfs fi df/usage? And how
> can it be reclaimed?

I already explained the difference in another thread.

Thanks,
Qu

> 
> 
> [~/test]#  btrfs device usage /
> /dev/sda2, ID: 1
>    Device size:            64.00GiB
>    Device slack:              0.00B
>    Data,single:             1.07GiB
>    Data,RAID1:             55.97GiB
>    Metadata,RAID1:          2.00GiB
>    System,RAID1:           32.00MiB
>    Unallocated:             4.93GiB
> 
> /dev/sdb2, ID: 2
>    Device size:            64.00GiB
>    Device slack:              0.00B
>    Data,single:           132.00MiB
>    Data,RAID1:             55.97GiB
>    Metadata,RAID1:          2.00GiB
>    System,RAID1:           32.00MiB
>    Unallocated:             5.87GiB
> 
