On 2019/6/23 3:55 PM, Qu Wenruo wrote:
> 
> 
> On 2019/6/22 11:11 PM, Andrei Borzenkov wrote:
> [snip]
>>
>> 10:/mnt # dd if=/dev/urandom of=test/file bs=1M count=100 seek=0
>> conv=notrunc
>> 100+0 records in
>> 100+0 records out
>> 104857600 bytes (105 MB, 100 MiB) copied, 0.685532 s, 153 MB/s
>> 10:/mnt # sync
>> 10:/mnt # btrfs qgroup show .
>> qgroupid         rfer         excl
>> --------         ----         ----
>> 0/5          16.00KiB     16.00KiB
>> 0/258         1.01GiB    100.02MiB
>> 0/263         1.00GiB     85.02MiB
> 
> Sorry, I can't really reproduce it.
> 
> 5.1.12 kernel, using the following script:
> ---
> #!/bin/bash
> 
> dev="/dev/data/btrfs"
> mnt="/mnt/btrfs"
> 
> umount $dev &> /dev/null
> mkfs.btrfs -f $dev > /dev/null
> 
> mount $dev $mnt
> btrfs sub create $mnt/subv1
> btrfs quota enable $mnt
> btrfs quota rescan -w $mnt
> 
> xfs_io -f -c "pwrite 0 1G" $mnt/subv1/file1
> sync
> btrfs sub snapshot $mnt/subv1 $mnt/subv2
> sync
> btrfs qgroup show -prce $mnt
> 
> xfs_io -c "pwrite 0 100m" $mnt/subv1/file1
> sync
> btrfs qgroup show -prce $mnt
> ---
> 
> The result is:
> ---
> Create subvolume '/mnt/btrfs/subv1'
> wrote 1073741824/1073741824 bytes at offset 0
> 1 GiB, 262144 ops; 0.5902 sec (1.694 GiB/sec and 444134.2107 ops/sec)
> Create a snapshot of '/mnt/btrfs/subv1' in '/mnt/btrfs/subv2'
> qgroupid         rfer         excl     max_rfer     max_excl parent  child
> --------         ----         ----     --------     -------- ------  -----
> 0/5          16.00KiB     16.00KiB         none         none ---     ---
> 0/256         1.00GiB     16.00KiB         none         none ---     ---
> 0/259         1.00GiB     16.00KiB         none         none ---     ---
> wrote 104857600/104857600 bytes at offset 0
> 100 MiB, 25600 ops; 0.0694 sec (1.406 GiB/sec and 368652.9766 ops/sec)
> qgroupid         rfer         excl     max_rfer     max_excl parent  child
> --------         ----         ----     --------     -------- ------  -----
> 0/5          16.00KiB     16.00KiB         none         none ---     ---
> 0/256         1.10GiB    100.02MiB         none         none ---     ---
> 0/259         1.00GiB     16.00KiB         none         none ---     ---
> ---
> 
>> 10:/mnt # filefrag -v test/file
>> Filesystem type is: 9123683e
>> File size of test/file is 1073741824 (262144 blocks of 4096 bytes)

My bad, I was still using 512 bytes as the block size.
With a 4K block size, the fiemap result matches.
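
For reference, filefrag can also be asked to report directly in 4KiB
units, which avoids the unit mismatch in the first place:

# filefrag -v -b4096 test/file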

Please discard my previous comment.

We still need to check the data extent layout to see what is going on.

Would you please provide the following output?
# btrfs ins dump-tree -t 258 /dev/vdb
# btrfs ins dump-tree -t 263 /dev/vdb
# btrfs check /dev/vdb
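
If the full dumps are too large to attach, a filtered view of just the
file extent items should be enough, e.g. (the grep context size here is
just a rough sketch):

# btrfs ins dump-tree -t 258 /dev/vdb | grep -A2 EXTENT_DATA
# btrfs ins dump-tree -t 263 /dev/vdb | grep -A2 EXTENT_DATA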

If the last command reports a qgroup mismatch, then the qgroup numbers
are indeed incorrect.
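
If it does, a rescan (as in the reproduction script above) should bring
the counters back in line, though that only resets the numbers and will
not tell us how they drifted:

# btrfs quota rescan -w /mnt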

Also, I see your subvolume ids are not contiguous. Did you create or
remove some other subvolumes during your test?
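
For completeness, the current subvolume list would help rule that out:

# btrfs sub list /mnt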

Thanks,
Qu

>> Oops. Where does the 85MiB exclusive usage in the snapshot come from?
>> I would expect one of
>>
>> - 0 exclusive, because the original first extent is still referenced
>> by test (even if only partially), so if qgroup counts physical space
>> usage, snap1 effectively refers to the same physical extents as test.
>>
>> - 100MiB exclusive if qgroup counts logical space consumption,
>> because the snapshot now has 100MiB of different data.
>>
>> But 85MiB? It does not match either expected value. Judging by the
>> 1.01GiB of referenced space for subvolume test, qgroup counts physical
>> usage, in which case the snapshot's exclusive space consumption should
>> remain 0.
>>
> 
