Re: Can the BTRFS transparent compression mess with RPM's free space checks?

2021-04-19 Thread Chris Murphy
On Mon, Apr 19, 2021 at 4:42 PM Lyes Saadi  wrote:
>
> I think I realized what went wrong: I compressed my filesystem _after_
> already having done some snapshots. I think it then duplicated all my
> files and basically filled my filesystem... And I did

Yeah, defragmentation unshares shared extents, so all the deduplication
from reflinks and snapshots is undone. It's best to limit the
defragmentation to a particular subvolume, or to directories in that
subvolume, rather than running it across all snapshots. Or just enable
compression in fstab and don't recompress already-written data; let it
compress new writes only.
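For example, something like this (untested, and the UUID and subvolume
names are placeholders for whatever your layout uses) recompresses only
one subvolume:

# btrfs filesystem defragment -r -czstd /home

while a compress mount option in /etc/fstab only affects new writes:

UUID=<your-uuid>  /home  btrfs  subvol=home,compress=zstd:1  0 0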

The autodefrag mount option is a bit different because it's only
triggered by small writes, less than 64KiB, either inserts into or
appends to an existing file. The target extent size is 128KiB, which is
also the maximum compression block size. So as a file is modified with
this write pattern, it's already unsharing any extents shared with
snapshots or reflinks. Since the whole file isn't (re)defragmented
every time autodefrag is triggered by the write pattern, it's not
going to add much (if anything) to the unsharing of extents.
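If you want to try it, autodefrag is just another mount option; roughly
(untested, placeholder UUID again), in /etc/fstab:

UUID=<your-uuid>  /  btrfs  subvol=root,compress=zstd:1,autodefrag  0 0

or toggled at runtime with:

# mount -o remount,autodefrag /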



-- 
Chris Murphy


Re: Can the BTRFS transparent compression mess with RPM's free space checks?

2021-04-19 Thread Lyes Saadi
I think I realized what went wrong: I compressed my filesystem _after_ 
already having done some snapshots. I think it then duplicated all my 
files and basically filled my filesystem... And I did



Sorry for that! I'm happy to be wrong at least. And thank you for this 
great answer!




Re: Can the BTRFS transparent compression mess with RPM's free space checks?

2021-04-19 Thread Dominique Martinet
Lyes Saadi wrote on Mon, Apr 19, 2021 at 10:56:51PM +0100:
> It's a bit late to ask this question, but it came up when I noticed that,
> after upgrading my PC to Silverblue 34, manually compressing my files, and
> taking some snapshots, rpm-ostree began complaining about a lack of free
> space... while compsize reported that I was using only 84 GB (or GiB?) of
> my 249 GB filesystem... I then realized that, because of the compression
> and the snapshots, ostree thought that my disk was full. The same problem
> happened with gnome-disk. I reported both issues[1][2].

Err, no.
btrfs has long been reporting sensible values in statfs that programs can
rely on; compsize is only there if you're curious, and for debugging.
In this case your filesystem really is almost full (around 8GB free
according to your output).

That was a problem very early on, and basically everyone complained that
an unusable df would break too many programs.
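If you don't want to take df's word for it, any statfs-based tool will
report the same numbers; for example (just illustrative):

# stat -f --format='blocks=%b free=%f block size=%S' /
# df -B1 --output=size,avail /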


You probably compressed your / but have snapshots lying around that
still take up space and weren't included in your compsize command?



If you don't trust df (statfs), there are two btrfs commands to look at
for more details; here's what they give on my system:

# btrfs fi df /
Data, single: total=278.36GiB, used=274.63GiB
System, DUP: total=32.00MiB, used=48.00KiB
Metadata, DUP: total=9.29GiB, used=6.88GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

# btrfs fi usage /
Overall:
    Device size:                 330.00GiB
    Device allocated:            297.00GiB
    Device unallocated:           33.00GiB
    Device missing:                  0.00B
    Used:                        288.39GiB
    Free (estimated):             36.73GiB      (min: 20.23GiB)
    Free (statfs, df):            36.73GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:278.36GiB, Used:274.63GiB (98.66%)
   /dev/mapper/slash         278.36GiB

Metadata,DUP: Size:9.29GiB, Used:6.88GiB (74.09%)
   /dev/mapper/slash          18.57GiB

System,DUP: Size:32.00MiB, Used:48.00KiB (0.15%)
   /dev/mapper/slash          64.00MiB

Unallocated:
   /dev/mapper/slash          33.00GiB


And for comparison:
# df -h /
Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/slash  330G  289G   37G  89% /


In all cases, the Used column actually corresponds to the compressed
size -- real blocks on disk, not the uncompressed data size.
I have way too many subvolumes, but here's an attempt to account for
that 289G "used"; I'm lazy, so without snapshots:
# compsize -x / /home /var /var/lib/machines/ /nix
Processed 2722869 files, 1820146 regular extents (2063805 refs), 1625123 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       76%         232G         302G         317G
none       100%         196G         196G         194G
zstd        33%          34G         104G         122G
prealloc   100%         1.0G         1.0G         553M

Hm, not very convincing; adding a few snapshots (there are more, and I
guess adding all of them would bring the Disk Usage column up to 289G,
but that just takes too long for this mail -- the "proper" way to track
snapshot usage would be quotas, but I don't have those enabled here; see
the sketch after the output below):
# compsize -x / /home /var /var/lib/machines/ /nix /.snapshots/{19,20}*/snapshot
Processed 10803451 files, 2110568 regular extents (7656942 refs), 5960388 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       75%         249G         331G         732G
none       100%         206G         206G         281G
zstd        33%          41G         123G         451G
prealloc   100%         1.0G         1.0G         551M
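On the quota note, enabling it is roughly (untested here, and qgroups do
have some performance cost):

# btrfs quota enable /
# btrfs quota rescan -w /
# btrfs qgroup show /

which then lists referenced and exclusive usage per subvolume/snapshot.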



I would suggest finding what subvolumes you have (btrfs subvolume list /)
and cleaning up the old ones. I'm not sure what is used by default
nowadays (snapper?); there might be higher-level commands for this.

They might not be visible from your mountpoint if your setup mounts a
subvolume by default; in that case you can mount your btrfs volume
somewhere else with -o subvol=/, for example, to see everything and play
with compsize if you want.
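Roughly (untested; my device name and the snapshot path below are just
placeholders, substitute your own):

# mount -o subvol=/ /dev/mapper/slash /mnt
# btrfs subvolume list /mnt
# compsize -x /mnt

and once you're sure a snapshot is no longer needed:

# btrfs subvolume delete /mnt/path/to/old-snapshot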

-- 
Dominique


Can the BTRFS transparent compression mess with RPM's free space checks?

2021-04-19 Thread Lyes Saadi

Hello!


It's a bit late to ask this question, but it came up when I noticed that,
after upgrading my PC to Silverblue 34, manually compressing my files, and
taking some snapshots, rpm-ostree began complaining about a lack of free
space... while compsize reported that I was using only 84 GB (or GiB?) of
my 249 GB filesystem... I then realized that, because of the compression
and the snapshots, ostree thought that my disk was full. The same problem
happened with gnome-disk. I reported both issues[1][2].



But in the rpm-ostree issue, jlebon raised an important question: does
this also happen with RPM? Because if it does, it could mess up the next
Fedora release quite a bit...



Note: I am not an expert in BTRFS or RPM, so I'm sorry if this question
is irrelevant.



[1]: https://github.com/coreos/rpm-ostree/issues/2761

[2]: https://gitlab.gnome.org/GNOME/gnome-disk-utility/-/issues/211