On Mon, Mar 29, 2021 at 7:17 AM Richard Shaw <hobbes1...@gmail.com> wrote:
>
> So after about 12 forced power offs while copying data (via rsync) this is 
> the only output from dmesg:
>
> # dmesg | grep -i btrfs
> [    0.776375] Btrfs loaded, crc32c=crc32c-generic, zoned=yes
> [    5.497241] BTRFS: device fsid d9a2a011-77a2-43be-acd1-c9093d32125b devid 
> 1 transid 389 /dev/sdc scanned by systemd-udevd (732)
> [    5.521210] BTRFS: device fsid d9a2a011-77a2-43be-acd1-c9093d32125b devid 
> 2 transid 389 /dev/sdd scanned by systemd-udevd (743)
> [    6.743097] BTRFS info (device sdc): disk space caching is enabled
> [    6.743100] BTRFS info (device sdc): has skinny extents


This is the usual case. The Btrfs kernel code reads the superblock, which
points only to a completely consistent set of btrees. As a variation on
this, if a process making use of fsync was interrupted by the crash, you
might also see this line:

BTRFS info (device vda3): start tree-log replay

That is also normal and fine. No fsck or scrub is required. You don't
need to do anything. If the hardware is doing the correct thing, you can
hit the file system with power failures while writing all day long and it
will never care. This is by design. Btrfs developers also wrote
dm-log-writes expressly for power failure testing, and it's now used in
xfstests for regression testing all the common file systems in the
kernel. So there's quite a lot of certainty that write ordering is what
it should be.
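
For anyone curious what that testing looks like, here's a rough sketch
along the lines of the kernel's dm-log-writes documentation. The device
names and mount point are just placeholders: /dev/sdb is the device
under test, /dev/sdc holds the write log, and replay-log is the tool
shipped with xfstests. The idea is to record every write, then replay
the log only up to a chosen mark, which simulates the power dying at
exactly that point:

# TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc"
# dmsetup create log --table "$TABLE"
# mkfs.btrfs -f /dev/mapper/log
# dmsetup message log 0 mark mkfs
# mount /dev/mapper/log /mnt/test
    <run a workload that ends with an fsync>
# dmsetup message log 0 mark fsync
# umount /mnt/test
# dmsetup remove log
# replay-log --log /dev/sdc --replay /dev/sdb --end-mark fsync
# mount /dev/sdb /mnt/test

If write ordering is correct, that last mount succeeds and the fsync'd
data is intact, no matter which recorded point you replay to.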


> [   61.730008] BTRFS info (device sdc): the free space cache file 
> (1833981444096) is invalid, skip it
>
> Should I worry about the last line?

Nope. It's just gone stale, because it didn't get updated prior to the
power failure. Stale caches rebuild quickly on their own. And it's not
critical metadata, just an optimization.

(This is a reference to the v1 space cache, which exists as hidden files
in data block groups. The stable and soon-to-be-default v2 space cache
tree, also called the free space tree, moves this into metadata block
groups, and it's quite a lot more resilient and more performant for very
busy and larger file systems. Anyone can switch to v2 by doing a one-time
mount with the mount option space_cache=v2. Currently it needs to be a
fresh mount; there's a bug that otherwise causes it to claim it's using
v2 without actually setting up the v2 tree. Once this mount option is
used, a feature flag is set, and v2 will always be used from that point
on. It's not something that goes in fstab; use it one time and forget
about it, sorta like a file system upgrade if you will. Note that large
file systems might see a long first mount with this option set, because
the whole tree must be created and written before the mount can complete.
I've anecdotally heard of it taking hours for a 40T file system; for me,
on a full 1T file system, it took *maybe* 1 minute.)
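
Roughly, the one-time switch looks like this, using /dev/sdc and /mnt
purely as placeholders. The dump-super check afterward is just one way to
confirm the feature flag stuck; the exact output formatting can vary by
btrfs-progs version:

# mount -o space_cache=v2 /dev/sdc /mnt
# btrfs inspect-internal dump-super /dev/sdc | grep -A2 compat_ro_flags
compat_ro_flags         0x3
                        ( FREE_SPACE_TREE |
                          FREE_SPACE_TREE_VALID )

After that, ordinary mounts with no special options keep using the v2
tree.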

-- 
Chris Murphy