An Ubuntu 18.04 desktop system (Linux version 4.15.0-140-generic 
(buildd@lgw01-amd64-054) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) 
#144-Ubuntu SMP Fri Mar 19 14:12:35 UTC 2021 (Ubuntu 4.15.0-140.144-generic 
4.15.18)) that I was logged into, but that was idle at the time, spontaneously 
remounted the primary filesystem read-only due to an error during automatic 
balancing. (Naturally, many processes failed after that.)

Apr 16 16:03:25 kernel: BTRFS: error (device dm-17) in balance_level:1962: 
errno=-117 unknown
Apr 16 16:03:25 kernel: BTRFS info (device dm-17): forced readonly
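
For reference, errno 117 is EUCLEAN ("Structure needs cleaning"), which is
what btrfs returns when it detects metadata corruption; the log says "unknown"
only because the kernel has no message string for that code. A stock Python
one-liner will decode it:

$ python3 -c 'import errno, os; print(errno.errorcode[117], os.strerror(117))'
EUCLEAN Structure needs cleaning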


I have since attempted to recover this filesystem with the latest kernel and 
btrfs-progs, without success; in fact, the attempt seemed to make things worse. 

$ sudo btrfs check --readonly /dev/mapper/ub
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
root 895 inode 24368511 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 4096
root 895 inode 24368517 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 12288
root 895 inode 24368519 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 57344
root 895 inode 24368522 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 20480
root 1386 inode 23147891 errors 100, file extent discount
Found file extent holes:
        start: 32768, len: 90112
ERROR: errors found in fs roots
Opening filesystem to check...
Checking filesystem on /dev/mapper/ub
UUID: 10f30b03-9566-4d46-997f-c80b29dd3589
found 139004862505 bytes used, error(s) found
total csum bytes: 127989452
total tree bytes: 3912400896
total fs tree bytes: 3542646784
total extent tree bytes: 197771264
btree space waste bytes: 932473031
file data blocks allocated: 4659957428224
 referenced 591153811456


$ sudo btrfs check --repair /dev/mapper/ub

enabling repair mode
WARNING:

        Do not use --repair unless you are advised to do so by a developer
        or an experienced user, and then only after having accepted that no
        fsck can successfully repair all types of filesystem corruption. Eg.
        some software or hardware bugs can fatally damage a volume.
        The operation will start in 10 seconds.
        Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1[1/7] checking root items
Fixed 0 roots.
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
root 895 inode 24368511 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 4096
root 895 inode 24368517 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 12288
root 895 inode 24368519 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 57344
root 895 inode 24368522 errors 100, file extent discount
Found file extent holes:
        start: 0, len: 20480
ERROR: errors found in fs roots

Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/mapper/ub
UUID: 10f30b03-9566-4d46-997f-c80b29dd3589
No device size related problem found
cache and super generation don't match, space cache will be invalidated
Fixed discount file extents for inode: 23147891 in root: 1386
found 139004862505 bytes used, error(s) found
total csum bytes: 127989452
total tree bytes: 3912400896
total fs tree bytes: 3542646784
total extent tree bytes: 197771264
btree space waste bytes: 932473031
file data blocks allocated: 4659957428224
 referenced 591153811456


$ sudo btrfs check --readonly /dev/mapper/ub

(See the attached output; the repair seemed to cause more problems than it 
solved.)

Attachment: btrfs-check-ro2.out



Any ideas about how to fix it?  (I don’t have time to hack the kernel myself.)
And if not, how can I quickly get a list of the affected files?
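
My current plan, absent a better tool, is to map the root/inode pairs from the
check output back to paths myself, assuming the filesystem can still be
mounted read-only: first match the root IDs (895, 1386) to subvolumes, then
resolve each inode inside the matching subvolume:

$ sudo btrfs subvolume list /mnt/ub    # find the paths for subvolume IDs 895 and 1386
$ sudo btrfs inspect-internal inode-resolve 24368511 /mnt/ub/SUBVOL_PATH

(SUBVOL_PATH here is a placeholder for whatever path "subvolume list" reports
for that root ID; the same inode-resolve call would be repeated for each inode
flagged by check.)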

Also, is there a way to turn off automatic balancing, so that something like 
this does not happen at a most inconvenient time in the future?  (I know 
someone who once decided to defrag his hard drive the night before taxes were 
due, turning his system into a brick; personally, I’d rather not do anything 
that risky at a critical time!) 
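
The closest I have found so far, assuming the periodic balance comes from the
btrfsmaintenance package (if installed) rather than the kernel itself, would
be to disable its timer or cron job, something like:

$ systemctl list-timers | grep -i btrfs        # look for btrfs-balance.timer
$ sudo systemctl disable --now btrfs-balance.timer

or, on cron-based installs, setting BTRFS_BALANCE_PERIOD="none" in the
btrfsmaintenance config file (on Ubuntu, /etc/default/btrfsmaintenance, if I
have that path right).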
Thanks!


More info:

[liveuser@localhost-live ~]$ sudo mount -t btrfs -o recovery,ro /dev/mapper/ub 
/mnt/ub 
[liveuser@localhost-live ~]$ uname -a
Linux localhost-live 5.11.12-300.fc34.x86_64 #1 SMP Wed Apr 7 16:31:13 UTC 2021 
x86_64 x86_64 x86_64 GNU/Linux
[liveuser@localhost-live ~]$ sudo btrfs --version
btrfs-progs v5.11.1 
[liveuser@localhost-live ~]$ sudo btrfs fi show
Label: none  uuid: 10f30b03-9566-4d46-997f-c80b29dd3589
        Total devices 1 FS bytes used 129.46GiB
        devid    1 size 138.37GiB used 138.37GiB path /dev/mapper/ub

Label: 'docker-data-root'  uuid: 557fa6a5-5212-4d7a-b0af-297673f7bc45
        Total devices 1 FS bytes used 70.14MiB
        devid    1 size 24.00GiB used 4.28GiB path 
/dev/mapper/ultrafastvg-docker--data

[liveuser@localhost-live ~]$ sudo btrfs fi df /mnt/ub
Data, single: total=128.06GiB, used=125.81GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=5.12GiB, used=3.64GiB
GlobalReserve, single: total=352.50MiB, used=0.00B
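
One thing that stands out above: "fi show" reports devid 1 with used equal to
size (138.37GiB), i.e. the device is fully allocated. The newer summary
command shows the unallocated figure explicitly, if that helps diagnose the
balance failure:

$ sudo btrfs filesystem usage /mnt/ub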


[liveuser@localhost-live ~]$ dmesg | tail -5
[  787.767187] BTRFS: device fsid 10f30b03-9566-4d46-997f-c80b29dd3589 devid 1 
transid 12553082 /dev/dm-19 scanned by systemd-udevd (3026)
[ 1299.217577] BTRFS warning (device dm-19): 'recovery' is deprecated, use 
'rescue=usebackuproot' instead
[ 1299.217583] BTRFS info (device dm-19): trying to use backup root at mount 
time
[ 1299.217585] BTRFS info (device dm-19): disk space caching is enabled
[ 1299.217587] BTRFS info (device dm-19): has skinny extents
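
Per the deprecation warning above, the modern equivalent of the recovery mount
used earlier would be:

$ sudo mount -t btrfs -o ro,rescue=usebackuproot /dev/mapper/ub /mnt/ub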


