A lowmem-mode check on my backup pool reports dozens of 'backref lost'
errors on data extents. Excerpt:
----------------------------------------------------------
# ./btrfsck.static check --mode=lowmem /dev/sda1
checking extents
ERROR: data extent[33866182656 4096] backref lost
ERROR: data extent[37102219264 114688] backref lost
ERROR: data extent[37090353152 45056] backref lost
ERROR: data extent[37193342976 114688] backref lost
ERROR: data extent[50151686144 53248] backref lost
ERROR: data extent[49782943744 126976] backref lost
ERROR: data extent[49718861824 77824] backref lost
ERROR: data extent[33853538304 4096] backref lost
ERROR: data extent[41170333696 692224] backref lost
ERROR: data extent[37550239744 4096] backref lost
...
ERROR: data extent[44122832896 8192] backref lost
ERROR: data extent[45874237440 40960] backref lost
ERROR: errors found in extent allocation tree or chunk allocation
checking free space cache
block group 112772251648 has wrong amount of free space
failed to load free space cache for block group 112772251648
Wanted offset 127268339712, found 127268323328
Wanted offset 127268339712, found 127268323328
cache appears valid but isn't 127267766272
block group 133173346304 has wrong amount of free space
failed to load free space cache for block group 133173346304
Wanted offset 142837039104, found 142837022720
Wanted offset 142837039104, found 142837022720
cache appears valid but isn't 142837022720
ERROR: errors found in free space cache
Checking filesystem on /dev/sda1
UUID: 854e1bf5-7a98-4bcb-b971-0d9f2ac9452a
found 200684883968 bytes used, error(s) found
total csum bytes: 186712500
total tree bytes: 40191000576
total fs tree bytes: 39574110208
total extent tree bytes: 359038976
btree space waste bytes: 7605597010
file data blocks allocated: 946099404800
 referenced 1091731312640
----------------------------------------------------------
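For my own triage I summed the extent lengths from the check output (the
second bracketed number is the extent size in bytes). A quick shell sketch,
assuming the check output was saved to a hypothetical btrfs-check.log:

```shell
# Total up the extents flagged "backref lost".
# Each line looks like: ERROR: data extent[<bytenr> <length>] backref lost
# "btrfs-check.log" is a hypothetical capture of the check output.
grep 'backref lost' btrfs-check.log \
  | sed 's/.*\[\([0-9]*\) \([0-9]*\)\].*/\2/' \
  | awk '{ total += $1 } END { print total " bytes in " NR " extents" }'
```

That at least tells me whether the damage is a few KiB or something larger.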

Is that related to the flawed snapshots of the ailing rootfs volume?
Is it safe to try to roll back from it?
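For context on what I mean by rollback: if the backup snapshots check out,
my fallback plan is to pull files off the damaged rootfs pool read-only
with btrfs restore before re-provisioning. A sketch (the destination
directory is hypothetical):

```shell
# btrfs restore reads the filesystem structures directly and copies files
# out without writing anything to the damaged device (the old rootfs pool).
# The destination path below is hypothetical.
btrfs restore -v /dev/sdb3 /mnt/custom/rescue/restore-out/
```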

Cheers,

  M.


On Mon, Jan 29, 2018 at 2:49 PM, ^m'e <marc...@gmail.com> wrote:
> On Mon, Jan 29, 2018 at 2:04 PM, Qu Wenruo <quwenruo.bt...@gmx.com> wrote:
>>
>>
>> On 2018-01-29 21:58, ^m'e wrote:
>>> Thanks for the advice, Qu!
>>>
>>> I used the system for a while, did some package upgrades -- writing into
>>> the suspected corrupted area. Then I tried a btrfs send to my backup volume,
>>> and it failed miserably with a nice kernel oops.
>>>
>>> So I went for a lowmem repair:
>>> ----------------------------------------------------------------------------------------
>>> # ./btrfsck.static check --repair --mode=lowmem /dev/sdb3 2>&1 | tee
>>> /mnt/custom/rescue/btrfs-recovery/btrfs-repair.BTR-POOL.1.log
>>> WARNING: low-memory mode repair support is only partial
>>> Fixed 0 roots.
>>> checking extents
>>> checking free space cache
>>> checking fs roots
>>> ERROR: failed to add inode 28891726 as orphan item root 257
>>> ERROR: root 257 INODE[28891726] is orphan item
>>
>> At least I need to dig into the kernel code further to determine whether
>> the orphan inode handling in btrfs-progs is correct or not.
>>
>> So there won't be a dirty fix coming soon.
>>
>> Hopefully you have a good backup and can restore the system from it.
>>
>> At least the problem is limited to a very small range, and it's
>> something we could handle easily.
>>
>> Thanks for all your reports,
>> Qu
>>
>>
>
> Right.
>
> Meanwhile, could you please suggest the best course of action: btrfs
> rescue or btrfs restore?
> I have snapshots of my two subvols (rootfs and home); I'm fs-checking
> them now, just in case...
>
> Cheers,
>
>   Marco



-- 
^m'e
