Thanks for the Help

I got my data back.

But now I'm wondering... how did it get this far?

Was it LUKS/dm-crypt?

What did I do wrong? An old Ubuntu kernel? (Ubuntu 18.04)

What should I do now to use btrfs safely? Should I not use it with dm-crypt?

Or even use ZFS instead...

On 11/06/2019 at 15:02, Qu Wenruo wrote:

On 2019/6/11 6:53 PM, claud...@winca.de wrote:
Hi guys,

you are my last resort. I was so happy to use BTRFS, but now I really hate
it....


Linux CIA 4.15.0-51-generic #55-Ubuntu SMP Wed May 15 14:27:21 UTC 2019
x86_64 x86_64 x86_64 GNU/Linux
btrfs-progs v4.15.1
So old kernel and old progs.

btrfs fi show
Label: none  uuid: 9622fd5c-5f7a-4e72-8efa-3d56a462ba85
         Total devices 1 FS bytes used 4.58TiB
         devid    1 size 7.28TiB used 4.59TiB path /dev/mapper/volume1


dmesg

[57501.267526] BTRFS info (device dm-5): trying to use backup root at
mount time
[57501.267528] BTRFS info (device dm-5): disk space caching is enabled
[57501.267529] BTRFS info (device dm-5): has skinny extents
[57507.511830] BTRFS error (device dm-5): parent transid verify failed
on 2069131051008 wanted 4240 found 5115
Some metadata CoW was not recorded correctly.

I hope you didn't try any btrfs check --repair|--init-* or anything
other than --readonly, as there is a long-existing bug in btrfs-progs
which could cause similar corruption.
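
For reference, a safe read-only check looks like this (just a sketch,
using the device path from the output above):

  # diagnosis only; --readonly never modifies the filesystem
  btrfs check --readonly /dev/mapper/volume1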



[57507.518764] BTRFS error (device dm-5): parent transid verify failed
on 2069131051008 wanted 4240 found 5115
[57507.519265] BTRFS error (device dm-5): failed to read block groups: -5
[57507.605939] BTRFS error (device dm-5): open_ctree failed


btrfs check /dev/mapper/volume1
parent transid verify failed on 2069131051008 wanted 4240 found 5115
parent transid verify failed on 2069131051008 wanted 4240 found 5115
parent transid verify failed on 2069131051008 wanted 4240 found 5115
parent transid verify failed on 2069131051008 wanted 4240 found 5115
Ignoring transid failure
extent buffer leak: start 2024985772032 len 16384
ERROR: cannot open file system



I'm not able to mount it anymore.


I found the drive mounted read-only the other day and realized something
was wrong... I did a reboot and now I can't mount it anymore.
The btrfs extent tree must have been corrupted at that time.

Full recovery back to a fully RW-mountable fs doesn't look possible,
as metadata CoW is completely screwed up in this case.

You could either use btrfs-restore to try to restore the data to
another location.
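
A rough sketch of that route (the target /mnt/recovery is just an
example path; it should be on a different, healthy disk):

  # dry run first: list what would be recovered, write nothing
  btrfs restore -D /dev/mapper/volume1 /mnt/recovery
  # then restore for real: -v verbose, -i ignore errors, -m restore metadata
  btrfs restore -v -i -m /dev/mapper/volume1 /mnt/recovery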

Or try my kernel branch:
https://github.com/adam900710/linux/tree/rescue_options

It's an older branch based on v5.1-rc4, but it has some extra new mount
options. For your case, you need to compile that kernel, then mount with
"-o ro,rescue=skip_bg,rescue=no_log_replay".
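
Roughly, the steps would be something like this (a sketch; /mnt/rescue
is a placeholder mountpoint, and the config step assumes you start from
the running kernel's config):

  git clone -b rescue_options https://github.com/adam900710/linux.git
  cd linux
  cp /boot/config-$(uname -r) .config   # reuse the current kernel config
  make olddefconfig                     # accept defaults for new options
  make -j$(nproc) && sudo make modules_install install
  # after booting the new kernel:
  sudo mount -o ro,rescue=skip_bg,rescue=no_log_replay /dev/mapper/volume1 /mnt/rescue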

If it mounts (as RO), then do all your salvage.
It should be faster than btrfs-restore, and you can use all your
regular tools to back up.
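
For example (a sketch; /mnt/rescue and /mnt/backup are placeholder paths):

  # copy everything off the read-only mount, preserving attributes
  rsync -aHAX --progress /mnt/rescue/ /mnt/backup/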

Thanks,
Qu


Any help?
