On 2016-10-27 03:11, Qu Wenruo wrote:
On 10/26/2016 07:52 PM, none wrote:
On 2016-10-26 03:43, Qu Wenruo wrote:
Unfortunately, low-memory mode is correct here.

If btrfs-image dumped the image correctly, your extent tree is really
screwed up.

And how badly is it screwed up?
It only contains the basic block group info.
It is almost empty, without any really useful EXTENT_ITEM/METADATA_ITEM.
You can check it with btrfs-debug-tree -t extent.
Normally, each EXTENT_DATA or tree block should have a corresponding
EXTENT_ITEM or METADATA_ITEM in the extent tree.
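For reference, the check described here can be run as a one-liner (a sketch; /dev/sdX stands in for the actual unmounted device):

```shell
# Dump the extent tree (read-only) and count the back-reference items.
# A healthy fs of this size should show far more than a dozen.
btrfs-debug-tree -t extent /dev/sdX | grep -cE 'EXTENT_ITEM|METADATA_ITEM'
```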

But in your dump, I found fewer than a dozen EXTENT_ITEMs, which is
totally abnormal for the used size of your fs.
Please note that df -h reports 55 GB used due to a very high compression ratio.
Basically, most of the theoretical used space is taken by fewer than 100
files. I want to delete them.
That's why lowmem mode is reporting so many lost backrefs.
Without lowmem mode, only 3 lines are reported:

Failed to find [75191291904, 168, 4096]
btrfs unable to find ref byte nr 75191291904 parent 0 root 1  owner 1
offset 0
Failed to find [75191316480, 168, 4096]
btrfs unable to find ref byte nr 75191316480 parent 0 root 1  owner 0
offset 1
parent transid verify failed on 75191349248 wanted 3555361 found 3555362
Ignoring transid failure

and then it locks up the CPU.

It's the dead loop that makes btrfsck able to check only the first several
extents, with no way to continue.

If we solve the dead loop, the original btrfsck won't report fewer errors
either.
(Lowmem mode just avoids the possibility of a dead loop by its design.)
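For anyone following along, the lowmem check itself is invoked as follows (a sketch; /dev/sdX is a placeholder for the unmounted device):

```shell
# Lowmem mode walks the trees item by item instead of building an in-memory
# extent cache, which is why it cannot fall into this dead loop.
# It is read-only: it reports errors but repairs nothing.
btrfs check --mode=lowmem /dev/sdX
```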


It's almost a miracle that you can still write data into the fs.
And I strongly doubt the correctness of your existing files.
They are definitely correct. I have several root filesystems and I can
chroot into all of them (though I'm mounting the partition read-only in
order to avoid dangerous writes in that case). In each case I tried
Python and Ruby CGI scripts.

You should check more; normally scrub would help, but considering the
state of the btrfs, scrub may not work at all or may make things worse.
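If you do attempt a scrub despite that warning, a read-only pass at least avoids new writes (a sketch; /mnt stands for the actual mount point):

```shell
# -B runs the scrub in the foreground, -r makes it read-only:
# checksums are verified but nothing is rewritten.
btrfs scrub start -B -r /mnt
# Inspect the error counters afterwards.
btrfs scrub status /mnt
```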

As the extent tree is screwed up, it's entirely possible that new writes
are overwriting existing data.
I only attempted to write to 3 files. But yes, this is something
I suspected: that writing damages things.
The only chance seems to be --init-extent-tree, but that's very
dangerous, and I highly suspect the screwed-up extent tree was caused by
an interrupted extent tree rebuild.
The problem is that --init-extent-tree implies --repair, which discards
--mode=lowmem and causes the dead loop:
https://bugzilla.kernel.org/show_bug.cgi?id=178781
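Before running that repair, it is worth capturing the current metadata so the broken state can still be examined later. A hedged sketch (/dev/sdX is a placeholder, and the check command assumes the dead loop is fixed first):

```shell
# Metadata-only dump of the unmounted fs (-c9 = maximum compression).
btrfs-image -c9 /dev/sdX /root/btrfs-meta.img

# DANGEROUS: throws away and rebuilds the whole extent tree.
# --init-extent-tree implies --repair, so it currently hits the dead loop.
btrfs check --init-extent-tree /dev/sdX
```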

Yes, that's the problem, and the current situation may have been caused
by an interrupted extent tree rebuild.

Thanks,
Qu

And finally, I found several corrupt directories yesterday.

Do you mean it's impossible to rescue anything by repairing? (This is
something I doubt, since most files are valid.)

Not completely. I'm digging into the dead loop problem, and after that
you may still recover the fs (or part of it) using --init-extent-tree.

Thanks,
Qu


Thank you.

Hello, what's the status of my report since last October?

thanks,