btrfs rescue chunk-recover -y -vv /dev/sdh
###
‐‐‐ Original Message ‐‐‐
On Sunday, 31. March 2019 16:38, berodual_xyz wrote:
> Adding to last email, being more brave to test out commands now:
>
> Different crash than on last report.
>
>
(__libc_start_main+0xf5)[0x7f348d9a4b35]
btrfs[0x40c509]
[1]1652 abort btrfs check --init-extent-tree --mode=lowmem -p /dev/sdh
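A read-only lowmem pass could be captured on the clone before re-running the
--init-extent-tree step (which writes to the filesystem). This is only a sketch:
the device path is the one from this thread, the log file name is made up.
##
# capture a read-only lowmem check first, keeping the full output
$ btrfs check --mode=lowmem --readonly /dev/sdh 2>&1 | tee btrfs-check-lowmem.log
##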
‐‐‐ Original Message ‐‐‐
On Sunday, March 31, 2019 4:21 PM, berodual_xyz wrote:
> Dear list,
>
> following from earlier this week's case of attempting
Dear list,
following from earlier this week's case of attempting data rescue on a corrupt
FS, we have now cloned all devices and can do potentially dangerous rescue
attempts (since we can re-clone the original disks).
On kernel 4.20.17 and btrfs-progs 4.20.2:
##
$ btrfs inspect-internal tree-s
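(For completeness: one way to take such clones of a member device is sketched below.
This is not necessarily what was actually run here - the target device /dev/sdX and
the map file name are hypothetical, only /dev/sdh comes from the thread.)
##
# clone one member before any write-mode repair attempt; resumable via the map file
$ ddrescue -f /dev/sdh /dev/sdX sdh-clone.map
# or, with plain dd, padding unreadable blocks with zeros
$ dd if=/dev/sdh of=/dev/sdX bs=64K conv=noerror,sync status=progress
##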
Thanks for the extensive answer, Chris!
> a. Check with the manufacturer of the hardware raid for firmware
> updates for all the controllers. Also check if the new version is
> backward compatible with an array made with the version you have, and
> if not, if downgrade is possible. That way you h
Dear Chris,
correct - the metadata profile was set to single (with the idea of confining
metadata updates to a smaller subset of disks instead of creating IO overhead
between "data" operations and "metadata" updates).
It seems that "-o clear_cache" was used early on in an attempt to fix
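For reference, "clear_cache" only asks the kernel to rebuild the free space cache and
"nospace_cache" skips it entirely, so both only matter once the filesystem mounts at
all. A sketch of the two options (the mount point is hypothetical):
##
# rebuild the free space cache on the next (writable) mount
$ mount -o clear_cache /dev/sdh /mnt/recovery
# or skip the space cache entirely on a read-only mount
$ mount -o ro,nospace_cache /dev/sdh /mnt/recovery
##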
Wenruo wrote:
> On 2019/3/26 4:52 PM, berodual_xyz wrote:
>
> > Thank you both for your input.
> > see below.
> >
> > > > Your sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
> > > > It's possible to allow the kernel to manually assemble
Thank you both for your input.
see below.
> > Your sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
> > It's possible to allow the kernel to manually assemble its device list using
> > "device=" mount option.
> > Since you're using RAID6, it's possible to recover using 2 devices only,
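What Qu describes would look roughly like the following - a sketch only: the member
devices are the ones named above, the mount point is hypothetical, and "degraded" is
only needed when members are actually left out (RAID6 tolerates two missing devices).
##
# point the kernel at the members explicitly, read-only
$ mount -o ro,device=/dev/sda,device=/dev/sdb,device=/dev/sdd,device=/dev/sde /dev/sda /mnt/recovery
# with up to two members dropped, "degraded" is required as well
$ mount -o ro,degraded,device=/dev/sdd,device=/dev/sde /dev/sdd /mnt/recovery
##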
Dear all, I had posted already (apologies for the separate mails) that I have a
corrupt filesystem that is very important for us to recover.
Please note I would really appreciate assistance and am willing to PAY for
consultation and time.
Kernel 4.20.17
btrfs-progs 4.20.2
The filesystem consists
Dear Hugo, please see below.
‐‐‐ Original Message ‐‐‐
On Monday, 25. March 2019 23:56, Hugo Mills wrote:
> On Mon, Mar 25, 2019 at 10:51:24PM +0000, berodual_xyz wrote:
>
> > Running "btrfs check" on the 3rd of the 4 devices the volume consists of
> > cr
6 wanted 60234 found 60230
[33814.361764] BTRFS error (device sdd): failed to read chunk root
[33814.373140] BTRFS error (device sdd): open_ctree failed
##
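Since the complaint is about the chunk root, comparing what each member's superblock
records can show which devices are out of sync. A sketch, reusing the device names
from earlier in the thread:
##
# compare superblock generation and chunk root fields across the members
$ for d in /dev/sda /dev/sdb /dev/sdd /dev/sde; do echo "== $d =="; btrfs inspect-internal dump-super "$d" | grep -E '^(generation|chunk_root)'; done
##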
Again, thank you very much for all help!
‐‐‐ Original Message ‐‐‐
On Monday, March 25, 2019 11
Dear all,
on a large btrfs-based filesystem (multi-device raid0 - all devices okay,
nodatacow,nodatasum...) I experienced severe filesystem corruption, most likely
due to a hard reset with in-flight data.
The filesystem cannot be mounted (also not with "ro,nologreplay" / "nospace_cache" etc.).
Running "b