Dear Chris,

Correct - the metadata profile was set to single (with the thought of
consolidating metadata updates onto a smaller subset of disks instead of
creating IO overhead between "data" operations and "metadata" updates).
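
For reference, this is roughly how such a layout would be created and
verified (a minimal sketch; the device names and mount point are
placeholders, and raid0 data is assumed per the earlier discussion):

    # -m sets the metadata profile, -d the data profile (hypothetical devices)
    mkfs.btrfs -m single -d raid0 /dev/sda /dev/sdb

    # on the mounted filesystem, profiles are reported per block group type
    btrfs filesystem df /mnt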

It seems that "-o clear_cache" was used early on in an attempt to fix the root
issue of not being able to mount the filesystem (potentially a race condition
between systemd not yet having the devices active and the mount process).
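
For context, the option would have been passed roughly like this (device and
mount point are placeholders); it discards the free space cache so that it
gets rebuilt, and is normally only needed for a single mount:

    mount -o clear_cache /dev/sdX /mnt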

I saw the posts regarding clear_cache corrupting filesystems. Could this be the 
case here?

"btrfs restore" has retrieved a lot of the files (but not all) and 
unfortunately most of the seem corrupt after about 1G file length. Smaller 
files seem fine.
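
For reference, an invocation would look roughly like this (a sketch; device
and target path are placeholders):

    # dry run first to see what would be recovered
    btrfs restore -D -v /dev/sdX /mnt/recovery

    # actual run: -i ignores errors, -m restores owner/mode/timestamps
    btrfs restore -v -i -m /dev/sdX /mnt/recovery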

My questions now:

* What is the chance of "btrfs rescue chunk-recover" / "super-recover" /
"zero-log" having a positive effect on the filesystem?

* What is the chance of "btrfs check --init-extent-tree" fixing the described
issues? (Rough invocations of both are sketched below.)
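
For reference, this is roughly how those would be invoked, from least to most
invasive (a sketch only; /dev/sdX stands in for the actual device, the
filesystem must be unmounted, and ideally everything would be tried against
an image of the devices first):

    # read-only check first - btrfs check modifies nothing by default
    btrfs check /dev/sdX

    # the rescue subcommands
    btrfs rescue super-recover -v /dev/sdX
    btrfs rescue zero-log /dev/sdX
    btrfs rescue chunk-recover -v /dev/sdX

    # rebuilds the extent tree; destructive and not reversible
    btrfs check --init-extent-tree /dev/sdX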

Recovering as much as possible from the filesystem would be really important.
The users have learned their lesson regarding backups (they had one, but it
was not up to date, so effectively worthless), but obviously no one would
have expected a filesystem to just go bang like this.

Thanks again everyone reading and replying!

Marcel


Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, March 26, 2019 6:51 PM, Chris Murphy <li...@colorremedies.com> 
wrote:

> On Tue, Mar 26, 2019 at 11:38 AM Chris Murphy li...@colorremedies.com wrote:
>
> > On Tue, Mar 26, 2019 at 12:44 AM Andrei Borzenkov arvidj...@gmail.com wrote:
> >
> > > He has btrfs raid0 profile on top of hardware RAID6 devices.
> >
> > sys_chunk_array[2048]:
> > item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
> > length 4194304 owner 2 stripe_len 65536 type SYSTEM
> > io_align 4096 io_width 4096 sector_size 4096
> > num_stripes 1
> > Pretty sure the metadata profile is "single". From the super, I can't
> > tell what profile the data block groups use.
>
> system chunk is on two devices:
> num_stripes 1 sub_stripes 0
> num_stripes 1 sub_stripes 1
>
> Maybe it is raid0, but I thought dump super explicitly shows the
> profile if it's not single. e.g. SYSTEM|DUP or SYSTEM|RAID1
>
> Only my single profile file systems lack a profile designation in the
> super. But I admit I have no raid0 file systems.
>
> --
>
> Chris Murphy

