Henk Slager <eye1tm <at> gmail.com> writes:

> You could use the 1-time mount option clear_cache, then mount normally and
> the cache will be rebuilt automatically (but it is also corrected if you
> don't clear it)
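
For reference, the one-time clear_cache mount I mean is roughly the
following (device and mountpoint are just placeholders here):

  # one-time mount with clear_cache to throw away the free-space cache
  mount -o clear_cache /dev/sdX /mnt
  umount /mnt
  # subsequent normal mount; the cache is rebuilt automatically
  mount /dev/sdX /mnt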

This didn't help; it gave me

[  316.111596] BTRFS info (device sda): force clearing of disk cache
[  316.111605] BTRFS info (device sda): disk space caching is enabled
[  316.111608] BTRFS: has skinny extents
[  316.227354] BTRFS info (device sda): bdev /dev/sda errs: wr 180547340, rd 592949011, flush 4967, corrupt 582096433, gen 26993

and still

[  498.552298] BTRFS warning (device sda): csum failed ino 171545 off 2269560832 csum 2566472073 expected csum 874509527
[  498.552325] BTRFS warning (device sda): csum failed ino 171545 off 2269564928 csum 2566472073 expected csum 2434927850

> > Do you think there is still a chance to recover those files?
> 
> You can use  btrfs restore  to get files off a damaged fs.

This, however, does work - thank you!
Now, since I'm a bit short on disc space, can I remove the disc that
previously disappeared (and thus doesn't have all the data) from the
RAID, format it, and run btrfs restore on the degraded array, saving the
rescued data to the now-free disc?
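
To spell out what I have in mind (device names and the mountpoint below
are only examples, with /dev/sdb standing in for the disc that dropped
out and /dev/sda for one of the remaining ones):

  # reformat the previously-missing disc as a rescue target
  mkfs.btrfs -f /dev/sdb
  mkdir -p /mnt/rescue
  mount /dev/sdb /mnt/rescue

  # pull the files off the remaining, unmounted device(s)
  btrfs restore -v /dev/sda /mnt/rescue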


