Re: recovering from "parent transid verify failed"

2019-08-15 Thread Tim Walberg
...ckups. tw

On 08/15/2019 22:45 +0800, Qu Wenruo wrote:
>>
>>
>> On 2019/8/15 10:21, Tim Walberg wrote:
>> > 'dump-super -Ffa' from all three devices attached.
>> >
>> > 'btrfs restore' did app...
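The full 'dump-super -Ffa' run isn't shown in the preview; a minimal sketch of gathering the superblocks from all three devices might look like the following (device names are hypothetical, not taken from the thread):

    # -F: dump even if the filesystem magic looks wrong, -f: full output, -a: all superblock copies
    for dev in /dev/sda1 /dev/sdb1 /dev/sdc1; do
        btrfs inspect-internal dump-super -Ffa "$dev" > dump-super-$(basename "$dev").txt
    done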

Re: recovering from "parent transid verify failed"

2019-08-15 Thread Tim Walberg
On 08/15/2019 22:07 +0800, Qu Wenruo wrote:
>>
>>
>> On 2019/8/15 9:52, Tim Walberg wrote:
>> > Had to wait for 'btrfs recover' to finish before I proceed further.
>> >
>> > Kernel is 4.19.45...

Re: recovering from "parent transid verify failed"

2019-08-15 Thread Tim Walberg
...resolve 229846466560 /dev/sdc1
ERROR: not a btrfs filesystem: /dev/sdc1

On 08/15/2019 10:35 +0800, Qu Wenruo wrote:
>>
>>
>> On 2019/8/15 2:32, Tim Walberg wrote:
>> > Most of the recommendations I've found online deal with when "wanted"...
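The "not a btrfs filesystem" error above is what you'd expect when handing a raw device node to one of the inspect-internal resolve commands; assuming the truncated command was logical-resolve, it takes a mounted btrfs path rather than a device. A hedged sketch (mount point hypothetical):

    # logical-resolve maps a logical address to file paths via an ioctl,
    # so the last argument must be a mounted btrfs filesystem, not /dev/sdc1
    btrfs inspect-internal logical-resolve 229846466560 /mnt/data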

Re: recovering from "parent transid verify failed"

2019-08-15 Thread Tim Walberg
On 08/15/2019 10:35 +0800, Qu Wenruo wrote:
>>
>>
>> On 2019/8/15 2:32, Tim Walberg wrote:
>> > Most of the recommendations I've found online deal with when "wanted" is
>> > greater than "found",...

recovering from "parent transid verify failed"

2019-08-14 Thread Tim Walberg
Most of the recommendations I've found online deal with when "wanted" is greater than "found", which, if I understand correctly, means that one or more transactions were interrupted or lost before being fully committed. Are the recommendations for recovery the same if the system is reporting a "wanted" that...
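For anyone landing here with the same symptom, the message in question is the kernel's "parent transid verify failed on <bytenr> wanted <gen> found <gen>" check. A minimal, read-only set of commands commonly used to gather information before attempting any repair (device and target paths are hypothetical):

    btrfs inspect-internal dump-super -fa /dev/sdb1    # compare superblock generation numbers
    btrfs check --readonly /dev/sdb1                   # report-only consistency check, no writes
    btrfs restore -D /dev/sdb1 /tmp/restore-test       # -D: dry run, list what restore could pull out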

btrfs check --repair question

2016-12-12 Thread Tim Walberg
All - I have a file system I'm having some issues with. The initial symptoms were that mount would run for several hours, either committing or rolling back transactions (primarily due to a balance that was running when the system was rebooted for other reasons - the skip_balance mount option wa...
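For context, skip_balance does not repair anything: it mounts the filesystem without automatically resuming an interrupted balance, which avoids the long crawl at mount time. A hedged sketch (device and mount point hypothetical):

    mount -o skip_balance /dev/sdd1 /mnt/data   # mount; leave the interrupted balance paused
    btrfs balance resume /mnt/data              # later: resume it...
    btrfs balance cancel /mnt/data              # ...or cancel it entirely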

regression in quota rescan, or intentional?

2016-10-20 Thread Tim Walberg
Just updated my kernel and btrfs-tools to 4.8.1 and now it seems that "btrfs quota rescan -w " does not in fact wait for the rescan to finish. Running it a second time immediately after does, however. Was this an intentional change, or is it a regression/bug?
--
twalb...@gmail.com, twalb...@com...
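The exact invocation isn't shown; a sketch of the behavior being described, plus a status query that can confirm whether a rescan is actually still running (mount point hypothetical):

    btrfs quota rescan -w /mnt/data   # -w is documented to wait until the rescan finishes
    btrfs quota rescan -s /mnt/data   # -s only reports whether a rescan is in progress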

Re: question re: trim in btrfs

2016-10-18 Thread Tim Walberg
Forgot to mention - this was on a rather crusty 4.2.6 kernel. Just upgraded to 4.8.1 and the issue appears to have been resolved...

On 10/18/2016 12:42 -0500, Walberg, Tim wrote:
>> Unless I'm misinterpreting something it appears that maybe btrfs doesn't pass
>> fstrim commands down...

question re: trim in btrfs

2016-10-18 Thread Tim Walberg
Unless I'm misinterpreting something, it appears that maybe btrfs doesn't pass fstrim commands down to the underlying drives when being used in a RAID-1 config. I have this output from a small script I wrote to run at boot time (and also via cron.weekly), rather than using continuous trim in the bo...
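The script itself isn't included in the preview; a minimal sketch of the kind of periodic fstrim job described (mount points and log file are hypothetical):

    #!/bin/sh
    # Trim each btrfs mount point and record how much each reported as trimmed.
    for mp in / /home /srv/data; do
        fstrim -v "$mp" >> /var/log/fstrim.log 2>&1
    done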

Re: Size of scrubbed Data

2016-09-17 Thread Tim Walberg
On 09/17/2016 09:34 -0500, Walberg, Tim wrote:
>> On 09/15/2016 15:18 -0600, Chris Murphy wrote:
>> >> > System, single: total=4.00MiB, used=0.00B
>> >> > Metadata, RAID1: total=10.00GiB, used=8.14GiB
>> >> > GlobalReserve, single: total=512.00MiB, used=0.00B
>> ...

Re: Size of scrubbed Data

2016-09-17 Thread Tim Walberg
On 09/15/2016 15:18 -0600, Chris Murphy wrote:
>> > System, single: total=4.00MiB, used=0.00B
>> > Metadata, RAID1: total=10.00GiB, used=8.14GiB
>> > GlobalReserve, single: total=512.00MiB, used=0.00B
>>
>> btrfs balance start -mconvert=raid1,soft

Since the single profi...
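Chris Murphy's suggestion asks balance to convert metadata-side chunks that are still in the single profile over to RAID1; the "soft" filter skips chunks that already have the target profile, so the existing RAID1 metadata is left untouched. A usage sketch (mount point hypothetical):

    btrfs balance start -mconvert=raid1,soft /mnt/data   # convert remaining single-profile chunks on the metadata side
    btrfs filesystem df /mnt/data                        # re-check the per-profile breakdown afterwards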

Re: btrfs quota issues

2016-08-16 Thread Tim Walberg
On 08/16/2016 16:33 -0700, Rakesh Sankeshi wrote:
>> also is there any timeframe on when the qgroup / quota issues would be
>> stabilized in btrfs?
>>
>> Thanks!

This may or may not be of interest to you, but for the record, since at least linux 4.2, I've had pretty good luc...
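The rest of the message is cut off; for readers unfamiliar with the feature under discussion, qgroups are enabled and inspected roughly like this (mount point and subvolume are hypothetical, not necessarily the setup described):

    btrfs quota enable /mnt/data              # turn on quota/qgroup accounting
    btrfs qgroup show /mnt/data               # per-subvolume referenced/exclusive usage
    btrfs qgroup limit 50G /mnt/data/subvol   # cap a subvolume at 50 GiB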

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Tim Walberg
On 10/20/2015 15:59 -0400, Austin S Hemmelgarn wrote:
>> .
>> With a 32-bit checksum and a 4k block (the math is easier with
>> smaller numbers), that's 4128 bits, which means that a random
>> single bit error will have an approximately 0.24% chance of
>> occurring i...