On 2015-07-03 13:51, Chris Murphy wrote:
On Fri, Jul 3, 2015 at 9:05 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I did some more digging and found that I had a lot of errors basically
every drive.
Ick. Sucks for you but then makes this less of a Btrfs problem because
it can really
On Friday 03 July 2015 09:31:03 Duncan wrote:
Donald Pearson posted on Thu, 02 Jul 2015 13:19:41 -0500 as excerpted:
btrfs restore complains that every device is missing except the one that
you specify on executing the command. Multiple devices as a parameter
isn't an option. Specify
Thanks for the inputs guys.
Yes I did learn to perform a device scan --all-devices. It seems that
the chunk tree is vital to a lot of functionality and the recovery
tools are no exception.
I suspect that I ran into the raid56 caveat: btrfs does not deal well
with a drive that is present but not
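The device-registration step mentioned above can be sketched as follows. This is a sketch only, not the exact commands from the thread: the device path and mount point are illustrative, the degraded,ro mount option is an assumption about how one would attempt a read-only mount of a damaged pool, and the script only prints the commands so it is safe to run without the disks attached.

```shell
# Sketch only: builds and prints the commands rather than executing them.
# /dev/sdc and /mnt/tank are illustrative placeholders.
scan_cmd="btrfs device scan --all-devices"            # register all members with the kernel
mount_cmd="mount -o degraded,ro /dev/sdc /mnt/tank"   # hedged: degraded read-only mount attempt
printf '%s\n' "$scan_cmd" "$mount_cmd"
```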
On Fri, Jul 3, 2015 at 9:05 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I did some more digging and found that I had a lot of errors basically
every drive.
Ick. Sucks for you but then makes this less of a Btrfs problem because
it can really only do so much if more than the number of
Donald Pearson posted on Thu, 02 Jul 2015 13:19:41 -0500 as excerpted:
btrfs restore complains that every device is missing except the one that
you specify on executing the command. Multiple devices as a parameter
isn't an option. Specify /dev/disk/by-uuid/uuid claims that all
devices are
Hello,
At the bottom of this email are the results of the latest
chunk-recover. I only included one example of the output that was
printed prior to the summary information but it went up to the end of
my screen buffer and beyond.
So it looks like the command executed properly when none of the
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Which is curious because this is device id 2, where previously the
complaint was about device id 1. So can I believe dmesg about which
drive is actually the issue or is the drive that's printed in dmesg
just
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I do see plenty of complaints about the sdg drive (previously sde) in
/var/log/messages from the 28th which is when I started noticing
issues. Nothing is jumping out at me claiming the btrfs is taking
action but
Unfortunately btrfs image fails with couldn't read chunk tree.
btrfs restore complains that every device is missing except the one
that you specify on executing the command. Multiple devices as a
parameter isn't an option. Specify /dev/disk/by-uuid/uuid claims
that all devices are missing.
I
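For reference, the restore invocation being described takes a single member device plus an output directory, and relies on a prior device scan to find the other members. A minimal sketch, with illustrative paths (/dev/sdc, /mnt/recovery are placeholders, not from the thread); the script only prints the commands:

```shell
# Sketch only: prints the commands rather than executing them.
# btrfs restore is pointed at one member device and an output directory.
scan_cmd="btrfs device scan --all-devices"
restore_cmd="btrfs restore -v /dev/sdc /mnt/recovery"   # -v = verbose
printf '%s\n' "$scan_cmd" "$restore_cmd"
```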
On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Unfortunately btrfs image fails with couldn't read chunk tree.
btrfs restore complains that every device is missing except the one
that you specify on executing the command. Multiple devices as a
parameter isn't
I think it is. I have another raid5 pool that I've created to test
the restore function on, and it worked.
On Thu, Jul 2, 2015 at 1:26 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Unfortunately btrfs image
That is correct. I'm going to rebalance my raid5 pool as raid6 and
re-test just because.
On Thu, Jul 2, 2015 at 1:37 PM, Chris Murphy li...@colorremedies.com wrote:
On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I think it is. I have another raid5 pool that
Yes it works with raid6 as well.
[root@san01 btrfs-progs]# ./btrfs fi show
Label: 'rockstor_rockstor' uuid: 08d14b6f-18df-4b1b-a91e-4b33e7c90c29
Total devices 1 FS bytes used 19.25GiB
devid    1 size 457.40GiB used 457.40GiB path /dev/sdt3
warning, device 4 is missing
warning,
On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
I think it is. I have another raid5 pool that I've created to test
the restore function on, and it worked.
So you have all devices for this raid6 available, and yet when you use
restore, you get missing device
Small update on this, with no idea if this is useful information or not.
At some point within the last hour iostat shows that /dev/sdg is no
longer under heavy reads.
The other 9 drives however are still reading as fast as they are able.
There is no new output on the `btrfs rescue chunk-recover`
Thanks Chris,
To my shame, it turns out darkling didn't drop off IRC after all; I'm
new to all this and learning quickly that I need to sit on my hands.
I admit that despite darkling's suggestion that my usertools are probably
fine, I pulled down a newer kernel from elrepo, so currently I'm running
btrfs-progs version is 4.0; what kernel versions have you tried
to mount with?
I suggest running btrfs check (without --repair) and including the
full output. There are a lot of changes in btrfs-progs 4.1, but offhand
I don't know that they'd affect btrfs check results.
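The read-only check suggested here can be sketched as follows. The device path is illustrative, and piping through tee to capture the full output for the list is an assumption about workflow, not something stated in the thread; the script only prints the commands:

```shell
# Sketch only: prints the commands rather than executing them.
# Without --repair, btrfs check is read-only and just reports problems.
check_cmd="btrfs check /dev/sdc"
log_cmd="btrfs check /dev/sdc 2>&1 | tee check.log"   # hedged: capture output to post
printf '%s\n' "$check_cmd" "$log_cmd"
```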
Chris Murphy
Hello,
darkling was helping me on IRC for a while before he had to drop
off, thanks for the help darkling.
To pick up where we left off...
In summary, I have a 10 disk raid6 pool that I cannot mount.
btrfs fi show output is here - http://pastebin.com/aidGV20e
'tank' is the pool in question.
I should have thought to check this to add earlier. I'm seeing errors
for /dev/sdg in dmesg (not surprised, I wanted this drive out of the
pool to begin with because it's sick).
[ 142.612988] BTRFS: open_ctree failed
[11836.105577] sd 0:0:6:0: [sdg] FAILED Result: hostbyte=DID_OK
Here is the result of the attempted rescue chunk-recover
[root@san01 btrfs-progs]# ./btrfs rescue chunk-recover -v /dev/sdc
All Devices:
Device: id = 7, name = /dev/sdl
Device: id = 8, name = /dev/sdm
Device: id = 9, name = /dev/sdn
Device: id = 3, name = /dev/sdf
On Wed, Jul 1, 2015 at 7:38 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
Here's the drive vomiting in my logs after it got halfway through the
dd image attempt.
Jul 1 17:05:51 san01 kernel: sd 0:0:6:0: [sdg] FAILED Result:
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jul 1 17:05:51
On Wed, Jul 1, 2015 at 3:35 PM, Donald Pearson
donaldwhpear...@gmail.com wrote:
*** Error in `./btrfs': free(): invalid next size (fast): 0x01332100
***
Segmentation fault
Blek. Well that's a bug then too. If you have space somewhere to put a
btrfs-image -c9 -t4, I'd do that now
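The metadata dump being asked for here can be sketched as follows; -c9 is maximum compression and -t4 uses four threads. The source device and destination path are illustrative (the image should land on a different filesystem than the damaged pool), and the script only prints the command:

```shell
# Sketch only: prints the command rather than executing it.
# Destination path is a placeholder on some other, healthy filesystem.
image_cmd="btrfs-image -c9 -t4 /dev/sdc /backup/tank-metadata.img"
printf '%s\n' "$image_cmd"
```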
Thanks Chris.
Everything is/was raid6. Oddly, when I created the filesystem there
was a mix of raid1 and raid6, but a balance with -dconvert/-mconvert after
creation set everything to raid6.
I did previously try a btrfs-image as I found that as a first thing
to do through some google searching but that
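The convert balance mentioned a few messages up can be sketched as follows; -dconvert and -mconvert rewrite the data and metadata chunks as raid6. The mount point is illustrative, and the script only prints the command:

```shell
# Sketch only: prints the command rather than executing it.
# /mnt/tank is an illustrative mount point for the pool.
balance_cmd="btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/tank"
printf '%s\n' "$balance_cmd"
```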