OK, here's what's happening. A few years ago, I took my old WD Green
drives and put them in a box as backups to a new array of Seagate
drives. When one of those Seagate drives failed (just out of
warranty, of course), I replaced it with one of the WDs. That was
cooking along just fine until jus
Actually, it didn't resume. The "btrfs delete missing" was using 100%
of the I/O bandwidth but wasn't actually doing any disk reads or
writes. I tried to reboot, but the system wouldn't go down, so after
waiting 10 minutes, I power-cycled. Now I can't mount at all and
here's what dmesg says abou
It resumed on its own. Weird.
On Wed, Aug 12, 2015 at 4:23 PM, Timothy Normand Miller
wrote:
> On Wed, Aug 12, 2015 at 2:10 PM, Chris Murphy wrote:
>
>>
>> Anyway it looks like it's hardware related, but I don't know what
>> device ata4.00 is, so maybe this helps:
>> http://superuser.com/questi
On Wed, Aug 12, 2015 at 2:10 PM, Chris Murphy wrote:
>
> Anyway it looks like it's hardware related, but I don't know what
> device ata4.00 is, so maybe this helps:
> http://superuser.com/questions/617192/mapping-ata-device-number-to-logical-device-name
# ata=4; ls -l /sys/block/sd* | grep $(gre
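The idea behind that command can be sketched by parsing the sysfs path directly. A minimal shell sketch follows; the sample path is hypothetical (not from this system), and on a live machine you would feed the function the target of `readlink -f /sys/block/sdX` instead:

```shell
# Pull the "ataN" component out of a sysfs block-device path.
# The sample path below is an assumption for illustration only.
ata_port() { printf '%s\n' "$1" | grep -o 'ata[0-9]*' | head -n 1; }

sample='/sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdd'
ata_port "$sample"
```

With a real device you would loop over /sys/block/sd* and match the port number against the one in the dmesg error.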
There are hardware problems here...
[112531.319224] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[112531.319231] ata4.00: failed command: WRITE DMA EXT
[112531.319240] ata4.00: cmd 35/00:00:00:8d:46/00:04:08:00:00/e0 tag 0 dma 524288 out
res 40/00:00:00
I added a new device and then did a delete missing. I lost the
terminal (should have used gnu screen), so I didn't see the stdout,
but the operation aborted at some point. There's a ton of output in
dmesg related to this, along with some OOPSes, which I have attached
as "dmesg2" here:
https://bugz
Timothy Normand Miller posted on Tue, 11 Aug 2015 17:32:12 -0400 as
excerpted:
> On Tue, Aug 11, 2015 at 5:24 PM, Chris Murphy
> wrote:
>
>>> There is still data redundancy. Will a scrub at least notice that the
>>> copies differ?
>>
>> No, that's what I mean by "nodatasum means no raid1 self-h
Russell Coker posted on Wed, 12 Aug 2015 13:04:27 +1000 as excerpted:
> Linux Software RAID scrub will copy the data from one disk to the other
> to make them identical; the theory is that it's best to at least be
> consistent if you can't be sure you are right.
>
> Will a BTRFS scrub do this on
On Wed, 12 Aug 2015 07:24:00 AM Chris Murphy wrote:
> > There is still data redundancy. Will a scrub at least notice that the
> > copies differ?
>
> No, that's what I mean by "nodatasum means no raid1 self-healing is
> possible". You have data redundancy, but without checksums btrfs has
> no way
On Tue, Aug 11, 2015 at 5:24 PM, Chris Murphy wrote:
>> There is still data redundancy. Will a scrub at least notice that the
>> copies differ?
>
> No, that's what I mean by "nodatasum means no raid1 self-healing is
> possible". You have data redundancy, but without checksums btrfs has
> no way
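The role the checksum plays here can be illustrated with a toy sketch — plain `cksum` standing in for btrfs's crc32c, and shell variables standing in for the two mirror copies (none of this is btrfs internals):

```shell
# Toy illustration: with a stored checksum, a scrub can pick the mirror
# copy that still matches and rewrite the bad one; with nodatasum there
# is nothing to compare against, so neither copy can be trusted.
good="hello"
bad="hellO"                                    # simulated bit rot on one mirror
stored=$(printf '%s' "$good" | cksum | cut -d' ' -f1)

winner=""
for copy in "$bad" "$good"; do                 # try each mirror in turn
  if [ "$(printf '%s' "$copy" | cksum | cut -d' ' -f1)" = "$stored" ]; then
    winner=$copy
    break
  fi
done
echo "intact copy: $winner"
```

Without `stored`, the loop has no way to prefer one copy over the other — which is exactly the nodatasum situation described above.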
On Tue, Aug 11, 2015 at 3:00 PM, Timothy Normand Miller
wrote:
> On Tue, Aug 11, 2015 at 4:48 PM, Chris Murphy wrote:
>
>>
>> The compress is ignored, and it looks like nodatasum and nodatacow
>> apply to everything. The nodatasum means no raid1 self-healing is
>> possible for any data on the ent
On Tue, Aug 11, 2015 at 4:48 PM, Chris Murphy wrote:
>
> The compress is ignored, and it looks like nodatasum and nodatacow
> apply to everything. The nodatasum means no raid1 self-healing is
> possible for any data on the entire volume. Metadata checksumming is
> still enabled.
Ugh. So I need
On Tue, Aug 11, 2015 at 2:32 PM, Timothy Normand Miller
wrote:
> If I lose the array, I won't cry. The backup appears to be complete.
> But it would be convenient to avoid having to restore from scratch,
> and I'm hoping this might help you guys too in some way. I really
> like btrfs, and I woul
On Tue, Aug 11, 2015 at 2:26 PM, Timothy Normand Miller
wrote:
> On Tue, Aug 11, 2015 at 3:47 PM, Chris Murphy wrote:
>
>>
>> Huh. I thought nodatacow applies to an entire volume only, not per
>> subvolume unless you use chattr +C (in which case it can be per
>> subvolume, directory or per file).
On Tue, Aug 11, 2015 at 3:57 PM, Chris Murphy wrote:
> On Tue, Aug 11, 2015 at 12:04 PM, Timothy Normand Miller
> wrote:
>
>> https://bugzilla.kernel.org/show_bug.cgi?id=102691
>
> [7.729124] BTRFS: device fsid ecdff84d-b4a2-4286-a1c1-cd7e5396901c
> devid 2 transid 226237 /dev/sdd
> [7.74
On Tue, Aug 11, 2015 at 3:47 PM, Chris Murphy wrote:
>
> Huh. I thought nodatacow applies to an entire volume only, not per
> subvolume unless you use chattr +C (in which case it can be per
> subvolume, directory or per file). I could be confused, but I think
> you have mutually exclusive mount o
On Tue, Aug 11, 2015 at 12:04 PM, Timothy Normand Miller
wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=102691
[7.729124] BTRFS: device fsid ecdff84d-b4a2-4286-a1c1-cd7e5396901c
devid 2 transid 226237 /dev/sdd
[7.746115] BTRFS: device fsid ecdff84d-b4a2-4286-a1c1-cd7e5396901c
devid
On Tue, Aug 11, 2015 at 11:56 AM, Timothy Normand Miller
wrote:
> On Tue, Aug 11, 2015 at 12:21 AM, Chris Murphy
> wrote:
>> I don't see nodatacow in your fstab, so I don't know why that's
>> happening. That means no checksumming for data.
>
> Sorry. I was dumb. I only showed you the entry fo
On Tue, Aug 11, 2015 at 1:56 PM, Timothy Normand Miller
wrote:
> On Tue, Aug 11, 2015 at 12:21 AM, Chris Murphy
> wrote:
>> The entire dmesg is still useful because it should show libata errors
>> if these aren't fully failed drives. So you should file a bug and
>> include, literally, the entir
On Tue, Aug 11, 2015 at 12:21 AM, Chris Murphy wrote:
> On Mon, Aug 10, 2015 at 7:23 PM, Timothy Normand Miller
> wrote:
>> On Mon, Aug 10, 2015 at 6:52 PM, Chris Murphy
>> wrote:
>
>>> - complete dmesg for the failed mount
>>
>> It really doesn't say much. I have things like this:
>> [8.6
On Mon, Aug 10, 2015 at 7:23 PM, Timothy Normand Miller
wrote:
> On Mon, Aug 10, 2015 at 6:52 PM, Chris Murphy wrote:
>> - complete dmesg for the failed mount
>
> It really doesn't say much. I have things like this:
> [8.643535] BTRFS info (device sdc): disk space caching is enabled
> [
On Mon, Aug 10, 2015 at 6:52 PM, Chris Murphy wrote:
> Four needed things:
> - kernel version
4.1.0-gentoo-r1, although I have also tried 4.1.4.
> - btrfs-progs version
4.1.2
> - complete dmesg for the failed mount
It really doesn't say much. I have things like this:
[8.643535] BTRFS inf
Four needed things:
- kernel version
- btrfs-progs version
- complete dmesg for the failed mount
- complete btrfs check output (you mostly have this but since the
version isn't included, it's not clear this is the entire output)
The last two can be included as attachments in a bugzilla.kernel.org
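Collecting those four items can be scripted. A rough sketch — the device name and output filename are placeholders, not from this thread, and the btrfs lines are guarded in case the tools aren't installed:

```shell
# Gather the requested debug info into one file for the bug report.
# /dev/sdX and btrfs-report.txt are placeholders.
{
  echo "kernel: $(uname -r)"
  command -v btrfs >/dev/null 2>&1 && btrfs --version
  dmesg 2>/dev/null | tail -n 500
  # run the check read-only against the unmounted device:
  # btrfs check /dev/sdX
} > btrfs-report.txt
echo "wrote btrfs-report.txt"
```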
Hi, everyone,
I have a four-drive RAID1 array, and since yesterday, some problem has
rendered it unmountable (read/write anyhow). One drive reports a read
error, so maybe the drive is failing, but I've had that happen before,
and it was easy to swap in a new drive. This time, two more drives
are