Looks very much like a problem with the SATA controller.
If the repeating output you have shown there is an "infinite" loop, then
presumably some failure is not being handled properly.
I agree, even though the AHCI driver was supposed to be stable. The
loop is not quite infinite, btw.; it does time out [...]
On Friday June 30, [EMAIL PROTECTED] wrote:
> More problems ...
>
> As reported I have 4x WD5000YS (Caviar RE2 500 GB) in a md RAID5
> array. I've been benchmarking and otherwise testing the new array
> these last few days, and apart from the fact that the md doesn't shut
> down properly I've had no problems. [...]
On Friday June 30, [EMAIL PROTECTED] wrote:
> On Fri, 30 Jun 2006, Francois Barre wrote:
> > Did you try upgrading mdadm yet ?
>
> yes, I have version 2.5 now, and it produces the same results.
>
Try adding '--force' to the -A line.
That tells mdadm to try really hard to assemble the array.
You [...]
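A minimal sketch of that suggestion, assuming the /dev/md0 and
/dev/sd[abcd]1 names used elsewhere in this thread:
# mdadm --assemble --force /dev/md0 /dev/sd[abcd]1
--force lets mdadm assemble even when one member's event count is
stale, so the array can come up degraded; the out-of-date disk could
then be hot-added again with something like "mdadm /dev/md0 --add
/dev/sdb1" and resynced.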
More problems ...
As reported I have 4x WD5000YS (Caviar RE2 500 GB) in a md RAID5
array. I've been benchmarking and otherwise testing the new array
these last few days, and apart from the fact that the md doesn't shut
down properly I've had no problems.
Today I wanted to finally copy some data [...]
2006/6/30, Akos Maroy <[EMAIL PROTECTED]>:
On Fri, 30 Jun 2006, Francois Barre wrote:
> Did you try upgrading mdadm yet ?
yes, I have version 2.5 now, and it produces the same results.
Akos
And I suppose there is no change in the various outputs mdadm is able
to produce (i.e. -D or -E).
[...]
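Spelled out with the device names from this thread, those outputs
would come from something like:
# mdadm -D /dev/md0
# mdadm -E /dev/sd[abcd]1
-D prints the details of the assembled array; -E examines the
superblock on each component device.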
On Fri, 30 Jun 2006, Francois Barre wrote:
Did you try upgrading mdadm yet ?
yes, I have version 2.5 now, and it produces the same results.
Akos
So, it seems that my array was already degraded when the power
failure happened, and then it got into the dirty state. What can one
do about such a situation?
Did you try upgrading mdadm yet ?
On Fri, 30 Jun 2006, Francois Barre wrote:
I'm wondering :
sd[acd] has an Update Time : Thu Jun 29 09:10:39 2006
sdb has an Update Time : Mon Jun 26 20:27:44 2006
When did your power failure happen ?
Yesterday (the 29th). So it seems that /dev/sdb1 failed out on the 26th, and I
just didn't take notice.
(answering to myself is one of my favourite hobbies)
Yep, this looks like it.
The events difference is quite big : 0.2701790 vs. 0.2607131... Could
it be that the sdb1 was marked faulty a couple of seconds before the
power failure ?
I'm wondering :
sd[acd] has an Update Time : Thu Jun 29 09:10:39 2006 [...]
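One quick way to put those counters side by side (a sketch, assuming
the same four devices as above):
# for d in /dev/sd[abcd]1; do echo $d; mdadm -E $d | grep -E 'Update Time|Events'; done
A member whose Events count lags the others stopped being updated by
the array at its Update Time.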
2006/6/30, Ákos Maróy <[EMAIL PROTECTED]>:
Francois,
Thank you for the very swift response.
> First, what is your mdadm version ?
# mdadm --version
mdadm - v1.12.0 - 14 June 2005
Rather an old version, isn't it ?
The freshest meat is 2.5.2, and can be grabbed here :
http://www.cse.unsw.edu.au/
Francois,
Thank you for the very swift response.
> First, what is your mdadm version ?
# mdadm --version
mdadm - v1.12.0 - 14 June 2005
>
> Then, could you please show us the result of :
>
> mdadm -E /dev/sd[abcd]1
# mdadm -E /dev/sd[abcd]1
/dev/sda1:
Magic : a92b4efc
Version : [...]
2006/6/30, Ákos Maróy <[EMAIL PROTECTED]>:
Hi,
Hi,
I have some issues reviving my raid5 array after a power failure.
[...]
strange - why wouldn't it take all four disks (it's omitting /dev/sdb1)?
First, what is your mdadm version ?
Then, could you please show us the result of :
mdadm -E /dev/sd[abcd]1
Hi,
I have some issues reviving my raid5 array after a power failure. I'm
running Gentoo Linux 2.6.16, and I have a raid5 array /dev/md0 of 4 disks,
/dev/sd[a-d]1. On top of this, I have a crypto devmap with LUKS.
After the power failure, the array sort of starts up and doesn't at the
same time: [...]
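Since the LUKS mapping sits on top of /dev/md0, the array has to be
assembled before the crypto layer can be opened; roughly (the mapping
name here is made up):
# mdadm -A /dev/md0 /dev/sd[abcd]1
# cryptsetup luksOpen /dev/md0 cryptvol
If the array only half-assembles, that first step is the place to
look before blaming the crypto layer.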