I understand what you are saying about this not being an OpenBSD or a
raidframe problem. I will try that tool you pointed me to and see what it
says. Will it permanently mark the blocks as bad? If the worst happens, I'm
going to have to rebuild the system, but I don't want it to use those blocks.
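Whatever the suggested tool turns out to be, one crude way to simply locate
unreadable sectors on the failed disk (assuming wd0d, the component shown as
failed below) is a full read pass over the raw device, then checking dmesg
for read errors:

# dd if=/dev/rwd0d of=/dev/null bs=64k conv=noerror
# dmesg | grep -i wd0

That only finds bad sectors; it doesn't mark or remap anything. An ATA drive
will normally remap a pending bad sector itself the next time that sector is
written, which is why rewriting the whole component (for instance during a
rebuild) often makes the errors go away.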
Jason Murray writes:
> I've tried, again, to fix my raid array with raidctl -R. I did it on the
> console port this time so I could capture the output from ddb>
>
> Here is some output:
>
> # raidctl -s raid0
> raid0 Components:
> /dev/wd0d: failed
> /dev/wd1d: optimal
>
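For what it's worth, when the box does drop to the ddb> prompt on the
console, the things usually worth capturing before rebooting are roughly
these (commands from memory; check the ddb man page for the release in
question):

ddb> trace
ddb> ps
ddb> show registers
ddb> boot dump

The last one writes a crash dump that savecore(8) can pick up on the next
boot, so the panic can be looked at afterwards.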
On 7/17/06, Jason Murray <[EMAIL PROTECTED]> wrote:
> I've tried, again, to fix my raid array with raidctl -R. I did it on the
> console port this time so I could capture the output from ddb>
> Here is some output:

yay!
I've tried, again, to fix my raid array with raidctl -R. I did it on the
console port this time so I could capture the output from ddb>
Here is some output:

# raidctl -s raid0
raid0 Components:
/dev/wd0d: failed
/dev/wd1d: optimal
No spares.
Parity status: DIRTY
Reconstruc

I then use raidctl -S to monitor the reconstruction. Things go well
until the
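For reference, the rebuild-and-watch sequence being described is roughly the
following, assuming wd0d is the failed component of raid0 exactly as in the
status output above:

# raidctl -R /dev/wd0d raid0
# raidctl -S raid0
# raidctl -s raid0

-R reconstructs in place onto the failed component, -S reports reconstruction
and parity-rewrite progress, and -s afterwards should show both components as
optimal. This is only a sketch of the commands already being used in the
thread, not an explanation of the panic.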
"Jeff Quast" writes:
>
> My first few months with raidframe caused many kernel panics. With 30
> minutes of parity checking after each one, this was a difficult learning
> experience. I was initially led to believe that raidframe was hardly stable
> (and therefore disabled in GENERIC).
>
> However, as I gained exp
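For context, RAIDframe was indeed disabled in the stock GENERIC config at the
time, as noted above; using it meant building a custom kernel with lines
roughly like the following (paraphrased from memory, so check the config
files shipped with the release rather than trusting these exactly):

pseudo-device raid 4            # RAIDframe disk driver
option RAID_AUTOCONFIG          # auto-configure RAIDframe sets at boot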
On 7/11/06, Jason Murray <[EMAIL PROTECTED]> wrote:
Is it "standard" practice to use raidctl on a raid set while your system is
running from that raid set?
I'm just curious as to what "best practice" might be?
Last night I booted to a different disk so I could run raidctl -R against the
array w
Is it "standard" practice to use raidctl on a raid set while your system is
running from that raid set?
I'm just curious as to what "best practice" might be?
Last night I booted to a different disk so I could run raidctl -R against the
array while it was not being used. That caused a kernel pa
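As for the DIRTY parity status shown earlier in the thread, checking and
rewriting parity is also done with raidctl; roughly:

# raidctl -p raid0
# raidctl -P raid0

-p only reports whether the parity is clean, while -P rewrites it if it is
dirty. Again, just a sketch of standard raidctl usage; it doesn't answer the
question of whether running this against the set you are booted from is safe.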