On Wed, Oct 21, 2009 at 12:10 AM, Greg Smith <gsm...@gregsmith.com> wrote:
> On Tue, 20 Oct 2009, Ow Mun Heng wrote:
>
>> RAID10 is supposed to be able to withstand up to 2 drive failures if the
>> failures are from different sides of the mirror.  Right now, I'm not sure
>> which drive belongs to which. How do I determine that? Does it depend on the
>> output of /proc/mdstat, and in that order?
>
> You build a 4-disk RAID10 array on Linux by first building two RAID1 pairs,
> then striping both of the resulting /dev/mdX devices together via RAID0.
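
For anyone following along, the classic nested setup looks roughly like
this (the disk and md device names below are placeholders for whatever
your system actually uses):

    # Build the two bottom-level RAID1 mirrors
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # Stripe the two mirrors together with RAID0
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1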

Actually, later versions of Linux have a native RAID-10 level built
into md directly.  I haven't used it, and I'm not sure how it would
look in /proc/mdstat either.
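
Per the mdadm man page, creating one that way should look something
like this (untested on my end; device names are again placeholders):

    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Presumably it then shows up in /proc/mdstat as a single raid10 array
rather than as three separate md devices.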

>  You'll actually have 3 /dev/mdX devices around as a result.  I suspect
> you're trying to execute mdadm operations on the outer RAID0, when what you
> actually should be doing is fixing the bottom-level RAID1 volumes.
>  Unfortunately I'm not too optimistic about your case though, because if you
> had a repairable situation you technically shouldn't have lost the array in
> the first place--it should still be running, just in degraded mode on both
> underlying RAID1 halves.

Exactly.  Sounds like both drives in a pair failed.
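
To the original question of which drive belongs to which mirror: with
the nested layout, something like the following should show it (the md
numbers are whatever your setup actually uses):

    # Lists every array, its RAID level, and its member disks
    cat /proc/mdstat

    # Detailed membership and state of each bottom-level mirror
    mdadm --detail /dev/md0
    mdadm --detail /dev/md1

    # After physically replacing a dead disk, re-add it to its mirror
    mdadm /dev/md0 --add /dev/sdb1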
