[Kirk Patton]
> The status should be:
> md0 : active raid5 sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0] 
> 71681024 blocks level 5, 256k chunk, algorithm 0 [5/5] [UUUUU]

5 active, 1 standby (6 raid disks total)

> The status is:
> md0 : active raid5 sdf1[4] sde1[4](F) sdd1[3] sdc1[2] sdb1[1] sda1[0] 
> 71681024 blocks level 5, 256k chunk, algorithm 0 [5/5] [UUUUU]

5 active, 1 failed (6 total).  This is a snapshot taken after the rebuild
has already completed (or the drive that failed was the spare, but that's
unlikely given the typical device ordering)
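As a rough sketch of how to tally that from a device line, here's a bit of shell that counts members and (F) failures.  The sample line below is a made-up snapshot for illustration; on a live system you'd read the real /proc/mdstat instead:

```shell
# Made-up mdstat device line (illustrative; read the real /proc/mdstat
# on a live system).
line='md0 : active raid5 sdf1[5] sde1[4](F) sdd1[3] sdc1[2] sdb1[1] sda1[0]'

# Each member shows up as name[index]; count the [index] markers.
total=$(echo "$line" | grep -o '\[[0-9]*\]' | wc -l)

# Failed members carry an (F) suffix.
failed=$(echo "$line" | grep -o '(F)' | wc -l)

echo "$((total)) members, $((failed)) failed"
```

which for the line above reports 6 members, 1 failed.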

> I noted the (F) by sde1.  Does this stand for
> failed?  Is there any references to the types of
> errors that will be reported in the syslog or
> /proc/mdstat?

Yes, the (F) stands for failed.
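If you want to pull out just the failed member's name, something like this works (again run against a hypothetical snapshot string, not the live file):

```shell
# Hypothetical /proc/mdstat device line; on a real box, read the file itself.
line='md0 : active raid5 sdf1[4] sde1[4](F) sdd1[3] sdc1[2] sdb1[1] sda1[0]'

# Print only the device(s) marked (F), i.e. the failed members.
echo "$line" | grep -o '[a-z0-9]*\[[0-9]*\](F)'
```

which prints sde1[4](F) for the snapshot above.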

> Personalities : [raid5] 
> read_ahead 1024 sectors
> md0 : active raid5 sdg1[6] sdf1[5] sde1[4] sdd1[3]
> sdc1[2] sdb1[1](F) sda1[0] 106653696 blocks level
> 5, 256k chunk, algorithm 0 [7/6] [U_UUUUU]
> unused devices: <none>
> 
> Reading this status from /proc/mdstat, I am
> thinking that the raid is running in degraded mode
> with "sdb1" as the failed drive.  The [7/6],  does
> that mean that there are 7 devices and only 6 are
> currently running?

Yup, that's degraded.  You'll want to raidhotremove sdb1 and then
raidhotadd a new partition (possibly sdb1 again once that drive is
replaced, depending on your controller and other factors), and the array
will rebuild onto the new drive.
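A rough sketch of those steps using the raidtools-era commands (the device names here are illustrative; substitute whatever partition actually failed on your system):

```shell
# Drop the failed member from the array (illustrative device names).
raidhotremove /dev/md0 /dev/sdb1

# ...physically swap the drive and recreate the partition...

# Add the replacement partition; the kernel starts rebuilding onto it.
raidhotadd /dev/md0 /dev/sdb1

# Watch the resync progress.
cat /proc/mdstat
```

Both commands need root, and the rebuild runs in the background; /proc/mdstat shows its progress until the array is back to [7/7].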

James
-- 
James Manning <[EMAIL PROTECTED]>
GPG Key fingerprint = B913 2FBD 14A9 CE18 B2B7  9C8E A0BF B026 EEBB F6E4
