On Fri, 9 Sep 2005 [EMAIL PROTECTED] wrote:

> I'm testing a server before I put it in production, and I've got a problem
> with mdraid.
> 
> The config:
> - Dell PowerEdge 800
> - 4 x 250 GB SATA attached to the mobo
> - /boot 4 x 1 GB (1 GB available) in RAID1, 3 active + 1 spare
> - / 4 x 250 GB (500 GB available) in RAID5, 3 active + 1 spare
> No problems at install, and the server runs OK.
> 
> 
> Then I stop the server and remove /dev/sdb to simulate a hard disk failure
> that has caused a crash and a reboot.
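
(For reference, an array layout like the one described above is typically
built with something along these lines; the md device numbers and partition
names below are assumptions, not taken from the report:

        # /boot: RAID1 over three 1 GB partitions plus one hot spare
        mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
              /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # /: RAID5 over three 250 GB partitions plus one hot spare
        mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 \
              /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

RAID1 over three mirrors gives the 1 GB usable, and RAID5 over three 250 GB
members gives the 500 GB usable mentioned above.)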

perfect test .... and do the same for each disk, one at a time
(a software-only way to run the same test is sketched below)

it is pointless to have 1 spare disk in the raid array

- have you ever wondered about other folks who try to build a sata-based
  raid subsystem ??
        - how did their sata setup pass the "failed disk" test if it
        reassigns its drive numbers upon reboot ??
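
a software-only version of that test (a sketch; the device and array names
are assumptions) is to mark a member failed with mdadm and watch the spare
take over:

        # pretend /dev/sdb2 has died in the raid5 array
        mdadm /dev/md1 --fail /dev/sdb2
        mdadm /dev/md1 --remove /dev/sdb2

        # the spare should now be rebuilding into the array
        cat /proc/mdstat
        mdadm --detail /dev/md1

        # once the "repaired" disk is back, return it to the array
        mdadm /dev/md1 --add /dev/sdb2

note that this only exercises md itself ... it will not catch the
device-renaming problem below, which only shows up after rebooting with a
disk physically missing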

> With the second disk removed, the disk names are changed,

exactly.... that is the problem with scsi device naming
        - pull the power cord from the disk to simulate the disk failure,
        or pull the sata cable ...

        - in either case, if the disk drives rename themselves based on
        which disks are still alive, the raid won't boot after the failed
        disk is removed
        ( but an already-running array will keep running until the next
        reboot )
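
        one way around the renaming (assuming mdadm, not the old raidtools,
        is doing the assembly) is to identify the arrays by UUID in
        /etc/mdadm/mdadm.conf instead of listing fixed device names ...
        something like the following, with placeholder UUIDs:

        DEVICE partitions
        ARRAY /dev/md0 level=raid1 num-devices=3 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
        ARRAY /dev/md1 level=raid5 num-devices=3 UUID=eeeeeeee:ffffffff:00000000:11111111

        the real ARRAY lines come from "mdadm --examine --scan" ... mdadm
        then finds the members by reading their superblocks, so it does not
        matter whether a surviving disk shows up as sdb or sdc after the
        reboot.  the initrd typically needs the same config as well so the
        root array can assemble at boot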

> the 3rd disk /dev/sdc
> becomes /dev/sdb and the 4th disk (that was the spare disk) /dev/sdd becomes
> /dev/sdc.

that's always been true of scsi device naming
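
(if the names have shifted, the md superblock on each member still records
which array and which slot it belongs to, so something like

        mdadm --examine /dev/sdb2

on whatever device name the kernel picked this boot will print the array
UUID, the raid level, and that device's role (active, spare or failed), no
matter what the disk is currently called; the partition name above is just
an example)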

> During the boot process md detects that there is a problem, but then complains
> it can't find the /dev/sdd spare disk and the boot process stops with a kernel
> panic error.

exactly .... the boot-time raid config still points at the old device
names, so the degraded array never gets assembled
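
as a recovery path (a sketch only, assuming the remaining members are
intact), boot a rescue cd and assemble the degraded array by hand, naming
whatever members are actually present ... the device names here are
assumptions:

        # surviving raid5 members, under whatever names they have now
        mdadm --assemble --run /dev/md1 /dev/sda2 /dev/sdb2

        # then check the state before going further
        cat /proc/mdstat
        mdadm --detail /dev/md1

--run tells mdadm to start the array even though it has fewer members than
it expects ... after that, switch the config (and the initrd) to the
UUID-based mdadm.conf above before rebooting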

c ya
alvin

