> on a server one disk had a "medium error" and the RAID1 (2.2.14-B1)
> disabled one of the mirrors. It looks like this:
> md0 : active raid1 sdb5[1] sda5[0](F) 4739072 blocks [2/1] [_U]

We got a JBOD with 24 disks, and about a quarter of them have failed in just
such a way ....

> If I reboot now, how will the system react?

It will start in degraded mode using sdb5 only, assuming you are using persistent superblocks (PSBs) !!

> Will it recognize the failed partition

So long as the good partition is accessible and you are using PSBs,
it will read the configuration from the superblocks and see that only sdb5 is still active in the array.
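To double-check after the reboot, look at /proc/mdstat; a degraded md0 running on sdb5 alone shows up as [2/1] [_U], much like the line you quoted:

    # After rebooting, confirm md0 came up degraded on sdb5 only
    cat /proc/mdstat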

> or (worst case) will it try to overwrite the data on sdb5 with sda5?

So long as you are using PSBs, this should not happen.
sdb5 will have a later event count, so the kernel will treat it as the up-to-date mirror.
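(As an aside: if you ever want to see those counters yourself and have mdadm around, which postdates the raidtools setup in this thread, its --examine mode prints each member's superblock, including the event count:

    # Compare the "Events" lines; the member with the higher count is trusted.
    mdadm --examine /dev/sda5
    mdadm --examine /dev/sdb5
)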

> BTW, what's the easiest way to replace the failed disk? Will something
> like "dd /dev/sda /dev/sdb count=1"

Yikes !!! this sounds hairy !

No blocksize (or if=/of=) in that dd ...
If the new sdb is identical to sda, copying only the first sector will set up the primary partitions and
the extended partition entry, but it will not set up the logical partitions; their tables live inside
the extended partition, not in the MBR.
I'd do it by hand, but if you are *SURE* the disks are identical, use sfdisk ...
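For the record, the usual sfdisk idiom (assuming sda really is the good disk you are copying the layout from, sdb is the blank new disk, and both have the same size/geometry) is:

    # Dump the full partition table from the good disk and replay it onto
    # the new one. Triple-check the device names first: writing this onto
    # the wrong disk wipes its partition table.
    sfdisk -d /dev/sda | sfdisk /dev/sdb

Unlike a one-sector dd, the sfdisk dump includes the logical partitions inside the extended partition.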

> and "raidhotadd /dev/md0 /dev/sdb5" work?

Once the logical partition is set up, that should work.
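Roughly, with the device names from your mail:

    # Re-add the rebuilt partition to the mirror, then watch the resync.
    raidhotadd /dev/md0 /dev/sdb5
    cat /proc/mdstat    # shows reconstruction progress until it reads [2/2] [UU]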

> (assuming the old sdb becomes sda after the replacement)
(it depends on the SCSI ID and the SCSI bus used. I assume you would replace
sda with a new disk, and leave sdb as is, in which case it would remain sdb)
