I have two mirrored RAIDs, both working just fine even though the 
second drive of each is missing.  After all, that's what RAID is 
supposed to do -- keep working when things are broken.

The RAIDs are mdadm-style Linux software RAIDs.  One holds the /boot 
partition; the other holds an LVM physical volume that contains all 
the other file systems in the system, including root, /usr, /home, 
and the like.

Both drives should contain the mdadm signature information, and the 
same consistent file systems.

Each RAID is spread, in duplicate, across the same two hard drives.
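
For concreteness, here is roughly how I would check the current, 
degraded state.  The array names (md0, md1) below are only 
placeholders for whatever the system actually uses:

    # overall view of both arrays and which members are still active
    cat /proc/mdstat

    # detailed state of each array
    mdadm --detail /dev/md0
    mdadm --detail /dev/md1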


EXCEPT, of course, that one of the drives is now missing.  It was 
physically disconnected by accident while the machine was off, and 
owing to circumstances, has remained disconnected for a significant 
amount of time.

This means that the missing drive has everything needed to boot the 
system, with valid mdadm signatures and valid file systems, except, 
of course, that its contents are now out of date.

If I were to manage to reconnect the absent drive, how would the 
boot-time RAID assembly work?  (Remember, /boot is itself on one of 
the RAIDs.)  Would it be able to figure out which of the two drives 
is up to date, and therefore which one to treat as stale and not use?
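
My understanding is that mdadm keeps an event counter and an update 
time in each member's superblock, so before letting anything 
reassemble I could presumably compare the two copies by hand.  A 
rough sketch, with placeholder device names (sda1 and sdb1 standing 
in for the two copies of one array's member partition):

    # inspect the superblocks of both copies; the member with the
    # higher "Events" count and later "Update Time" should be the
    # current one
    mdadm --examine /dev/sda1
    mdadm --examine /dev/sdb1

    # or pull out just the relevant lines for a quick comparison
    mdadm --examine /dev/sda1 /dev/sdb1 | grep -E '/dev/|Update Time|Events'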

Do I need to wipe the disconnected drive completely before I 
reconnect it?  (I have another machine to do this on, using a 
USB-to-SATA interface.)
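
If wiping does turn out to be the safer route, my plan on the other 
machine would be roughly the following (assuming the old drive shows 
up there as /dev/sdX, with one member partition per array; I'd 
double-check the actual device name first, of course):

    # remove just the mdadm superblocks from the old member partitions
    mdadm --zero-superblock /dev/sdX1
    mdadm --zero-superblock /dev/sdX2

    # or wipe every filesystem/RAID signature on the whole disk
    wipefs -a /dev/sdX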

-- hendrik

