Luke Kenneth Casson Leighton wrote:
yes, mdadm names its RAID arrays by UUID (as can clearly be seen in
/etc/mdadm/mdadm.conf), but does it *also* refer to its *COMPONENT*
drives (internally, non-obviously, and without documentation) by UUID,
and then report to the outside world that it's using whatever name
(/dev/sdX), which can, in this external-drives scenario, change?
l.
The other thing is that both drives in the array carry the same array
UUID, so you need some other way to tell them apart, and the /dev/sd*
view is just fine for that.
And this works fine too, FWIW:
mdadm -D /dev/disk/by-id/md-uuid-*
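If you want the component devices of an array programmatically, the member names can also be pulled out of /proc/mdstat. A minimal sketch, using a canned sample of /proc/mdstat output (the real file only exists on a system with md arrays; on one, pipe /proc/mdstat in directly):

```shell
#!/bin/sh
# Sketch: extract the component devices of each md array from /proc/mdstat.
# A canned sample line is used here for illustration.
sample='md0 : active raid1 sdb1[1] sda1[0]
      976630336 blocks super 1.2 [2/2] [UU]'

echo "$sample" | awk '/^md/ {
    printf "%s:", $1
    for (i = 5; i <= NF; i++) {   # fields after "active <level>" are members
        dev = $i
        sub(/\[.*/, "", dev)      # strip the role index, e.g. sdb1[1] -> sdb1
        printf " /dev/%s", dev
    }
    print ""
}'
# prints: md0: /dev/sdb1 /dev/sda1
```

On a live system, `awk '...' /proc/mdstat` gives the same mapping for every running array.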
So long as mdadm can determine which drives are in use, I don't care how
it refers to them internally. However, if a drive goes bad, then I need
to know which physical one it is.
Let's say that /dev/sda has gone bad in a two-drive RAID1 array; I can
visually identify the drive by doing the following:
dd if=/dev/sda of=/dev/null
Go and look to see which drive is busy [hopefully it will show a
flashing activity LED] and I can see which one has failed -- if that
doesn't work, then I can reverse the test and read from each drive that
is meant to be okay, eliminating them one by one.
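The reversed test can be scripted: read from each supposedly-good member in turn and watch which drive never lights up. A sketch, shown as a dry run that only prints the dd commands (the device list is a hypothetical example; drop the leading `echo` to actually generate the I/O):

```shell
#!/bin/sh
# Sketch: blink each supposedly-good drive's activity LED in turn so the
# silent (failed) drive can be identified by elimination.
# DRIVES is a hypothetical example list; substitute your own members.
DRIVES="/dev/sdb /dev/sdc /dev/sdd"

for d in $DRIVES; do
    echo "reading from $d for 10 seconds -- watch its LED"
    # Dry run: remove the leading 'echo' to actually run the read.
    echo timeout 10 dd if="$d" of=/dev/null bs=1M
done
```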
--
Kind Regards
AndrewM
Andrew McGlashan
Broadband Solutions now including VoIP
--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4e074a21.6000...@affinityvision.com.au