I have RAID set up on my Linux computer, and I had a drive fail.

I ran mdadm --add /dev/md2 /dev/hda2


That fixed the drive, for the moment. However, I'm confused about what
the [0], [1], or [2] at the end of these device entries means in some
circumstances.
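For completeness, the --add above is normally the last step of a member-replacement sequence. Here is a sketch of the full sequence; the --fail/--remove steps are my assumption about what usually comes before the --add (not something from my session), and since the real commands need root and a live array, the sketch only prints them:

```shell
# Sketch of the usual RAID-1 member-replacement steps. Printed rather
# than executed, since real runs need root and a live md array.
# The --fail/--remove steps are assumptions; only --add is from the post.
for cmd in \
    'mdadm /dev/md2 --fail /dev/hda2' \
    'mdadm /dev/md2 --remove /dev/hda2' \
    'mdadm /dev/md2 --add /dev/hda2'
do
    echo "$cmd"
done
```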


For example, when I run the command

cat /proc/mdstat

I get:


[root@smclinux ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hda2[0] hdb2[1]
      38973568 blocks [2/2] [UU]


md1 : active raid1 hda1[0] hdb1[1]
      104320 blocks [2/2] [UU]


unused devices: <none>
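For reference, the bracketed numbers can be pulled out of lines like these mechanically. This is just an illustrative sketch of my own (not something run against the array); the sample line is copied from the output above:

```shell
# Sketch: extract the per-device "name[N]" tokens from an
# /proc/mdstat-style line. The sample text is copied from above.
line='md2 : active raid1 hda2[0] hdb2[1]'
echo "$line" | grep -o '[a-z0-9]*\[[0-9]*\]'
# prints:
# hda2[0]
# hdb2[1]
```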


But while the drive was rebuilding after the mdadm command above, I ran

# watch -n .1 cat /proc/mdstat

(which re-runs cat /proc/mdstat every 0.1 seconds, so you can watch the
rebuild) and I saw this:


Every 0.1s: cat /proc/mdstat                            Tue Jan 31 15:24:05 2012


Personalities : [raid1]
md2 : active raid1 hda2[2] hdb2[1]
      38973568 blocks [2/1] [_U]
      [=>...................]  recovery =  7.8% (3072384/38973568) finish=40.1min speed=14912K/sec
md1 : active raid1 hda1[0] hdb1[1]
      104320 blocks [2/2] [UU]


unused devices: <none>
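As an aside, the difference between [2/2] [UU] above and [2/1] [_U] during the rebuild can also be checked mechanically. A minimal sketch of my own illustration (the sample lines are copied from the two outputs; an underscore in the bitmap marks a missing or rebuilding member):

```shell
# Sketch: classify an mdstat status line as degraded or healthy.
# '_' in the [UU]/[_U] bitmap indicates a missing/rebuilding member.
status='38973568 blocks [2/1] [_U]'
case "$status" in
    *_*) echo "degraded" ;;
    *)   echo "healthy"  ;;
esac
# prints: degraded
```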


I know this all looks very complex, but really all I'm interested in
understanding is what the [2] in hda2[2] or the [1] in hdb2[1] means.


I thought it meant the disk's number in the array, but that doesn't
really make sense: the system only has two drives, yet during the
rebuild I see [0], [1], and [2] in the output above, which would be
three disks. So I must be wrong. Can anyone help me sort out what the
[#] at the end of the device description means?

Thanks.

-- 
You received this message because you are subscribed to the Linux Users Group.
To post a message, send email to [email protected]
To unsubscribe, send email to [email protected]
For more options, visit our group at 
http://groups.google.com/group/linuxusersgroup
References can be found at: http://goo.gl/anqri
Please remember to abide by our list rules (http://tinyurl.com/LUG-Rules or 
http://cdn.fsdev.net/List-Rules.pdf)
