Title: RE: experimenting with raid

You need to use raidhotadd /dev/mdx /dev/lox to add the loop device back into the RAID array.  It will not automatically get sucked back in.

If a hard disk had actually crashed, you would first need to do a raidhotremove /dev/mdx /dev/hdax to remove the bad drive, then do raidhotadd.

You can also use raidsetfaulty /dev/mdx /dev/lox to simulate a catastrophic failure of a segment.
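Putting the three commands together, the fail/replace/rebuild cycle might look like the sequence below. This is only a sketch: the device names /dev/md0 and /dev/loop1 are illustrative, and these raidtools commands need root and a running md array.

```shell
# Mark one member of the array as faulty (simulates a disk failure):
raidsetfaulty /dev/md0 /dev/loop1

# Remove the faulty member from the array:
raidhotremove /dev/md0 /dev/loop1

# After replacing or re-creating the device, add it back; the md
# driver should then start reconstructing onto it:
raidhotadd /dev/md0 /dev/loop1

# Watch reconstruction progress:
cat /proc/mdstat
```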

Note that raidhotremove, raidhotadd, and raidsetfaulty do NOT need raidtab to be accurate; they only need an entry in raidtab for the /dev/md device.

mkraid and raidstart are the only ones that actually read and use the contents of raidtab.

Clay

-----Original Message-----
From: Stephen L. Favor [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 28, 1999 2:10 PM
To: [EMAIL PROTECTED]
Cc: Stephen L. Favor
Subject: experimenting with raid


I created 3 files of the same size tied to loop[0-2] and set up a raid 5 fs
to see what happens on a failure.  I then initialized the array, created an
ext2 FS on it, mounted it and copied some files to it.  At that point, I
detached and wiped the file associated with loop1 which was the 2nd device
in the array.  I was happy to find all my files intact when I remounted the
FS and found the expected errors in /var/log/messages about /dev/loop1 not
being available.
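For reference, the loop-device setup described above might have looked something like this. It is a sketch under assumptions: the backing-file names and sizes are made up, the raidtab quoted further down is presumed to be in place, and losetup/mkraid/mount all need root.

```shell
# Create three equal-sized backing files and attach them to loop devices:
for i in 0 1 2; do
    dd if=/dev/zero of=/tmp/raid$i bs=1k count=1024
    losetup /dev/loop$i /tmp/raid$i
done

# Initialize the array from /etc/raidtab, make an ext2 FS, mount it:
mkraid /dev/md0
mke2fs /dev/md0
mount /dev/md0 /mnt/test
```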

I've basically simulated a catastrophic disk crash.  The puppy is dead and
needs to be replaced.  Since I'm using /dev/loop, after a dd(1) and
losetup(1) /dev/loop1 is now again a functioning device.  It is, however,
filled with zeroes and still not recognized as part of the array.

How do I recover from such a failure?  I now want things to be like they
were--three devices running as a RAID5 array--but I can't seem to find
anything that will reconstruct the device I purposefully wiped.
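[Per the answer at the top of the thread, the missing step after re-creating the wiped device is a raidhotadd. A hedged sketch, reusing the hypothetical file names from the setup above; requires root:]

```shell
# Re-create the backing file and re-attach it to the loop device:
dd if=/dev/zero of=/tmp/raid1 bs=1k count=1024
losetup /dev/loop1 /tmp/raid1

# Tell the md driver to pull it back into the array; reconstruction
# of the missing data from parity then proceeds automatically:
raidhotadd /dev/md0 /dev/loop1
```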

/etc/raidtab:
raiddev /dev/md0
  raid-level      5
  nr-raid-disks   3
  nr-spare-disks  0
  chunk-size      4

  device          /dev/loop0
  raid-disk       0
  device          /dev/loop1
  raid-disk       1
  device          /dev/loop2
  raid-disk       2

/proc/mdstat:
Personalities : [raid0] [raid5]
read_ahead 1024 sectors
md0 : active raid5 [dev 07:02][2] [dev 07:00][0] 1792 blocks level 5, 4k
chunk, algorithm 0 [3/2] [U_U]
unused devices: <none>

/var/log/messages after raidstart:
Sep 28 11:08:08 veruca kernel: (read) [dev 07:00]'s sb offset: 896 [events:
0000000e]
Sep 28 11:08:08 veruca kernel: (read) [dev 07:02]'s sb offset: 896 [events:
0000000e]
Sep 28 11:08:08 veruca kernel: autorun ...
Sep 28 11:08:08 veruca kernel: considering [dev 07:02] ...
Sep 28 11:08:08 veruca kernel:   adding [dev 07:02] ...
Sep 28 11:08:08 veruca kernel:   adding [dev 07:00] ...
Sep 28 11:08:08 veruca kernel: created md0
Sep 28 11:08:08 veruca kernel: bind<[dev 07:00],1>
Sep 28 11:08:08 veruca kernel: md0: WARNING: [dev 07:02] appears to be on
the same physical disk as [dev 07:00]. True
Sep 28 11:08:08 veruca kernel:      protection against single-disk failure
might be compromised.
Sep 28 11:08:08 veruca kernel: bind<[dev 07:02],2>
Sep 28 11:08:08 veruca kernel: running: <[dev 07:02]><[dev 07:00]>
Sep 28 11:08:08 veruca kernel: now!
Sep 28 11:08:08 veruca kernel: [dev 07:02]'s event counter: 0000000e
Sep 28 11:08:08 veruca kernel: [dev 07:00]'s event counter: 0000000e
Sep 28 11:08:08 veruca kernel: md0: max total readahead window set to 256k
Sep 28 11:08:08 veruca kernel: md0: 2 data-disks, max readahead per
data-disk: 128k
Sep 28 11:08:08 veruca kernel: raid5: device [dev 07:02] operational as raid
disk 2
Sep 28 11:08:08 veruca kernel: raid5: device [dev 07:00] operational as raid
disk 0
Sep 28 11:08:08 veruca kernel: raid5: md0, not all disks are operational --
trying to recover array
Sep 28 11:08:08 veruca kernel: raid5: allocated 3197kB for md0
Sep 28 11:08:08 veruca kernel: raid5: raid level 5 set md0 active with 2 out
of 3 devices, algorithm 0
Sep 28 11:08:08 veruca kernel: RAID5 conf printout:
Sep 28 11:08:08 veruca kernel:  --- rd:3 wd:2 fd:1
Sep 28 11:08:08 veruca kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:[dev
07:00]
Sep 28 11:08:08 veruca kernel:  disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:[dev
07:02]
Sep 28 11:08:08 veruca kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel: RAID5 conf printout:
Sep 28 11:08:08 veruca kernel:  --- rd:3 wd:2 fd:1
Sep 28 11:08:08 veruca kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:[dev
07:00]
Sep 28 11:08:08 veruca kernel:  disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 2, s:0, o:1, n:2 rd:2 us:1 dev:[dev
07:02]
Sep 28 11:08:08 veruca kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev
00:00]
Sep 28 11:08:08 veruca kernel: md: updating md0 RAID superblock on device
Sep 28 11:08:08 veruca kernel: [dev 07:02] [events: 0000000f](write) [dev
07:02]'s sb offset: 896
Sep 28 11:08:08 veruca kernel: [dev 07:00] [events: 0000000f](write) [dev
07:00]'s sb offset: 896
Sep 28 11:08:08 veruca kernel: .
Sep 28 11:08:08 veruca kernel: ... autorun DONE.
Sep 28 11:08:08 veruca kernel: md: recovery thread got woken up ...
Sep 28 11:08:08 veruca kernel: md0: no spare disk to reconstruct array! --
continuing in degraded mode
Sep 28 11:08:08 veruca kernel: md: recovery thread finished ...
