I've recently set up two RAID5 arrays for experimentation. This is running on
a Debian potato system with a 2.2.10 kernel that I patched with the most
recent (08-23?) RAID patch, with raid5 built in. The machine is a K6-II 350
on a no-name VIA MVP3 chipset motherboard; I selected the VIA AP2823C (or
something like that) in the kernel config for the VIA bus-master IDE driver.

The IDE on this chipset seems a bit suspect to me, but anyway: I created the
partitions, ran mkraid on the two devices (my raidtab is sketched below, after
the mdstat output), and everything worked. For some reason there was a reboot
last night (it may have been triggered by accident, with all the power cords
down here). Today I noticed that /proc/mdstat gives me:


Personalities : [raid5] 
read_ahead 1024 sectors
md0 : active raid5 hdd1[2] hdc1[1] hdb1[0] 12450176 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid5 hdd2[2] hdb2[0] 12450048 blocks level 5, 128k chunk, algorithm 2 [3/2] [U_U]
unused devices: <none>
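
(For reference, my raidtab for md0 looks roughly like the sketch below. I'm
reconstructing it from memory and from the mdstat output above, so treat the
exact values as approximate; md1 is the same but built on the *2 partitions
with chunk-size 128.)

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        # persistent superblocks are what make boot-time autodetection work
        persistent-superblock   1
        # "algorithm 2" in the mdstat output is left-symmetric parity
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/hdb1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        device                  /dev/hdd1
        raid-disk               2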

I noticed that md1 only has two of its three partitions; hdc2 seems to be
missing. So I checked syslog, and there were quite a few error messages in
it, best summarised by the dmesg output after I rebooted once again:

<irrelevant stuff deleted>

VP_IDE: IDE controller on PCI bus 00 dev 39
VP_IDE: not 100% native mode: will probe irqs later
    ide0: BM-DMA at 0xe000-0xe007, BIOS settings: hda:DMA, hdb:DMA
ide0: VIA Bus-Master (U)DMA Timing Config Success
    ide1: BM-DMA at 0xe008-0xe00f, BIOS settings: hdc:DMA, hdd:DMA
ide1: VIA Bus-Master (U)DMA Timing Config Success
hda: QUANTUM FIREBALL CR4.3A, ATA DISK drive
hdb: QUANTUM FIREBALL EX12.7A, ATA DISK drive
hdc: QUANTUM FIREBALL EX12.7A, ATA DISK drive
hdd: QUANTUM FIREBALL EX12.7A, ATA DISK drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
hda: QUANTUM FIREBALL CR4.3A, 4110MB w/418kB Cache, CHS=524/255/63
hdb: QUANTUM FIREBALL EX12.7A, 12159MB w/418kB Cache, CHS=1550/255/63, UDMA
hdc: QUANTUM FIREBALL EX12.7A, 12159MB w/418kB Cache, CHS=24704/16/63, UDMA
hdd: QUANTUM FIREBALL EX12.7A, 12159MB w/418kB Cache, CHS=24704/16/63, (U)DMA
md driver 0.90.0 MAX_MD_DEVS=256, MAX_REAL=12
raid5 personality registered
raid5: measuring checksumming speed
raid5: MMX detected, trying high-speed MMX checksum routines
   pII_mmx   :   741.807 MB/sec
   p5_mmx    :   667.893 MB/sec
   8regs     :   519.684 MB/sec
   32regs    :   354.711 MB/sec
using fastest function: pII_mmx (741.807 MB/sec)
md.c: sizeof(mdp_super_t) = 4096
Partition check:
 hda: hda1 hda2 < hda5 >
 hdb: hdb1 hdb2
 hdc: [PTBL] [1550/255/63] hdc1 hdc2
 hdd: [PTBL] [1550/255/63] hdd1 hdd2
autodetecting RAID arrays
(read) hdb1's sb offset: 6225088 [events: 00000006]
(read) hdb2's sb offset: 6225088 [events: 0000000b]
(read) hdc1's sb offset: 6225088 [events: 00000006]
(read) hdc2's sb offset: 6225088 [events: 00000001]
(read) hdd1's sb offset: 6225088 [events: 00000006]
(read) hdd2's sb offset: 6225088 [events: 0000000b]
autorun ...
considering hdd2 ...
  adding hdd2 ...
  adding hdc2 ...
  adding hdb2 ...
created md1
bind<hdb2,1>
bind<hdc2,2>
bind<hdd2,3>
running: <hdd2><hdc2><hdb2>
now!
hdd2's event counter: 0000000b
hdc2's event counter: 00000001
hdb2's event counter: 0000000b
md: superblock update time inconsistency -- using the most recent one
freshest: hdd2
md: kicking non-fresh hdc2 from array!
unbind<hdc2,2>
export_rdev(hdc2)
md1: max total readahead window set to 1024k
md1: 2 data-disks, max readahead per data-disk: 512k
raid5: device hdd2 operational as raid disk 2
raid5: device hdb2 operational as raid disk 0
raid5: md1, not all disks are operational -- trying to recover array
raid5: allocated 3197kB for md1
raid5: raid level 5 set md1 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdb2
 disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdd2
 disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdb2
 disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdd2
 disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
md: updating md1 RAID superblock on device
hdd2 [events: 0000000c](write) hdd2's sb offset: 6225088
md: recovery thread got woken up ...
md1: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
hdb2 [events: 0000000c](write) hdb2's sb offset: 6225088
.
considering hdd1 ...
  adding hdd1 ...
  adding hdc1 ...
  adding hdb1 ...
created md0
bind<hdb1,1>
bind<hdc1,2>
bind<hdd1,3>
running: <hdd1><hdc1><hdb1>
now!
hdd1's event counter: 00000006
hdc1's event counter: 00000006
hdb1's event counter: 00000006
md0: max total readahead window set to 256k
md0: 2 data-disks, max readahead per data-disk: 128k
raid5: device hdd1 operational as raid disk 2
raid5: device hdc1 operational as raid disk 1
raid5: device hdb1 operational as raid disk 0
raid5: allocated 3197kB for md0
raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 2
RAID5 conf printout:
 --- rd:3 wd:3 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdb1
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdc1
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdd1
 disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
RAID5 conf printout:
 --- rd:3 wd:3 fd:0
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdb1
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hdc1
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdd1
 disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
md: updating md0 RAID superblock on device
hdd1 [events: 00000007](write) hdd1's sb offset: 6225088
hdc1 [events: 00000007](write) hdc1's sb offset: 6225088
hdb1 [events: 00000007](write) hdb1's sb offset: 6225088
.
... autorun DONE.
VFS: Mounted root (ext2 filesystem) readonly.
Freeing unused kernel memory: 36k freed
Adding Swap: 128484k swap-space (priority -1)
<end>

I'm not sure what happened, but I am using the newest raidtools with this
kernel patch, and there doesn't seem to be a ckraid any more. Everything, I
thought, was automatic (the md recovery thread, etc.). Is there a way I can
force it to recognize hdc2 as good and use it again? It can't be a failed
disk, because hdc1 is working fine in /dev/md0 (hdb1 hdc1 hdd1).
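
From reading the raidtools 0.90 docs, my best guess is that raidhotadd is the
tool for this; am I right in thinking something like the following would do
it? (I'm assuming the disk itself is fine and the superblock on hdc2 just has
a stale event counter, as the dmesg output suggests.)

# re-add the kicked partition to the degraded array; the kernel's
# recovery thread should then reconstruct its contents from parity
raidhotadd /dev/md1 /dev/hdc2

# watch the rebuild progress
cat /proc/mdstat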

Oh, and yes, I know I have two IDE devices on the secondary channel; this is
just for testing purposes, and I have already ordered a couple of Promise
Ultra33s.

Any help would be appreciated.

-- 
Kiyan Azarbar                       Sound Foundation Inc.
[EMAIL PROTECTED]                    R.A. Fencing Club
http://anduin.kublai.com/~kiyan     PGPkeyID: 0x9A9EC5EA
