Red Hat 6.1, kernel 2.2.13 with ide.2.2.13.19991102.patch (Andre Hedrick) and raid-2.2.14-A2 (Ingo Molnar) patches applied. Maxtor 27 GB IDE drives attached via Promise Ultra66 cards.

So I have my sleek raid10 array all commissioned and populated with data:

[root@osmin own]# cat /proc/mdstat
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md2 : active raid1 md1[1] md0[0] 53176768 blocks [2/2] [UU]
md1 : active raid0 hdk1[1] hdg1[0] 53176832 blocks 32k chunks
md0 : active raid0 hdi1[1] hde1[0] 53176832 blocks 32k chunks
unused devices: <none>
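For reference, the arrays were built from an /etc/raidtab along these lines (reconstructed from the mdstat output above, so take it as a sketch rather than a verbatim copy of my config):

# /etc/raidtab -- sketch reconstructed from the /proc/mdstat output above
raiddev /dev/md0
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/hde1
    raid-disk               0
    device                  /dev/hdi1
    raid-disk               1

raiddev /dev/md1
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/hdg1
    raid-disk               0
    device                  /dev/hdk1
    raid-disk               1

raiddev /dev/md2
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/md0
    raid-disk               0
    device                  /dev/md1
    raid-disk               1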
Then I try unplugging one of the drives, thinking that the raid0 array will fail and be marked bad, leaving me with only half of my raid1 array and allowing me to run in degraded mode... but instead I see:

Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: DMA disabled
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: ide3: reset: master: error (0x00?)
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: hdg: drive not ready for command
Nov 24 21:29:27 osmin kernel: ide3: reset: master: error (0x00?)
Nov 24 21:29:27 osmin kernel: hdg: status error: status=0x00 { }
Nov 24 21:29:27 osmin kernel: end_request: I/O error, dev 22:01 (hdg), sector 32506056
Nov 24 21:29:27 osmin kernel: md: bug in file md.c, line 485
Nov 24 21:29:27 osmin kernel:
Nov 24 21:29:27 osmin kernel: **********************************
Nov 24 21:29:27 osmin kernel: * <COMPLETE RAID STATE PRINTOUT> *
Nov 24 21:29:27 osmin kernel: **********************************
Nov 24 21:29:27 osmin kernel: md2: <md1><md0> array superblock:
Nov 24 21:29:27 osmin kernel: SB: (V:0.90.0) ID:<9a8b4f15.48c3e86c.f14f7969.f8844d0f> CT:383af527
Nov 24 21:29:27 osmin kernel: L1 S53176768 ND:2 RD:2 md2 LO:0 CS:32768
Nov 24 21:29:27 osmin kernel: UT:383c4cd8 ST:0 AD:2 WD:2 FD:0 SD:0 CSUM:e9f1792c E:00000007
Nov 24 21:29:27 osmin kernel: D 0: DISK<N:0,md0(9,0),R:0,S:6>
Nov 24 21:29:27 osmin kernel: D 1: DISK<N:1,md1(9,1),R:1,S:6>
Nov 24 21:29:27 osmin kernel: D 2: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 3: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 4: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 5: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 6: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 7: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 8: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 9: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 10: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: D 11: DISK<N:0,[dev 00:00](0,0),R:0,S:4>
Nov 24 21:29:27 osmin kernel: THIS: DISK<N:1,md1(9,1),R:1,S:6>
Nov 24 21:29:27 osmin kernel: rdev md1: O:md1, SZ:53176768 F:0 DN:1
Nov 24 21:29:27 osmin kernel: rdev superblock:
Nov 24 21:29:27 osmin kernel: SB: (V:0.90.0) ID:<9a8b4f15.48c3e86c.f14f7969.f8844d0f> CT:383af527
Nov 24 21:29:27 osmin kernel: L1 S53176768 ND:2 RD:2 md2 LO:0 CS:32768
Nov 24 21:29:27 osmin kernel: UT:383c4cd8 ST:0 AD:2 WD:2 FD:0 SD:0 CSUM:e9f1797e E:00000007
Nov 24 21:29:27 osmin kernel: D 0: DISK<N:0,md0(9,0),R:0,S:6>

Alas, nothing appears to be wrong according to /proc/mdstat:

[root@osmin src]# cat /proc/mdstat
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md2 : active raid1 md1[1] md0[0] 53176768 blocks [2/2] [UU]
md1 : active raid0 hdk1[1] hdg1[0] 53176832 blocks 32k chunks
md0 : active raid0 hdi1[1] hde1[0] 53176832 blocks 32k chunks
unused devices: <none>

Is this the expected behaviour?

-Darren
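P.S. If hot-unplugging a cable is simply not a failure mode the IDE layer can report cleanly, should I be simulating the disk failure in software instead? Something like the following, assuming my raidtools build actually ships raidsetfaulty and that it accepts a stacked md device as a component (I haven't verified either):

# hypothetical test sequence -- raidsetfaulty may not be present in all
# raidtools builds, and it is unverified against an md-on-md component
raidsetfaulty /dev/md2 /dev/md1    # mark the md1 half of the mirror failed
raidhotremove /dev/md2 /dev/md1    # detach it; md2 should now run degraded
raidhotadd /dev/md2 /dev/md1       # later: re-add it and let md2 resync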