Hello,

Summary:

1) I made a disk-killing device to test software raid1

2) There is a problem with 'raidhotadd' in that the device is not
initialized if it is added to the chain after the machine boots. (I
think someone else reported that in the last few weeks.) 

3) I think there is a more serious problem that causes raid1 resync
to keep going after one of the disks fails.


NOTE: After I performed procedure 1b and rebooted the system, my root
filesystem was really messed up according to fsck.  The RAID disks (2 SCSI
disks on an Adaptec controller) are nowhere near my root disk (1 IDE
disk).  I had to hold down the 'Y' key for about 5 minutes in single
user mode.


---------------------------------------------------------------------------

Details:


I'm experimenting with the md driver configured with the RAID 1
personality and an Adaptec SCSI card. I'm using the Red Hat 6.2
distribution with the 2.2.16 kernel as distributed in the Red Hat
updates (which, by the way, includes the .90 raid patch and some SCSI patches.)


[cgi@dru1a cgi]$ cat /etc/raidtab.md0 
raiddev             /dev/md0
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
#nr-spare-disks     0
    device          /dev/sda1
    raid-disk     0
    device          /dev/sdb1
    raid-disk     1


I'm trying to build a set of automated procedures for replacing failed
disks, and also trying to see what will happen when a disk fails in
general.  I'm simulating disk failure with my 'disk killer' setup.
I've rigged up a standard PC power connector with a little switch:


  +----+                      /           +----+
 /|    |----------- +5 ------o   o------- |    |\  
|.|    |----------- GND ----------------- |    |.| 
|.|    |----------- GND ----------------- |    |.|  
|.|    |----------- +12 ----------------- |    |.| 
|.+----+                                  +----+.|
|/----/                                    \----\|



When I hit the switch, the circuitry in the drive goes silent.
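
(An aside: on these kernels there is also a software knob for detaching
and re-attaching devices at the SCSI mid-layer, the /proc/scsi/scsi
interface.  It's no substitute for the power switch for the in-flight
failure case, since the mid-layer refuses to remove a busy device, but it
may be handy for forcing a fresh probe of a drive after it has been
powered back on.  A rough sketch, assuming host 0, channel 0, id 2, lun 0
as in the logs below:

/* proc_scsi_remove.c - hypothetical helper: detach a disk from the SCSI
 * mid-layer without touching the hardware.  "scsi add-single-device"
 * with the same four numbers puts it back and re-runs the probe.
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/scsi/scsi", "w");

        if (f == NULL) {
                perror("/proc/scsi/scsi");
                return 1;
        }
        /* format: "scsi remove-single-device <host> <channel> <id> <lun>" */
        fprintf(f, "scsi remove-single-device 0 0 2 0\n");
        fclose(f);
        return 0;
}
)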

Unfortunately, performing this little experiment while the md device
is being written to results in a machine that is unusable (but it
doesn't crash or completely freeze.)

What I have noticed is that after I kill one of the disks, the kernel
keeps trying to write to the disk, even after lots of error messages
from the driver saying that the disk is dead.  When I reboot the
machine, the dead drive is marked bad and not used anymore.  The
machine is not dead, but it is extremely unresponsive and the VT
logins are unusable (error messages are pouring by.)

Now, just to throw a wrench in the works, what I have is a shared SCSI
arrangement.  Machine two has been running all this time, so I power off
machine one, which is by now gasping for breath, and mount the raid
device on machine two.  Everything is fine (other than the fact that
one of the disks is now absent from the mirror set).

I'm beginning a set of controlled experiments with this setup to try
to track down some of the problems.  The first problem I've found is
that the logic in the HOT_ADD_DISK ioctl (from
'raidhotadd'?) does not re-initialize the LUN... BUT, as the experiments
below show, that isn't the only problem.

---------------------------------------------------------------------------

Experiment 1a:  Very small amount of I/O, failure occurs between writes
Result:        System is still usable after disk failure, disk falls out of RAID


$ touch /data1/prefail
(nothing happens)
$ sync
(the lights flash on the 2 disks)
$ # Now, I hit the switch attached to /dev/sdb
$ touch /data1/postfail
(nothing happens)
$ sync
(a few error messages spew from the console)
$ cat /proc/mdstat 
(shows /dev/sdb1 marked as 'failed')

# After these errors, the disk can be successfully accessed with no further 
# errors. 


Error messages from Experiment 1a:

Summary: 
  2 bus resets
  5 'scsidisk I/O error' messages
  5 pairs of 'md: recovery thread got woken up' / 'md: recovery thread finished'
  4 sets of 'md0: no spare disk to reconstruct array!'
  

---------------------------------------------------------------------------

Experiment 1b: Performed after Experiment 1a

$ # The disk is powered back on

(the red light goes on for a little while and then goes off)

$ /sbin/raidhotremove --configfile /etc/raidtab.md0 /dev/md0 /dev/sdb1
( no problems)
$ /sbin/raidhotadd --configfile /etc/raidtab.md0 /dev/md0 /dev/sdb1
( the error messages go nuts! consoles are unusable)
(the light on /dev/sda flashes weakly, not like during a RAID rebuild)

CTRL-ALT-DEL fails to restart the system

After the raid hot add there are some messages about the state of the array.


Then the following stanza repeats in the log file for over 30 minutes
(the sector number changes):

Sep 14 06:15:58 dru1a kernel: scsidisk I/O error: dev 08:11, sector 104 
Sep 14 06:15:58 dru1a kernel: interrupting MD-thread pid 6 
Sep 14 06:15:58 dru1a kernel: raid1: only one disk left and IO error. 
Sep 14 06:15:58 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 28000002 
Sep 14 06:15:58 dru1a kernel: [valid=0] Info fld=0x0, Current sd08:11: sense key Not Ready 
Sep 14 06:15:58 dru1a kernel: Additional sense indicates Logical unit not ready, initializing command required 


OK, obviously the proper SCSI init sequence should have been sent to this
device before adding it back, and it wasn't.
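
If so, a userspace workaround might be to kick the drive yourself before
the hot-add.  glibc exposes SCSI_IOCTL_START_UNIT in <scsi/scsi_ioctl.h>,
which asks the mid-layer to send a START STOP UNIT to the device; here is
a minimal sketch (untested against this kernel, so treat it as an
assumption, not a verified fix):

/* startunit.c - hypothetical helper: spin up / reinitialize a SCSI disk
 * before running raidhotadd, e.g.  ./startunit /dev/sdb
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/scsi_ioctl.h>    /* SCSI_IOCTL_START_UNIT */

int main(int argc, char **argv)
{
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
                return 1;
        }
        /* O_NONBLOCK so the open itself doesn't choke on a not-ready unit */
        fd = open(argv[1], O_RDONLY | O_NONBLOCK);
        if (fd < 0) {
                perror(argv[1]);
                return 1;
        }
        /* ask the mid-layer to issue START STOP UNIT (start) to the LUN */
        if (ioctl(fd, SCSI_IOCTL_START_UNIT) < 0) {
                perror("SCSI_IOCTL_START_UNIT");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}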

---------------------------------------------------------------------------

Besides not re-initializing the LUN, there seems to be a problem in that
the recovery thread does not give up trying to reconstruct the disk
when it is plainly bad.  I think there is a problem with the logic in
raid1.c.  Looking at raid1_diskop(), the number of working disks is not
incremented when you hot-add a disk, so we'll always hit the first
branch of the if statement in raid1_error():


static int raid1_error (mddev_t *mddev, kdev_t dev)
{
        raid1_conf_t *conf = mddev_to_conf(mddev);
        struct mirror_info * mirrors = conf->mirrors;
        int disks = MD_SB_DISKS;
        int i;

        /*
         * >>>>> Somehow, we have gotten to this point in the reconstruct,
         * >>>>> but the number of working disks is only 1.
         * >>>>> Unfortunately, the working disk is NOT the one that is
         * >>>>> returning the error - it's the recently replaced mirror!
         */
        if (conf->working_disks == 1) {
                /*
                 * Uh oh, we can do nothing if this is our last disk, but
                 * first check if this is a queued request for a device
                 * which has just failed.
                 */
                for (i = 0; i < disks; i++) {
                        if (mirrors[i].dev==dev && !mirrors[i].operational)
                                return 0;
                }
                printk (LAST_DISK);
        } else {
                /*
                 * Mark disk as unusable
                 */
                for (i = 0; i < disks; i++) {
                        if (mirrors[i].dev==dev && mirrors[i].operational) {
                                mark_disk_bad (mddev, i);
                                break;
                        }
                }
        }
        return 0;
}


-----------------------------------------------------------------------------


I'm kind of stumped over what the correct logic should be, but I think
it is clear that something should be done in raid1_error() to stop the
recovery thread if an error occurs during a reconstruction.  I'll be
happy to help test this further if someone can give some direction!  I
guess the next thing I will try is to add some printk()'s...
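
For what it's worth, here is the flavor of guard I'm imagining for the
last-disk branch above.  This is an untested sketch: mirrors[i].spare,
mddev->recovery_running, md_recovery_thread, and md_interrupt_thread()
are my reading of the 0.90 structures, so take it as a starting point
rather than a patch:

        /*
         * Hypothetical: if the device returning the error is the spare
         * we are currently resyncing onto, take it out of the spare
         * pool and kill the resync, instead of quietly ignoring the
         * error and letting the recovery thread retry forever.
         */
        if (conf->working_disks == 1) {
                for (i = 0; i < disks; i++) {
                        if (mirrors[i].dev==dev && !mirrors[i].operational) {
                                if (mirrors[i].spare) {
                                        mirrors[i].spare = 0;
                                        if (mddev->recovery_running)
                                                md_interrupt_thread(md_recovery_thread);
                                }
                                return 0;
                        }
                }
                printk (LAST_DISK);
        }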

Below is the full text of the messages I'm getting after 1a and 1b.

-Eric.

---------------------------------------------------------------------------
Experiment 1a:

Sep 14 06:13:10 dru1a kernel: scsi0 channel 0 : resetting for second half of retries. 
Sep 14 06:13:10 dru1a kernel: SCSI bus is being reset for host 0 channel 0. 
Sep 14 06:13:14 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 26030000 
Sep 14 06:13:14 dru1a kernel: scsidisk I/O error: dev 08:11, sector 0 
Sep 14 06:13:14 dru1a kernel: raid1: Disk failure on sdb1, disabling device.  
Sep 14 06:13:14 dru1a kernel:        Operation continuing on 1 devices 
Sep 14 06:13:14 dru1a kernel: md: recovery thread got woken up ... 
Sep 14 06:13:14 dru1a kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode 
Sep 14 06:13:14 dru1a kernel: md: recovery thread finished ... 
Sep 14 06:13:14 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 26030000 
Sep 14 06:13:14 dru1a kernel: scsidisk I/O error: dev 08:11, sector 4040 
Sep 14 06:13:14 dru1a kernel: md: recovery thread got woken up ... 
Sep 14 06:13:14 dru1a kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode 
Sep 14 06:13:14 dru1a kernel: md: recovery thread finished ... 
Sep 14 06:13:14 dru1a kernel: (scsi0:0:1:0) Synchronous at 10.0 Mbyte/sec, offset 15. 
Sep 14 06:13:14 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 26030000 
Sep 14 06:13:14 dru1a kernel: scsidisk I/O error: dev 08:11, sector 24 
Sep 14 06:13:14 dru1a kernel: md: recovery thread got woken up ... 
Sep 14 06:13:14 dru1a kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode 
Sep 14 06:13:14 dru1a kernel: md: recovery thread finished ... 
Sep 14 06:13:16 dru1a kernel: scsi0 channel 0 : resetting for second half of retries. 
Sep 14 06:13:16 dru1a kernel: SCSI bus is being reset for host 0 channel 0. 
Sep 14 06:13:19 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 26030000 
Sep 14 06:13:19 dru1a kernel: scsidisk I/O error: dev 08:11, sector 8 
Sep 14 06:13:19 dru1a kernel: md: recovery thread got woken up ... 
Sep 14 06:13:19 dru1a kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode 
Sep 14 06:13:19 dru1a kernel: md: recovery thread finished ... 
Sep 14 06:13:19 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 26030000 
Sep 14 06:13:19 dru1a kernel: scsidisk I/O error: dev 08:11, sector 32 
Sep 14 06:13:19 dru1a kernel: md: recovery thread got woken up ... 
Sep 14 06:13:19 dru1a kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode 
Sep 14 06:13:19 dru1a kernel: md: recovery thread finished ... 
Sep 14 06:13:34 dru1a PAM_pwdb[569]: (login) session opened for user root by LOGIN(uid=0)
Sep 14 06:13:38 dru1a kernel: (scsi0:0:1:0) Synchronous at 10.0 Mbyte/sec, offset 15. 

That's all!

---------------------------------------------------------------------------

Experiment 1b:

Sep 14 06:13:38 dru1a kernel: (scsi0:0:1:0) Synchronous at 10.0 Mbyte/sec, offset 15. 
Sep 14 06:15:50 dru1a kernel: trying to remove sdb1 from md0 ...  
Sep 14 06:15:50 dru1a kernel: RAID1 conf printout: 
Sep 14 06:15:50 dru1a kernel:  --- wd:1 rd:2 nd:2 
Sep 14 06:15:50 dru1a kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1 
Sep 14 06:15:50 dru1a kernel:  disk 1, s:0, o:0, n:1 rd:1 us:1 dev:sdb1 
Sep 14 06:15:50 dru1a kernel:  disk 2, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel: RAID1 conf printout: 
Sep 14 06:15:50 dru1a kernel:  --- wd:1 rd:2 nd:1 
Sep 14 06:15:50 dru1a kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1 
Sep 14 06:15:50 dru1a kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 2, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:50 dru1a kernel: unbind<sdb1,1> 
Sep 14 06:15:50 dru1a kernel: export_rdev(sdb1) 
Sep 14 06:15:50 dru1a kernel: md: updating md0 RAID superblock on device 
Sep 14 06:15:50 dru1a kernel: sda1 [events: 0000000a](write) sda1's sb offset: 1024896 
Sep 14 06:15:50 dru1a kernel: . 
Sep 14 06:15:55 dru1a kernel: trying to hot-add sdb1 to md0 ...  
Sep 14 06:15:55 dru1a kernel: bind<sdb1,2> 
Sep 14 06:15:55 dru1a kernel: RAID1 conf printout: 
Sep 14 06:15:55 dru1a kernel:  --- wd:1 rd:2 nd:1 
Sep 14 06:15:55 dru1a kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1 
Sep 14 06:15:55 dru1a kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 2, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:55 dru1a kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel: RAID1 conf printout: 
Sep 14 06:15:56 dru1a kernel:  --- wd:1 rd:2 nd:2 
Sep 14 06:15:56 dru1a kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1 
Sep 14 06:15:56 dru1a kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 2, s:1, o:0, n:2 rd:2 us:1 dev:sdb1 
Sep 14 06:15:56 dru1a kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel: md: updating md0 RAID superblock on device 
Sep 14 06:15:56 dru1a kernel: sdb1 [events: 0000000b](write) sdb1's sb offset: 1024896 
Sep 14 06:15:56 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 28000002 
Sep 14 06:15:56 dru1a kernel: [valid=0] Info fld=0x0, Current sd08:11: sense key Not Ready 
Sep 14 06:15:56 dru1a kernel: Additional sense indicates Logical unit not ready, initializing command required 
Sep 14 06:15:56 dru1a kernel: scsidisk I/O error: dev 08:11, sector 2049792 
Sep 14 06:15:56 dru1a kernel: sda1 [events: 0000000b](write) sda1's sb offset: 1024896 
Sep 14 06:15:56 dru1a kernel: . 
Sep 14 06:15:56 dru1a kernel: md: recovery thread got woken up ... 
Sep 14 06:15:56 dru1a kernel: md0: resyncing spare disk sdb1 to replace failed disk 
Sep 14 06:15:56 dru1a kernel: RAID1 conf printout: 
Sep 14 06:15:56 dru1a kernel:  --- wd:1 rd:2 nd:2 
Sep 14 06:15:56 dru1a kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1 
Sep 14 06:15:56 dru1a kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 2, s:1, o:0, n:2 rd:2 us:1 dev:sdb1 
Sep 14 06:15:56 dru1a kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel: RAID1 conf printout: 
Sep 14 06:15:56 dru1a kernel:  --- wd:1 rd:2 nd:2 
Sep 14 06:15:56 dru1a kernel:  disk 0, s:0, o:1, n:0 rd:0 us:1 dev:sda1 
Sep 14 06:15:56 dru1a kernel:  disk 1, s:0, o:0, n:1 rd:1 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 2, s:1, o:1, n:2 rd:2 us:1 dev:sdb1 
Sep 14 06:15:56 dru1a kernel:  disk 3, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 4, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel:  disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00] 
Sep 14 06:15:56 dru1a kernel: md: syncing RAID array md0 
Sep 14 06:15:56 dru1a kernel: md: minimum _guaranteed_ reconstruction speed: 100 KB/sec. 
Sep 14 06:15:56 dru1a kernel: md: using maximum available idle IO bandwith for reconstruction. 
Sep 14 06:15:56 dru1a kernel: md: using 128k window. 
Sep 14 06:15:56 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 28000002 
Sep 14 06:15:56 dru1a kernel: [valid=0] Info fld=0x0, Current sd08:11: sense key Not Ready 
Sep 14 06:15:56 dru1a kernel: Additional sense indicates Logical unit not ready, initializing command required 
Sep 14 06:15:56 dru1a kernel: scsidisk I/O error: dev 08:11, sector 0 
Sep 14 06:15:56 dru1a kernel: interrupting MD-thread pid 6 
Sep 14 06:15:56 dru1a kernel: raid1: only one disk left and IO error. 
Sep 14 06:15:56 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 28000002 
Sep 14 06:15:56 dru1a kernel: [valid=0] Info fld=0x0, Current sd08:11: sense key Not Ready 
Sep 14 06:15:56 dru1a kernel: Additional sense indicates Logical unit not ready, initializing command required 
Sep 14 06:15:56 dru1a kernel: scsidisk I/O error: dev 08:11, sector 128 
Sep 14 06:15:56 dru1a kernel: interrupting MD-thread pid 6 
Sep 14 06:15:56 dru1a kernel: raid1: only one disk left and IO error. 
Sep 14 06:15:56 dru1a kernel: SCSI disk error : host 0 channel 0 id 2 lun 0 return code = 28000002 
Sep 14 06:15:56 dru1a kernel: [valid=0] Info fld=0x0, Current sd08:11: sense key Not Ready 

... repeated for 30 minutes - use your imagination...