Hi, I seem to have the same problem on my system.
I have copy-pasted everything I think is relevant (sorry for the long paste), and I also have some time to tinker with it if necessary.

Thanks,
-kurt

-------------- mdadm --version -----------------------
Nasqueron:/home/oberon# mdadm --version
mdadm - v3.1.1 - 19th November 2009
Nasqueron:/home/oberon#

-------------- daemon.log -----------------------
Nasqueron:/home/oberon# grep md /var/log/daemon.log
Mar 22 20:51:02 Nasqueron mdadm[3204]: DegradedArray event detected on md device /dev/md3
Mar 22 20:51:02 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 20:51:02 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 20:51:03 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 20:51:07 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 20:52:43 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 20:52:51 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 20:55:39 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 21:00:39 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 21:00:40 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 22 21:05:40 Nasqueron mdadm[3204]: NewArray event detected on md device /dev/md1
Mar 23 06:31:17 Nasqueron mdadm[12306]: DegradedArray event detected on md device /dev/md3
Mar 23 06:31:17 Nasqueron mdadm[12306]: NewArray event detected on md device /dev/md1
Mar 23 06:31:25 Nasqueron mdadm[12306]: NewArray event detected on md device /dev/md1
Mar 23 06:31:37 Nasqueron mdadm[12306]: NewArray event detected on md device /dev/md1
Mar 23 06:33:03 Nasqueron mdadm[16824]: DegradedArray event detected on md device /dev/md3
Mar 23 06:33:03 Nasqueron mdadm[16824]: NewArray event detected on md

-------------- mdstat -----------------------
Nasqueron:/home/oberon# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb1[0] sdd1[1]
      488386432 blocks [2/2] [UU]

md3 : active (auto-read-only) raid1 sdc7[1]
      1951744 blocks [2/1] [_U]

md1 : inactive sdc6[1]
      85939584 blocks

md0 : active raid1 sda1[0] sdc5[1]
      29294400 blocks [2/2] [UU]

unused devices: <none>

-------------- mdadm --detail -----------------------
Nasqueron:/home/oberon# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Thu Sep 25 01:10:51 2008
     Raid Level : raid0
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue May 12 21:50:23 2009
          State : active, degraded, Not Started
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 12a57a1a:bb75e74a:e6dc959d:b20685c9 (local to host Nasqueron.wattstrasse.ch)
         Events : 0.13

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       38        1      active sync   /dev/sdc6

-------------- fdisk -----------------------
Nasqueron:/home/oberon# fdisk /dev/sdc

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xffffffff

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       14589   117186111    5  Extended
/dev/sdc2           14590       19457    39102210   83  Linux
/dev/sdc5               1        3647    29294464+  fd  Linux raid autodetect
/dev/sdc6            3648       14346    85939686   83  Linux
/dev/sdc7           14347       14589     1951866   83  Linux

Command (m for help): q

-------------- mdadm.conf -----------------------
Nasqueron:/home/oberon# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=10871c4b:9ce84f01:e6dc959d:b20685c9
#ARRAY /dev/md1 level=raid0 num-devices=2 UUID=12a57a1a:bb75e74a:e6dc959d:b20685c9
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=18648164:c7425e5b:6432fcc9:a8d99cf3
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=5ca2a15c:080e2332:13a9eb2f:868bdcc4
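
In case it is useful, here is roughly what I plan to try next on my end. This is only a sketch (I have not run any of it yet, and the exact mdadm options may need adjusting), but I can report back with the output:

# Look at the superblock on the surviving member of md1, to see which
# device it expects as the second raid0 member:
mdadm --examine /dev/sdc6

# List every md superblock mdadm can find on the partitions:
mdadm --examine --scan

# Then stop the half-assembled md1, uncomment its ARRAY line in
# /etc/mdadm/mdadm.conf, and try assembling it again from the config:
mdadm --stop /dev/md1
mdadm --assemble --scan /dev/md1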