Public bug reported:

Ever since Feisty, when I first set up this RAID configuration, mdadm has not
always assembled the arrays correctly at boot, even though the setup appears to
be fine. I just did a fresh format and install of Gutsy and it is still
happening. During early boot the progress bar starts to load, gets about 1/5th
of the way, and locks up (disk activity light on solid, high I/O). I can go to
a console, and I have learned that at this point I need to press Ctrl+Alt+Del
to reboot, or else the boot will eventually fail out. Eventually my system will
boot.

Once I was able to get a terminal: ps aux | grep mdadm
root      4508  0.0  0.0  10360   608 ?        S<   00:49   0:00 /lib/udev/watershed /sbin/mdadm --assemble --scan --no-degraded
root      4509  0.0  0.0  10364   580 ?        S<   00:49   0:00 /lib/udev/watershed /sbin/mdadm --assemble --scan --no-degraded
root      8436  0.0  0.0  12392   528 ?        Ss   00:54   0:00 /sbin/mdadm --monitor --pid-file /var/run/mdadm/monitor.pid --daemonise --scan --syslog
root      8892 34.1 26.6 562048 550600 ?       D<   00:56   0:34 /sbin/mdadm --assemble --scan --no-degraded
chris     8951  0.0  0.0   5124   836 pts/2    S+   00:57   0:00 grep mdadm

Notice that the last mdadm process kicked off during boot is using 27% of my
memory and climbing fast; within about a minute all of my RAM (2GB) is gone and
it eats into swap until that is consumed too and the system becomes unusable.
This happens on roughly half of my boots, and the drives are always detected
during POST. Even if I killall the mdadm processes, they reappear with the same
behavior. I suspect some kind of race condition between udev/mount/mdadm that
leaves my drives in a state where they will not assemble. I even tried adding a
udevsettle timeout to the init scripts, but it didn't help (roughly what I
tried is shown below).
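
For reference, the udevsettle change I tried was along these lines. The script
location is from memory and may not match a stock Gutsy initramfs exactly, so
treat the path as an assumption:

    # Local edit inside the initramfs udev premount script
    # (path is an assumption, something like
    #  /usr/share/initramfs-tools/scripts/init-premount/udev)
    # Give udev up to 30 seconds to finish processing block-device events
    # before mdadm --assemble --scan is run.
    udevsettle --timeout=30

    # Rebuild the initramfs afterwards so the change takes effect:
    #   sudo update-initramfs -u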

Whether I ran mdadm --assemble --scan manually or just checked dmesg after a
failed boot, the kernel log would flood with this:
md: array md0 already has disks!
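
A quick way to see how badly the log is flooding (just an illustrative
one-liner):

    dmesg | grep -c 'already has disks'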

Until recently the --no-degraded option was not on by default, and I would
almost always lose a drive from the array if I restarted with Ctrl+Alt+Del
during the mount attempt. I would then have to re-add it by hand (example
below).
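
The re-add itself was along these lines (the device name here is just an
example from my setup, not necessarily the member that actually dropped out):

    # See which member went missing, then add it back and watch the rebuild
    cat /proc/mdstat
    mdadm /dev/md0 --add /dev/sde
    watch cat /proc/mdstat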

Setup:
Latest Gutsy Ubuntu
AMD Opteron 170 (X2)

/dev/md0:
4x300GB drives

/dev/md1:
5x500GB drives

md0 : active raid5 sde[0] hdd[3] hdb[2] sdf[1]
      879171840 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid5 sdh1[0] sdd1[4] sdc1[3] sdb1[2] sda1[1]
      1953535744 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

[EMAIL PROTECTED]:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#ARRAY /dev/md0 level=raid0 num-devices=2 UUID=db874dd1:759d986d:cc05af5c:cfa1abed
ARRAY /dev/md1 level=raid5 num-devices=5 UUID=9c2534ac:1de3420b:e368bf24:bd0fce41
ARRAY /dev/md2 level=raid0 num-devices=2 UUID=0cc8706d:e9eedd66:33a70373:7f0eea01
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=af45c1d8:338c6b67:e4ad6b92:34a89b78

# This file was auto-generated on Wed, 06 Jun 2007 19:58:28 -0700
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $

** Affects: ubuntu
     Importance: Undecided
         Status: New

-- 
Boot with Software Raid most times causes mdadm to not complete (possible race)
https://bugs.launchpad.net/bugs/140854