** Description changed:

  Binary package hint: mdadm

  In testing RAID1 according to
  http://testcases.qa.ubuntu.com/Install/ServerRAID1, the system
  installed fine and both disks show up in /proc/mdstat. I installed
  using the procedure described in
  https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/457687/comments/6.
  I answered 'yes' to booting in degraded mode. I noticed during the
  install that grub-install did reference both /dev/vda and /dev/vdb.

  However, if I remove the first disk (vda), the system drops to the
  initramfs, telling me the disk could not be found. If I remove the
  second disk (after adding the first disk back), I'm told about the
  'bootdegraded=true' option. This is also a usability issue, because
  'Shift' is now needed to reach the hidden boot menu, since this was
  the only operating system installed.

  I tried adding bootdegraded=true to the kernel command line when vda
  was removed, and got the same message in the initramfs.

  =================
  Karmic release note:

  Automatic boot from a degraded RAID array is not configured correctly
- after installation or a mdadm package reconfiguration
+ after installation

  Even if "boot from a degraded array" is set to yes during the
  install, the system will not be properly configured to automatically
  boot from a degraded array. A workaround is to edit
  /etc/initramfs-tools/conf.d/mdadm after installation to set
  BOOT_DEGRADED to yes (BOOT_DEGRADED=yes) and then update the ramdisk
  with "sudo update-initramfs -u" (462258).
  =================
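The release-note workaround above can be sketched as a couple of shell commands. This is a minimal sketch, demonstrated on a scratch copy of the conf file so it can be run safely; on the real system the file would be /etc/initramfs-tools/conf.d/mdadm, the edit would need root, and the stand-in default value here is an assumption.

```shell
# Sketch of the bug 462258 workaround: force BOOT_DEGRADED=yes in the
# mdadm initramfs conf, then rebuild the initramfs so it takes effect.
# Demonstrated on a temp file; on a real system set CONF to
# /etc/initramfs-tools/conf.d/mdadm and run the commands with sudo.
CONF="$(mktemp)"
echo 'BOOT_DEGRADED=false' > "$CONF"   # stand-in for the installed default

# Replace any existing BOOT_DEGRADED line, or append one if missing.
if grep -q '^BOOT_DEGRADED=' "$CONF"; then
  sed -i 's/^BOOT_DEGRADED=.*/BOOT_DEGRADED=yes/' "$CONF"
else
  echo 'BOOT_DEGRADED=yes' >> "$CONF"
fi

cat "$CONF"
# On the real system, follow with:  sudo update-initramfs -u
```

Without the update-initramfs step the edited conf file has no effect, since the initramfs bakes in the setting at build time.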
--
installer's boot-degraded debconf answer not written to installed disk
https://bugs.launchpad.net/bugs/462258