I have a system that failed to boot after the most recent kernel
update.  It took a while, but I eventually traced it to the initramfs
not having raid1 included.  I had to manually run "mkinitrd --preload
raid1" for the new kernel to get the system back up.  Oddly, the
previous kernel's initramfs was broken in the same way (and its time
stamp indicated that it had been regenerated at about the same time as
the new one), so I couldn't even revert to it; I had to boot from a
USB drive to do the repair.  Has something changed in the kernel
post-install scripts or in mkinitrd that would explain this?  Am I the
only one who has had this problem (am I the only one using a RAID 1
root disk with no volume management)?
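
(For anyone who wants to check their own images before this bites
them: lsinitrd, which ships with the dracut package, lists the
contents of an initramfs, so something like the following should show
whether the raid1 driver made it in.  I'm assuming the standard EL6
image naming here.)

  # list the running kernel's initramfs and look for the raid1 driver
  lsinitrd /boot/initramfs-$(uname -r).img | grep raid1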

For the record, the system is SL 6.3 x86_64, mkinitrd comes from
dracut-004-283.el6.noarch, and the kernels in question are:
vmlinuz-2.6.32-279.11.1.el6.x86_64
vmlinuz-2.6.32-279.14.1.el6.x86_64
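
For completeness, the repair was along these lines for each kernel
(quoting from memory, so the exact flags may be slightly off):

  # regenerate the broken image with the raid1 driver preloaded
  mkinitrd -f --preload raid1 \
      /boot/initramfs-2.6.32-279.14.1.el6.x86_64.img \
      2.6.32-279.14.1.el6.x86_64

  # the older kernel's image needed the same treatment
  mkinitrd -f --preload raid1 \
      /boot/initramfs-2.6.32-279.11.1.el6.x86_64.img \
      2.6.32-279.11.1.el6.x86_64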

Oddly, I see that dracut describes itself as "a new, event-driven
initramfs infrastructure based around udev".  How does that work on a
system with a RAID 1 root drive?  In my case the boot fails because
the root file system (identified by a UUID on /dev/md0) can't be
found.  It seems to me that udev inside the initramfs can't accomplish
much unless mkinitrd put the right pieces (here, the raid1 driver)
into the image in the first place, and the regeneration of the
previous kernel's image is probably related to how that is being done.
Maybe someone has some insight into this?
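
From my reading of the dracut and dracut.conf man pages (so treat this
as my assumption rather than something I've verified), md assembly in
the initramfs is handled by dracut's mdraid module, and the root array
can be named explicitly on the kernel command line, e.g.:

  # grub.conf kernel line: name the md array that holds root
  kernel /vmlinuz-2.6.32-279.14.1.el6.x86_64 ro root=UUID=<fs-uuid> rd_MD_UUID=<array-uuid>

  # /etc/dracut.conf: force the md pieces into every rebuilt image
  add_dracutmodules+=" mdraid "

None of that explains why both images lost the raid1 driver at the
same time, though.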

Thanks,
Bob Blair
