Managed to get the system up and running again. I had to reassemble all
mdadm RAID arrays using a live CD. After reassembling the arrays I
rebooted back into the operating system, and it took three reboots to
get everything working again.
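For reference, this is roughly the procedure I used from the live CD; the device names here (/dev/sda1 etc.) are placeholders for whatever partitions actually belong to the arrays on a given system:

```shell
# Scan all partitions for md superblocks and list the arrays they belong to
mdadm --examine --scan

# Let mdadm reassemble every array it can identify from the superblocks
mdadm --assemble --scan

# Or assemble a specific array explicitly from its member partitions
# (member devices below are examples, not the actual layout of my system)
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

# Verify the arrays came back and are syncing/clean
cat /proc/mdstat
```

These need to be run as root; `--assemble --scan` only works if the superblocks on the member disks are intact.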

On the first reboot, it reported a size change from 0 to <human
unreadable high number> for /dev/md0, ran fsck, and then hung. I
rebooted the system using Ctrl+Alt+Del; luckily this made the system
reboot cleanly and unmount the "newly discovered" /dev/md0. On the next
reboot the same steps were repeated for the swap volume (/dev/md1), and
the last reboot was for the /var volume (/dev/md2). The system is now
back up and running, and will probably go down again in a few days...

It seems mdadm lost its config on the original system, and it was trying
to boot using /dev/sda1 directly (since it did find the "/" filesystem
there) instead of /dev/md0. After reconstruction using the live CD, the
boot log also showed the RAID arrays and their printouts again. One
other thing I did as a precaution was trashing the contents of the /var/
folder on the /dev/md0 array. It had somehow created lock and run files
in there (probably when it lost the RAID5 set containing /var/).
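If the problem really is mdadm forgetting the array configuration, regenerating the config file and the initramfs after reassembly should stop it from recurring. This is the usual procedure on Ubuntu; paths and flags are the standard ones for the mdadm package:

```shell
# Append the current array definitions to mdadm's config file
# (review the file afterwards to remove any stale/duplicate ARRAY lines)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the arrays are assembled at early boot
update-initramfs -u
```

Without the ARRAY lines in the initramfs, early boot can end up mounting a raw member partition like /dev/sda1 instead of the assembled /dev/md0, which matches the behaviour I saw.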

I think the bug is not necessarily in the kernel but could be in mdadm,
since I had to reassemble the arrays manually using a live CD.

-- 
md raid5 set inaccessible after some time.
https://bugs.launchpad.net/bugs/613872
