I thought my upgrade to natty was successful: it had gone through
without problems.
Then, after reboot, I decided to remove the old kernels with
sudo apt-get remove --purge ...
as I had done over the past few years.
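(For reference, such an invocation names the specific image packages,
e.g., with a made-up old version number:
sudo apt-get remove --purge linux-image-2.6.35-28-generic
)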
Only this time, at reboot, the system looked totally broken, with
messages like
modprobe vboxdrv failed. Please use 'dmesg' to find out why
and, worse for me:
failed to load /lib/modules/2.6.38-8-generic/modules.dep: no such file
or directory
[Yes, don't worry, grub will come into the picture!]
Digging further, the whole matter became ever stranger: uname -a
reported 2.6.38-8, but the installed packages (dpkg -l) were *only*
old ones. No trace of 2.6.38-8, the natty kernel. And no chance to
apt-get install it, because of a segfault.
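For reference, the mismatch can be made explicit by comparing the
running kernel against the installed kernel packages (a generic check,
not a transcript from my machine):
uname -r
dpkg -l 'linux-image-*' | grep '^ii'
If these two disagree, as they did here, the running kernel does not
match the package database of the mounted root filesystem.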

Very long story, short end: this time around, and for the first time,
during the removal of the old kernels, something decided to switch the
active root device to a second disk that had served as a poor man's
backup, made by 'dd'-ing the first disk twice a year. The last 'dd'
was in February, and we have had a large number of kernel updates
since then. I 'solved' the problem by simply pulling out that second
drive and rebooting.
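For what it's worth, a plausible mechanism, assuming the backup really
is a byte-for-byte 'dd' clone: the clone carries the same filesystem
UUID as the original, so anything that identifies the root filesystem
by UUID cannot tell the two disks apart. A quick check (device names
made up here):
sudo blkid /dev/sda1 /dev/sdd1
If both lines report the same UUID=..., the two root filesystems are
indistinguishable to UUID-based mounting.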

Question, and I am not fully able to answer it: apt-get remove --purge
had removed old kernels from the drive I had booted from for the last
months without any hitch. This time, however, the initramfs / grub /
??? was one way or another set up to invoke the dormant second drive.
The grub boot menu has the new (Debian) background, so the BIOS points
to the right location. Also, the first HD is on SATA1, the dd-ed one
on SATA4.

Question: Why now? Why not earlier? How can this be avoided? Who is
the culprit that decides to mount / from the second drive after the
initramfs has been loaded from the first drive?
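Not an answer to "why now", but a common way to remove the ambiguity,
assuming the clone is ext2/3/4 and sits on a hypothetical /dev/sdd1,
is to give it a fresh UUID and regenerate the boot files:
sudo tune2fs -U random /dev/sdd1
sudo update-grub
sudo update-initramfs -u
After that, the UUID in root= matches only the original drive.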

Uwe
