> If say your root drive is sda1 which is RAIDed to sdb1 then all you
> need to do is change the boot= & root= lines once.
> boot=/dev/sdb
> map=/boot/map
> install=/boot/boot.b
> prompt
> timeout=50
> image=/boot/vmlinuz
> label=linux
> root=/dev/sdb1
> initrd=/boot/initrd-2.0.36-0.7.img
> read-only
Am I right in thinking this covers corruption of /dev/sda1, but not
wider corruption or failure of /dev/sda? It looks to me like this
still points to the kernel, the initrd, map and boot.b in whatever
partition /boot is currently mapped to (presumably a partition on
/dev/sda in the normal run of events). If this is the case, failure of
/dev/sda will still lead to an inability to boot.
Moreover, in the event that sda fails catastrophically so that it is
not recognized at reboot, /dev/sdb will slip to /dev/sda. In this
case, lilo will be unable to find /dev/sdb1 because it is now
/dev/sda1. In fact, lilo may not be able to cope at all, because it
expected the boot device to be /dev/sdb, when it is now actually
/dev/sda.
We used to have to add a "disk=/dev/sdb bios=0x80" section after the
boot line to tell lilo that, if it were ever run from /dev/sdb, that
disk would actually be the first BIOS disk (because the original
/dev/sda would no longer be visible, having failed). Is this no longer
necessary? Does /dev/sdb not slide to /dev/sda if the latter fails? Or
does lilo handle this automatically now?
The following scheme is an old way to allow for catastrophic failure
of your first disk:
Firstly, you will need an exact, up-to-date copy on each of the first
two disks of whatever partition contains /boot, assuming that is where
you keep your kernel, initrd, map etc. (one reason why it is probably
a good idea to have a small separate partition for /boot on each disk).
While running normally, mount the partition on /dev/sdb that holds the
copy of /boot. Let's say we call the mount point /mnt/spare. So if we
have a separate partition for /boot, we have the copy as /mnt/spare,
and if /boot is on the root partition, we can refer to the /boot part
of the copy as /mnt/spare/boot. From this point on, I will assume the
former (that /boot is on a separate partition). If this is not your
case, you will need to replace /mnt/spare with /mnt/spare/boot in
subsequent instructions.
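As a minimal sketch of setting up and refreshing the copy (the
partition name /dev/sdb2 is just an assumption; use whichever
partition holds the spare copy of /boot on your second disk):

mkdir -p /mnt/spare
mount /dev/sdb2 /mnt/spare    # spare /boot partition on the second disk
cp -a /boot/. /mnt/spare/     # refresh the copy; repeat after any kernel change

Remember that lilo records physical block addresses, so the copy is
only useful if lilo is re-run against it (see below) whenever it
changes.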
Create a new lilo.conf file for /dev/sdb - let's call it
/etc/lilo.conf.sdb. This should contain the following:
boot=/dev/sdb
disk=/dev/sdb
    bios=0x80
map=/mnt/spare/map
install=/mnt/spare/boot.b
prompt
timeout=50
image=/mnt/spare/vmlinuz
    label=linux
    root=/dev/sda1
    initrd=/mnt/spare/initrd-2.0.36-0.7.img
    read-only
You will need to make the relevant changes for the kernel name, the
root device (remembering to name it on the basis of what it will be if
/dev/sdb slides to /dev/sda) and the initrd version (if you _really_
have to use initrd).
Now run "lilo -C /etc/lilo.conf.sdb", and ignore the warnings, as
David says.
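In other words, the routine after any change to the kernel, the initrd
or anything else under /boot would be something along these lines
(paths as assumed above):

/sbin/lilo                          # reinstall on /dev/sda, pointing at /boot
cp -a /boot/. /mnt/spare/           # refresh the copy on the second disk
/sbin/lilo -C /etc/lilo.conf.sdb    # reinstall on /dev/sdb, pointing at the copy

Both installs have to be re-run because lilo stores the physical
locations of the files, not their names.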
I may be wrong, but I think David's fundamental misconception is in
the following lines:
> > I have setup my swap on both drives but not as a RAID partition. If
> > one drive dies the system will crash but after a reboot my system
> > will just boot off the working drive as I have put the kernel in the
> > Master Boot Record on both Drives.
But you don't "put the kernel in the Master Boot Record". The MBR is a
single 512-byte sector, so the kernel is far too big to fit. What you
put there are pointers to the physical locations on disk of the
kernel, the map file etc. And the pointers in the suggested lilo.conf
are to /boot on an un-degraded system. This won't cause a problem in tests
if you pull the plug on the first drive to force RAID into degraded
mode, and then reconnect it before rebooting the system. But, David,
has this really worked for you if you pull the plug on the first drive
and leave it disconnected while you reboot? If so, I'm going to have
to revise my understanding of what's going on here.
This example uses SCSI disks, but the original question was about IDE
disks. I believe IDE disks do not slide device names in the same way
as SCSI disks. In particular, I believe that a master device will
always be first (hda, hdc, hde etc.) and a slave device will always be
second (hdb, hdd, hdf etc.). Is this right? And can someone with more
experience of IDE disks tell me what happens if no disks on the first
channel are visible? Does hdc (first disk on the second channel)
become hda, or does it stay at hdc? What happens if you have extra
controllers?
I am currently putting together a system with four 17Gb IDE disks, two
on the two channels on the motherboard, and two on the two channels on
a Promise UDMA-33 controller. Say I put root on a RAID-1 of /dev/hda1
and /dev/hde1. If /dev/hda fails, what happens to hdc, hde and hdg?
And if the controller for hda/hdc fails, what happens to hde and hdg?
As for the original problem, it seemed to encompass a couple of
issues, for which there are a variety of solutions.
1. The kernel crashes when you pull the plug on /dev/hda. This could
be because (a) you've killed the swap partition, as David says, or (b)
significant parts of the filesystem (/bin, /sbin, /dev, etc.) are on a
partition on hda and not on your hda1/hdc1 mirror, or (c) the kernel
can't cope with how you've pulled the plug on /dev/hda.
I don't know much about linux handling of IDE disks and channels, so
I'll have to leave it up to someone else to say whether (c) could be a
problem, and, if so, how to fix it.
But (a) is fixable by putting swap on RAID-1 (i.e. mirror the
partitions on hda and hdc which you use for swap). This will work fine
with the latest raidtools and patch (can't remember how up-to-date
RH6.0's versions of these are). Of course, if you do this, you will
halve your available swap space.
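As a sketch of what that looks like with raidtools-0.90-style
configuration (the md device and partition names here are just
assumptions for illustration), the /etc/raidtab entries would be
something like:

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              4
    device                  /dev/hda2
    raid-disk               0
    device                  /dev/hdc2
    raid-disk               1

followed by:

mkraid /dev/md1
mkswap /dev/md1
swapon /dev/md1

with an /etc/fstab entry of:

/dev/md1    swap    swap    defaults    0 0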
(b) is soluble by putting all significant partitions on RAID-1. The
easiest way to do this is to have all the main sections of the
filesystem on the root partition and put root on RAID-1. Don't be put
off by comments on this list. If you get it right, it isn't difficult,
and offers increased resilience to disk failure. The main tip is to
put /boot on a separate partition and keep an exact copy on the second
disk with lilo installed on both MBRs to boot from either disk.
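To illustrate, the fstab for such a layout might look something like
this (device names are assumptions: /dev/md0 is the hda1/hdc1 root
mirror, /dev/md1 the mirrored swap, and /dev/hda3 the separate /boot,
with its copy on /dev/hdc3 mounted at /mnt/spare as above):

/dev/md0     /            ext2    defaults    1 1
/dev/hda3    /boot        ext2    defaults    1 2
/dev/hdc3    /mnt/spare   ext2    defaults    1 2
/dev/md1     swap         swap    defaults    0 0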
2. Problems rebooting when one of the mirrored disks is not available.
If you want to understand what the messages mean when lilo fails to
boot the system, look in /usr/doc/lilo-{ver}/README. "LI", for
instance, means that the first stage boot loader was able to load the
second stage boot loader, but failed to execute it, which is
apparently commonly due to /boot/boot.b being moved without running
lilo. And "01" indicates an "Illegal command", which could mean that
you have tried to access a disk not supported by the BIOS.
The former explanation is probably bang on for Steve's problem. If
/dev/hda (which contains /boot/boot.b) fails, then LILO will obviously
not be able to find /boot/boot.b. The problem when /dev/hdc fails is
more puzzling. If any file or partition on /dev/hdc is referenced in
lilo.conf, then it is clear what the problem is. If not, I'm
bamboozled. Unless Steve has run David's amended lilo.conf to boot
from hdc (which points to the kernel etc. on hda) and then managed to
wipe out or corrupt the lilo installation on hda. But the problem
precedes David's suggestion, doesn't it? So that can't be right. It
would be useful to see Steve's lilo.conf and fstab.
The first step to put the hda problem right would be to re-run lilo
with a normal lilo.conf so that linux is booting purely from /dev/hda,
and then reboot into a normally running system. Then follow the steps
above for installing lilo to the second disk but substituting /dev/hdc
for /dev/sdb. If /dev/hdc slips to /dev/hda when hda fails, you will
need someone to tell you what the bios address of the first IDE disk
is, and replace 0x80 in /etc/lilo.conf.hdc with that. If it doesn't
slip, simply remove the "disk=..." line altogether.
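Putting that together, /etc/lilo.conf.hdc would look something like
the following, on the assumption that hdc does slide to hda and that
the copy of /boot is mounted at /mnt/spare (if the names turn out not
to slide, drop the disk section and use root=/dev/hdc1 instead):

boot=/dev/hdc
disk=/dev/hdc
    bios=0x80
map=/mnt/spare/map
install=/mnt/spare/boot.b
prompt
timeout=50
image=/mnt/spare/vmlinuz
    label=linux
    root=/dev/hda1
    initrd=/mnt/spare/initrd-2.0.36-0.7.img
    read-only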
Hope this hasn't been an irrelevant ramble.
Cheers,
Bruno Prior [EMAIL PROTECTED]