Thanks for the replies,
It looks as though it should be easy.

I want to be able to reboot using md0 as the root/boot partition;
then I can dispose of the ext2 fs on /dev/sda1 and raidhotadd
sda1 to md0.

Is this possible, and what does this lilo error message mean?

[root@otherweb /root]# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda1               127902     40984     80314  34% /
/dev/md1              10079980     37028   9530908   0% /home
/dev/md2               4032448    416792   3410812  11% /usr
/dev/md4                254699        15    241532   0% /tmp
/dev/md5                254699      6844    234703   3% /var
/dev/md3               2016656      1992   1912220   0% /usr/local
/dev/md0                127790     40988     80204  34% /stage
[root@otherweb /]# lilo -V
LILO version 21.4-3
[root@otherweb /]# lilo -r /stage
boot = /dev/sdb, map = /boot/map.0811
Added bzImage *
Syntax error near line 2 in file /etc/lilo.conf
[root@otherweb /]# cat /stage/etc/lilo.conf
boot=/dev/md0
default=Linux
delay=50
vga=normal
read-only
image=/boot/bzImage
        label=Linux
        root=/dev/md0
image=/boot/vmlinuz
        label=lin
        root=/dev/md0
[root@otherweb boot]# head -9 /etc/raidtab
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sda1
        failed-disk             0
        device                  /dev/sdb1
        raid-disk               1

-------- Original Message --------
Subject: RE: root+boot on Raid1 + lilo
Date: Wed, 28 Jun 2000 09:46:44 -0500
From: "Diegmueller, Jason (I.T. Dept)" <[EMAIL PROTECTED]>
To: 'Hugh Bragg' <[EMAIL PROTECTED]>

Hugh--

It's actually relatively easy to move a root filesystem over to a 
RAID.  I don't have a HOWTO or step-by-step, but here's what I did
just one week ago for a 3-disk RAID1:

- Install System as usual
- Establish /dev/md0, but mark the currently-used disk as "failed-disk"
  in /etc/raidtab (as opposed to raid-disk).  This makes the software
  RAID treat that disk as bad, but since it's RAID1, it will continue
  in degraded mode.  Also, make sure you ARE using persistent
  superblocks.
- Format the filesystem (I use 4096-byte block e2fs, benchmarks show 
  that to be quicker)
- Mount the /dev/md0 filesystem.  I always use "/newraid".
- Copy the current filesystem over to the new filesystem (I used
  "cp -a /bin /dev /usr /var etc etc etc /newraid").  No need to
  copy /proc.
- mkdir a /newraid/proc (so there is a /proc under the new filesystem
  for when it boots up)
- Modify the /newraid/etc/fstab to use /dev/md0 as the root
  (as opposed to /dev/sda1 or whatever).  A rough sketch of these
  steps follows this list.
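
To make those steps concrete, here's a rough sketch under a few
assumptions: the classic raidtools (mkraid, raidhotadd) are installed,
/dev/sdb1 is the spare half of the mirror, and the directory names are
only illustrative -- adjust them to your own layout:

  # in /etc/raidtab, the disk currently holding the root fs is listed
  # as failed-disk, so md0 comes up degraded on the other disk

  mkraid /dev/md0                   # build the degraded RAID1
  mke2fs -b 4096 /dev/md0           # ext2 with 4096-byte blocks
  mkdir /newraid
  mount /dev/md0 /newraid

  # copy everything except /proc; the directory list is illustrative
  cp -a /bin /boot /dev /etc /home /lib /sbin /usr /var /newraid
  mkdir /newraid/proc               # empty mount point for /proc

  # point the new fstab's root entry at /dev/md0, e.g.:
  #   /dev/md0    /    ext2    defaults    1 1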

Here's where it gets subjective.  Some people use the RedHat-patched,
RAID1-bootable lilo; personally, I do this:

- I have separate 16MB boot partitions, /mnt/boot1, /mnt/boot2, and
  /mnt/boot3.  These are exactly identical (/tmp, /etc, /boot, /dev)
  with /etc/lilo.conf's underneath which tell lilo to install on its
  corresponding drive (i.e., /mnt/boot1 == /dev/sda1 and has a
  "boot = /dev/sda" line, /mnt/boot2 == /dev/sdb1 and has a
  "boot = /dev/sdb" line, etc).  This way any drive can die and the
  machine will still boot.
- I then run "lilo -r /mnt/boot1", "lilo -r /mnt/boot2", and
  "lilo -r /mnt/boot3".  The -r tells lilo to chroot, which is why you
  must have a directory structure under /mnt/boot1, etc.  You must have
  a /mnt/boot1/dev with the devices referred to in
  /mnt/boot1/etc/lilo.conf (such as /mnt/boot1/dev/md0,
  /mnt/boot1/dev/sda, and so on).  A sample layout is sketched below.
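
As a concrete illustration of that layout (a sketch only -- the kernel
image name, map path, and third drive name are assumptions, not
necessarily what's on my disks):

  # /mnt/boot1/etc/lilo.conf -- installs on the first drive
  boot=/dev/sda
  map=/boot/map
  delay=50
  vga=normal
  read-only
  image=/boot/bzImage
          label=Linux
          root=/dev/md0

  # /mnt/boot2/etc/lilo.conf and /mnt/boot3/etc/lilo.conf are the same
  # except for boot=/dev/sdb and boot=/dev/sdc respectively.

  lilo -r /mnt/boot1    # -r chroots, so /boot/bzImage here really
  lilo -r /mnt/boot2    # means /mnt/boot1/boot/bzImage, and the
  lilo -r /mnt/boot3    # devices must exist under /mnt/bootN/dev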

Also ..

- Make sure the RAID'd partitions are marked type "fd", for RAID 
  autodetect.  Otherwise you'll have trouble booting.
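
For example, with fdisk (the drive and partition number here are just
placeholders):

  fdisk /dev/sdb
    t         # change a partition's system type
    1         # the partition number (sdb1 here)
    fd        # fd = "Linux raid autodetect"
    w         # write the table and quit

  fdisk -l /dev/sdb   # the partition should now show as type fd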

Reboot.

- If you're up and running on the /dev/md0 root, you now need to
  integrate the "old root" into your RAID.  Modify /etc/raidtab and
  change the "failed-disk" to "raid-disk", then run
  "raidhotadd /dev/md0 /dev/sda5", replacing sda5 with the old root
  partition.  It might make you --force it, I can't remember.
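
A minimal sketch of that step, assuming the old root partition is
/dev/sda1 (a placeholder -- substitute whatever your old root actually
is):

  # in /etc/raidtab, for the old root device:
  #   device      /dev/sda1
  #   raid-disk   0            <- previously "failed-disk  0"

  raidhotadd /dev/md0 /dev/sda1   # pull the old root into the mirror
  cat /proc/mdstat                # watch the resync progress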

When it finishes rebuilding, mark the old root partition as type fd as
well so it will autodetect.  You should then be good.

I realize this is a long diatribe and not a simple HOWTO, but hopefully
it will help.  Feel free to email ANY questions ... I'd love to help.
I'm not a RAID maintainer, just a happy user. =)  I'm doing root-RAID5,
root-RAID1, and even root-RAID0 on 5 different machines, 3 of which I'd
consider "production" machines .. one of those being up over 300 days
now without a problem.
