Hi Michael,

 if you have a look at the mailing list archive you'll find an answer.....
 
 but anyway..

>   /dev/sda1    16368    /boot
>   /dev/sda2  8809472    /
>   /dev/sda3    65536    <swap>
>
>   /dev/sdb1    16368    /boot
>   /dev/sdb2  8809472    /
>   /dev/sdb3    65536    <swap>
>
>   Combine /dev/sda1 and /dev/sdb1 into /dev/md0 and combine /dev/sda2 with
>/dev/sdb2 into /dev/md1.  LILO would have root=/dev/md1 for both kernel images
>on the /boot partition.
>
>   Will the new "AUTORAID" configurations handle this situation for a booting
>kernel?  I LOATHE the idea of using initrd.

LILO just doesn't handle that situation (it can't boot from /dev/mdX); you
need to have /boot0 and /boot1 (with the same content!) and set up
lilo.conf to take care of the dual boot.
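One way to keep the two boot partitions identical is simply to copy the files
over after every kernel update (just a sketch; mount points and config file
names as in Martin's setup quoted below):

    # copy kernel images and lilo files from the first boot partition to the second
    cp -a /boot0/vmlinuz /boot0/vmlinuz.001 /boot1/
    cp -a /boot0/boot /boot1/
    # then rerun lilo for each disk so both boot records stay current
    lilo -C /etc/lilo.conf.sda
    lilo -C /etc/lilo.conf.sdb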

Again ... looking back through the mailing list archive you'll find it...

But for your pleasure:

-----------------------------------------------

Hi Jack,

At 17:59 04.01.99 -0000, you wrote:
>I can sympathise. I decided to postpone trying to RAID the root partition
>until later - much later. At least until I fully understand Linux Software
>RAID. The Software RAID Howto is less of a HOWTO than a FAQ for experienced
>RAIDers... Took me about 20 minutes to figure out that mdadd and raidadd are
>actually the same thing. :)

Been there, done that :-)

1) The docs are way out of sync with reality; there was a major overhaul of
the raid code which is available on ftp://ftp.kernel.org/pub/daemons/raid
(or on one of the mirrors). Look for raidtools 0.90 at
alpha/raidtools-19981214-0.90.tar.gz and the corresponding kernel stuff at
alpha/raid-0145*

2) The new code has much better support for root on raid devices -
activation & shutdown of raid devices is included in the kernel. You will
still need a small non-raid partition or disk for storing the kernel and
other stuff needed by lilo at boot time. Also, the new raidtools use a config
file, /etc/raidtab, which really helps in making things easier to manage.

Here's a short account of how I got my raid stuff configured. Partitioning
of the disks was done previously; here's what my partition table(s) look like:

   Device Boot   Begin    Start      End   Blocks   Id  System
/dev/sda1            1        1        1     8001   83  Linux native
/dev/sda2   *        2        2      197  1574370   fd  Unknown
/dev/sda3          198      198     1095  7213185   fd  Unknown
/dev/sda4         1024     1096     1111   128520   82  Linux swap

   Device Boot   Begin    Start      End   Blocks   Id  System
/dev/sdb1            1        1        1     8001   83  Linux native
/dev/sdb2   *        2        2      197  1574370   fd  Unknown
/dev/sdb3          198      198     1095  7213185   fd  Unknown
/dev/sdb4         1024     1096     1111   128520   82  Linux swap

/dev/sda1 and /dev/sdb1 are mounted on /boot0 and /boot1 and hold just
the stuff needed by lilo to load the kernel.

For me installation was fairly easy since I've already got several systems
running Linux; I just built the raid tools & a raid kernel on one of the old
systems and copied tools+kernel onto the setup disk; this way you can set
up your raid devices while running off a floppy and install right onto the
/dev/md* devices.

* Apply the raid0145-19981215-2.0.36 patch to a 2.0.36 kernel to include
the new raid stuff.
* Configure the kernel, say yes to "autodetect RAID partition" and to the
RAID modes you want to use. Build & install the kernel.
* Compile and install the raid tools v0.90 (./configure; make; make install)
* Prepare a configuration file for the raid devices; it goes in
/etc/raidtab by default. Here's what mine looks like:
-----------------------------------------------------------------------------
# /etc/raidtab raid config file
#
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
nr-spare-disks          0
chunk-size              64

device                  /dev/sda2
raid-disk               0

device                  /dev/sdb2
raid-disk               1

#
raiddev                 /dev/md1
raid-level              1
nr-raid-disks           2
nr-spare-disks          0
chunk-size              64

device                  /dev/sda3
raid-disk               0

device                  /dev/sdb3
raid-disk               1
-----------------------------------------------------------------------------

This configures two raid-1 mirrors, with two disks each.

* Now, initialize your raid devices: 
    mkraid /dev/md0
    mkraid /dev/md1

Your raid devices are online and usable now; however, it'll take a while
for the initialisation of the mirrors to finish (in the background). You can
check the status of your raid devices by doing cat /proc/mdstat.

* Put a filesystem on your devices:
    mke2fs /dev/md0
    mke2fs /dev/md1

* Enable automatic initialisation of raid devices by the kernel. Use fdisk to
change the type of the partitions which are part of raid devices to FD:
  fdisk /dev/sda
  Command (m for help): t
  Partition number (1-4): 2
  Hex code (type L to list codes): fd
  Command (m for help): w

(Repeat this for every partition that is part of a raid device, on both disks.)

You'll neither have to use raidstart on startup nor raidstop on shutdown
with this configuration; both are performed automagically by the kernel.
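A quick way to verify the setup is to list the partition tables again; the
raid partitions should now show up with Id fd:

  fdisk -l /dev/sda
  fdisk -l /dev/sdb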

If you don't want to have your devices automatically initialized, you can
of course leave the partition type alone and use raidstart/raidstop to
start & shut down the devices from your system startup scripts.
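For example, something along these lines in the rc scripts would do (just a
sketch; device names as in the raidtab above):

    # at startup, before mounting any filesystems that live on the raid devices
    raidstart /dev/md0
    raidstart /dev/md1

    # at shutdown, after unmounting them
    raidstop /dev/md0
    raidstop /dev/md1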
  
That's it for setup of kernel & partitions. 

What took me quite some time was finding a reasonable configuration for
lilo; the aim is to have BOTH sda and sdb bootable so the system still
starts in case one of the disks fails, and to be able to select a kernel
image from either disk on bootup in case just a kernel image somehow gets
damaged. I ended up with two lilo config files, one for each disk, stored
as /etc/lilo.conf.sda and /etc/lilo.conf.sdb.
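With two config files you then run lilo once per disk, pointing it at the
right file with -C (lilo's option for selecting an alternate config file),
e.g. after installing a new kernel:

    lilo -C /etc/lilo.conf.sda
    lilo -C /etc/lilo.conf.sdb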

-----------------------------------------------------------------------------
# LILO configuration file /etc/lilo.conf.sda
#
# Start LILO global section
boot     = /dev/sda
disk     = /dev/sda
map      = /boot0/boot/map
install  = /boot0/boot/boot.b
backup   = /boot0/boot/boot.0800
message  = /boot0/boot/boot_message.txt
prompt
timeout  = 50
vga      = normal
password = xxxxxxxxxx
restricted
# End LILO global section
# Linux bootable partition config begins
# Images in Disk 0
image    = /boot0/vmlinuz
  root   = /dev/md0
  label  = Linux_Disk0
  read-only
image    = /boot0/vmlinuz.001
  root   = /dev/md0
  label  = Linux_Disk0_old
  read-only
# Images in Disk 1
image    = /boot1/vmlinuz
  root   = /dev/md0
  label  = Linux_Disk1
  read-only
image    = /boot1/vmlinuz.001
  root   = /dev/md0
  label  = Linux_Disk1_old
  read-only
# Linux bootable partition config ends
-----------------------------------------------------------------------------

In the file for the 2nd disk, notice the disk = /dev/sdb bios=0x80 line.
This tells lilo that on bootup this is actually going to be the first
disk, even though it's currently the second disk.

-----------------------------------------------------------------------------
# LILO configuration file /etc/lilo.conf.sdb
#
# Start LILO global section
boot     = /dev/sdb
disk     = /dev/sdb bios=0x80
map      = /boot1/boot/map
install  = /boot1/boot/boot.b
backup   = /boot1/boot/boot.0800
message  = /boot1/boot/boot_message.txt
prompt
timeout  = 50
vga      = normal
password = xxxxxxxxxx
restricted
# End LILO global section
# Linux bootable partition config begins
# Images in Disk 1
image    = /boot1/vmlinuz
  root   = /dev/md0
  label  = Linux_Disk1
  read-only
image    = /boot1/vmlinuz.001
  root   = /dev/md0
  label  = Linux_Disk1_old
  read-only
# Images in Disk 0
image    = /boot0/vmlinuz
  root   = /dev/md0
  label  = Linux_Disk0
  read-only
image    = /boot0/vmlinuz.001
  root   = /dev/md0
  label  = Linux_Disk0_old
  read-only
# Linux bootable partition config ends
-----------------------------------------------------------------------------

Now some hints on maintenance of your raid devices: The raid devices are
automatically checked by the kernel; in case of problems (a power-down
without a proper shutdown, for example) the devices are automatically
resynchronized; again, see /proc/mdstat for current status.

If one of your disks actually fails and has to be replaced you can use the
raidhotremove and raidhotadd programs (actually just symlinks to
/sbin/raidstart, automatically created on installation of raidtools) to
remove defective drives and/or to add new devices. Say your disk /dev/sdb
died and had to be replaced; your system is now running solely off
/dev/sda, and /proc/mdstat shows that only the first disk in the mirror(s) is
active.

  raidhotremove /dev/md0 /dev/sdb2
  raidhotremove /dev/md1 /dev/sdb3

Now you've got mirrors consisting of just one disk each, but at least
they're healthy again.

  raidhotadd /dev/md0 /dev/sdb2
  raidhotadd /dev/md1 /dev/sdb3

This will add the new disk to the mirror(s) and initialize it, so you
end up with a consistent mirror again. Synchronisation runs in the
background; again, see /proc/mdstat for current status.

Hope this helps,

Martin
--------------------------------------------------
 Martin Bene               vox: +43-664-3251047
 simon media               fax: +43-316-813824-6
 Andreas-Hofer-Platz 9     e-mail: [EMAIL PROTECTED]
 8010 Graz, Austria        
--------------------------------------------------
finger [EMAIL PROTECTED] for PGP public key


-*************************
Luca Perugini
Laureando in Ing.Informatica
LinuxManship

WebMaster http://www.uisp.it
System Manager at uisp.it

mailto:[EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]

-**************************
