Hey,

        What follows is my own raid1 howto. The info below I derived from
reading several hundred emails in the archives. One was especially
helpful (and provided most of the content for this howto), but I forget the
original author's name (hey, credit where credit is due).


Quick & Dirty Raid howto for RedHat 5.2 on x86 and kernel 2.0.36
(jan25/99)

1) cd /usr/src/ and rm -rf the current linux source tree. ftp 'pristine'
2.0.36 kernel sources to /usr/src and untar them. I tried skipping this
step, but this process seemingly *must* have pristine sources.

2) While still in /usr/src, get the most recently dated raid0145-1999xxxx.gz
file from ftp.kernel.org/pub/linux/daemons/raid/alpha, gunzip it, and
patch -p0 < raid0145-1999xxxx into the linux source tree.
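Steps 1 and 2 together look roughly like this; the exact patch filename and
date are an assumption (substitute whatever the latest dated file on the FTP
site is when you do this):

```shell
cd /usr/src
rm -rf linux                          # toss the shipped source tree
tar xzf linux-2.0.36.tar.gz           # unpack pristine 2.0.36 sources
gunzip raid0145-19990125.gz           # hypothetical dated filename
patch -p0 < raid0145-19990125         # patches into the linux/ tree
```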

3) cd linux, make menuconfig, and under 'Floppy, IDE and other block
devices' select: 'multiple device driver support', 'autodetect RAID
partitions', 'Linear (append) mode', 'RAID 0', 'RAID 1', 'RAID4/5',
'Translucent' and 'Logical Volume Manager Support'. *NOTE* I did not
select these as modules, so if you choose to do so, your process/results
may differ from mine.

4) make zImage, make install, make modules, make modules_install

4.5) We need to remove the raidtools rpm that shipped with Red Hat 5.2
(raidtools ver 0.50 beta) before we make the new raidtools (ver 0.90).
I accomplished this with a simple 'rpm -qa | grep raid' to get the name
of the raid package, and then an 'rpm -e' to uninstall the package.
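In other words, something like the following; the exact package name printed
by the query is an assumption, use whatever your system reports:

```shell
rpm -qa | grep raid      # e.g. prints something like "raidtools-0.50-..."
rpm -e raidtools         # uninstall by the name the query reported
```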

5) cd to your fav working directory (I did this in /root), and retrieve via
ftp from ftp.kernel.org/pub/linux/daemons/raid/alpha/ the file
raidtools<latestdate>.tar.gz. Once you have the file, tar zxvf it into
your working directory, cd raidtools<latestdate>, and run ./configure,
make and make install.
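As a sketch, with the date in the tarball name being an assumption:

```shell
cd /root                                  # my working directory
tar zxvf raidtools-19990125-0.90.tar.gz   # hypothetical dated filename
cd raidtools-0.90                         # dir name may vary by release
./configure
make
make install
```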

6) fdisk your scsi disks into identical partitions (so I have done, as it
was what I read). In my case I had two Seagate Hawk 4GB drives: /dev/sdb
and /dev/sdc. While in the fdisk program, use the 't' option to change the
partition type to fd (just enter 'fd' when it asks for the type instead of
the usual 83 or 82 or etc). Each raid partition *MUST* be of type 'fd' for
raid to work at boot time.
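An fdisk session for one of the drives would go roughly like this (repeat
for /dev/sdc; partition numbers are an assumption based on my two-disk
setup):

```shell
fdisk /dev/sdb
#  n  - create a new primary partition, e.g. /dev/sdb1
#  t  - change the partition type; enter 'fd' when prompted
#  p  - print the table and verify the partition shows type fd
#  w  - write the table to disk and exit
```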

7) Reboot with the new kernel. You will see (at least I did) complaints
about bad raid superblocks, which is expected, since we have not yet
created raid superblocks on our raid drives.

8) Create the file /etc/raidtab, here is my ultra-simple version:
<begin>
ls -l /etc/raidtab
-rw-r--r--   1 root     root          346 Jan 24 18:12 /etc/raidtab

more /etc/raidtab
raiddev                 /dev/md0
        # General parameters
        raid-level              1
        nr-raid-disks           2
        chunk-size              256

        # RAID disks
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
<end>
        If you want raid-0 or raid4/5, you will have to read the man pages
on raidtab, as there are a myriad of values you may have to insert into
/etc/raidtab.

9) Run 'mkraid /dev/md0' (and on through /dev/mdX if you have more than one
raid device to set up; my case was a simple 'mkraid /dev/md0'). You *WILL*
see an error message. Just read the error message and do what it tells you
to do.

10) Do a 'raidstart /dev/md0' (again through /dev/mdX if that is the case).
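Steps 9 and 10 boil down to:

```shell
mkraid /dev/md0        # initialize the array from /etc/raidtab;
                       # read its error output and do what it says
raidstart /dev/md0     # start the array
cat /proc/mdstat       # optional: watch the raid1 resync progress
```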

11) Raid only supports the ext2 filesystem currently, so do not try to make
anything else -> 'mke2fs -c /dev/md0' (or whatever checking switches you
like).

12) If you got this far, 'mount -t ext2 /dev/md0 <mountpoint>' on your fav
test mount point (in my case, /raidtest).

13) I did a 'dd if=/dev/zero of=/raidtest/bigfile count=1000000' to create
a 512MB file in /raidtest (1,000,000 blocks x 512 bytes). I then copied
this back and forth a couple of times, deleted it, created a smaller one,
deleted it and so on until I felt comfortable that my raid1 device was
error free.

14) I guess the last point would be to edit /etc/fstab to permanently mount
the drive on every reboot.
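A possible /etc/fstab line for my setup would be something like (the mount
point /raidtest is just my test directory from above; use your own):

```
/dev/md0    /raidtest    ext2    defaults    1 2
```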

*NOTE* Everything I have read about RAID (and more specifically, Linux
RAID) says that a system with a raid array *OF ANY KIND* should be UPS
protected, with some sort of UPS monitoring software enabled. No sense in
going to the effort of building a raid array only to lose all/some of your
data to a power failure.

Of course, YMMV....
Chris Price
Sysadmin
Western Computer Link
Saskatoon, Saskatchewan
