[ Please reply in person to [EMAIL PROTECTED], I am no longer on this
  mailing list. ]

This is actually the second time that I'm posting this problem. I tried
earlier with some later 2.1.x versions and with 2.2.1, and today I tried
again with 2.2.3. If you know what I am doing wrong but are keeping silent
because it is some obvious FAQ, please take a moment to be kind and clue me
in. :-)

Here's the situation. Software RAID0 works fine for me on linux-2.1.120. I
patched that kernel with the appropriate raid0145 patch from that era, got
the matching raidtools (raid0145-19980905-2.1.120-B and
raidtools-0.51beta2), compiled in RAID0 support (no autodetection or
whatnot) and got things set up with an initial ramdisk for booting. Here's
how it works:

The initial ramdisk image contains statically linked copies of mkraid and
raidstart, and linuxrc (the script run from the initrd) calls them in this order:

    /bin/mkraid --only-superblock -f /dev/md0
    /bin/raidstart /dev/md0

I realize that some of these flags don't even exist in modern raidtools,
but it works fine for these legacy versions. And then the script exits and
*lo!* everything works (that is, /dev/md0 contains a single ext2 filesystem
that serves as the root directory). Such a deal. (This array was first
'made' long before 2.1.120).

Here's the contents of raidtab:
    raiddev                 /dev/md0
    raid-level              0
    nr-raid-disks           4
    nr-spare-disks          0
    #persistent-superblock   1          # note comment marker
    chunk-size              4

    device                  /dev/hda1
    raid-disk               0
    device                  /dev/sda1
    raid-disk               1
    device                  /dev/hdc1
    raid-disk               2
    device                  /dev/sdc1
    raid-disk               3

When it boots up, it prints out messages like this:
        adding /dev/hda1        6273351 blocks
        non-persistent superblock
        adding /dev/sda1        2080386 blocks
        non-persistent superblock
        adding /dev/hdc1        6273351 blocks
        non-persistent superblock
        adding /dev/sdc1        2080386 blocks
        non-persistent superblock

And /proc/mdstat reads:
    Personalities : [2 raid0] 
    read_ahead 128 sectors
    md0 : active raid0 hda1 sda1 hdc1 sdc1 16707464 blocks 4096k chunks
    md1 : inactive ...

Everything works fine, but I would like to upgrade beyond 2.1.120. When I
tried this a while back with 2.2.1, it got pretty far but complained that
/dev/md0 was not the right size (compared to the ext2fs record stored on
it) --> did the size-calculation algorithm for md devices change somewhere?
It let me boot, but the filesystem was hideous (half of the files could not
be found, and running e2fsck took forever and spat out a lot of messages).
Instantly frightened, I shut down and went back to 2.1.120: presto, all of
the damage healed after an fsck and life was good again.
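For what it's worth, the /proc/mdstat total above is exactly what you get
from the old RAID0 arithmetic: truncate each member down to a whole number
of chunks, then sum. A quick sanity check (member sizes taken from the boot
messages above):

```python
# Reconstructing the md0 size shown in /proc/mdstat.
# Non-persistent-superblock RAID0: each member is truncated down to a
# whole number of chunks, and the truncated sizes are summed.

CHUNK = 4  # "chunk-size 4" in raidtab: 4 KB chunks = 4 one-KB blocks

# Member sizes in 1 KB blocks, from the boot messages
members = [6273351, 2080386, 6273351, 2080386]

def raid0_size_nonpersistent(sizes, chunk):
    """Sum of member sizes, each rounded down to a chunk multiple."""
    return sum(s - s % chunk for s in sizes)

total = raid0_size_nonpersistent(members, CHUNK)
print(total)  # 16707464, matching the mdstat line
```

So the 2.1.120 setup is internally consistent: 10 blocks are lost to chunk
rounding (3 + 3 + 2 + 2), and the rest is handed to ext2.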

Today I tried again with 2.2.3 (since it was the latest version that
appeared to have an associated set of raid/raidtools patches: the 0309
ones). And the error message from (the new) mkraid was different:
        
    /dev/hda1 block size blah-blah raid superblock at blah2-blah2
    /dev/hda1 looks like an ext2fs, use -f to continue

Being somewhat afraid of data loss, I decided not to force things. So I
recompiled with RAID autodetection and boot support. Unfortunately, booting
with

    lilo: linux root=/dev/md0 md=0,0,1,0,/dev/hda1,/dev/sda1,/dev/hdc1,
        /dev/sdc1 ro 

did not work. I seem to recall problems in the past relating to having
created this array before the new superblock format: if I allow it to
create a new-style superblock it reports less total space on the block
device than when the filesystem was created and ext2 goes berserk. But that
was a while ago and may no longer be an issue.
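To put a number on that suspicion: assuming the new-style driver reserves a
64 KB superblock area at the end of each member, aligned down to a 64 KB
boundary (my reading of the 0.90-era md.h; the exact constants here are an
assumption, not something I've verified against 2.2.3), the array would
indeed come out smaller than the one ext2 was built on:

```python
# Hypothetical estimate of what a persistent superblock costs, per member.
# Assumption: the superblock occupies a 64 KB area at the end of each
# device, aligned down to a 64 KB boundary, so in 1 KB blocks:
#     usable = (size & ~63) - 64
# with chunk truncation applied on top, as before.

RESERVED = 64  # 64 KB reservation in 1 KB blocks (assumed constant)
CHUNK = 4      # 4 KB chunks, as in raidtab

members = [6273351, 2080386, 6273351, 2080386]  # 1 KB blocks, from bootup

def usable_with_sb(size):
    """Member capacity after superblock reservation and chunk rounding."""
    s = (size & ~(RESERVED - 1)) - RESERVED
    return s - s % CHUNK

total = sum(usable_with_sb(s) for s in members)
print(total)             # 16707200
print(16707464 - total)  # 264 blocks the filesystem would suddenly be missing
```

Losing 264 one-KB blocks off the end of a device that ext2 believes is
16707464 blocks long would be more than enough to produce exactly the kind
of berserk behavior I saw.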

I am happy to provide detailed bootup output, dd copies of appropriate
parts of the disks, or to try any (non-lethal) experiments that anyone
would like to suggest. Can anyone out there help me? Thanks!

        -Wes Weimer
        [EMAIL PROTECTED]
