Hi,
I found the bug: apparently something went wrong with the patch from 2.2.9
to 2.2.10. After installing the patch on a real 2.2.10 kernel, everything
works fine.
Will there be a patch for lilo so that it recognizes the boot partition
correctly when building a new kernel?
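In the meantime, one workaround that is often suggested is to point lilo at
the underlying physical disk rather than the md device. A sketch of what that
could look like in /etc/lilo.conf (the device names and kernel path below are
assumptions for illustration, not taken from a real setup):

```
# Hypothetical lilo.conf fragment: install the boot loader on the raw
# first disk while the root filesystem lives on a RAID-1 md device.
boot=/dev/hda          # assumed first IDE disk, not the md device
image=/boot/vmlinuz    # assumed kernel path
    label=linux
    root=/dev/md0      # assumed RAID-1 root device
    read-only
```

If the second disk should stay bootable after a failure of the first, the
same stanza can be installed onto /dev/hdc by running lilo a second time
with boot= changed accordingly.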

Regards,
Thomas.

----- Original Message -----
From: Thomas Laun <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, July 16, 1999 7:18 PM
Subject: Latest Raid code (17-JUL-99) broken ?


> Hi,
> I am setting up RAID on two IBM IDE disks, one 8.4 GB, the other 6.4 GB. I
> created partitions that are approximately equal in size and made up the
> following raidtab:
>
> raiddev /dev/md0
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
>     persistent-superblock 1
>     device                /dev/hda2
>     raid-disk             0
>     device                /dev/hdc2
>     raid-disk             1
>
> raiddev /dev/md1
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
>     persistent-superblock 1
>     device                /dev/hda5
>     raid-disk             0
>     device                /dev/hdc5
>     raid-disk             1
>
> raiddev /dev/md2
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
>     persistent-superblock 1
>     device                /dev/hda6
>     raid-disk             0
>     device                /dev/hdc6
>     raid-disk             1
>
> raiddev /dev/md3
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
>     persistent-superblock 1
>     device                /dev/hda7
>     raid-disk             0
>     device                /dev/hdc7
>     raid-disk             1
>
> raiddev /dev/md4
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
>     persistent-superblock 1
>     device                /dev/hda8
>     raid-disk             0
>     device                /dev/hdc8
>     raid-disk             1
>
> raiddev /dev/md5
>     raid-level            1
>     nr-raid-disks         2
>     nr-spare-disks        0
>     chunk-size            4
>     persistent-superblock 1
>     device                /dev/hda9
>     raid-disk             0
>     device                /dev/hdc9
>     raid-disk             1
>
> Running mkraid works; I can see in /proc/mdstat that the mdX devices get
> created. When I format the mdX devices, everything seems to work as well.
> But if I look at /proc/mdstat again, the devices are not mirrored:
>
> tintin:/ # mkraid --really-force /dev/md2
> DESTROYING the contents of /dev/md2 in 5 seconds, Ctrl-C if unsure!
> handling MD device /dev/md2
> analyzing super-block
> disk 0: /dev/hda6, 208813kB, raid superblock at 208704kB
> disk 1: /dev/hdc6, 209128kB, raid superblock at 209024kB
> tintin:/ # cat /proc/mdstat
> Personalities : [raid1]
> read_ahead 1024 sectors
> md2 : active raid1 hdc6[1] hda6[0] 208704 blocks [2/2] [UU] resync=7% finish=12.6min
> unused devices: <none>
> tintin:/ # mke2fs /dev/md2
> mke2fs 1.14, 9-Jan-1999 for EXT2 FS 0.5b, 95/08/09
> Linux ext2 filesystem format
> Filesystem label=
> 52208 inodes, 208704 blocks
> 10435 blocks (5.00%) reserved for the super user
> First data block=1
> Block size=1024 (log=0)
> Fragment size=1024 (log=0)
> 26 block groups
> 8192 blocks per group, 8192 fragments per group
> 2008 inodes per group
> Superblock backups stored on blocks:
>         8193, 16385, 24577, 32769, 40961, 49153, 57345, 65537, 73729, 81921,
>         90113, 98305, 106497, 114689, 122881, 131073, 139265, 147457, 155649,
>         163841, 172033, 180225, 188417, 196609, 204801
>
> Writing inode tables: done
> Writing superblocks and filesystem accounting information: done
> tintin:/ # cat /proc/mdstat
> Personalities : [raid1]
> read_ahead 1024 sectors
> md2 : active raid1 hdc6[1] hda6[0] 208704 blocks [2/2] [UU] resync=7% finish=18.3min
> unused devices: <none>
> tintin:/ # cat /proc/mdstat
> Personalities : [raid1]
> read_ahead 1024 sectors
> md2 : active raid1 hdc6[1] hda6[0] 208704 blocks [2/2] [UU] resync=7% finish=18.9min
> unused devices: <none>
>
> If I keep watching /proc/mdstat, the finish time keeps increasing while the
> resync percentage stays the same. I have already recompiled the kernel
> (version 2.2.10). If I try a raidstop /dev/md2, the process hangs; on
> shutdown the system hangs completely, apparently waiting for the resync to
> finish. When I mount the two partitions after the reboot, I see two empty,
> ordinary ext2 filesystems.
> Am I doing something wrong, or is this a bug in the latest release of the
> kernel patches?
>
> Bye,
> Thomas.

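For anyone watching a resync like the one above, the progress fields can be
pulled out of /proc/mdstat with a short shell sketch. The sample line below
mirrors the quoted output; the field layout is an assumption based on that
output, not on any guaranteed format:

```shell
#!/bin/sh
# Sketch: extract the resync percentage and estimated finish time for one
# md device. The sample string mirrors the /proc/mdstat line quoted above;
# on a live system you would read /proc/mdstat itself.
line='md2 : active raid1 hdc6[1] hda6[0] 208704 blocks [2/2] [UU] resync=7% finish=12.6min'
pct=$(printf '%s\n' "$line" | sed -n 's/.*resync=\([0-9]*\)%.*/\1/p')
eta=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p')
echo "resync ${pct}% complete, about ${eta} minutes to go"
```

Replacing the sample string with the real line from /proc/mdstat (for
example via grep) and running this in a loop gives a crude progress monitor.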