hi mike...
> I've built 10-20 Linux software SCSI RAIDs on 5-10 systems under various
> 2.0.x kernels (but none under 2.2.x and none using IDE).
wow...
> One of the things I've found is that the hardware has to be *very*
> reliable. A recent system with two RAID-5 and one RAID-1 took over a month
> of swapping components until it was solid.
>
> Typically, for each RAID and available partition, I start the following on
> a virtual terminal:
>
> while true; do mke2fs /dev/...; e2fsck -f /dev/...; done
and I assume there's a check in there that flags when e2fsck fails ??
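something like this would catch it ( just a rough sketch, /dev/md0 is
an example device and the -n keeps e2fsck from stopping to ask questions ):

    DEV=/dev/md0
    while true; do
        mke2fs $DEV || { echo "mke2fs FAILED on $DEV"; break; }
        e2fsck -f -n $DEV || { echo "e2fsck FAILED on $DEV"; break; }
    done
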
and I assume there are other tests in there too, like:
mount /dev/md0 /mnt
dd if=/dev/zero of=/mnt/test1 count=1000000
dd if=/dev/zero of=/mnt/test2 count=1000000
diff /mnt/test1 /mnt/test2
umount /mnt
( when we hit the 2GB range, it will do weird things even on hardware raids )
( not necessarily at or past the max file size limit )
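putting it all together with exit checks might look like this
( rough sketch only, /dev/md0 and the sizes are just examples ):

    mount /dev/md0 /mnt || { echo "mount FAILED"; exit 1; }
    # /dev/zero so the test files actually contain data to compare
    dd if=/dev/zero of=/mnt/test1 count=1000000 || echo "write test1 FAILED"
    dd if=/dev/zero of=/mnt/test2 count=1000000 || echo "write test2 FAILED"
    # cmp instead of diff since these are big binary files
    cmp /mnt/test1 /mnt/test2 || echo "test files differ ... bad ram/cable/disk ??"
    rm -f /mnt/test1 /mnt/test2
    umount /mnt
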
have fun
alvin
http://www.linux-consulting.com/Raid
( my raid stuff from before the latest/greatest Software-RAID HOWTO docs )
> I also run these on the raw partitions before building the RAIDs. Often, a
> system which passes manufacturer tests and runs NT and passes QAPLUS/FE and
> installs Redhat will fail this test in an hour.
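for the raw partitions, one loop per partition works ( sketch only,
the sd* names are examples, and it's backgrounded here instead of one
per virtual terminal like mike does, so the output will interleave ):

    for p in /dev/sda1 /dev/sdb1 /dev/sdc1; do
        ( while true; do
            mke2fs $p && e2fsck -f -n $p || { echo "$p FAILED"; break; }
        done ) &
    done
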
>
> After a *lot* of wasted time in the past, we now require that the system
> can run this test for a week before proceeding with further system
> configuration.
>
> The three most common problems we've found, in order:
>
> 1) Motherboard.
> 2) Memory.
> 3) SCSI cable / termination.
>
> --Mike
> ----------------------------------------------------------
> Mike Bird Tel: 209-742-5000 FAX: 209-966-3117
> President POP: 209-742-5156 PGR: 209-742-9979
> Iron Mtn Systems http://member.yosemite.net/
>