On Saturday 21 August 2004 15:17, Lyle Giese wrote:
> My experience with IDE in raid arrays is less than stellar and will be
> trashing them as I rebuild servers. I have had several instances where one
> drive fails and the entire array falls over as the kernel struggles to
> recover from the loss of a drive or the error messages. I have seen this
> with Linux generated arrays and with the Promise IDE raid cards. Besides
> the performance of the Promise parallel IDE raid sucks big time.
Odd, I have had absolutely *zero* issues with Promise PATA cards... I use strictly software RAID on both SCSI and IDE on Linux 2.4, and I have never had the kernel fail due to I/O load during a rebuild or while dealing with failed drives.

Note: you can easily throttle the I/O bandwidth used for rebuilding through /proc. I've never had to do it, though.

Now, mind you, all I do is software RAID1; I don't do RAID5. I typically buy drives in pairs and then use LVM to give me a big "blob" of storage, which I partition up as I see fit with logical volumes. The largest (by number of drives) array I have is an 8-drive setup: 6 drives in RAID1 pairs ganged together for about 300G, plus a separate RAID0 on a pair of old IBM "DeathStar" drives as a temporary data area for MythTV. This is all on a cheapass Pentium 3 system. No issues, even when running in degraded mode.

-A.

_______________________________________________
Asterisk-Users mailing list
[EMAIL PROTECTED]
http://lists.digium.com/mailman/listinfo/asterisk-users
To UNSUBSCRIBE or update options visit:
   http://lists.digium.com/mailman/listinfo/asterisk-users
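For reference, a rough sketch of the two things mentioned above: the rebuild throttle, which lives under /proc/sys/dev/raid/ on Linux software RAID, and the RAID1-pairs-plus-LVM layout. The device names (/dev/hde1 etc.), volume group name, and sizes here are placeholders, not the poster's actual configuration; commands require root and real block devices.

```shell
# Watch software RAID status and rebuild progress
cat /proc/mdstat

# Throttle rebuild I/O bandwidth, in KB/s per device.
# speed_limit_min is the guaranteed floor, speed_limit_max the ceiling;
# defaults vary by kernel version.
echo 1000  > /proc/sys/dev/raid/speed_limit_min
echo 10000 > /proc/sys/dev/raid/speed_limit_max

# Mirror each pair of drives (device names are hypothetical)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdi1 /dev/hdk1

# Gang the mirrors together into one LVM "blob", then carve out
# logical volumes as needed
pvcreate /dev/md0 /dev/md1
vgcreate blob /dev/md0 /dev/md1
lvcreate -L 100G -n data blob
```

The nice property of this layout is that redundancy is handled per-pair at the md layer, while LVM only sees reliable physical volumes, so you can grow the blob later by adding another mirrored pair and running vgextend.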