> I've read a lot of different reports that suggest at this point in time, 
> kernel software raid is in most cases better than controller raid.
>   
Let me define 'most cases' for you. Linux software raid can perform as 
well as or better than hardware raid if you are using 
raid0/raid1/raid1+0 arrays. If you are using raid5/6 arrays, the more 
disks are involved, the better a hardware raid card (one with 
sufficient processing power and cache - a long time ago software raid 5 
beat the pants off hardware raid cards based on the Intel i960 chip) 
will perform.
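
For the curious, here is a minimal Python sketch of the full-stripe 
parity calculation a raid5 implementation has to do on every stripe 
write - this is the work a hardware card's XOR engine takes off the 
host CPU. The 64 KiB chunk size is just an assumption for illustration:

    # Minimal sketch: RAID 5 parity is the byte-wise XOR of the data
    # chunks in a stripe. Software raid does this on the host CPU;
    # hardware raid does it on the controller. Chunk size is an
    # assumed 64 KiB.
    CHUNK = 64 * 1024

    def raid5_parity(data_chunks):
        """Return the parity chunk for one full stripe."""
        parity = bytearray(CHUNK)
        for chunk in data_chunks:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    # Example: a 4-disk array has 3 data chunks + 1 parity chunk per stripe.
    stripe = [bytes([d]) * CHUNK for d in (1, 2, 3)]
    assert raid5_parity(stripe) == bytes([1 ^ 2 ^ 3]) * CHUNK

The more disks in the stripe, the more data has to be XORed per write, 
which is why the balance tips toward hardware as the spindle count 
goes up.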


I have already posted on this and there are links to performance tests 
on this very subject. Let me look for the post.


> The basic argument seems to be that CPUs are fast enough now that the 
> limitation on throughput is the drive itself, and that SATA resolved the 
> bottleneck that PATA caused with kernel raid. The arguments then go on 
>   
Complete bollocks. The bottleneck is not the drives themselves: whether 
SATA or PATA, disk drive performance has not changed much, which is why 
15k RPM disks are still king. The bottleneck is the bus, be it PCI-X or 
PCIe x16/x8/x4, or at least the latencies involved due to bus traffic.
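
A rough back-of-the-envelope in Python makes the point. The throughput 
figures below are assumed round numbers, not measurements from any 
particular card or disk:

    # Back-of-the-envelope: aggregate sequential throughput of an array
    # versus what the bus can move. All figures are assumed round numbers.
    PCIE_X4_MB_S = 4 * 250       # PCIe 1.x x4 slot, ~250 MB/s per lane
    PCI_X_MB_S = 1064            # 64-bit/133 MHz PCI-X, shared by the bus
    DISK_SEQ_MB_S = 80           # hypothetical 15k RPM disk, sequential

    for disks in (4, 8, 12, 16):
        wanted = disks * DISK_SEQ_MB_S
        print(f"{disks:2d} disks want ~{wanted} MB/s; "
              f"PCIe x4 gives ~{PCIE_X4_MB_S}, PCI-X gives ~{PCI_X_MB_S}")

Past a dozen or so spindles the bus saturates long before the drives do.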

> to give numerous examples where a failing hardware raid controller 
> CAUSED data loss, where a raid card died and an identical raid card had 
> to be scrounged from eBay to even read the data on the drives, etc. - 
> problems that apparently don't happen with kernel software raid.
>
>   
Buy extra cards. Duh. An easy solution for what is a very rare problem 
anyway.

