Ow Mun Heng wrote:
-----Original Message-----
From: Scott Marlowe <[EMAIL PROTECTED]>
If you throw enough drives on a quality RAID controller at it, you can
get very good throughput.  If you're looking at read only / read
mostly, then RAID 5 or 6 might be a better choice than RAID 10.  But
RAID 10 is my default choice unless testing shows RAID 5/6 can beat
it.

I'm loading my slave server with RAID 0 based on 3 IDE 7200 RPM drives.
Is this worse off than a RAID 5 implementation?


I see no problem using RAID 0 on a purely read-only database where there is a copy of the data somewhere else. RAID 0 gives you performance, but if one of the three drives dies it takes the server down and data is lost. The idea behind RAID 1/5/6/10 is that if a drive does fail, the system can keep going, giving you time to shut down and replace the bad disk, or, if the drives are hot-swappable, to just pull and replace it. I went through failed drives on an email server a few months ago. That was a case where I told the client the server was 5 years old and it was time to replace it; about 3 months later I got a call saying "the server is really slow". It turned out one of the drives in the RAID 10 had failed. The client allowed me to order a new server at that point.
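For a rough sense of the tradeoff being described, here is a back-of-the-envelope sketch (not from the thread itself): it compares the chance of losing a 3-drive RAID 0 array versus a 3-drive RAID 5 array over a year, assuming independent drive failures and a purely illustrative 5% per-drive annual failure rate. It ignores rebuild-window risk, so treat the numbers as order-of-magnitude only.

    # Rough sketch: RAID 0 is lost if ANY drive fails; RAID 5 survives
    # a single failure and is lost only when two or more drives fail.
    from math import comb

    afr = 0.05   # assumed per-drive annual failure rate (illustrative only)
    n = 3        # drives in the array

    # RAID 0: array is lost as soon as one drive is lost.
    p_raid0_loss = 1 - (1 - afr) ** n

    # RAID 5: array is lost only if two or more drives fail in the period.
    p_raid5_loss = sum(
        comb(n, k) * afr**k * (1 - afr) ** (n - k) for k in range(2, n + 1)
    )

    print(f"RAID 0 loss probability: {p_raid0_loss:.3%}")   # ~14.3%
    print(f"RAID 5 loss probability: {p_raid5_loss:.3%}")   # ~0.7%

Under those assumptions the RAID 0 array is roughly 20 times more likely to be lost in a year, which is why it only makes sense when the data can be rebuilt from elsewhere, as on a read-only slave.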
