Sorry for the lack of specifics in the first post; I was mainly directing it at their situation and how RAID1 was a poor choice. The reason RAID5 and RAID6 are much faster at reading is the same reason RAID0 is superior in read speed to RAID1: the data is spread across a number of drives, and when it is read back each drive pulls small chunks to reconstruct the file. If each drive in the array can do a maximum of 120 MB/s, you are going to get significant speeds when pulling from multiple drives at once.
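To make the scaling concrete, here is a back-of-envelope sketch (not a benchmark) of sequential read throughput per RAID level. The 120 MB/s per-drive figure is the one used above; the 6-drive array size is a hypothetical example, and the RAID1 case models a single sequential read stream (real RAID1 implementations can balance concurrent reads across mirrors):

```python
# Rough aggregate sequential-read throughput for an array, assuming
# each drive sustains `per_drive` MB/s. This is an idealized model;
# real results vary with controller, stripe size, and workload.
def raid_read_mbs(level, n_drives, per_drive=120):
    if level == "raid0":
        return n_drives * per_drive        # every spindle carries data
    if level == "raid5":
        return (n_drives - 1) * per_drive  # one stripe unit per stripe is parity
    if level == "raid6":
        return (n_drives - 2) * per_drive  # two stripe units per stripe are parity
    if level == "raid1":
        return per_drive                   # one mirror serves a single stream
    raise ValueError("unknown level: %s" % level)

for level in ("raid1", "raid0", "raid5", "raid6"):
    print(level, raid_read_mbs(level, 6), "MB/s")
```

With 6 drives at 120 MB/s each, the model gives 120 MB/s for RAID1 versus 600 MB/s for RAID5 and 480 MB/s for RAID6, which is in the same ballpark as the numbers discussed below.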
This is a RAID6 array on non-enterprise hardware. Slower than it should be IMO (it should be around 500 MB/s), but 300 is fine for what it is used for.

[r...@----]# hdparm -tT /dev/sdb1

/dev/sdb1:
 Timing cached reads:   4036 MB in  2.00 seconds = 2018.43 MB/sec
 Timing buffered disk reads:  996 MB in  3.01 seconds = 331.25 MB/sec

On Sun, Jan 4, 2009 at 12:17 AM, Jeff Lasman <[email protected]> wrote:
> On Saturday 03 January 2009 07:54 pm, Peter Manis wrote:
>
> > I have known people who have had both drives fail.
>
> I've heard of it, too. But I've never seen it. And since we back up as
> well, and can restore quickly, I'm not that worried.
>
> > That aside, RAID1 is slow; you will have better performance by using
> > RAID5, RAID6, RAID10, RAID 0+1, or any of the many other RAID levels.
>
> Please show me statistics. My understanding is that RAID1 is fast at
> reading, slow at writing. Which fits our model (webhosting) perfectly.
>
> > Remember this was a database server, a central location for the
> > site's entire database. The database is usually the bottleneck, so
> > squeezing out speed where you can is always a good thing, not to
> > mention the requirement for better fault tolerance than mirroring.
>
> I didn't see anywhere in your first post on the topic that you meant in
> this particular circumstance. I'd agree with you; for a database server
> they should have been using a more fault-tolerant configuration.
>
> > It is also wasteful when it comes to data: you are using N*2 vs N+1
> > (RAID5) or N+2 (RAID6). RAID10 and 0+1 are also wasteful, but may be
> > better options than RAID5/6 depending on the application.
>
> Yes, it's wasteful.
>
> > If I were in a situation where I was hosting a number of sites on a
> > number of servers, I would not feel as strongly about avoiding RAID1
> > (depending on traffic), because it would not be a single point of
> > failure for the whole operation as this database server was.
>
> And this is a major concern; see more below...
> > I would never use RAID1 alone for a database server unless it was a
> > last resort, and by last resort I mean the machine is a 1U that will
> > only hold 2 drives, or the machine only has 2 SATA/SCSI ports and no
> > way to add even a non-RAID controller card... or I was 100%
> > completely broke.
>
> For a database server, I'd agree.
>
> And ... most (not all) of our hosting machines are 1U machines which
> hold two drives. We could use NAS in lieu of drives-in-servers, but
> then I'd worry about a single point of failure affecting more clients.
>
> I know of one webhoster who hosts over 6,000 domains on one server.
> Yes, he uses NAS. I'd still rather do it my way (20 servers for 6,000
> domains, each running two drives, RAID1).
>
> Thanks for your continued clarification.
>
> Jeff
> --
> Jeff Lasman, Nobaloney Internet Services
> P.O. Box 52200, Riverside, CA 92517
> Our jplists address used on lists is for list email only
> voice: +1 951 643-5345, or see:
> "http://www.nobaloney.net/contactus.html"
> _______________________________________________
> LinuxUsers mailing list
> [email protected]
> http://socallinux.org/cgi-bin/mailman/listinfo/linuxusers

--
Peter Manis
(678) 269-7979
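P.S. On the N*2 vs N+1 vs N+2 point raised in the quoted exchange: a quick sketch of how many raw drives each level needs to hold N drives' worth of data (the 4-drive figure below is just an example):

```python
# Raw drives required to store n_data drives' worth of data,
# following the N*2 / N+1 / N+2 framing used in the thread.
def drives_needed(level, n_data):
    return {
        "raid1": 2 * n_data,    # full mirror of every data drive
        "raid5": n_data + 1,    # one drive's worth of parity
        "raid6": n_data + 2,    # two drives' worth of parity
    }[level]

for level in ("raid1", "raid5", "raid6"):
    print(level, drives_needed(level, 4), "drives for 4 drives of data")
```

So for 4 drives of data, RAID1 needs 8 drives where RAID5 needs 5 and RAID6 needs 6, which is the waste being described.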
