From: "Tony Sceats" <tony.sce...@gmail.com>
To: slug@slug.org.au

> Slower, though ... is a bit of a strange claim. Not because it is false,
> but because the answer is complex: you can, for example, double read
> speed and halve write speed, using a two disk RAID 1 array ... in the
> ideal case.
I must say I'm curious about this, because I have always assumed that for
RAID 1 the write speed would be roughly the same as a single disk, not
halved. My reasoning is that both writes occur in parallel, just as the
reads do; the difference, of course, is that the two parallel reads each
transfer half the data, while the two writes each transfer all of it.

Sure, there may be a little overhead: issuing 2 IO instructions instead of
1, or, in a setup where both disks share the same bus (which is not the
ideal setup), contention on that bus. But halved? Is it really the case?

If it is true, I guess the reason would be that the same data travels over
the same bus twice before the operation can be said to be complete,
thereby halving your write speed. But then this holds for the read as
well: despite issuing instructions to 2 different disks, each for half the
data requested, you will meet the same contention and the data will reach
you at the same speed as 1 disk. So, if this is right, then RAID 1
compared to a single disk would be something like:

1. 2 disks on 2 buses = (approx) half read time, same write time
2. 2 disks on 1 bus = (approx) same read time, double write time

I honestly don't know if this is the case or not; I've certainly never
measured it, and it may be implementation specific, but if not I'd really
like to be shown where this is wrong..

I am inclined to think, for raid1:

1. 2 disks on 2 buses = (approx) same read time, same write time
2. 2 disks on 1 bus = (approx) double read time, double write time

and for raid0:

1. 2 disks on 2 buses = (approx) half read time, half write time
2. 2 disks on 1 bus = slightly better than the same read time and write time

The reason I state the above is that I did see a benchmark for one of
those SIL680 PCI cards (dual IDE channel): most of the raid0 gain came
from having the 2 individual drives on individual IDE buses. They also put
4 IDE drives on 2 IDE buses and got more gain, but not as much as 2 drives
on their own buses, all compared to 1 drive on one bus, of course.

I use a kernel raid setup with 2 disks (Samsung 500GB): raid1 for /boot
and raid0 for /, and I back up with dd to another drive every other week.
This is just a desktop, nothing too important. Raid 5 seems to be all the
go from what I have read, but I do not have the setup or time to look into
it.

My raid1 /boot:

/dev/md1:
 Timing cached reads:   7612 MB in 2.00 seconds = 3810.62 MB/sec
 Timing buffered disk reads:  244 MB in 3.01 seconds = 81.09 MB/sec

1 drive from the raid1 array above:

/dev/sda1:
 Timing cached reads:   7490 MB in 2.00 seconds = 3749.14 MB/sec
 Timing buffered disk reads:  248 MB in 3.02 seconds = 82.01 MB/sec

My raid0 /:

/dev/md3:
 Timing cached reads:   7770 MB in 2.00 seconds = 3889.21 MB/sec
 Timing buffered disk reads:  486 MB in 3.00 seconds = 161.79 MB/sec

Some guy had a new WD drive and got 100 MB/sec from a single drive, so
expect 200 MB/sec from 2 (Sata2) ... haven't seen a Sata3 drive yet.

1 drive from the raid0 md3 array above:

/dev/sda3:
 Timing cached reads:   7612 MB in 2.00 seconds = 3810.57 MB/sec
 Timing buffered disk reads:  256 MB in 3.01 seconds = 84.99 MB/sec

These read times were all done with:

    hdparm -tT /dev/(device)

Anyone know a good non-destructive write test for benchmarking HDDs?

Hope this helps,
Brett

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
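The 1-bus vs 2-bus arithmetic in the thread can be put into numbers. This
is a toy back-of-envelope model only, not a benchmark: the per-disk rate
(85 MB/s) and shared-bus ceiling (150 MB/s) are made-up figures, loosely
based on the hdparm numbers above.

```shell
#!/bin/sh
# Toy model: effective sequential MB/s for a 2-disk array, given an
# assumed per-disk rate D and a shared-bus ceiling B (both assumptions).
D=85    # single-disk sequential throughput, MB/s (assumption)
B=150   # shared-bus ceiling, MB/s (assumption)

# raid1 write, 2 buses: both copies go out in parallel -> about D
echo "raid1 write, 2 buses: $D MB/s"

# raid1 write, 1 bus: the data crosses the one bus twice -> min(D, B/2)
half=$((B / 2))
if [ "$half" -lt "$D" ]; then w=$half; else w=$D; fi
echo "raid1 write, 1 bus:   $w MB/s"

# raid0 read/write, 2 buses: stripes move in parallel -> about 2*D
echo "raid0 read,  2 buses: $((2 * D)) MB/s"

# raid0 on 1 bus: capped by the shared bus -> min(2*D, B)
if [ "$B" -lt "$((2 * D))" ]; then r=$B; else r=$((2 * D)); fi
echo "raid0 read,  1 bus:   $r MB/s"
```

Under these assumptions the model agrees with the SIL680 benchmark
pattern: raid0 on one channel is bus-limited, while two channels roughly
double the single-disk rate.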
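On the non-destructive write test question: one common trick is to write a
throwaway file on the mounted filesystem with dd, so nothing already on
the disk is touched. The path and size below are just examples; the point
is conv=fdatasync, which makes GNU dd flush to disk before reporting a
rate (otherwise you mostly measure the page cache).

```shell
#!/bin/sh
# Non-destructive write benchmark: writes an ordinary file on the target
# filesystem rather than to the raw device, then removes it.
TESTFILE=/tmp/ddtest.img    # pick a path on the filesystem you want to test
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync
rm -f "$TESTFILE"
```

dd prints the achieved MB/s on stderr when it finishes; run it a couple of
times and ignore the first result if the drive was idle.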