On Fri, 8 Jul 2011, Marco Peereboom wrote:
> On Fri, Jul 08, 2011 at 09:39:26PM +0200, Piotr Durlej wrote:
> > Reading RAID 1 volumes can be slow because of the way reads are
> > interleaved:
> >
> > # dd if=/dev/rsd0c of=/dev/null bs=1m count=256
> > 256+0 records in
> > 256+0 records out
> > 268435456 bytes transferred in 1.841 secs (145780529 bytes/sec)
> > # dd if=/dev/rsd1c of=/dev/null bs=1m count=256
> > 256+0 records in
> > 256+0 records out
> > 268435456 bytes transferred in 1.774 secs (151248200 bytes/sec)
> > # dd if=/dev/rsd2c of=/dev/null bs=1m count=256
> > 256+0 records in
> > 256+0 records out
> > 268435456 bytes transferred in 4.683 secs (57311236 bytes/sec)
> > # bioctl sd2
> > Volume      Status               Size Device
> > softraid0 0 Online       491516657664 sd2     RAID1
> >           0 Online       491516657664 0:0.0   noencl <sd1d>
> >           1 Online       491516657664 0:1.0   noencl <sd0d>
> > # ^D
> >
> > As a workaround I have produced this patch:
> >
> > http://www.durlej.net/sr1.diff
>
> This only helps sequential reads.  I think under real load this is a
> total wash.  What if you picked the slow drive?  What if you are doing
> heavy writes at the same time too?
Indeed, this will only help sequential reads. However, that includes small
numbers of concurrent sequential reads. It is clear that truly random reads
will suffer from this change, but that could be solved by a smarter
split-seek design that lets the array scale. And writes hit all working
drives anyway, so in that case read performance is affected no matter which
disk you read from.

As for picking the slow drive: arrays are usually built from identical or
similar drives. Also, for sequential reads without this patch, the entire
array is significantly slower than a single component. This is probably
because reading every other sector of a component in a two-disk RAID 1 set
is not going to make the disk spin twice as fast ;)

Anyway, this is not what I would expect from a RAID 1 array.

Regards,
Piotr
