On 02/21/2013 07:27 PM, Timothy Coalson wrote:
> I think last time this was asked, the consensus was that the implementation
> was based on linear feedback shift registers and xor, which happens to be a
> Reed-Solomon code (not as clear on this part, but what matters is what it
> is, not what it isn't).  Regardless, from reading the source previously, I
> am fairly sure it operates bytewise, with xor for first syndrome (parity),
> and LFSR and then xor for the other syndromes.
> 
> See
> http://openindiana.org/pipermail/openindiana-discuss/2012-October/010419.html

I tore out the parity calculations for raidz1 and raidz2 (attached) from
vdev_raidz.c and here are the results:

 ("5 1 32 1000000" below means 1000000 iterations over a 5-drive
  raidz-1 at 32k per data drive; 4 data drives * 32k = 128k block)
$ for ((I = 0; I < 2; I++)); do time ./raidz_test 5 1 32 1000000 & done
real    0m32.045s
user    0m32.336s
sys     0m0.015s

real    0m32.372s
user    0m32.486s
sys     0m0.017s

So combined raidz1 throughput (two processes) is:
128 * 1024 * 1000000 bytes / 2^30 / 32 s * 2 = 7.6293 GiB/s

 ("4 2 64 1000000" below means 1000000 iterations over a 4-drive
  raidz-2 at 64k per data drive; 2 data drives * 64k = 128k block)
RAIDZ2:
$ for ((I = 0; I < 2; I++)); do time ./raidz_test 4 2 64 1000000 & done
real    3m3.040s
user    3m0.920s
sys     0m0.078s

real    3m3.082s
user    3m1.092s
sys     0m0.058s

So combined raidz2 throughput (two processes) is:
128 * 1024 * 1000000 bytes / 2^30 / 183 s * 2 = 1.3341 GiB/s

Next comes the factor of reduced data spindle count: a 4-drive raidz1
contains 3 data spindles, while a 4-drive raidz2 contains only 2. Fewer
data spindles mean less raw throughput.

I think we can thus conclude that the performance drop Reginald is
seeing is entirely expected.

Cheers,
--
Saso
_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
