On Tue, Aug 9, 2011 at 8:45 PM, Gregory Durham <gregory.dur...@gmail.com> wrote:

> For testing, we have done the following:
> Installed 12 disks in the front, 0 in the back.
> Created a stripe of different numbers of disks.

So you are creating one zpool with one disk per vdev and varying the
number of vdevs (so the number of vdevs equals the number of disks),
with NO redundancy?
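
For reference, a plain stripe like that is built with each disk as its
own top-level vdev. The device names below are placeholders and I am
only guessing at what your createPool.sh does:

  # no redundancy: every disk is its own top-level vdev (plain stripe)
  zpool create fooPool0 c0tAAAAd0 c0tBBBBd0 c0tCCCCd0 c0tDDDDd0

  # for comparison, a redundant layout using two-way mirror vdevs
  zpool create fooPool0 mirror c0tAAAAd0 c0tBBBBd0 \
                        mirror c0tCCCCd0 c0tDDDDd0

`zpool status fooPool0` will show the actual layout either way.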

Do you have compression enabled?
Do you have dedup enabled?
I expect the answer to both is no, given that the test data is
/dev/zero. If compression were on, the zeros would compress to almost
nothing and the write would be limited mostly by memory bandwidth, so
on a modern server I would expect _much_ higher numbers. What is the
server hardware configuration?
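
You can check both with `zfs get`; for example, assuming the pool (and
its root dataset) is named fooPool0 as in your transcript:

  # show the compression and dedup settings on the root dataset
  zfs get compression,dedup fooPool0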

You are testing sequential write access only; is this really what the
application will be doing?

> After each test, I
> destroy the underlying storage volume and create a new one. As you can
> see by the results, adding more disks makes no difference to the
> performance. This should make a large difference from 4 disks to 8
> disks, but no difference is shown.

Unless you are being limited by something else... What does `iostat
-xn 1` show during the test? There should be periods of zero activity
and then huge peaks (as the transaction group is committed to disk).
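
One simple way to capture that is to run iostat in the background into
a file while the dd runs (the output path here is just an example):

  # record per-device stats once per second during the test
  iostat -xn 1 > /tmp/iostat-during-dd.txt &
  time dd if=/dev/zero of=/fooPool0/86gb.tst bs=4096 count=20971520
  kill %1          # stop the background iostat
  less /tmp/iostat-during-dd.txt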

You are using a 4KB test data block size; is that realistic? My
experience is that ZFS performance with block sizes that small under
the default "suggested" recordsize of 128K is not very good. Try
setting recordsize to 16K (zfs set recordsize=16k <poolname>) and see
if you get different results. Also try a different tool instead of dd
(iozone is OK, but the best I have found is filebench, although that
takes a bit more work to get useful data out of), and try a different
test data block size.
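
As a rough sketch of both suggestions (pool name taken from your
transcript; the iozone file size and test file path are just examples):

  # make the dataset recordsize match the application I/O size
  zfs set recordsize=16k fooPool0

  # sequential write/rewrite plus random read/write using 16K records
  # on an 8 GB test file (ideally larger than RAM/ARC)
  iozone -i 0 -i 2 -r 16k -s 8g -f /fooPool0/iozone.tst

The -i 2 pass also gets you beyond purely sequential access, which ties
back to the earlier question about what the application will really do.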

See 
https://spreadsheets.google.com/a/kraus-haus.org/spreadsheet/pub?hl=en_US&hl=en_US&key=0AtReWsGW-SB1dFB1cmw0QWNNd0RkR1ZnN0JEb2RsLXc&output=html
for my experience changing configurations. I did not bother changing
the total number of drives as that was already fixed by what we
bought.

> Any help would be greatly appreciated!
>
> This is the result:
>
> root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
> root@cm-srfe03:/home/gdurham~# sh createPool.sh 4
> spares are: c0t5000CCA223C00A25d0
> spares are: c0t5000CCA223C00B2Fd0
> spares are: c0t5000CCA223C00BA6d0
> spares are: c0t5000CCA223C00BB7d0
> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C3503681+0 records in
> 3503681+0 records out
> 14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s
>
>
> real    0m39.396s
> user    0m1.791s
> sys     0m36.029s
> root@cm-srfe03:/home/gdurham~#
> root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
> root@cm-srfe03:/home/gdurham~# sh createPool.sh 6
> spares are: c0t5000CCA223C00A25d0
> spares are: c0t5000CCA223C00B2Fd0
> spares are: c0t5000CCA223C00BA6d0
> spares are: c0t5000CCA223C00BB7d0
> spares are: c0t5000CCA223C02C22d0
> spares are: c0t5000CCA223C009B9d0
> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C2298711+0 records in
> 2298711+0 records out
> 9415520256 bytes (9.4 GB) copied, 25.813 s, 365 MB/s
>
>
> real    0m25.817s
> user    0m1.171s
> sys     0m23.544s
> root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
> root@cm-srfe03:/home/gdurham~# sh createPool.sh 8
> spares are: c0t5000CCA223C00A25d0
> spares are: c0t5000CCA223C00B2Fd0
> spares are: c0t5000CCA223C00BA6d0
> spares are: c0t5000CCA223C00BB7d0
> spares are: c0t5000CCA223C02C22d0
> spares are: c0t5000CCA223C009B9d0
> spares are: c0t5000CCA223C012B5d0
> spares are: c0t5000CCA223C029AFd0
> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C6272342+0 records in
> 6272342+0 records out
> 25691512832 bytes (26 GB) copied, 70.4122 s, 365 MB/s
>
>
> real    1m10.433s
> user    0m3.187s
> sys     1m4.426s



-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
