On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
<gregory.dur...@gmail.com> wrote:
> Hello,
> We just purchased two of the SC847E26-RJBOD1 units to be used in a
> storage environment running Solaris 11 Express.
>
> We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI SAS
> 9200-8e HBA. We are not using failover/redundancy, meaning that one
> port of the HBA goes to the primary front backplane interface and the
> other goes to the primary rear backplane interface.
>
> For testing, we have done the following:
> Installed 12 disks in the front, 0 in the back.
> Created stripes of different numbers of disks. After each test, I
> destroy the underlying storage volume and create a new one. As you can
> see from the results, adding more disks makes no difference to
> performance. Going from 4 disks to 8 should make a large difference,
> yet none is shown.
>
> Any help would be greatly appreciated!
>
> This is the result:
>
> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C3503681+0 records in
> 3503681+0 records out
> 14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s

So, the problem here is that you're not testing the storage at all.
You're basically measuring dd itself: a single thread pushing 4k
writes at a time spends most of its time on per-call overhead rather
than on I/O, which is why adding disks changes nothing.
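
For instance (a rough sketch; the file name and count are made up),
rerunning with a much larger block size cuts the number of write()
calls by a factor of 256, and will usually report quite different
numbers even though the disks haven't changed:

  dd if=/dev/zero of=/fooPool0/blocktest.tst bs=1048576 count=16384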

To get meaningful results, you need to do two things:

First, run it for long enough to eliminate any write cache effects.
Writes go to memory and only get sent to disk in the background, so a
short run largely measures RAM; write at least a couple of times the
machine's installed RAM before trusting the numbers.
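
As a rough sketch (the file name and count are assumptions; scale the
count so the file is at least twice your RAM), something like this,
with a second terminal watching the pool, gives a truer picture:

  # Stream well past the size of RAM, in large blocks.
  dd if=/dev/zero of=/fooPool0/bigfile.tst bs=1048576 count=131072

  # Meanwhile, watch what actually hits the disks:
  zpool iostat -v fooPool0 5

One caveat: if compression or dedup is enabled on the pool, a stream
of zeros from /dev/zero will give absurdly optimistic numbers.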

Second, use a proper benchmark suite, and one that isn't itself
a bottleneck. Something like vdbench, although there are others.
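
As a starting point, a minimal vdbench parameter file for a
sequential-write test might look like this (the file name, size, and
run length are just assumptions):

  sd=sd1,lun=/fooPool0/vdbench.dat,size=100g
  wd=wd1,sd=sd1,xfersize=1m,rdpct=0,seekpct=0
  rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5

Run it with something like 'vdbench -f parmfile'; an elapsed time of
several minutes smooths out the cache effects described above.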

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/