You mentioned that the pool was somewhat full; can you send the output
of 'zpool iostat -v pool0'? You can also try the following to reduce
'metaslab_min_alloc_size' to 4K (mdb takes the value in hex, so the
1000 below is 0x1000 = 4096 bytes):

echo "metaslab_min_alloc_size/Z 1000" | mdb -kw

NOTE: This will change the running system, so you may want to make this
change during off-peak hours.
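
If you want to confirm the change took, you should be able to read the
value back (8-byte hex) with:

echo "metaslab_min_alloc_size/J" | mdb -k

And if it does help and you want it to persist across a reboot, an
/etc/system entry along these lines should do it (assuming the tunable
lives in the zfs module on your build):

set zfs:metaslab_min_alloc_size = 0x1000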

Then check your performance and see if it makes a difference.
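
For example, re-run the dd test from your earlier mail while watching
the pool from another window:

dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
zpool iostat -v pool0 5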

- George


On Mon, May 16, 2011 at 10:58 AM, Donald Stahl <d...@blacksun.org> wrote:
> Here is another example of the performance problems I am seeing:
>
> ~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
> 2000+0 records in
> 2000+0 records out
> 2097152000 bytes (2.1 GB) copied, 56.2184 s, 37.3 MB/s
>
> 37MB/s seems like some sort of bad joke for all these disks. I can
> write the same amount of data to a set of 6 SAS disks on a Dell
> PERC6/i at a rate of 160MB/s and those disks are hosting 25 vm's and a
> lot more IOPS than this box.
>
> zpool iostat during the same time shows:
> pool0       14.2T  25.3T    124  1.30K   981K  4.02M
> pool0       14.2T  25.3T    277    914  2.16M  23.2M
> pool0       14.2T  25.3T     65  4.03K   526K  90.2M
> pool0       14.2T  25.3T     18  1.76K   136K  6.81M
> pool0       14.2T  25.3T    460  5.55K  3.60M   111M
> pool0       14.2T  25.3T    160      0  1.24M      0
> pool0       14.2T  25.3T    182  2.34K  1.41M  33.3M
>
> The zeros and other low numbers don't make any sense. And as I
> mentioned, the busy percent and service times of these disks are never
> abnormally high, especially when compared to the much smaller,
> better-performing pool I have.
>



-- 
George Wilson



M: +1.770.853.8523
F: +1.650.494.1676
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com