On Sep 4, 2009, at 5:25 PM, Scott Meilicke <scott.meili...@craneaerospace.com> wrote:
> I only see the blocking while load testing, not during regular
> usage, so I am not so worried. I will try the kernel settings to see
> if that helps if/when I see the issue in production.
>
> For what it is worth, here is the pattern I see when load testing
> NFS (iometer, 60% random, 65% read, 8k chunks, 32 outstanding I/Os):
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> data01      59.6G  20.4T     46     24   757K  3.09M
> data01      59.6G  20.4T     39     24   593K  3.09M
> data01      59.6G  20.4T     45     25   687K  3.22M
> data01      59.6G  20.4T     45     23   683K  2.97M
> data01      59.6G  20.4T     33     23   492K  2.97M
> data01      59.6G  20.4T     16     41   214K  1.71M
> data01      59.6G  20.4T      3  2.36K  53.4K  30.4M
> data01      59.6G  20.4T      1  2.23K  20.3K  29.2M
> data01      59.6G  20.4T      0  2.24K  30.2K  28.9M
> data01      59.6G  20.4T      0  1.93K  30.2K  25.1M
> data01      59.6G  20.4T      0  2.22K      0  28.4M
> data01      59.7G  20.4T     21    295   317K  4.48M
> data01      59.7G  20.4T     32     12   495K  1.61M
> data01      59.7G  20.4T     35     25   515K  3.22M
> data01      59.7G  20.4T     36     11   522K  1.49M
> data01      59.7G  20.4T     33     24   508K  3.09M
> LSI SAS HBA, 3 x 5-disk raidz, Dell 2950, 16GB RAM.
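(For reference, output like the above is what zpool iostat prints when
given a sampling interval; a minimal way to capture it, assuming a
1-second interval and the pool name from the output:)

    # Pool-wide capacity, operations, and bandwidth, one sample per second
    zpool iostat data01 1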
With that setup you'll see at most 3x the IOPS of a single disk on
random I/O, since each raidz vdev delivers roughly one disk's worth of
random IOPS; that's not really the kind of setup for a 60% random
workload. Assuming 2TB SATA drives at roughly 80 random IOPS each, the
pool maxes out around 240 IOPS.

Now if it were mirror vdevs you'd get 7 vdevs, or about 560 IOPS.
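To make the arithmetic concrete (the ~80 random IOPS per SATA drive is
an assumption consistent with the numbers above, and the device names
are hypothetical):

    # Random IOPS scales with vdev count, not disk count:
    #   3 x 5-disk raidz -> 3 vdevs -> 3 * 80 = 240 IOPS
    #   7 x 2-way mirror -> 7 vdevs -> 7 * 80 = 560 IOPS (15th disk as spare)

    # Current layout (hypothetical device names c1t0d0..c1t14d0):
    zpool create data01 \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
      raidz c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0

    # Same disks as mirrors, trading capacity for random IOPS:
    zpool create data01 \
      mirror c1t0d0 c1t1d0  mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0  mirror c1t6d0 c1t7d0 \
      mirror c1t8d0 c1t9d0  mirror c1t10d0 c1t11d0 \
      mirror c1t12d0 c1t13d0  spare c1t14d0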
Is this for VMware or data warehousing?
You'll also need an SSD in the mix as a dedicated log device if you're
not using a controller with NVRAM write-back cache, especially when
sharing over NFS, since NFS clients force lots of synchronous writes.
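Something along these lines, with a hypothetical SSD device name:

    # Put the ZIL on an SSD so synchronous NFS writes commit to flash
    # instead of waiting on the raidz disks (c2t0d0 is an assumption):
    zpool add data01 log c2t0d0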
I guess since it's 15 drives it's an MD1000. I might have gone with
the newer 2.5" drive enclosure, since it holds 24 drives rather than 15
and most SSDs come in the 2.5" form factor.
Since you've already got it, invest in a PERC 6/E with 512MB of cache
and stick it in the other PCIe x8 slot.
-Ross