Hi. I've been doing some simple read/write tests using filebench on a
mirrored pool, scaling the pool up from 4 to 8 to 12 disks between
tests. Aggregate pool write throughput scales well as disks are added,
possibly because I'm using an SSD as a log device. But per-disk write
performance drops by as much as 14 MB/s between the 4-disk and 12-disk
configurations. Across the entire pool that means I've lost 168 MB/s
of raw throughput just by adding two mirror sets. I'm curious to know
if there are any dials I can turn to improve this. System details are
below:
HW: Dual Quad Core 2.33 Xeon 8GB RAM
Disks: Seagate Savio 10K 146GB and LSI 1068e HBA latest firmware
OS: SCXE snv_121
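
To make the arithmetic above explicit (these are just the numbers from
my tests, not measured output of this snippet), the aggregate loss is
the per-disk drop multiplied across all 12 disks. A quick sanity check:

```shell
#!/bin/sh
# Per-disk write throughput drop observed going from 4 to 12 disks (MB/s)
per_disk_drop=14
# Total disks in the largest pool configuration
disks=12
# Aggregate raw throughput lost across the whole pool (MB/s)
echo "$((per_disk_drop * disks)) MB/s lost across the pool"
```

For verifying the per-disk numbers independently of filebench, watching
`iostat -xn 5` during a run shows per-device throughput directly.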
Thanks in advance.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss