On Mon, 16 Jul 2012, Bob Friesenhahn wrote:

> On Mon, 16 Jul 2012, Michael Hase wrote:

>> This is my understanding of zfs: it should load balance read requests across both sides of a mirror even for a single sequential reader. zfs_prefetch_disable is at its default of 0. And I can see exactly this scaling behaviour with sas disks and with scsi disks, just not on this sata pool.

> Is the BIOS configured to use AHCI mode or is it using IDE mode?

Not relevant here: the disks are connected to an onboard sas hba (lsi 1068, see first post), and the hardware is a Primergy RX330 with two quad-core Opterons.


> Are the disks 512 byte/sector or 4K?

512 byte/sector, HDS721010CLA330
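
For the record, the sector size the OS actually sees can be double-checked with prtvtoc; just a sketch, the device/slice name is only an example (one of the pool disks from the iostat output further down), and s0 assumes an EFI-labelled whole-disk zfs device:

    # the "Dimensions" section of the output reports bytes/sector
    prtvtoc /dev/rdsk/c13t1d0s0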


>> Maybe it's a corner case which doesn't matter in real-world applications? The random seek values in my bonnie output show the expected performance boost when going from one disk to a mirrored configuration. It's just the sequential read/write case that differs between the sata and sas disks.

> I don't have a whole lot of experience with SATA disks, but it is my impression that you might see this sort of performance if the BIOS were configured so that the drives were used as IDE disks. If not that, then there must be a bottleneck in your hardware somewhere.

With early Nevada releases I did indeed have the IDE/AHCI problem, albeit on different hardware: Solaris only ran in IDE mode and the disks were 4 times slower than on Linux, see http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/intel/sol_10_05_08/2999.html

Wouldn't a hardware bottleneck show up in raw dd tests as well? I can stream > 130 MB/sec from each of the two disks in parallel. Reading with dd from more than these two disks at the same time results in a slight slowdown, but here we are talking about nearly 400 MB/sec of aggregate bandwidth through the onboard hba, and the box has 6 disk slots (a sketch of such a parallel dd run follows the iostat output below):

                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   94.5    0.0   94.5    0.0  0.0  1.0    0.0   10.5   0 100 c13t6d0
   94.5    0.0   94.5    0.0  0.0  1.0    0.0   10.6   0 100 c13t1d0
   93.0    0.0   93.0    0.0  0.0  1.0    0.0   10.7   0 100 c13t2d0
   94.5    0.0   94.5    0.0  0.0  1.0    0.0   10.5   0 100 c13t5d0
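
The parallel reads were plain dd runs against the raw devices, roughly along these lines (just a sketch, not the exact command line: the slice name and the bs/count values are placeholders, though 1 MB reads would match the ~1 MB average request size visible above):

    # read from four raw disks in parallel, watch iostat -xn in another terminal
    for d in c13t1d0 c13t2d0 c13t5d0 c13t6d0; do
        dd if=/dev/rdsk/${d}s0 of=/dev/null bs=1024k count=4096 &
    done
    wait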

I don't know why this is a bit slower; maybe some pci-e bottleneck, or something with the mpt driver: intrstat shows that only one cpu handles all mpt interrupts. Or even the slow cpus? These are 1.8 GHz Opterons.
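
For anyone who wants to check the same thing, the interrupt distribution can be watched with the standard tools while the test runs (a sketch, nothing specific to this box):

    # per-device, per-cpu interrupt counts (the mpt instance should show up here)
    intrstat 5
    # per-cpu totals; the intr/ithr columns show which cpu takes the load
    mpstat 5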

During sequential reads from the zfs mirror I see > 1000 interrupts/sec on one cpu. So it could really be a bottleneck somewhere, triggered by the "smallish" 128k i/o requests from the zfs side. I think I'll benchmark again on a Xeon box with faster cpus; my tests with the sas disks were done on that other box.
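
To reproduce the zfs side of it, this is roughly the kind of sequential read I mean (a sketch: the pool and file names are made up, and bs=128k simply mirrors the default recordsize; iostat then shows whether both mirror sides get an even share of the reads):

    # sequential read of a large file living on the mirrored pool
    dd if=/tank/bigfile of=/dev/null bs=128k &
    # both mirror sides should show comparable r/s and Mr/s if reads are balanced
    iostat -xn 5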

Michael


> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
