I'm seeing seriously degraded performance with round-robin SAS multipathing. I'm hoping you guys can help me achieve full throughput across both paths.

My System Config:
OpenSolaris snv_134
2 x E5520 2.4 GHz Xeon Quad-Core Processors
48 GB RAM
2 x LSI SAS 9200-8e (eight-port external 6Gb/s SATA and SAS PCIe 2.0 HBA)
1 X Mellanox 40 Gb/s dual port card PCIe 2.0
1 x JBOD: Supermicro SC846E2-R900B (Dual LSI SASX36 3Gb/s Expander Backplane, 24 Hot Swap drives)
22 x Seagate Constellation ES SAS drives

Performance I'm seeing with Multipathing Enabled (driver: mpt_sas):

With only one of the two paths connected:
1 drive connected: 137 MB/s sustained write, asvc_t: 8 ms
22 drives connected: 1.1 GB/s sustained write, asvc_t: 12 ms

With two paths connected, round-robin enabled:
1 drive connected: 13.7 MB/s sustained write, asvc_t: 25 ms
22 drives: 235 MB/s sustained write, asvc_t: 99 ms

With two paths connected, round-robin disabled, and half the drives pinned to one path (path A) and the other half pinned to the other path (path B):
22 drives: 2.2 GB/s sustained write (1.1 GB/s per path), asvc_t: 12 ms
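For context, the 1.1 GB/s-per-path figure above lines up with the back-of-the-envelope wide-port ceiling (my assumption: each HBA-to-expander cable is a 4-lane wide port at 3 Gb/s per lane with 8b/10b encoding, since the backplane expanders are 3Gb/s parts):

```python
# Rough per-path ceiling for a SAS-1 wide port.
# Assumptions (not from measurement): 4 lanes per cable,
# 3 Gb/s line rate, 8b/10b encoding overhead.
LANES = 4
LINE_RATE_GBPS = 3.0          # 3 Gb/s SAS-1 expander backplane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b

per_lane_MBps = LINE_RATE_GBPS * 1e9 * ENCODING_EFFICIENCY / 8 / 1e6
per_path_MBps = LANES * per_lane_MBps
print(per_lane_MBps, per_path_MBps)  # 300.0 1200.0
```

So ~1.2 GB/s is about the best one path can do before protocol overhead, which suggests the pinned configuration is already saturating both links and the round-robin numbers are pathological rather than bandwidth-limited.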

Multipath support info:
mpathadm show mpath-support libmpscsi_vhci.so
mpath-support:  libmpscsi_vhci.so
        Vendor:  Sun Microsystems
        Driver Name:  scsi_vhci
        Default Load Balance:  round-robin
        Supported Load Balance Types:
                round-robin
                logical-block
        Allows To Activate Target Port Group Access:  yes
        Allows Path Override:  no
        Supported Auto Failback Config:  1
        Auto Failback:  on
        Failback Polling Rate (current/max):  0/0
        Supported Auto Probing Config:  0
        Auto Probing:  NA
        Probing Polling Rate (current/max):  NA/NA
        Supported Devices:

Do I need to add an entry to this section of /kernel/drv/scsi_vhci.conf? If so, how do I find the values to add?

#
# For a device that has a GUID, discovered on a pHCI with mpxio enabled,
# vHCI access also depends on one of the scsi_vhci failover modules
# accepting the device. The default way this occurs is by a failover
# module's "probe" implementation (sfo_device_probe) indicating the device
# is supported under scsi_vhci. To override this default probe-oriented
# configuration in order to
#
# 1) establish support for a device not currently accepted under scsi_vhci
#
# or 2) override the module selected by "probe"
#
# or 3) disable scsi_vhci support for a device
#
# you can add a 'scsi-vhci-failover-override' tuple, as documented in
# scsi_get_device_type_string(9F). For each tuple, the first part provides
# basic device identity information (vid/pid) and the second part selects
# the failover module by "failover-module-name". If you want to disable
# scsi_vhci support for a device, use the special failover-module-name
# "NONE". Currently, for each failover-module-name in
# 'scsi-vhci-failover-override' (except "NONE") there needs to be a
# "misc/scsi_vhci/scsi_vhci_<failover-module-name>" in 'ddi-forceload'
# above.
#
#       "                  111111"
#       "012345670123456789012345",     "failover-module-name" or "NONE"
#       "|-VID--||-----PID------|",
# scsi-vhci-failover-override =
#       "STK     FLEXLINE 400",         "f_asym_lsi",
#       "SUN     T4",                   "f_tpgs",
#       "CME     XIRTEMMYS",            "NONE";
#
#END: FAILOVER_MODULE_BLOCK (DO NOT MOVE OR DELETE).
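If an override does turn out to be needed, I'd guess the tuple for these drives would look something like the following. The VID/PID strings below are illustrative placeholders only; my understanding is the real inquiry strings can be read from one of the disks with `prtconf -v` or `format -e`, and that `f_sym` is the symmetric-access failover module for dual-ported drives where both ports are equivalent:

```
# Hypothetical example only -- the VID/PID must match the drive's actual
# SCSI inquiry data (VID padded to 8 characters per the column ruler above):
scsi-vhci-failover-override =
        "SEAGATE ST2000NM0001",         "f_sym";
```

Can anyone confirm whether an explicit override is even necessary here, or whether the probe-based detection should already be handling these drives?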

How can I get this working as expected (2.2 GB/s round-robin load-balanced across both paths)?
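For what it's worth, one thing I plan to try is the logical-block policy, since it's listed under the supported load-balance types above. My understanding (please correct me if I'm wrong) is that it can be set globally in /kernel/drv/scsi_vhci.conf, and that region-size is the log2 of the block span kept on one path before switching:

```
# /kernel/drv/scsi_vhci.conf -- assumed global defaults for snv_134;
# region-size=18 would mean 2^18 blocks (128 MB at 512 B/block) per region
load-balance="logical-block";
region-size=18;
```

Is this the right knob, or is there a per-LU way to do it?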

Thanks in advance for your assistance!

Josh Simon
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
