Hi Karl,

I'd like to verify that no dead or dying disk is killing pool
performance, and your zpool status looks good. Jim has replied
with some ideas for checking your individual device performance.
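In case it helps, a quick way to look at per-device latency and
throughput while a send is running (the pool name and the 5-second
interval below are just examples) is:

# zpool iostat -v vdipool 5
# iostat -xnz 5

If one of the raidz1 disks shows a much higher asvc_t or %b than its
siblings, that disk is the likely bottleneck.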

Otherwise, you might be impacted by this CR:

7060894 zfs recv is excruciatingly slow

This CR covers both zfs send and recv operations and should be resolved
in an upcoming Solaris 10 release. It's already available in an
s11 SRU.
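If you want to isolate which side is slow in the meantime, one rough
test (the dataset and snapshot names below are only placeholders) is to
time a send into /dev/null and compare it with the full send/recv:

# zfs send vdipool/somefs@testsnap | dd bs=1024k of=/dev/null

If the send alone runs well above 25 MB/sec, the recv side covered by
the CR is the likelier culprit; if it still runs at around 25 MB/sec,
the bottleneck is local to the sending pool.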

Thanks,

Cindy

On 5/7/12 10:45 AM, Karl Rossing wrote:
Hi,

I'm seeing slow zfs send on a version-29 pool: about 25 MB/sec.
bash-3.2# zpool status vdipool
  pool: vdipool
 state: ONLINE
scan: scrub repaired 86.5K in 7h15m with 0 errors on Mon Feb 6 01:36:23 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        vdipool                    ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c0t5000C500103F2057d0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
            c0t5000C5000440AA0Bd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
            c0t5000C500103E9FFBd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
            c0t5000C500103E370Fd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
            c0t5000C500103E120Fd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
        logs
          mirror-1                 ONLINE       0     0     0
            c0t500151795955D430d0  ONLINE       0     0     0  (ATA-INTEL SSDSA2VP02-02M5-18.64GB) onboard drive on x4140
            c0t500151795955BDB6d0  ONLINE       0     0     0  (ATA-INTEL SSDSA2VP02-02M5-18.64GB) onboard drive on x4140
        cache
          c0t5001517BB271845Dd0    ONLINE       0     0     0  (ATA-INTEL SSDSA2CW16-0362-149.05GB) onboard drive on x4140
        spares
          c0t5000C500103E368Fd0    AVAIL   (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod

The drives are in an external Promise 12-drive JBOD. The JBOD is also connected to another server that uses the other six SEAGATE ST31000640SS drives.

This is on Solaris 10 8/11 (Generic_147441-01). I'm using an LSI 9200 for the external Promise JBOD and an internal 9200 for the ZIL and L2ARC drives, which also hosts rpool. Firmware versions on both cards are MPTFW-12.00.00.00-IT and MPT2BIOS-7.23.01.00.

I'm wondering why the zfs send could be so slow. Could the other server be slowing down the SAS bus?
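One quick way to check the shared-JBOD theory is to look at the
per-device error counters and to watch iostat on both hosts while a
send is running (the grep pattern below is only illustrative):

# iostat -En | grep "Errors:"
# iostat -xnz 5     (on both servers during the send)

Climbing transport errors, or a disk kept near 100 %b by the other
host's I/O, would point at the shared bus or a sick drive rather than
zfs send itself.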

Karl





_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
