Hi list!

I've been seeing less-than-stellar ZFS performance on a setup with one
head node connected to a JBOD (SAS-attached, but the drives themselves
are SATA).  There are 16 drives (8 mirror vdevs) in this pool, but I'm
only getting around 180 MB/s on sequential writes (measured with dd; I
know it's not precise, but the numbers should still be higher).

With some help on IRC, it looks like part of the slowdown is that some
drives are slower than the others.  Initially, a few drives were
negotiating SATA at 1.5 Gbps instead of 3.0 Gbps -- they are all
running at 3.0 Gbps now.  While the following dd command runs, iostat
shows a much higher %b on certain drives, which suggests those drives
are slower (but could they really be dragging everything else down
that much?  Or am I looking at the wrong spot here?) -- The pool
configuration is also included below.

dd if=/dev/zero of=4g bs=1M count=4000

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    0.0    8.0    0.0  0.0  0.0    0.0    0.2   0   0 c1
    1.0    0.0    8.0    0.0  0.0  0.0    0.0    0.2   0   0 c1t2d0
    8.0 3857.8   64.0 337868.8  0.0 64.5    0.0   16.7   0 704 c5
    0.0  259.0    0.0 26386.2  0.0  3.6    0.0   14.0   0  37 c5t50014EE0ACE4AEEFd0
    1.0  266.0    8.0 27139.2  0.0  3.6    0.0   13.5   0  37 c5t50014EE056EB0356d0
    2.0  276.0   16.0 19315.1  0.0  3.7    0.0   13.3   0  40 c5t50014EE00239C976d0
    0.0  279.0    0.0 19699.0  0.0  3.6    0.0   13.0   0  37 c5t50014EE0577C459Cd0
    1.0  232.0    8.0 23061.9  0.0  3.6    0.0   15.4   0  37 c5t50014EE0578F60F5d0
    0.0  227.0    0.0 22677.9  0.0  3.6    0.0   15.8   0  37 c5t50014EE0AC407BAEd0
    0.0  205.0    0.0 24870.2  0.0  3.4    0.0   16.6   0  35 c5t50014EE0AC408605d0
    0.0  205.0    0.0 24870.2  0.0  3.4    0.0   16.6   0  35 c5t50014EE056EB0B94d0
    1.0  210.0    8.0 15954.2  0.0  4.4    0.0   20.8   0  68 c5t5000C50010C77647d0
    0.0  212.0    0.0 16082.2  0.0  4.1    0.0   19.2   0  42 c5t5000C50010C865DEd0
    0.0  207.0    0.0 20093.9  0.0  4.2    0.0   20.3   0  45 c5t5000C50010C77679d0
    0.0  208.0    0.0 19689.5  0.0  4.1    0.0   19.8   0  44 c5t5000C50010C7672Dd0
    0.0  259.0    0.0 14013.7  0.0  5.1    0.0   19.7   0  53 c5t5000C5000A11B600d0
    2.0  320.0   16.0 19942.9  0.0  6.9    0.0   21.5   0  84 c5t5000C50008315CE5d0
    1.0  259.0    8.0 23380.2  0.0  3.6    0.0   13.9   0  37 c5t50014EE001407113d0
    0.0  234.0    0.0 20692.4  0.0  3.6    0.0   15.4   0  38 c5t50014EE00194FB1Bd0
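Doing the arithmetic on the kw/s column above (a quick Python sanity
check, with the per-drive values copied straight from the iostat
output): the disks are absorbing roughly 330 MB/s of raw writes, which
is about 165 MB/s of payload once the 2x write amplification of
mirroring is accounted for -- in the same ballpark as what dd reports.
The Seagate group also averages noticeably less per drive than the WDs:

```python
# kw/s values copied from the iostat output above, split into the
# ten WD RE3 drives and the six Seagates.
wd_kw = [26386.2, 27139.2, 19315.1, 19699.0, 23061.9,
         22677.9, 24870.2, 24870.2, 23380.2, 20692.4]
seagate_kw = [15954.2, 16082.2, 20093.9, 19689.5, 14013.7, 19942.9]

total_mb = sum(wd_kw + seagate_kw) / 1024   # raw MB/s hitting all 16 disks
payload_mb = total_mb / 2                   # mirrors write every byte twice

avg_wd = sum(wd_kw) / len(wd_kw) / 1024
avg_seagate = sum(seagate_kw) / len(seagate_kw) / 1024

print(f"raw disk writes: {total_mb:.0f} MB/s, payload: {payload_mb:.0f} MB/s")
print(f"avg WD: {avg_wd:.1f} MB/s, avg Seagate: {avg_seagate:.1f} MB/s")
```

Note this is a single iostat interval, so it's only a rough snapshot;
the more striking numbers to me are the %b values (up to 84 on the
Seagates vs. ~35-40 on the WDs) and the higher asvc_t.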

  pool: tank
 state: ONLINE
  scan: scrub canceled on Mon Jan 30 11:07:02 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c5t50014EE0ACE4AEEFd0  ONLINE       0     0     0
            c5t50014EE056EB0356d0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c5t50014EE00239C976d0  ONLINE       0     0     0
            c5t50014EE0577C459Cd0  ONLINE       0     0     0
          mirror-3                 ONLINE       0     0     0
            c5t50014EE0578F60F5d0  ONLINE       0     0     0
            c5t50014EE0AC407BAEd0  ONLINE       0     0     0
          mirror-4                 ONLINE       0     0     0
            c5t50014EE056EB0B94d0  ONLINE       0     0     0
            c5t50014EE0AC408605d0  ONLINE       0     0     0
          mirror-5                 ONLINE       0     0     0
            c5t5000C50010C77647d0  ONLINE       0     0     0
            c5t5000C50010C865DEd0  ONLINE       0     0     0
          mirror-6                 ONLINE       0     0     0
            c5t5000C50010C7672Dd0  ONLINE       0     0     0
            c5t5000C50010C77679d0  ONLINE       0     0     0
          mirror-7                 ONLINE       0     0     0
            c5t50014EE001407113d0  ONLINE       0     0     0
            c5t50014EE00194FB1Bd0  ONLINE       0     0     0
          mirror-8                 ONLINE       0     0     0
            c5t5000C50008315CE5d0  ONLINE       0     0     0
            c5t5000C5000A11B600d0  ONLINE       0     0     0
        cache
          c1t2d0                   ONLINE       0     0     0
          c1t3d0                   ONLINE       0     0     0
        spares
          c5t5000C5000D46F13Dd0    AVAIL

The drives from c5t5000C50010C77647d0 through c5t5000C50008315CE5d0
are the 6 Seagates: 2 ST31000340AS and 4 ST31000340NS.  The rest of
the drives are all WD RE3 (WD1002FBYS).

Could those Seagates really be slowing the array down that much, or is
there something else here that I should be looking at?  I ran the same
dd on the main OS pool (2 mirrors) and got 63 MB/s; times 8 mirrors,
shouldn't that give me around 504 MB/s of write throughput?
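Sketching that scaling math in Python: if the OS pool's 63 MB/s was
the total across its 2 mirror vdevs (my assumption here), then linear
scaling from the per-vdev rate would actually predict about 252 MB/s
for 8 vdevs, not 504 -- and that's before the slowest vdev throttles
write allocation:

```python
# Naive linear-scaling estimate for mirrored vdevs.  This ignores that
# ZFS spreads writes across all vdevs, so sustained throughput tends to
# be gated by the slowest vdev rather than scaling perfectly.
os_pool_mb_s = 63      # measured with dd on the OS pool
os_pool_vdevs = 2      # two mirror vdevs (assumption)
tank_vdevs = 8

per_vdev = os_pool_mb_s / os_pool_vdevs   # per-mirror-vdev rate
naive_tank = per_vdev * tank_vdevs        # linear extrapolation to 8 vdevs
print(f"per vdev: {per_vdev:.1f} MB/s, naive estimate: {naive_tank:.0f} MB/s")
```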

tl;dr: My tank pool of 8 mirrors is only giving ~180 MB/s on
sequential writes.  How do I fix that?!

--
Mohammed Naser — vexxhost
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss