I've got a pool to which I'm currently syncing a few hundred gigabytes
using rsync.  The source machine is pretty slow, so the transfer only
runs at about 20 MB/s.  Watching `zpool iostat -v local-space 10', I
see a pattern like this (trimmed to save space):
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
local-space   251G   405G      0    143     51  17.1M
  mirror     251G   405G      0    143     51  17.1M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    137      0  17.1M
local-space   252G   404G      1    163  2.55K  17.6M
  mirror     252G   404G      1    163  2.55K  17.6M
    c1d0s6      -      -      0    145  6.39K  16.7M
    c0d0s6      -      -      0    150  38.4K  17.6M
local-space   253G   403G      0    159    511  16.9M
  mirror     253G   403G      0    159    511  16.9M
    c1d0s6      -      -      0    340      0  41.0M
    c0d0s6      -      -      0    145  12.8K  16.9M
local-space   253G   403G      0    135    511  16.2M
  mirror     253G   403G      0    135    511  16.2M
    c1d0s6      -      -      0    484      0  60.4M
    c0d0s6      -      -      0    130      0  16.2M
local-space   253G   403G      0    125      0  15.4M
  mirror     253G   403G      0    125      0  15.4M
    c1d0s6      -      -      0    471      0  59.0M
    c0d0s6      -      -      0    123      0  15.4M
local-space   253G   403G      0    139      0  16.2M
  mirror     253G   403G      0    139      0  16.2M
    c1d0s6      -      -      0    474      0  59.3M
    c0d0s6      -      -      0    129      0  16.2M
local-space   253G   403G      0    139     51  17.1M
  mirror     253G   403G      0    139     51  17.1M
    c1d0s6      -      -      0      3  6.39K   476K
    c0d0s6      -      -      0    137      0  17.1M
local-space   253G   403G      0    144      0  18.1M
  mirror     253G   403G      0    144      0  18.1M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    144      0  18.1M
local-space   253G   403G      0    146      0  18.1M
  mirror     253G   403G      0    146      0  18.1M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    144      0  18.1M
local-space   253G   403G      0    156      0  19.3M
  mirror     253G   403G      0    156      0  19.3M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    154      0  19.3M
local-space   253G   403G      0    152      0  19.1M
  mirror     253G   403G      0    152      0  19.1M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    152      0  19.1M
local-space   253G   403G      0    158      0  19.1M
  mirror     253G   403G      0    158      0  19.1M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    152      0  19.1M
local-space   253G   403G      0    150      0  18.5M
  mirror     253G   403G      0    150      0  18.5M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    147      0  18.5M
local-space   253G   403G      0    155      0  19.4M
  mirror     253G   403G      0    155      0  19.4M
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    155      0  19.4M

The interesting part of this (as far as I can tell) is the rightmost
column: the write speed of the second disk stays roughly constant at
about 20 MB/s, while the first disk fluctuates between zero and 60
MB/s.  Is this normal behavior?  Could it indicate a failing disk?
There's nothing in `fmadm faulty', `dmesg', or `/var/adm/messages'
that would indicate an impending disk failure, but this behavior is
strange.  I'm running rsync 3.0.1 (yes, the security fix is on the
way) on both ends, and there's no NFS involved.  rsync is writing
256k blocks.  `iostat -xl 2' shows a similar kind of fluctuation.
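To put numbers on the fluctuation described above, here's a rough sketch that parses per-disk rows out of the pasted `zpool iostat -v' output and summarizes write bandwidth per device. The column layout and device-name prefix are assumptions based on the sample shown, and the embedded sample is just a few rows copied from it:

```python
def to_bytes(s):
    """Convert an iostat size string like '17.1M' or '476K' to bytes."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    if s and s[-1] in units:
        return float(s[:-1]) * units[s[-1]]
    return float(s)

def write_bw_by_disk(text):
    """Return {disk: [write-bandwidth samples in bytes/s]} from iostat -v output."""
    samples = {}
    for line in text.splitlines():
        fields = line.split()
        # Per-disk rows have 7 columns and a device name like c1d0s6
        # (assumed layout: name, used, avail, r-ops, w-ops, r-bw, w-bw).
        if len(fields) == 7 and fields[0][0] == "c" and fields[0][1].isdigit():
            samples.setdefault(fields[0], []).append(to_bytes(fields[6]))
    return samples

# A few per-disk rows copied from the output above.
sample = """\
    c1d0s6      -      -      0      0      0      0
    c0d0s6      -      -      0    137      0  17.1M
    c1d0s6      -      -      0    484      0  60.4M
    c0d0s6      -      -      0    130      0  16.2M
"""

for disk, bw in write_bw_by_disk(sample).items():
    mb = [b / (1 << 20) for b in bw]
    print(f"{disk}: min {min(mb):.1f} MB/s, max {max(mb):.1f} MB/s")
```

Run against the full paste, it makes the asymmetry obvious: c0d0s6 sits in a narrow band around the incoming 20 MB/s while c1d0s6 swings between idle and ~60 MB/s.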

Any suggestions as to what's going on?  Any other diagnostics you'd
like to see?  I'd be happy to provide them.

Thanks!
Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss