Measure the I/O performance with iostat.  You should see something
like the following (from iostat -zxCn 10):
                    extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 5948.9  349.3 40322.3 5238.1 0.1 16.7    0.0    2.7   0 330 c9
    3.7    0.0  230.7    0.0  0.0  0.1    0.0   13.5   0   2 c9t1d0
  845.0    0.0 5497.4    0.0  0.0  0.9    0.0    1.1   1  32 c9t2d0
    3.8    0.0  230.7    0.0  0.0  0.0    0.0   10.6   0   1 c9t3d0
  845.2    0.0 5495.4    0.0  0.0  0.9    0.0    1.1   1  32 c9t4d0
    3.8    0.0  237.1    0.0  0.0  0.0    0.0   10.4   0   1 c9t5d0
  841.4    0.0 5519.7    0.0  0.0  0.9    0.0    1.1   1  32 c9t6d0
    3.8    0.0  237.3    0.0  0.0  0.0    0.0    9.2   0   1 c9t7d0
  843.5    0.0 5485.2    0.0  0.0  0.9    0.0    1.1   1  31 c9t8d0
    3.7    0.0  230.8    0.0  0.0  0.1    0.0   15.2   0   2 c9t9d0
  850.2    0.0 5488.6    0.0  0.0  0.9    0.0    1.1   1  31 c9t10d0
    3.1    0.0  211.2    0.0  0.0  0.0    0.0   13.2   0   1 c9t11d0
  847.9    0.0 5523.4    0.0  0.0  0.9    0.0    1.1   1  31 c9t12d0
    3.1    0.0  204.9    0.0  0.0  0.0    0.0    9.6   0   1 c9t13d0
  847.2    0.0 5506.0    0.0  0.0  0.9    0.0    1.1   1  31 c9t14d0
    3.4    0.0  224.1    0.0  0.0  0.0    0.0   12.3   0   1 c9t15d0
    0.0  349.3    0.0 5238.1  0.0  9.9    0.0   28.4   1 100 c9t16d0

Here you can clearly see a raidz2 resilver in progress.  c9t16d0
is the disk being resilvered (write workload) and half of the
other disks are being read to reconstruct the data being resilvered.
Note the relative performance and the ~30% busy for the surviving
disks.  If you see iostat output that looks significantly different
from this, then you might be seeing one of two common causes:

1. Your version of ZFS has the new resilver throttle *and* the
  pool is otherwise servicing I/O.

2. Disks are throwing errors or responding very slowly.  Use
  fmdump -eV to observe error reports (a quick sketch of this
  check follows below).
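
A quick way to check for the second cause (a sketch; the exact output
format varies by build):

    # recent FMA error reports -- repeated ereports against the same
    # device are a bad sign
    fmdump -eV | more

    # per-device soft/hard/transport error counters
    iostat -En

    # per-disk service times; a disk with asvc_t far above its peers,
    # or pegged at 100 %b, is the one to suspect
    iostat -zxCn 10

As a rough sanity check: the replacement disk in the example above is
absorbing about 5 MB/s of writes, so resilvering ~500 GB of data at
that rate would take on the order of a day.  Times much longer than
that usually point to one of the two causes above.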

 -- richard

On Nov 1, 2010, at 12:33 PM, Mark Sandrock wrote:

> Hello,
> 
>       I'm working with someone who replaced a failed 1TB drive (50% utilized),
> on an X4540 running OS build 134, and I think something must be wrong.
> 
> Last Tuesday afternoon, zpool status reported:
> 
> scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
> 
> and a week being 168 hours, that put completion at sometime tomorrow night.
> 
> However, he just reported zpool status shows:
> 
> scrub: resilver in progress for 447h26m, 65.07% done, 240h10m to go
> 
> so it's looking more like 2011 now. That can't be right.
> 
> I'm hoping for a suggestion or two on this issue.
> 
> I'd search the archives, but they don't seem searchable. Or am I wrong about 
> that?
> 
> Thanks.
> Mark (subscription pending)


-- 
ZFS and performance consulting
http://www.RichardElling.com