I'm on S11E 150.0.1.9. I replaced one of the drives, and the pool seems to be 
stuck in a resilvering loop.  I ran a 'zpool clear' and a 'zpool scrub', but it 
just complains that the drives I didn't replace are degraded because of too 
many errors.  Oddly, the replaced drive is reported as being fine.  The CKSUM 
counts climb to about 108 or so by the time the resilver completes.
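For reference, this is roughly what I ran (pool name as in the status output 
below; I started the scrub after the clear):

        # clear the error counters, then kick off a scrub and watch it
        zpool clear dpool
        zpool scrub dpool
        zpool status -v dpool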

I'm now trying to evacuate the pool onto another pool, but the zfs 
send/receive dies about 380GB into sending the first dataset.
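The evacuation is roughly the following (the dataset name 'dpool/data', the 
snapshot name '@evac', and the destination pool 'backup' are placeholders 
here, not the real names):

        # snapshot the source dataset and pipe it to the other pool
        zfs snapshot dpool/data@evac
        zfs send dpool/data@evac | zfs receive -F backup/data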

Here is some output.  Any help or insights would be appreciated.  Thanks,

cfs

  pool: dpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Tue Jul 26 15:03:32 2011
    63.4G scanned out of 5.02T at 6.81M/s, 212h12m to go
    15.1G resilvered, 1.23% done
config:

        NAME        STATE     READ WRITE CKSUM
        dpool       DEGRADED     0     0     6
          raidz1-0  DEGRADED     0     0    12
            c9t0d0  DEGRADED     0     0     0  too many errors
            c9t1d0  DEGRADED     0     0     0  too many errors
            c9t3d0  DEGRADED     0     0     0  too many errors
            c9t2d0  ONLINE       0     0     0  (resilvering)

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x0>
        [redacted list of 20 files, mostly in the same directory]

