...and, apparently, I can replace two drives at the same time (in two
commands), and the resilvering proceeds in parallel:

{code}
[r...@t2k1 /]# zpool status pool
  pool: pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed after 0h0m with 0 errors on Sun Jan 18 15:11:24 2009
config:

        NAME          STATE     READ WRITE CKSUM
        pool          DEGRADED     0     0     0
          raidz2      DEGRADED     0     0     0
            c1t0d0s3  ONLINE       0     0     0
            /ff1      OFFLINE      0     0     0
            c1t2d0s3  ONLINE       0     0     0
            /ff2      UNAVAIL      0     0     0  cannot open

errors: No known data errors
[r...@t2k1 /]# zpool replace pool /ff1 c1t1d0s3; zpool replace pool /ff2 c1t3d0s3
{code}
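
(Note the action line suggests 'zpool online', but since the file-backed vdevs
are being swapped for real disks, 'zpool replace' is the right tool here. Also,
/ff1 shows OFFLINE rather than UNAVAIL, which suggests it was administratively
offlined earlier, presumably with something like the following; this is my
reconstruction, not a command from the original session:)

{code}
# Administratively offline the file-backed vdev before deleting its file.
zpool offline pool /ff1
{code}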

The replace commands themselves took a little while, about half a minute, to return. Now, how is the array rebuild going?

{code}
[r...@t2k1 /]# zpool status pool
  pool: pool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.48% done, 1h9m to go
config:

        NAME            STATE     READ WRITE CKSUM
        pool            DEGRADED     0     0     0
          raidz2        DEGRADED     0     0     0
            c1t0d0s3    ONLINE       0     0     0
            replacing   DEGRADED     0     0     0
              /ff1      OFFLINE      0     0     0
              c1t1d0s3  ONLINE       0     0     0
            c1t2d0s3    ONLINE       0     0     0
            replacing   DEGRADED     0     0     0
              /ff2      UNAVAIL      0     0     0  cannot open
              c1t3d0s3  ONLINE       0     0     0

errors: No known data errors
{code}

The progress meter tends to lie at first: resilvering actually takes roughly
30 minutes for this raidz2 of four 60 GB slices.
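
If you want to watch the estimate settle, a simple poll of the status output
works; a minimal sketch (the interval is arbitrary):

{code}
# Print the resilver progress line once a minute until interrupted.
while sleep 60; do
    zpool status pool | grep 'scrub:'
done
{code}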

BTW, an earlier poster reported very slow synchronization when mixing real
disks with sparse files kept on a single disk. I removed the sparse files as
soon as the array was initialized, and writing to the two separate real drives
went reasonably well.
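
For anyone wanting to reproduce the sparse-file trick, the setup could have
looked roughly like this; the sizes and exact commands are my reconstruction,
not from the original session:

{code}
# Back two of the four raidz2 members with sparse files (sizes assumed
# to match the ~60 GB slices), then delete the files once the pool
# exists so no real writes land on the disk holding them.
mkfile -n 60g /ff1 /ff2
zpool create pool raidz2 c1t0d0s3 /ff1 c1t2d0s3 /ff2
rm /ff1 /ff2
{code}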

I sent data from the latest snapshot of the oldpool to the newpool with 
{code}
zfs send -R oldp...@20090118-02-postupgrade | zfs recv -vF -d newpool
{code}
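
(The recursive snapshot streamed there would have been created beforehand; a
hedged sketch, assuming the elided name expands as shown:)

{code}
# Snapshot every dataset in the pool recursively, so that 'zfs send -R'
# can replicate the whole hierarchy.
zfs snapshot -r oldpool@20090118-02-postupgrade
{code}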

Larger datasets went at the normal 13-20 MB/s (of course, smaller datasets and
snapshots of just a few kilobytes took more time to open and close than to
actually copy, so the estimated speed for those was bytes or kilobytes per
second).
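
If you want a live throughput reading on such a transfer, a pipe viewer can be
spliced into the stream; a sketch assuming the third-party pv(1) tool is
installed and that the snapshot name expands as above:

{code}
# pv reports the rate of data flowing through the pipe.
zfs send -R oldpool@20090118-02-postupgrade | pv | zfs recv -vF -d newpool
{code}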

//Jim