Hi all,

I'm having some trouble adding cache devices to a zpool. Anyone got any ideas?

muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$

I have two SSDs in the system. I've created an 8GB partition on each drive
for use as a mirrored write cache (log device), and partitioned the remainder
of each drive for use as a read cache (L2ARC). However, when I attempt to add
the cache partition I get the error above.
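
For what it's worth, the commands were along these lines (reconstructed from
memory, so the exact invocation for the log mirror and the second SSD's cache
partition name are my best guess):

muslimwookie@Pyzee:~$ sudo zpool add aggr0 log mirror c25t10d1p1 c25t9d1p1
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2 c25t9d1p2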

Here's a zpool status:

  pool: aggr0
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Feb 21 21:13:45 2013
    1.13T scanned out of 20.0T at 106M/s, 51h52m to go
    74.2G resilvered, 5.65% done
config:

        NAME                         STATE     READ WRITE CKSUM
        aggr0                        DEGRADED     0     0     0
          raidz2-0                   DEGRADED     0     0     0
            c7t5000C50035CA68EDd0    ONLINE       0     0     0
            c7t5000C5003679D3E2d0    ONLINE       0     0     0
            c7t50014EE2B16BC08Bd0    ONLINE       0     0     0
            c7t50014EE2B174216Dd0    ONLINE       0     0     0
            c7t50014EE2B174366Bd0    ONLINE       0     0     0
            c7t50014EE25C1E7646d0    ONLINE       0     0     0
            c7t50014EE25C17A62Cd0    ONLINE       0     0     0
            c7t50014EE25C17720Ed0    ONLINE       0     0     0
            c7t50014EE206C2AFD1d0    ONLINE       0     0     0
            c7t50014EE206C8E09Fd0    ONLINE       0     0     0
            c7t50014EE602DFAACAd0    ONLINE       0     0     0
            c7t50014EE602DFE701d0    ONLINE       0     0     0
            c7t50014EE20677C1C1d0    ONLINE       0     0     0
            replacing-13             UNAVAIL      0     0     0
              c7t50014EE6031198C1d0  UNAVAIL      0     0     0  cannot open
              c7t50014EE0AE2AB006d0  ONLINE       0     0     0  (resilvering)
            c7t50014EE65835480Dd0    ONLINE       0     0     0
        logs
          mirror-1                   ONLINE       0     0     0
            c25t10d1p1               ONLINE       0     0     0
            c25t9d1p1                ONLINE       0     0     0

errors: No known data errors

As you can see, I've successfully added the 8GB partitions as a mirrored
write cache (log). Interestingly, when I do a zpool iostat -v it shows the
log mirror's capacity as 111GB:

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
aggr0                        20.0T  7.27T  1.33K    139  81.7M  4.19M
  raidz2                     20.0T  7.27T  1.33K    115  81.7M  2.70M
    c7t5000C50035CA68EDd0        -      -    566      9  6.91M   241K
    c7t5000C5003679D3E2d0        -      -    493      8  6.97M   242K
    c7t50014EE2B16BC08Bd0        -      -    544      9  7.02M   239K
    c7t50014EE2B174216Dd0        -      -    525      9  6.94M   241K
    c7t50014EE2B174366Bd0        -      -    540      9  6.95M   241K
    c7t50014EE25C1E7646d0        -      -    549      9  7.02M   239K
    c7t50014EE25C17A62Cd0        -      -    534      9  6.93M   241K
    c7t50014EE25C17720Ed0        -      -    542      9  6.95M   241K
    c7t50014EE206C2AFD1d0        -      -    549      9  7.02M   239K
    c7t50014EE206C8E09Fd0        -      -    526     10  6.94M   241K
    c7t50014EE602DFAACAd0        -      -    576     10  6.91M   241K
    c7t50014EE602DFE701d0        -      -    591     10  7.00M   239K
    c7t50014EE20677C1C1d0        -      -    530     10  6.95M   241K
    replacing                    -      -      0    922      0  7.11M
      c7t50014EE6031198C1d0      -      -      0      0      0      0
      c7t50014EE0AE2AB006d0      -      -      0    622      2  7.10M
    c7t50014EE65835480Dd0        -      -    595     10  6.98M   239K
logs                             -      -      -      -      -      -
  mirror                      740K   111G      0     43      0  2.75M
    c25t10d1p1                   -      -      0     43      3  2.75M
    c25t9d1p1                    -      -      0     43      3  2.75M
---------------------------  -----  -----  -----  -----  -----  -----
rpool                        7.32G  12.6G      2      4  41.9K  43.2K
  c4t0d0s0                   7.32G  12.6G      2      4  41.9K  43.2K
---------------------------  -----  -----  -----  -----  -----  -----

Something funky is going on here...
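
I haven't gone back over the partitioning yet; I'm assuming a dump of the
fdisk table on each SSD (via the whole-disk p0 device) would confirm whether
p1/p2 really are the 8GB/remainder split I set up, something like:

muslimwookie@Pyzee:~$ sudo fdisk -W - /dev/rdsk/c25t10d1p0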

Wooks