> I have a 10-disk raidz pool running Solaris 10 U2, and after a reboot
> the whole pool became unavailable after apparently losing a disk drive.
> [...]
>         NAME        STATE     READ WRITE CKSUM
>         data        UNAVAIL      0     0     0  insufficient replicas
>           c1t0d0    ONLINE       0     0     0
> [...]
>           c1t4d0    UNAVAIL      0     0     0  cannot open
> --------------
> 
> The problem as I see it is that the pool should be able to handle
> 1 disk error, no?

If it were a raidz pool, that would be correct.  But according to
zpool status, it's just a collection of disks with no replication.
Specifically, compare these two commands:

(1) zpool create data A B C

(2) zpool create data raidz A B C

Assume each disk has 500G capacity.

The first command will create an unreplicated pool with 1.5T capacity.
The second will create a single-parity RAID-Z pool with 1.0T capacity.
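The difference also shows up in 'zpool status'.  With command (2), the
disks are grouped under a raidz vdev, roughly like this (illustrative
only; the exact label and spacing vary by release):

        # zpool status data
        ...
        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            A       ONLINE       0     0     0
            B       ONLINE       0     0     0
            C       ONLINE       0     0     0

In the output you posted, the disks sit directly under the pool with no
raidz line, which is why I conclude there's no replication.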

My guess is that you intended the latter, but actually typed the former,
perhaps assuming that RAID-Z was always present.  If so, I apologize for
not making this clearer.  If you have any suggestions for how we could
improve the zpool(1M) command or documentation, please let me know.

One option -- I confess up front that I don't really like it -- would be
to make 'unreplicated' an explicit replication type (in addition to
mirror and raidz), so that you couldn't get it by accident:

        zpool create data unreplicated A B C

The extra typing would be annoying, but would make it almost impossible
to get the wrong behavior by accident.
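For comparison, the explicit types that exist today would sit alongside
it (same three 500G disks as above):

        zpool create data mirror A B C    # three-way mirror, 500G usable
        zpool create data raidz A B C     # single-parity RAID-Z, 1.0T usable

so 'unreplicated' would simply round out the set.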

Jeff
