> -----Original Message-----
> From: Mark J Musante [mailto:mark.musa...@oracle.com]
> Sent: Wednesday, August 11, 2010 5:03 AM
> To: Seth Keith
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] zfs replace problems please please help
> 
> On Tue, 10 Aug 2010, seth keith wrote:
> 
> > # zpool status
> >  pool: brick
> > state: UNAVAIL
> > status: One or more devices could not be used because the label is missing
> >        or invalid.  There are insufficient replicas for the pool to continue
> >        functioning.
> > action: Destroy and re-create the pool from a backup source.
> >   see: http://www.sun.com/msg/ZFS-8000-5E
> > scrub: none requested
> > config:
> >
> >        NAME           STATE     READ WRITE CKSUM
> >        brick          UNAVAIL      0     0     0  insufficient replicas
> >          raidz1       UNAVAIL      0     0     0  insufficient replicas
> >            c13d0      ONLINE       0     0     0
> >            c4d0       ONLINE       0     0     0
> >            c7d0       ONLINE       0     0     0
> >            c4d1       ONLINE       0     0     0
> >            replacing  UNAVAIL      0     0     0  insufficient replicas
> >              c15t0d0  UNAVAIL      0     0     0  cannot open
> >              c11t0d0  UNAVAIL      0     0     0  cannot open
> >            c12d0      FAULTED      0     0     0  corrupted data
> >            c6d0       ONLINE       0     0     0
> >
> > What I want is to remove c15t0d0 and c11t0d0 and replace them with the
> > original c6d1.
> > Suggestions?
> 
> Do the labels still exist on c6d1?  e.g. what do you get from "zdb -l
> /dev/rdsk/c6d1s0"?
> 
> If the label still exists, and the pool guid is the same as the labels on
> the other disks, you could try doing a "zpool detach brick c15t0d0" (or
> c11t0d0), then export & try re-importing.  ZFS may find c6d1 at that
> point.  There's no way to guarantee that'll work.
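
Just so I'm following, the sequence you're suggesting would be roughly this
(only a sketch, using the pool and device names from the status output above)?

    # zdb -l /dev/rdsk/c6d1s0      (check that c6d1 still carries labels with
                                    the same pool guid as the other disks)
    # zpool detach brick c15t0d0   (drop one half of the stuck replace)
    # zpool export brick
    # zpool import brick           (let ZFS rescan the devices and, with luck,
                                    pick up c6d1 on its own)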

When I do a zdb -l /dev/rdsk/<any device> I get the same output for all my 
drives in the pool, but I don't think it looks right:

# zdb -l /dev/rdsk/c4d0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

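I notice your example used the s0 slice (c6d1s0).  Should I be running zdb
against the slice devices instead, e.g.:

    # zdb -l /dev/rdsk/c4d0s0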

If I try this zpool detach action, can it be reversed if there is a problem?