I just put in a (low priority) bug report on this.

Ben

> This post from close to a year ago never received a
> response.  We just had this same thing happen to
> another server that is running Solaris 10 U6.  One of
> the disks was marked as removed and the pool
> degraded, but 'zpool status -x' says all pools are
> healthy.  After doing a 'zpool online' on the disk,
> it resilvered fine.  Any ideas why 'zpool status
> -x' reports all healthy while 'zpool status' shows a
> pool in degraded mode?
> 
> thanks,
> Ben
> 
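For the archives, the recovery was a one-liner (the pool and device
names here match the status output quoted further down; substitute
your own):

# zpool online pool1 c1t8d0

After that the disk resilvered and the pool came back to ONLINE on
its own, even though 'zpool status -x' had never flagged it.
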
> > We run a cron job that does a 'zpool status -x' to
> > check for any degraded pools.  We just happened to
> > find a pool degraded this morning by running 'zpool
> > status' by hand and were surprised that it was
> > degraded, as we didn't get a notice from the cron
> > job.
> > 
> > # uname -srvp
> > SunOS 5.11 snv_78 i386
> > 
> > # zpool status -x
> > all pools are healthy
> > 
> > # zpool status pool1
> >   pool: pool1
> >  state: DEGRADED
> >  scrub: none requested
> > config:
> > 
> >         NAME         STATE     READ WRITE CKSUM
> >         pool1        DEGRADED     0     0     0
> >           raidz1     DEGRADED     0     0     0
> >             c1t8d0   REMOVED      0     0     0
> >             c1t9d0   ONLINE       0     0     0
> >             c1t10d0  ONLINE       0     0     0
> >             c1t11d0  ONLINE       0     0     0
> > 
> > errors: No known data errors
> > 
> > I'm now going to look into why the disk is
> > listed as removed.
> > 
> > Does this look like a bug with 'zpool status -x'?
> > 
> > Ben
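
Since the '-x' check is exactly what failed here, a workaround in the
meantime would be to have the cron job grep the full 'zpool status'
output instead of trusting '-x'.  A minimal sketch (the mail address
is a placeholder; I use egrep because Solaris /usr/bin/grep has no
-E):

#!/bin/sh
# Workaround sketch: instead of trusting 'zpool status -x',
# scan the full 'zpool status' output for any bad vdev state.
if zpool status | egrep 'DEGRADED|FAULTED|REMOVED|UNAVAIL|OFFLINE' >/dev/null
then
        # Mail the full status output to the admin (placeholder address).
        zpool status | mailx -s "zpool problem on `hostname`" admin@example.com
fi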