Possibly related is the fact that fmd is now in a CPU spin loop, constantly
checking the time, even though there are no reported faults, i.e.,

# fmdump -v
TIME                 UUID                                 SUNW-MSG-ID
fmdump: /var/fm/fmd/fltlog is empty

# svcs fmd
STATE          STIME    FMRI
online         13:11:43 svc:/system/fmd:default

# prstat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
   422 root       17M   13M run     11    0  20:42:51  19% fmd/22
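
(Something like the following should confirm which of the 22 LWPs is burning
the CPU; -m/-L are the per-thread microstate accounting flags to prstat:

# prstat -mL -p 422
)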


# truss -p 422 |& head -20
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    lwp_park(0xFDB7DF40, 0)                         Err#62 ETIME
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
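
If it helps anyone narrow this down, a user stack of the spinning LWP, or a
quick DTrace aggregation on the time() calls, should show which fmd module is
responsible. Roughly like this (the pid-provider one-liner is from memory, so
treat it as a sketch rather than a verified command):

# pstack 422/13
# dtrace -p 422 -n 'pid$target::time:entry { @[ustack()] = count(); }'

The second command counts user stacks on every time() entry in the process
and prints the aggregation when interrupted with Ctrl-C.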

Is this a known bug with fmd and ZFS?

Thanks.


On Fri, Sep 07, 2007 at 08:55:52PM -0700, Stuart Anderson wrote:
> I am curious why zpool status reports a pool to be in the DEGRADED state
> after a drive in a raidz2 vdev has been successfully replaced. In this
> particular case drive c0t6d0 was failing so I ran,
> 
> zpool offline home c0t6d0
> zpool replace home c0t6d0 c8t1d0
> 
> and after the resilvering finished the pool reports a degraded state.
> Hopefully this is incorrect. At this point, does the vdev in question
> now have full raidz2 protection even though it is listed as "DEGRADED"?
> 
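
For reference on the DEGRADED question quoted above: if the old disk still
shows up under the raidz2 vdev, something like the following might clarify or
clear its state (the online/detach steps are guesses on my part, not a
documented procedure):

# zpool status -v home
# zpool online home c0t6d0     (if the pool is DEGRADED only because the old disk is still marked OFFLINE)
# zpool detach home c0t6d0     (if the old disk is still attached under a replacing vdev)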

-- 
Stuart Anderson  [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
