Try fmdump -e and then fmdump -eV. It could be a pathological disk just this
side of failure whose heavy retries are dragging the pool down.
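Roughly what I'd run here (the -eV output is verbose, so pipe it through a
pager), plus iostat -En since you already have iostat handy:

  # summary of the FMA error telemetry (ereports) logged so far
  fmdump -e

  # full detail per ereport; this should name the offending device path
  fmdump -eV | less

  # per-device soft/hard/transport error counters and identity;
  # a disk doing heavy retries usually shows these climbing
  iostat -En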

Craig

--
Craig Morgan


On 18 Dec 2011, at 16:23, Jan-Aage Frydenbø-Bruvoll <j...@architechs.eu> wrote:

> Hi,
> 
> On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert <nat...@tuneunix.com> wrote:
>>  I know some others may already have pointed this out - but I can't see it
>> and not say something...
>> 
>> Do you realise that losing a single disk in that pool could pretty much
>> render the whole thing busted?
>> 
>> At least for me - the rate at which _I_ seem to lose disks, it would be
>> worth considering something different ;)
> 
> Yeah, I have thought that thought myself. I am pretty sure I have a
> broken disk, however I cannot for the life of me find out which one.
> zpool status gives me nothing to work on, MegaCli reports that all
> virtual and physical drives are fine, and iostat gives me nothing
> either.
> 
> What other tools are there out there that could help me pinpoint
> what's going on?
> 
> Best regards
> Jan
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss