PS.  I think this also gives you a chance to make the whole problem
much simpler.  Instead of answering the hard question "is this device
faulty?", you're just asking "is it working right now?".

In fact, I'm now wondering if the "waiting for a response" flag
wouldn't be better as "possibly faulty".  That way you could use it
with checksum errors too, possibly with settings as simple as "errors
per minute" or "error percentage".  As with the timeouts, you could
have it off by default (or provide sensible defaults), and let
administrators tweak it for their particular needs.
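
To make the idea concrete, here's a rough Python sketch of the kind of
per-device bookkeeping I have in mind for a "possibly faulty" flag
driven by checksum errors.  All of the names and thresholds are made up
for illustration; none of them are existing ZFS tunables.

    import time

    class DeviceHealth:
        """Hypothetical per-vdev error tracking -- not real ZFS code."""

        def __init__(self, fail_limit_epm=20, fail_limit_percent=10):
            self.fail_limit_epm = fail_limit_epm          # checksum errors per minute
            self.fail_limit_percent = fail_limit_percent  # % of recent I/Os failing
            self.error_times = []                         # timestamps of recent errors
            self.total_ios = 0
            self.failed_ios = 0

        def record_io(self, ok):
            now = time.time()
            self.total_ios += 1
            if not ok:
                self.failed_ios += 1
                self.error_times.append(now)
            # Keep only the last minute of error timestamps.
            self.error_times = [t for t in self.error_times if now - t <= 60]

        @property
        def possibly_faulty(self):
            epm = len(self.error_times)
            pct = 100.0 * self.failed_ios / self.total_ios if self.total_ios else 0.0
            return epm >= self.fail_limit_epm or pct >= self.fail_limit_percent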

Imagine a pool with the following settings:
- zfs-auto-device-timeout = 5s
- zfs-auto-device-checksum-fail-limit-epm = 20
- zfs-auto-device-checksum-fail-limit-percent = 10
- zfs-auto-device-fail-delay = 120s

That would allow the pool to flag a device as possibly faulty
regardless of the type of fault, and take immediate proactive action
to safeguard data (generally long before the device is actually
faulted).
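
As a sketch of how those four settings might combine (again, the
property names are just my suggestion, not real pool properties, and
I'm assuming the fail-delay means "how long a device must stay flagged
before it's actually faulted"):

    # Hypothetical policy combining the four settings above.
    SETTINGS = {
        "zfs-auto-device-timeout": 5.0,                     # seconds
        "zfs-auto-device-checksum-fail-limit-epm": 20,      # errors per minute
        "zfs-auto-device-checksum-fail-limit-percent": 10,  # percent of I/Os
        "zfs-auto-device-fail-delay": 120.0,                # seconds flagged before faulting
    }

    def evaluate_device(oldest_outstanding_io, errors_last_minute,
                        error_percent, seconds_flagged):
        """Return (possibly_faulty, fault) for one device."""
        possibly_faulty = (
            oldest_outstanding_io > SETTINGS["zfs-auto-device-timeout"]
            or errors_last_minute >= SETTINGS["zfs-auto-device-checksum-fail-limit-epm"]
            or error_percent >= SETTINGS["zfs-auto-device-checksum-fail-limit-percent"]
        )
        # Only escalate to a real fault if the condition persists.
        fault = possibly_faulty and seconds_flagged >= SETTINGS["zfs-auto-device-fail-delay"]
        return possibly_faulty, fault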

A device triggering any of these flags would be enough for ZFS to
start reading from (or writing to) the other devices first; should you
get multiple failures, or problems on a non-redundant pool, ZFS would
simply revert to its current behaviour.
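
The read side of that could be as simple as sorting flagged devices to
the back of the queue, something like this (purely illustrative, not
how the ZFS mirror code is actually structured):

    def choose_read_order(children):
        """children: list of (name, possibly_faulty) pairs for one mirror vdev."""
        healthy = [c for c in children if not c[1]]
        flagged = [c for c in children if c[1]]
        # Flagged devices are still used, just tried last, so a slow or
        # erroring disk stops sitting in the hot path of every read.
        return healthy + flagged

    choose_read_order([("disk0", False), ("disk1", True), ("disk2", False)])
    # -> [("disk0", False), ("disk2", False), ("disk1", True)]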

Ross

On Tue, Nov 25, 2008 at 8:37 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> I think we (the ZFS team) all generally agree with you.  The current
> Nevada code is much better at handling device failures than it was
> just a few months ago.  And there are additional changes that were
> made for the FishWorks (a.k.a. Amber Road, a.k.a. Sun Storage 7000)
> product line that will make things even better once the FishWorks team
> has a chance to catch its breath and integrate those changes into Nevada.
> And then we've got further improvements in the pipeline.
>
> The reason this is all so much harder than it sounds is that we're
> trying to provide increasingly optimal behavior given a collection of
> devices whose failure modes are largely ill-defined.  (Is the disk
> dead or just slow?  Gone or just temporarily disconnected?  Does this
> burst of bad sectors indicate catastrophic failure, or just localized
> media errors?)  The disks' SMART data is notoriously unreliable, BTW.
> So there's a lot of work underway to model the physical topology of
> the hardware, gather telemetry from the devices, the enclosures,
> the environmental sensors, etc., so that we can generate an accurate
> FMA fault diagnosis and then tell ZFS to take appropriate action.
>
> We have some of this today; it's just a lot of work to complete it.
>
> Oh, and regarding the original post -- as several readers correctly
> surmised, we weren't faking anything, we just didn't want to wait
> for all the device timeouts.  Because the disks were on USB, which
> is a hotplug-capable bus, unplugging the dead disk generated an
> interrupt that bypassed the timeout.  We could have waited it out,
> but 60 seconds is an eternity on stage.
>
> Jeff
>
> On Mon, Nov 24, 2008 at 10:45:18PM -0800, Ross wrote:
>> But that's exactly the problem, Richard: AFAIK.
>>
>> Can you state, absolutely and categorically, that there is no failure mode out
>> there (caused by hardware faults or bad drivers) that will lock a drive up
>> for hours?  You can't, obviously, which is why we keep saying that ZFS
>> should have this kind of timeout feature.
>>
>> For once I agree with Miles; I think he's written a really good writeup of
>> the problem here.  My simple view on it would be this:
>>
>> Drives are only aware of themselves as individual entities.  Their job is
>> to save & restore data to themselves, and drivers are written to minimise 
>> any chance of data loss.  So when a drive starts to fail, it makes complete 
>> sense for the driver and hardware to be very, very thorough about trying to 
>> read or write that data, and to only fail as a last resort.
>>
>> I'm not at all surprised that drives take 30 seconds to time out, nor that
>> they could slow a pool for hours.  That's their job.  They know nothing else
>> about the storage; they just have to do their level best to do as they're
>> told, and will only fail if they absolutely can't store the data.
>>
>> The RAID controller, on the other hand (NetApp, ZFS, etc.), knows all about
>> the pool.  It knows if you have half a dozen good drives online, it knows if 
>> there are hot spares available, and it *should* also know how quickly the 
>> drives under its care usually respond to requests.
>>
>> ZFS is perfectly placed to spot when a drive is starting to fail, and to 
>> take the appropriate action to safeguard your data.  It has far more 
>> information available than a single drive ever will, and should be designed 
>> accordingly.
>>
>> Expecting the firmware and drivers of individual drives to control the
>> failure modes of your redundant pool is just crazy, IMO.  You're throwing
>> away some of the biggest benefits of using multiple drives in the first 
>> place.
>