Re: How to detect / notify when a raid drive fails?

2015-11-30 Thread Anand Jain




On 11/28/2015 01:19 AM, Christoph Anton Mitterer wrote:

> On Fri, 2015-11-27 at 17:16 +0800, Anand Jain wrote:
> >   I understand that as a user, the full md/lvm feature set is important
> >   to begin operations using btrfs, and we don't have it yet. I have to
> >   blame it on the priority list.
>
> What would be especially nice from the admin side would be something
> like /proc/mdstat, which centrally gives information about the health
> of your RAID.


 Yep. It's planned. A design doc had been sitting in my drafts for some
 time now; I just sent it to the mailing list for review comments.



> It can/should of course be more than just "OK" / "not OK":
> information about which devices are in which state, whether a
> rebuild/reconstruction/scrub is going on, and so on.


 right.


> Maybe even details of properties like chunk sizes (as far as these
> apply to btrfs).
>
> Having a dedicated monitoring process... well, nice to have, but
> something like mdstat is always there, doesn't need special userland
> tools, and can easily be used by 3rd-party stuff like Icinga/Nagios
> check_raid.


 yep. will consider.


> I think the keywords here are human readable + parseable... so maybe
> even two files.


 Yeah. For parseability I liked procfs; there is an experimental
 /proc/fs/btrfs/devlist. But procfs is kind of not recommended, so we
 would probably need a wrapper tool on top of sysfs to provide the
 same effect.
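
 For illustration, a rough sketch of what such a wrapper could produce
 today by shelling out to "btrfs device stats" instead of sysfs. The
 line format it parses (e.g. "[/dev/sdb].write_io_errs   0") is an
 assumption about current btrfs-progs output and may differ:

#!/usr/bin/env python
# Rough sketch only: print one parseable health line per given btrfs
# mount point, based on the error counters from "btrfs device stats".
import re
import subprocess
import sys

def device_errors(mountpoint):
    out = subprocess.check_output(["btrfs", "device", "stats", mountpoint])
    errors = {}
    for line in out.decode().splitlines():
        # Expected shape: "[/dev/sdb].write_io_errs   0"
        m = re.match(r"\[([^\]]+)\]\.\S+\s+(\d+)", line)
        if m:
            dev, count = m.group(1), int(m.group(2))
            errors[dev] = errors.get(dev, 0) + count
    return errors

if __name__ == "__main__":
    exit_code = 0
    for mnt in sys.argv[1:]:
        errs = device_errors(mnt)
        bad = dict((d, n) for d, n in errs.items() if n)
        state = "OK" if not bad else "ERRORS %r" % bad
        print("%s: %d devices, %s" % (mnt, len(errs), state))
        if bad:
            exit_code = 1
    sys.exit(exit_code)

 Run from cron against each mount point, the one-line-per-filesystem
 output stays readable for humans and is trivial for a Nagios-style
 check to parse.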

Thanks, Anand



Cheers,
Chris.




Re: How to detect / notify when a raid drive fails?

2015-11-27 Thread Christoph Anton Mitterer
On Fri, 2015-11-27 at 17:16 +0800, Anand Jain wrote:
>   I understand that as a user, the full md/lvm feature set is important
>   to begin operations using btrfs, and we don't have it yet. I have to
>   blame it on the priority list.
What would be especially nice from the admin side would be something
like /proc/mdstat, which centrally gives information about the health
of your RAID.

It can/should of course be more than just "OK" / "not OK":
information about which devices are in which state, whether a
rebuild/reconstruction/scrub is going on, and so on.
Maybe even details of properties like chunk sizes (as far as these
apply to btrfs).

Having a dedicated monitoring process... well, nice to have, but
something like mdstat is always there, doesn't need special userland
tools, and can easily be used by 3rd-party stuff like Icinga/Nagios
check_raid.
I think the keywords here are human readable + parseable... so maybe
even two files.
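
For comparison, the reason mdstat is so easy for 3rd-party checks is
that the whole health story fits in one small, stable file. Roughly all
a check_raid-style plugin has to do is the minimal sketch below; it
keys on the member status string ("[UU]" healthy, "[U_]" degraded):

#!/usr/bin/env python
# Minimal sketch of an md health check in the check_raid style: flag
# any md array whose status string (e.g. "[UU]" vs "[U_]") shows a
# missing or failed member.
import re
import sys

def degraded_arrays(path="/proc/mdstat"):
    degraded = []
    current = None
    with open(path) as f:
        for line in f:
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    if bad:
        print("CRITICAL: degraded md arrays: %s" % ", ".join(bad))
        sys.exit(2)   # Nagios convention for CRITICAL
    print("OK: all md arrays healthy")

Something equally flat and stable for btrfs would make the same
one-screen check possible.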


Cheers,
Chris.



Re: How to detect / notify when a raid drive fails?

2015-11-27 Thread Anand Jain



On 11/27/2015 01:30 PM, Duncan wrote:

> Ian Kelling posted on Thu, 26 Nov 2015 21:14:57 -0800 as excerpted:
>
> > I'd like to run "mail" when a btrfs raid drive fails, but I don't know
> > how to detect that a drive has failed. I don't see it in any docs.
> > Otherwise I assume I would never know until enough drives fail that the
> > filesystem stops working, and I'd like to know before that.
>
> Btrfs isn't yet mature enough to have a device failure notifier daemon,
> like for instance mdadm does.  There's a patch set going around that adds
> global spares, so btrfs can detect the problem and grab a spare, but it's
> only a rather simplistic initial implementation designed to provide the
> framework for more fancy stuff later, and that's about it in terms of
> anything close, so far.


Thanks Duncan.

 Adding more: the above hot spare patch set also brings the device
 to a "failed state" when there is a confirmed flush/write failure,
 and prevents any further IO to it in the RAID context. If there
 is no RAID, it will kick in the FS error mode, which generally goes
 to read-only mode or panics, as configured at mount. It will do this
 even if no hot spare is configured.
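
 From userspace, one crude way to notice that error mode after the
 fact is to look for btrfs mounts that have gone read-only. A small
 sketch; note it cannot tell a deliberate ro mount from one forced by
 an error:

#!/usr/bin/env python
# Sketch: list btrfs mounts that are currently read-only, as a crude
# way to notice the "filesystem hit an error and went ro" case.  It
# only reads /proc/mounts, so an intentionally read-only mount will
# show up here too.
def readonly_btrfs_mounts(path="/proc/mounts"):
    hits = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue
            dev, mountpoint, fstype, options = fields[:4]
            if fstype == "btrfs" and "ro" in options.split(","):
                hits.append((dev, mountpoint))
    return hits

if __name__ == "__main__":
    for dev, mnt in readonly_btrfs_mounts():
        print("btrfs %s mounted read-only at %s" % (dev, mnt))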

 The btrfs-progs part is not there yet, because it is waiting for the
 sysfs patch set to be integrated, so that progs can use it instead of
 writing new ioctls or updating existing ones.

 This patch set also introduces another state a device can go into,
 the "offline state", but it can work only when the sysfs interface
 is provided. Offline will be used mainly when we don't have
 confirmation that a device has failed, but it has just disappeared,
 as when a drive is pulled out. While a device is in the offline
 state, the resilver/replace will never begin.

 Since we want to avoid an unnecessary hot replace/resilver, the
 offline state is important.
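
 Until the offline state and the sysfs interface land, a device that
 has simply disappeared can at least be spotted from userspace by
 scanning "btrfs filesystem show". A rough sketch; the "missing"
 wording it greps for is an assumption about btrfs-progs output and
 may differ between versions:

#!/usr/bin/env python
# Rough sketch: report filesystems that "btrfs filesystem show" lists
# with missing devices.
import subprocess

def filesystems_with_missing_devices():
    out = subprocess.check_output(["btrfs", "filesystem", "show"]).decode()
    missing, current = [], None
    for line in out.splitlines():
        if line.strip().startswith("Label:"):
            current = line.strip()
        elif "missing" in line.lower() and current is not None:
            missing.append(current)
    return missing

if __name__ == "__main__":
    for fs in filesystems_with_missing_devices():
        print("MISSING DEVICE in %s" % fs)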

 What is not in this patch yet (on the kernel side, apart from the
 btrfs-progs side) is bringing the disk back online in the RAID
 context. As of now it will do nothing, though progs tells the user
 that the kernel knows about the reappeared device.

 I understand that as a user, the full md/lvm feature set is important
 to begin operations using btrfs, and we don't have it yet. I have to
 blame it on the priority list.

Thanks, Anand



> What generally happens now, however, is that btrfs will note failures
> attempting to write to the device and start queuing up writes.  If the
> device reappears fast enough, btrfs will flush the queue and be back to
> normal.  Otherwise, you pretty much need to reboot and mount degraded,
> then add a device and rebalance. (btrfs device delete missing broke some
> versions ago and just got fixed by the latest btrfs-progs-4.3.1, IIRC.)
>
> As for alerts, you'd see the pile of accumulating write errors in the
> kernel log.  Presumably you can write up a script that can alert on that
> and mail you the log or whatever, but I don't believe there's anything
> official or close to it, yet.




Re: How to detect / notify when a raid drive fails?

2015-11-26 Thread Duncan
Ian Kelling posted on Thu, 26 Nov 2015 21:14:57 -0800 as excerpted:

> I'd like to run "mail" when a btrfs raid drive fails, but I don't know
> how to detect that a drive has failed. I don't see it in any docs.
> Otherwise I assume I would never know until enough drives fail that the
> filesystem stops working, and I'd like to know before that.

Btrfs isn't yet mature enough to have a device failure notifier daemon, 
like for instance mdadm does.  There's a patch set going around that adds 
global spares, so btrfs can detect the problem and grab a spare, but it's 
only a rather simplistic initial implementation designed to provide the 
framework for more fancy stuff later, and that's about it in terms of 
anything close, so far.

What generally happens now, however, is that btrfs will note failures 
attempting to write to the device and start queuing up writes.  If the 
device reappears fast enough, btrfs will flush the queue and be back to 
normal.  Otherwise, you pretty much need to reboot and mount degraded, 
then add a device and rebalance. (btrfs device delete missing broke some 
versions ago and just got fixed by the latest btrfs-progs-4.3.1, IIRC.)

As for alerts, you'd see the pile of accumulating write errors in the 
kernel log.  Presumably you can write up a script that can alert on that 
and mail you the log or whatever, but I don't believe there's anything 
official or close to it, yet.
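
Something along these lines, run from cron, would be a starting point.
The patterns below are guesses at what the errors look like, so check
them against what your kernel actually logs for a failing device, and
a real version would need to remember what it has already mailed:

#!/usr/bin/env python
# Sketch of the "watch the kernel log and mail me" idea.  The matched
# patterns are guesses; tune them against a real failure.  Stateless,
# so it will re-mail old errors on every run unless you add bookkeeping.
import re
import subprocess

PATTERNS = [re.compile(p, re.IGNORECASE) for p in
            (r"btrfs.*(error|warn)", r"lost page write", r"i/o error")]

def suspicious_lines():
    log = subprocess.check_output(["dmesg"]).decode("utf-8", "replace")
    return [l for l in log.splitlines()
            if any(p.search(l) for p in PATTERNS)]

def send_mail(recipient, subject, body):
    mail = subprocess.Popen(["mail", "-s", subject, recipient],
                            stdin=subprocess.PIPE)
    mail.communicate(body.encode())

if __name__ == "__main__":
    lines = suspicious_lines()
    if lines:
        send_mail("root", "possible btrfs device errors", "\n".join(lines))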

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: How to detect / notify when a raid drive fails?

2015-11-26 Thread Ian Kelling
On Thu, Nov 26, 2015, at 09:30 PM, Duncan wrote:
> What generally happens now, however, is that btrfs will note failures 
> attempting to write to the device and start queuing up writes.  If the 
> device reappears fast enough, btrfs will flush the queue and be back to 
> normal.  Otherwise, you pretty much need to reboot and mount degraded, 
> then add a device and rebalance. (btrfs device delete missing broke some 
> versions ago and just got fixed by the latest btrfs-progs-4.3.1, IIRC.)
> 
> As for alerts, you'd see the pile of accumulating write errors in the 
> kernel log.  Presumably you can write up a script that can alert on that 
> and mail you the log or whatever, but I don't believe there's anything 
> official or close to it, yet.

Great info, thanks. Just trying to write a file, sync, and read it back
sounds like the easiest test for now, especially since I don't know
what the write-failure log entries will look like. And setting up SMART
notifications.
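
In case it's useful to anyone else, a minimal sketch of that probe to
run from cron; the mount points and recipient are just placeholders,
and the fsync is what actually pushes the write down to the devices:

#!/usr/bin/env python
# Sketch of the write/sync/read-back probe described above, meant to be
# run from cron.  MOUNTS and RECIPIENT are placeholders; a failed or
# read-only filesystem makes the probe raise, which counts as a failure.
import os
import subprocess
import sys
import time

MOUNTS = ["/mnt/data"]
RECIPIENT = "root"

def probe(mountpoint):
    path = os.path.join(mountpoint, ".btrfs-probe")
    payload = ("alive %d\n" % time.time()).encode()
    try:
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # push the write down to the devices
        with open(path, "rb") as f:
            return f.read() == payload
    except (IOError, OSError):
        return False

if __name__ == "__main__":
    failed = [m for m in MOUNTS if not probe(m)]
    if failed:
        mail = subprocess.Popen(["mail", "-s", "btrfs probe failed", RECIPIENT],
                                stdin=subprocess.PIPE)
        mail.communicate(("probe failed on: %s\n" % ", ".join(failed)).encode())
        sys.exit(1)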

- Ian