On 10/27/2015 11:12 PM, Sankarshan Mukhopadhyay wrote:
On Mon, Oct 26, 2015 at 7:04 PM, Shyam <srang...@redhat.com> wrote:
The older idea here was to consume the logs and filter based on the message
IDs for those situations that can be remedied. The logs are hence the point
where the event for consumption is generated.

Also, when the higher-level abstraction uses these logs, it can *watch* based
on the message ID filters that are of interest to it, rather than parse the
log message entirely to gain insight into the issue.
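
To make that concrete, a minimal watcher sketch in Python. It assumes only
the bracketed "[MSGID: NNNNN]" token that gluster's message-ID framework puts
in log lines; the specific ID, the log path, and the remediate() hook are
illustrative placeholders, not a claim about which messages matter.

import re

# Minimal sketch: surface only the message IDs a consumer registered
# interest in, without parsing the rest of the message.
MSGID_RE = re.compile(r"\[MSGID:\s*(\d+)\]")

def watch(logfile, interesting):
    """Yield (msgid, raw_line) for log lines whose MSGID is of interest."""
    with open(logfile) as f:
        for line in f:
            m = MSGID_RE.search(line)
            if m and int(m.group(1)) in interesting:
                yield int(m.group(1)), line.rstrip()

# Illustrative usage; the ID and path are placeholders:
# for msgid, line in watch("/var/log/glusterfs/glustershd.log", {108006}):
#     remediate(msgid, line)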

Are all situations usually atomic? Is it expected that there is a
specific mapping between an event recorded in a log from one part of an
installed system and a possible symptom? Or does a collection of events
lead up to an observed failure (which, in turn, is recorded as a
series of events in the logs)?



Logs come from a point in the code, at a point in time (just stating the obvious). So detecting a genuine failure, or that an event has occurred, may require multiple messages. The pattern for such events needs to be called out for it to make sense.
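
A small sketch of that multi-message case, to make the idea concrete. Again,
everything here is an assumption for illustration, not gluster behavior: the
example IDs, the ordering, the 60-second window, and the reset-on-stale
policy. An event is declared only when a called-out sequence of message IDs
completes within the window.

from collections import deque
from time import time

class PatternDetector:
    """Declare an event only when a called-out sequence of message IDs
    arrives, in order, within a time window. Non-matching IDs in between
    are simply ignored; that policy is a simplification for the sketch."""

    def __init__(self, pattern, window_secs=60):
        self.pattern = pattern     # e.g. (108006, 108002) -- illustrative IDs
        self.window = window_secs
        self.seen = deque()        # (msgid, timestamp) of the matched prefix

    def feed(self, msgid, now=None):
        """Feed one msgid; return True when the full pattern completed."""
        now = time() if now is None else now
        # If the partial match is too old to complete in the window,
        # start over.
        if self.seen and now - self.seen[0][1] > self.window:
            self.seen.clear()
        if msgid == self.pattern[len(self.seen)]:
            self.seen.append((msgid, now))
            if len(self.seen) == len(self.pattern):
                self.seen.clear()
                return True
        return False

Wired to the watcher sketch above, the consumer would raise its event
whenever detector.feed(msgid) returns True. The higher-layer case below then
falls out as a pattern of length one.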

But *some* log messages from higher layers do denote a failure on their own, the higher layer itself acting as the collection point.

Overall, to answer your question, it is a combination of all of the above, depending on the event/situation.

But I did not understand the _atomic_ part of the question, and I am also not sure I have answered what you were thinking about.