[Simple-evcorr-users] SEC historical home page will close on June 30

2024-06-22 Thread Risto Vaarandi
hi all,

during the early days of SEC, its home page was located at
http://kodu.neti.ee/~risto/sec. That site is a part of a web portal owned
by Telia Estonia (www.telia.ee), and Telia has decided to close that portal
on June 30.

Note that http://kodu.neti.ee/~risto/sec has not been actively used since
2009, and when accessing that site, you are redirected to the current SEC
home page at GitHub: https://simple-evcorr.github.io. Although the old
site is unlikely to be used by end users, there is a slight chance that the
old link is still present in some web pages and documentation which were
originally created before 2009. This email serves as a notification to
update the SEC home page address if you have been using the old link.

Finally, we thank Telia Estonia for keeping the kodu.neti.ee portal up and
running as a free service for 25+ years :)

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Problem with action2

2024-04-11 Thread Risto Vaarandi
hi Tom,

the PairWithWindow rule works as follows (see also the PairWithWindow
documentation at https://simple-evcorr.github.io/man.html#lbAP):

step 1) if the incoming event matches the pattern defined with the
'pattern' field, the rule either (a) starts a new event correlation
operation if one does not exist yet, or (b) if the operation already
exists, sends the event to that operation, which consumes it silently.
step 2) if the incoming event does not match the pattern defined with the
'pattern' field, the event is processed by all event correlation operations
started by the rule, and the operations try to match this event against
their 'pattern2' patterns. If a pattern matches, the corresponding
'action2' of that operation is executed.

Given the scheme described above, if the 'pattern' field matches all events
that 'pattern2' matches, all events are handled during step 1 and no event
ever reaches step 2. You are seeing this behavior because both patterns are
identical in your rule definition. To fix the issue, make the 'pattern' and
'pattern2' fields sufficiently different, so that the first pattern matches
only the specific event which should start the event correlation operation,
whereas the second pattern matches only the event which should end the
operation.
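For instance, here is a sketch of a rule along those lines (the
"disconnected"/"connected" message texts, the regular expressions, and the
script path are hypothetical placeholders, not taken from the original
rule):

```
type=PairWithWindow
ptype=regexp
pattern=host\.(\S+) .* agent (\S+) disconnected
desc=(WARNING) agent $2 on $1 has disconnected
action=pipe 'sending' /path/to/notify.sh '%s'
ptype2=regexp
pattern2=host\.(\S+) .* agent (\S+) connected
desc2=(NOTICE) agent $2 on $1 has recovered
action2=pipe 'sending' /path/to/notify.sh 'recovered'
window=5
```

Here 'pattern' matches only the "disconnected" event which starts the
operation, and 'pattern2' matches only the "connected" event which ends it,
so the recovery event is no longer consumed during step 1.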

kind regards,
risto

Tom Damon via Simple-evcorr-users (<
simple-evcorr-users@lists.sourceforge.net>) wrote on Thursday, 11 April
2024 at 23:34:

> Hello list,
>
>   I’m trying to get this rule working.  The action works, but action2 does
> not. What am I missing?
>
>
>
> type=PairWithWindow
>
> ptype=regexp
>
> pattern=host.(\S+)\s+subtype=\S+\smessage=.*User-ID-Agent\s+(\S+)\s(\S+):
>
> desc=(WARNING) $1 is $3 from $2
>
> action=pipe 'sending' /etc/logzilla/scripts/sec.sh '%s'
>
> ptype2=regexp
>
> pattern2=host.(\S+)\s+subtype=\S+\smessage=.*User-ID-Agent\s+(\S+)\s(\S+):
>
> desc2=(NOTICE) You seeing this means, we have seen a recovery event.
>
> action2=pipe 'sending' /etc/logzilla/scripts/sec.sh 'recovered'
>
> window=5
>
>
>
> Thanks,
>
> Tom Damon
>
> LogZilla


[Simple-evcorr-users] SEC tutorial has been updated

2023-12-26 Thread Risto Vaarandi
hi all,

The SEC tutorial has been updated, and you can access the new version at
https://simple-evcorr.github.io/SEC-tutorial.pdf.

Many thanks to Jim Van Meggelen for suggestions on how to improve the
tutorial!

Happy Boxing Day to all SEC users :)

risto


Re: [Simple-evcorr-users] Storing a sequence counter in a context

2023-09-24 Thread Risto Vaarandi
hi Jim,
many thanks for the clarification! I now have a better understanding of
the nature of the problem.
Because you are doing offline processing of an already existing log file,
removing old contexts or old hash table keys (the recommendation from the
end of my post) might not be so important, provided that call IDs are not
reused for different calls in the processed log file and the number of
call IDs is not very large. That would keep the solution a bit simpler.
kind regards,
risto

Jim Van Meggelen () wrote on Saturday, 23 September 2023 at 22:11:

> Risto,
>
> I cannot change the timestamps, as they are coded in the files I'm working
> with (old data). Going forward, I also cannot guarantee that I will have
> control over the source data, and it is likely that it will have a similar
> granularity. Finally, there's no need for that level of granularity
> downstream; it is only necessary that two events with the same timestamp
> are sequenced so that it is clear which happened first.
>
> It does seem a bit kludgy, but if I try to reduce the kludge in my SEC
> file, I am almost certain to have to introduce kludge somewhere else, which
> is worse (this way I only have SEC to maintain, rather than additional
> technology).
>
> This way, I can feed relevant log files to SEC from any source, and trust
> that I'll be able to handle them, so long as they follow standard Asterisk
> log format.
>
> Thanks again for your generous advice.
>
> Jim
>
>
>
> --
> Jim Van Meggelen
> ClearlyCore Inc.
>
>
>
> +1-416-639-6001 (DID)
> +1-877-253-2716 (Canada)
> +1-866-644-7729 (USA)
> +1-416-425-6111 x6001
> jim.vanmegge...@clearlycore.com
> http://www.clearlycore.com
>
>
> *Asterisk: The Definitive Guide* FIFTH EDITION NOW AVAILABLE TO DOWNLOAD:
> https://cdn.oreillystatic.com/pdf/Asterisk_The_Definitive_Guide.pdf
>
> --
>
> *From: *"Risto Vaarandi" 
> *To: *"Jim Van Meggelen" 
> *Cc: *"simple-evcorr-users" 
> *Sent: *Saturday, 23 September, 2023 03:51:02
> *Subject: *Re: [Simple-evcorr-users] Storing a sequence counter in a
> context
>
> hi Jim,
> the solutions from my previous post were provided without knowing the root
> cause of the issue. If the main reason for the problem is the nature of the
> timestamps in the logs (that is, they are provided with an accuracy of a
> second), I would recommend using a different logging scheme with
> high-resolution timestamps. For example, as David has mentioned in his
> post, using syslog servers with RFC5424-based syslog protocol (
> https://datatracker.ietf.org/doc/html/rfc5424) provides such timestamps
> out-of-the-box. I am not sure whether you can change the logging scheme for
> these particular application messages, but if you can, these timestamps are
> highly useful for many purposes like the investigation of past system
> faults and security incidents.
> kind regards,
> risto

Re: [Simple-evcorr-users] Storing a sequence counter in a context

2023-09-23 Thread Risto Vaarandi
hi Jim,
the solutions from my previous post were provided without knowing the root
cause of the issue. If the main reason for the problem is the nature of the
timestamps in the logs (that is, they are provided with an accuracy of a
second), I would recommend using a different logging scheme with
high-resolution timestamps. For example, as David has mentioned in his
post, using syslog servers with RFC5424-based syslog protocol (
https://datatracker.ietf.org/doc/html/rfc5424) provides such timestamps
out-of-the-box. I am not sure whether you can change the logging scheme for
these particular application messages, but if you can, these timestamps are
highly useful for many purposes like the investigation of past system
faults and security incidents.
kind regards,
risto



Re: [Simple-evcorr-users] Storing a sequence counter in a context

2023-09-22 Thread Risto Vaarandi
hi Jim,

let me provide some suggestions on how to accomplish this task. First, you
could utilize the context-based approach that you have described in your
post. In that case, the numeral that has been retrieved from the context
with the 'pop' action needs to be incremented, and you can do that with the
'lcall' action, which invokes the relevant Perl code. Here is one example (I
have added several extra actions into the rule for initializing the
context):

type=single
ptype=regexp
pattern=call_id (\S+)
desc=call id $1
action=exists %e call_uniqueid_seq_$1; \
   if %e ( none ) else ( add call_uniqueid_seq_$1 0); \
   pop call_uniqueid_seq_$1 %sequence_num; \
   lcall %sequence_num %sequence_num -> ( sub { $_[0] + 1 } ); \
   add call_uniqueid_seq_$1 %sequence_num

The first two actions check if the context call_uniqueid_seq_* for the
given call exists, and if not, 0 is written into that context to initialize
it:

exists %e call_uniqueid_seq_$1; \
if %e ( none ) else ( add call_uniqueid_seq_$1 0);

The 'pop' and 'add' actions (the 3rd and 5th action in the above rule) are
identical to the actions from your email.

The 'lcall' action (4th action in the above rule) increments the counter
value retrieved from the context by executing the following Perl one-line
function:
sub { $_[0] + 1 }
That function takes the value of its parameter (that is, %sequence_num) and
returns an incremented value which is stored back into %sequence_num. Note
that %sequence_num appears twice in the 'lcall' action, since it acts both
as an input parameter for the Perl function and as the variable where the
return value from the function is stored.
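The combined effect of these five actions can be illustrated with a small
Python model, where a SEC context's event store is represented as a list
(an illustration only; the context name mirrors the rule above, and this is
not how SEC itself is implemented):

```python
# A toy model of SEC contexts: context name -> event store (a list).
contexts = {}

def process_event(call_id):
    ctx = "call_uniqueid_seq_" + call_id
    # exists %e ...; if %e ( none ) else ( add ... 0 )
    if ctx not in contexts:
        contexts[ctx] = [0]
    # pop call_uniqueid_seq_$1 %sequence_num
    sequence_num = contexts[ctx].pop()
    # lcall ... ( sub { $_[0] + 1 } )
    sequence_num += 1
    # add call_uniqueid_seq_$1 %sequence_num
    contexts[ctx].append(sequence_num)
    return sequence_num

print(process_event("1001"), process_event("1001"))  # 1 2
```

Each call pops the stored value, increments it, and pushes it back, so the
counter for a given call ID grows by one per matching event.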

There is also an alternative way of addressing the same task. This
approach keeps the counter for each call ID not in a SEC context but
rather in a Perl hash table. That approach is more efficient,
since the counter value is not copied back and forth between the Perl
function and the SEC context each time it needs to be incremented. Instead,
the counter value stays in a Perl data structure during these increments.
Here is a relevant rule example:

type=single
ptype=regexp
pattern=call_id (\S+)
desc=call id $1
action=lcall %sequence_num $1 -> ( sub { ++$calls{$_[0]} } ); \
   write - the new value for counter for call $1 is %sequence_num

The above example involves the use of a Perl hash (associative array)
'calls', where the call ID serves as a key into the hash. For each key
(call ID), the hash stores the counter for that call ID. Each time the
function
sub { ++$calls{$_[0]} }
gets called, the counter for the relevant call ID (the value of $1) is
incremented in the hash 'calls', and the result is stored to %sequence_num.

If you just want to retrieve the current value of the counter for some call
ID (the value of $1) without incrementing it, you can simply invoke the
following action:

lcall %sequence_num $1 -> ( sub { $calls{$_[0]} } )

With both the context-based and the Perl-hash-based approach, one needs to
think about dropping stale contexts (or stale hash keys) for calls that are
no longer relevant. Contexts can be removed with the 'delete' action. If
you use a Perl hash for storing the counters, you can use the following
'lcall' action for deleting a key from the hash 'calls':

lcall %sequence_num $1 -> ( sub { delete $calls{$_[0]} } )
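For illustration, the semantics of the hash-based counter (increment on
each event, read without incrementing, delete a stale key) can be sketched
in Python; the Perl hash 'calls' in the actions above behaves analogously:

```python
# A toy Python equivalent of the Perl hash %calls used in the rules above.
calls = {}

def next_seq(call_id):
    # sub { ++$calls{$_[0]} }: autovivify the key at 0, then increment
    calls[call_id] = calls.get(call_id, 0) + 1
    return calls[call_id]

def current_seq(call_id):
    # sub { $calls{$_[0]} }: read without incrementing
    return calls.get(call_id)

def drop_call(call_id):
    # sub { delete $calls{$_[0]} }: remove a stale key
    return calls.pop(call_id, None)

print(next_seq("c1"), next_seq("c1"), next_seq("c2"))  # 1 2 1
```

The counter stays inside the dictionary between increments, which is the
efficiency point made above: nothing is copied back and forth on each event.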

I hope the above examples are useful.

kind regards,
risto
Jim Van Meggelen () wrote on Friday, 22 September 2023 at 19:50:

> I am using SEC for an atypical use case, where rather than using log
> events to correlate events for things like security events, I am using it
> to parse through Asterisk logs in order to capture a sequence of phone call
> events (call comes in, auto attendant answers, user selects a digit, call
> is transferred, etc). I have SEC write each to a CSV file, which can then
> be processed downstream for reporting (for example, I produced a Sankey
> chart using this data).
>
> SEC has proven fantastic for this, however one minor issue has been that
> the timestamps in the logs are not granular to smaller than a second, so
> it's possible for two or more events to occur within the same second.
> Generally this doesn't cause a problem, but when sorting the CSV file
> elsewhere, this can result in out-of-sequence lines, if they both contain
> the exact same timestamp.
>
> So, what I've been trying to figure out is how to store a counter that is
> tied to the uniqueid of the call, and increment that with each event. I
> figured I'd be able to do this by storing an integer in the event store of
> a context (tied to the uniqueid for that call). I can then increment it as
> the various log lines of the call are processed in turn.
>
> The part I think I'm not getting is due to my lack of understanding of
> Perl (and specifically perl syntax).
>
> The first rule can create the context:
>
> add call_uniqueid_seq_$4 1 (where $4 is the unique ID for that call)
>

Re: [Simple-evcorr-users] SEC tutorial

2023-08-11 Thread Risto Vaarandi
hi Jim,
feel free to share your remarks! Perhaps it is better to send them to my
private email address, since the mailing list has a low limit for the
maximum email size (I think it is only around 30-40KB). I will be out of
office this weekend and next week for my vacation, but after that I can
start reading your remarks.
kind regards,
risto

Jim Van Meggelen () wrote on Thursday, 10 August 2023 at 19:35:

> Risto,
>
> I know it's been a long time, but I have been chipping away at that
> tutorial. It has been really helpful.
>
> I'm not an academic person (I dropped out of high school), so I don't know
> the correct and formal way to comment on academic papers. As such, my notes
> might seem a bit less formal than is appropriate.
>
> I did however write a book for O'Reilly, so I do have a bit of a feel for
> book writing, and it is from that angle that most of my comments are coming.
>
> Are you interested in what I've written? I fear giving offense, but I had
> some ideas that might be worth considering.
>
> Jim
>
>
>
> --
> Jim Van Meggelen
> ClearlyCore Inc.
>
>
>
> +1-416-639-6001 (DID)
> +1-877-253-2716 (Canada)
> +1-866-644-7729 (USA)
> +1-416-425-6111 x6001
> jim.vanmegge...@clearlycore.com
> http://www.clearlycore.com
>
>
> *Asterisk: The Definitive Guide* FIFTH EDITION NOW AVAILABLE TO DOWNLOAD:
> https://cdn.oreillystatic.com/pdf/Asterisk_The_Definitive_Guide.pdf
>
> --
>
> *From: *"Risto Vaarandi" 
> *To: *"Jim Van Meggelen" 
> *Cc: *"simple-evcorr-users" 
> *Sent: *Thursday, 1 December, 2022 13:07:03
> *Subject: *Re: [Simple-evcorr-users] SEC tutorial
>
> hi Jim,
> many thanks for your very kind offer! The tutorial is fairly large (around
> 50 pages), but if you have time, I would appreciate your feedback. Adding
> the remarks and recommendations into PDF works fine for me, and you can
> just send the modified PDF over to my private email address.
> kind regards,
> risto
>
> Jim Van Meggelen () wrote on Thursday, 1 December 2022 at 15:25:
>
>> Risto,
>>
>> Thanks for doing that up. I'll work through it.
>>
>> Are you interested in feedback on it? If so, do you have a preferred
>> mechanism for that? (I can just mark up the PDF as I read through and
>> upload it somewhere when I'm done). Note that I might find myself mostly
>> copyediting, rather than testing all the examples.
>>
>> Jim
>>
>>
>> --
>> Jim Van Meggelen
>> ClearlyCore Inc.
>>
>>
>>
>> +1-416-639-6001 (DID)
>> +1-877-253-2716 (Canada)
>> +1-866-644-7729 (USA)
>> +1-416-425-6111 x6001
>> jim.vanmegge...@clearlycore.com
>> http://www.clearlycore.com
>>
>>
>> *Asterisk: The Definitive Guide* FIFTH EDITION NOW AVAILABLE TO DOWNLOAD:
>> https://cdn.oreillystatic.com/pdf/Asterisk_The_Definitive_Guide.pdf
>>
>> --
>>
>> *From: *"Risto Vaarandi" 
>> *To: *"simple-evcorr-users" 
>> *Sent: *Thursday, 1 December, 2022 05:23:48
>> *Subject: *[Simple-evcorr-users] SEC tutorial
>>
>> hi all,
>>
>> during the last few weeks, I have written a new SEC tutorial paper which
>> can be accessed through this web link:
>> https://raw.githubusercontent.com/simple-evcorr/tutorial/main/SEC-tutorial.pdf
>> Links to this tutorial and the relevant Github repository (
>> https://github.com/simple-evcorr/tutorial) are also available in the SEC
>> home page.
>>
>> This tutorial is primarily intended for people who haven't used SEC
>> before, and it explains the essentials of SEC in a gentle way, starting
>> from simple examples and gradually moving to more complex ones. After
>> reading this tutorial, the reader should have a good understanding of event
>> correlation rules, event correlation operations, contexts, synthetic
>> events, pattern match caching, and many other things. And it might provide
>> interesting insights even for experienced users :)
>>
>> As mentioned in the introduction, this tutorial does not intend to be a
>> replacement for the official documentation. Therefore, it does not attempt
>> to explain every possible detail, but rather provide an overview of the key
>> concepts, so that it would be easier to look up more detailed information
>> from the man page.
>>
>> Hopefully the tutorial will be useful and you will find it easy to follow
>> :)
>>
>> kind regards,
>> risto
>>
>>
>>
>>
>


[Simple-evcorr-users] SEC-2.9.2 released

2023-06-03 Thread Risto Vaarandi
hi all,

SEC-2.9.2 has been released and is available from the SEC home page. You
can also download it through a direct link:
https://github.com/simple-evcorr/sec/releases/download/2.9.2/sec-2.9.2.tar.gz

Here is the changelog for the new version:

--- version 2.9.2

* starting from this version, the list of event occurrence times that
  correspond to event group string tokens is passed to PerlFunc and NPerlFunc
  event group patterns as an additional parameter.


kind regards,
risto


Re: [Simple-evcorr-users] Duplicate suppression and rearming

2023-03-16 Thread Risto Vaarandi
It is great to hear that everything is working properly :)

kind regards,
risto

Spelta Edoardo () wrote on Thursday, 16 March 2023 at 15:39:

> Hi,
>
> I did the same test you did, just with an error in the pattern matching.
>
>
>
> it’s beautifully working now!
>
>
>
> Thanks a lot for the help, it was very precious!
>
>
>
>
>
> Best regards,
>
> M
>
>
>

Re: [Simple-evcorr-users] Duplicate suppression and rearming

2023-03-15 Thread Risto Vaarandi
>
>
> 1. As long as we exclude the longer window expiration case (so we just
> want to suppress events for 10 secs), why do you suggest using two Single
> rules instead of a SingleWithSuppress?
>
>
>
> I’m referring to these rules:
>
> type=Single
> ptype=RegExp
> pattern=event_(\w+)
> context=!SUPP_$1
> desc=react to event $1 and set up suppression
> action=write - We have observed event $1; create SUPP_$1 10
>
> type=Single
> ptype=RegExp
> pattern=event_(\w+)
> context=SUPP_$1
> desc=do not react to event $1 and only update its suppression context
> action=set SUPP_$1 10
>
>
>
> 2. Does SingleWithSuppress use a sliding window? According to my latest
> tests it does not, so it should be equivalent to your two Single rules...
> shouldn't it?
>
>
You are correct that the SingleWithSuppress rule does not use a sliding
window. In contrast, the above two rules implement a sliding suppression
window.


>
>
>
> 3. For the more complex scenario, where you also introduced a MAXLIFE
> context of 60 secs, I'm referring to:
>
>
>
> type=Single
> ptype=RegExp
> pattern=event_(\w+)
> context=!SUPP_$1
> desc=react to event $1 and set up suppression
> action=write - We have observed event $1; \
>create SUPP_$1_MAXLIFE 60 ( delete SUPP_$1 ); \
>create SUPP_$1 10 ( delete SUPP_$1_MAXLIFE )
>
> type=Single
> ptype=RegExp
> pattern=event_(\w+)
> context=SUPP_$1
> desc=do not react to event $1 and only update its suppression context
> action=set SUPP_$1 10
>
>
>
>
>
> As far as I can see, the shorter 10-sec context will always expire
> first, thus deleting the MAXLIFE context as well... so in the end this
> behaves just as if the longer context did not exist at all. It seems
> equivalent to your rules in bullet 1); am I wrong?
>

The lifetime of the SUPP_$1 context is extended for another 10 seconds on
each matching event, so the SUPP_$1 context can actually have a longer
lifetime than SUPP_$1_MAXLIFE.
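The interplay between the sliding SUPP context and the fixed-lifetime
MAXLIFE context can be sketched with a rough discrete-time simulation (a
Python illustration only, not SEC itself; the assumption that contexts are
expired right after event processing within the same second is mine, chosen
to reproduce the observed timing):

```python
# Rough simulation of the two-rule scheme: an event arrives every 5 seconds,
# SUPP expires 10 s after its last refresh, SUPP_MAXLIFE expires 60 s after
# creation, and the expiry of either context deletes the other one as well.
def simulate(event_interval=5, supp_life=10, max_life=60, duration=200):
    notifications = []         # times when rule 1 fired (wrote the event)
    supp_expire = None         # absolute expiry time of SUPP, or None
    maxlife_expire = None      # absolute expiry time of SUPP_MAXLIFE, or None
    for t in range(duration + 1):
        if t % event_interval == 0:
            if supp_expire is None:            # rule 1: context !SUPP_$1
                notifications.append(t)
                maxlife_expire = t + max_life
                supp_expire = t + supp_life
            else:                              # rule 2: refresh SUPP_$1
                supp_expire = t + supp_life
        # assumption: contexts expire after event processing in the same second
        if maxlife_expire is not None and t >= maxlife_expire:
            supp_expire = maxlife_expire = None
        elif supp_expire is not None and t >= supp_expire:
            supp_expire = maxlife_expire = None
    return notifications

print(simulate())  # [0, 65, 130, 195]
```

With the sliding refresh, SUPP alone never expires while events keep
arriving; it is the MAXLIFE context that forces a new notification roughly
every 65 seconds, matching the gap between the two "We have observed event
A" lines in the debug log of this thread.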


>
> I've been generating identical events every two seconds, and with those
> two rules I get one line every 10 seconds... instead I would expect to get
> one every 60 seconds, because I expected the 10-sec window to be
> constantly sliding, suppressing everything until the 60-sec window expires
> and basically resets everything.
>
>
>
> What am i missing ?
>
>
>

May I ask what is the format of the events you are generating, and do they
all look the same (for example, event_A)?

I have tested the ruleset with events "event_A" that are generated every
5 seconds with the following command line:

while true; do echo event_A >>test.log; sleep 5; done

Also, before starting the above command line, I executed sec with the
following command line that monitors test.log for incoming events. In
addition, this command line writes a detailed debug log to sec.log:

sec --conf=test.sec --input=test.log --log=sec.log

Here is the content from sec.log:

Wed Mar 15 19:56:28 2023: SEC (Simple Event Correlator) 2.9.1
Wed Mar 15 19:56:28 2023: Reading configuration from test.sec
Wed Mar 15 19:56:28 2023: 2 rules loaded from test.sec
Wed Mar 15 19:56:28 2023: No --bufsize command line option or --bufsize=0,
setting --bufsize to 1
Wed Mar 15 19:56:28 2023: Opening input file test.log
Wed Mar 15 19:56:28 2023: Interactive process, SIGINT can't be used for
changing the logging level
Wed Mar 15 19:56:31 2023: Writing event 'We have observed event A' to file
'-'
Wed Mar 15 19:56:31 2023: Creating context 'SUPP_A_MAXLIFE'
Wed Mar 15 19:56:31 2023: Creating context 'SUPP_A'
Wed Mar 15 19:56:36 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:56:41 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:56:46 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:56:51 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:56:56 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:01 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:06 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:11 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:16 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:21 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:26 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:31 2023: Changing settings for context 'SUPP_A'
Wed Mar 15 19:57:32 2023: Deleting stale context 'SUPP_A_MAXLIFE'
Wed Mar 15 19:57:32 2023: Deleting context 'SUPP_A'
Wed Mar 15 19:57:32 2023: Context 'SUPP_A' deleted
Wed Mar 15 19:57:32 2023: Stale context 'SUPP_A_MAXLIFE' deleted
Wed Mar 15 19:57:36 2023: Writing event 'We have observed event A' to file
'-'
Wed Mar 15 19:57:36 2023: Creating context 'SUPP_A_MAXLIFE'
Wed Mar 15 19:57:36 2023: Creating context 'SUPP_A'

As you can see, the event "event_A" that appeared at 19:56:31 triggered a
notification message, and also, suppression contexts were set up. After
every 5 seconds, the debug message "Changing settings for context
'SUPP_A'" appears: these messages reflect the 'set SUPP_A 10' action of the
second rule which extends the lifetime of the SUPP_A context. At 19:57:32
the SUPP_A_MAXLIFE context expired and deleted SUPP_A, so the next event at
19:57:36 again triggered a notification and recreated both contexts.

Re: [Simple-evcorr-users] Duplicate suppression and rearming

2023-03-14 Thread Risto Vaarandi
> Absolutely amazing!
>

it is great that I was able to help :)


>
>
> @Risto Vaarandi  I need some time to understand
> deeply what you propose and if it’s applicable to hundreds of conditions,
> but it sounds really promising !!
>

As a side note: this technique is applicable to any number of conditions
in parallel. Just make sure that both context names contain a unique ID for
each different condition. In the example from my previous post, the $1
variable plays that role and makes it possible to process many event types
simultaneously (e.g., event_A, event_B, event_AB, event_ABC, etc.).

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Duplicate suppression and rearming

2023-03-14 Thread Risto Vaarandi
hi Mugugno,

thanks for clarifying your scenario! Perhaps I can first explain a simpler
ruleset that is addressing a part of your scenario, and then provide a
slightly more complex solution that addresses your scenario fully.

Let's assume that you want to react to the first instance of some event,
and then keep repeated instances of the same event suppressed, provided
that the time frame between consecutive suppressed events is at most 10
seconds. In order to do that, you could utilize this ruleset of 2 Single
rules:

type=Single
ptype=RegExp
pattern=event_(\w+)
context=!SUPP_$1
desc=react to event $1 and set up suppression
action=write - We have observed event $1; create SUPP_$1 10

type=Single
ptype=RegExp
pattern=event_(\w+)
context=SUPP_$1
desc=do not react to event $1 and only update its suppression context
action=set SUPP_$1 10

These two rules process events in the format event_&lt;name&gt;
(for example, event_X). The rules utilize the contexts SUPP_* for
suppressing events of a given type. For example, for suppressing repeated
instances of event_X, the context SUPP_X is used.

The first rule matches the event and checks if the suppression context is
present. If the suppression context is not found, we are dealing with the
first event we have seen, or the first event after previous suppression has
ended. Therefore, the rule prints a notice "We have observed event *" to
standard output. Also, after executing this action, the rule activates the
suppression for the given event type. For example, if the event was
event_X, the context SUPP_X is created. Note that the lifetime of the
SUPP_X context is 10 seconds. If no subsequent event_X events are
observed during these 10 seconds, the SUPP_X context will simply expire, and
any further event_X will again trigger a notice written to standard output.

However, if some event_X is seen within 10 seconds, it will match the
second rule instead of the first one. The second rule does not produce any
notice (in other words, it suppresses event_X without any action), but
rather extends the lifetime of the suppression context SUPP_X for
additional 10 seconds starting from the *current moment*. This means that
as long as event_X events keep arriving with a time interval of no more
than 10 seconds between them, the lifetime of the SUPP_X context
will get extended for another 10 seconds on the arrival of each event_X.

Obviously, the above ruleset has the following problem -- if events event_X
keep on arriving at least once per 10 seconds indefinitely, the suppression
of event_X will never end. For addressing this problem, you can use a
further development of the previous ruleset that sets an upper limit of 60
seconds for the suppression:

type=Single
ptype=RegExp
pattern=event_(\w+)
context=!SUPP_$1
desc=react to event $1 and set up suppression
action=write - We have observed event $1; \
   create SUPP_$1_MAXLIFE 60 ( delete SUPP_$1 ); \
   create SUPP_$1 10 ( delete SUPP_$1_MAXLIFE )

type=Single
ptype=RegExp
pattern=event_(\w+)
context=SUPP_$1
desc=do not react to event $1 and only update its suppression context
action=set SUPP_$1 10

When event_X is observed and the suppression is currently not active, the
first rule matches and writes a notice to standard output. In addition to
context SUPP_X, the context SUPP_X_MAXLIFE gets created with the lifetime
of 60 seconds (that is the maximum lifetime of the event suppression
process). Note that the lifetime of this context is *not* extended by the
ruleset, and also, when this context expires, it will delete the SUPP_X
context that implements the suppression. This means that the suppression
can never be active for more than 60 seconds after we saw the first event
that triggered a notice message to standard output.

Also, note that when SUPP_X context expires, it will delete the
SUPP_X_MAXLIFE context, since this context is no longer necessary. In other
words, whatever context (SUPP_X or SUPP_X_MAXLIFE) expires first, it will
always delete the other context, in order to continue with the clean state
for the given event type.
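The interplay of the two contexts can be sketched in Python as follows (an illustrative simulation under the same assumptions as before, not SEC internals):

```python
# Simulation of the two-context scheme: a sliding SUPP_* context plus a
# fixed SUPP_*_MAXLIFE context that caps the total suppression time.

def process(events, window=10, maxlife=60):
    supp = {}     # event type -> expiry of the sliding SUPP_* context
    maxend = {}   # event type -> expiry of the fixed SUPP_*_MAXLIFE context
    notices = []
    for t, etype in events:
        active = etype in supp and t < supp[etype] and t < maxend[etype]
        if active:
            supp[etype] = t + window   # only the sliding context is extended
        else:
            notices.append(t)          # suppression has ended: notify again
            supp[etype] = t + window
            maxend[etype] = t + maxlife
    return notices

# events every 5 seconds: the sliding window never expires on its own, but
# the 60-second maximum lifetime forces a new notice
print(process([(5 * i, "A") for i in range(16)]))  # [0, 60]
```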

By using the above technique with two contexts, you can implement the
scenario you want. Also, I hope that the examples I have provided are clear
enough and helpful for illustrating the main point.

kind regards,
risto


Kontakt Spelta Edoardo () kirjutas kuupäeval
T, 14. märts 2023 kell 16:58:

> Hello,
>
> you definitely described my scenario better than me.
>
>
>
> This is the one i’m after:
>
> 0, 2, 23, 54 -- events at minutes 2 and 23 should be suppressed, but the
> event at minute 54 should trigger an action, since it happens 31 minutes
> after the event 23
>
> 0, 11, 32, 44, 61 -- events at minutes 11, 32 and 44 should be suppressed,
> since they are separated by less than 30 minutes, but the event at minute
> 61 should trigger an action, since the suppression can't last for longer
> than 1 hour
>

Re: [Simple-evcorr-users] Duplicate suppression and rearming

2023-03-14 Thread Risto Vaarandi
hi Mugugno,

let me clarify your scenario a bit, considering the diagram from your post:

T1------T27------T30------T57------T60------T61
Event   suppr    suppr    suppr    suppr    Event


Do you want the suppression to start after the time moment T1 when the
first event was observed and run for 1 hour, so that this window would not
be sliding?


Or do you rather want the time window between two *consecutive* events to
be less than N minutes in order to suppress them, so that suppression would
work for at most 1 hour? For example, suppose N=30 minutes and consider the
following events happening at these minutes:


0, 2, 23, 54 -- events at minutes 2 and 23 should be suppressed, but the
event at minute 54 should trigger an action, since it happens 31 minutes
after the event 23

0, 11, 32, 44, 61 -- events at minutes 11, 32 and 44 should be suppressed,
since they are separated by less than 30 minutes, but the event at minute
61 should trigger an action, since the suppression can't last for longer
than 1 hour


I would appreciate it if you could provide some additional comments on your
scenario.


kind regards,

risto

Hello,
>
> i’ve recently been struggling with SEC to implement something for this
> specific use case: suppressing specific entries read from a syslog for a
> certain amount of time (say 30min) but make sure that after a longer time
> (say 1h), if they are still being received, i’m getting a new one.
>
>
>
> If these events are being received continuously, they will always be
> suppressed because the windows will be sliding accordingly, but I’m trying
> to find a way to make it stop after 1h
>
>
>
> The sequence should look like this:
>
>
>
>
> T1------T27------T30------T57------T60------T61
> Event   suppr    suppr    suppr    suppr    Event
>
>
>
> I tried combining a SingleWithSuppress (which is ok for the suppression
> part) and context expiration but i cannot find a working solution.
>
> Anybody already faced this use case ?
>
>
>
> Any help appreciated!
>
> Thanks and regards,
>
> Mugugno
>
>
>
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] SEC tutorial

2022-12-02 Thread Risto Vaarandi
hi Dusan,
many thanks for your kind words :)
risto

Kontakt Dusan Sovic () kirjutas kuupäeval R, 2.
detsember 2022 kell 23:07:

> Hi Risto,
>
>
>
> Thanks for this tutorial.
>
> It’s very handy (especially 4.5 Internal contexts).
>
> I’ve learned again something new.
>
> SEC has been helping me reliably for more than 6 years without any issues.
>
> It’s simple, reliable and well-documented.
>
>
>
> Thank you,
>
> Dusan
>
>
>
> *Od: *Risto Vaarandi 
> *Odoslané: *štvrtok 1. decembra 2022 11:25
> *Komu: *simple-evcorr-users@lists.sourceforge.net
> *Predmet: *[Simple-evcorr-users] SEC tutorial
>
>
>
> hi all,
>
>
>
> during the last few weeks, I have written a new SEC tutorial paper which
> can be accessed through this web link:
> https://raw.githubusercontent.com/simple-evcorr/tutorial/main/SEC-tutorial.pdf
>
> Links to this tutorial and the relevant Github repository (
> https://github.com/simple-evcorr/tutorial) are also available in the SEC
> home page.
>
>
>
> This tutorial is primarily intended for people who haven't used SEC
> before, and it explains the essentials of SEC in a gentle way, starting
> from simple examples and gradually moving to more complex ones. After
> reading this tutorial, the reader should have a good understanding of event
> correlation rules, event correlation operations, contexts, synthetic
> events, pattern match caching, and many other things. And it might provide
> interesting insights even for experienced users :)
>
>
>
> As mentioned in the introduction, this tutorial does not intend to be a
> replacement for the official documentation. Therefore, it does not attempt
> to explain every possible detail, but rather provide an overview of the key
> concepts, so that it would be easier to look up more detailed information
> from the man page.
>
>
>
> Hopefully the tutorial will be useful and you will find it easy to follow
> :)
>
>
>
> kind regards,
>
> risto
>
>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] SEC tutorial

2022-12-01 Thread Risto Vaarandi
hi Jim,
many thanks for your very kind offer! The tutorial is fairly large (around
50 pages), but if you have time, I would appreciate your feedback. Adding
the remarks and recommendations into PDF works fine for me, and you can
just send the modified PDF over to my private email address.
kind regards,
risto

Kontakt Jim Van Meggelen () kirjutas
kuupäeval N, 1. detsember 2022 kell 15:25:

> Risto,
>
> Thanks for doing that up. I'll work through it.
>
> Are you interested in feedback on it? If so, do you have a preferred
> mechanism for that? (I can just mark up the PDF as I read through and
> upload it somewhere when I'm done). Note that I might find myself mostly
> copyediting, rather than testing all the examples.
>
> Jim
>
>
> --
> Jim Van Meggelen
> ClearlyCore Inc.
>
>
>
> +1-416-639-6001 (DID)
> +1-877-253-2716 (Canada)
> +1-866-644-7729 (USA)
> +1-416-425-6111 x6001
> jim.vanmegge...@clearlycore.com
> http://www.clearlycore.com
>
>
> *Asterisk: The Definitive GuideFIFTH EDITION NOW AVAILABLE TO DOWNLOAD:*
> https://cdn.oreillystatic.com/pdf/Asterisk_The_Definitive_Guide.pdf
>
> --
>
> *From: *"Risto Vaarandi" 
> *To: *"simple-evcorr-users" 
> *Sent: *Thursday, 1 December, 2022 05:23:48
> *Subject: *[Simple-evcorr-users] SEC tutorial
>
> hi all,
>
> during the last few weeks, I have written a new SEC tutorial paper which
> can be accessed through this web link:
> https://raw.githubusercontent.com/simple-evcorr/tutorial/main/SEC-tutorial.pdf
> Links to this tutorial and the relevant Github repository (
> https://github.com/simple-evcorr/tutorial) are also available in the SEC
> home page.
>
> This tutorial is primarily intended for people who haven't used SEC
> before, and it explains the essentials of SEC in a gentle way, starting
> from simple examples and gradually moving to more complex ones. After
> reading this tutorial, the reader should have a good understanding of event
> correlation rules, event correlation operations, contexts, synthetic
> events, pattern match caching, and many other things. And it might provide
> interesting insights even for experienced users :)
>
> As mentioned in the introduction, this tutorial does not intend to be a
> replacement for the official documentation. Therefore, it does not attempt
> to explain every possible detail, but rather provide an overview of the key
> concepts, so that it would be easier to look up more detailed information
> from the man page.
>
> Hopefully the tutorial will be useful and you will find it easy to follow
> :)
>
> kind regards,
> risto
>
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


[Simple-evcorr-users] SEC tutorial

2022-12-01 Thread Risto Vaarandi
hi all,

during the last few weeks, I have written a new SEC tutorial paper which
can be accessed through this web link:
https://raw.githubusercontent.com/simple-evcorr/tutorial/main/SEC-tutorial.pdf
Links to this tutorial and the relevant Github repository (
https://github.com/simple-evcorr/tutorial) are also available in the SEC
home page.

This tutorial is primarily intended for people who haven't used SEC before,
and it explains the essentials of SEC in a gentle way, starting from simple
examples and gradually moving to more complex ones. After reading this
tutorial, the reader should have a good understanding of event correlation
rules, event correlation operations, contexts, synthetic events, pattern
match caching, and many other things. And it might provide interesting
insights even for experienced users :)

As mentioned in the introduction, this tutorial does not intend to be a
replacement for the official documentation. Therefore, it does not attempt
to explain every possible detail, but rather provide an overview of the key
concepts, so that it would be easier to look up more detailed information
from the man page.

Hopefully the tutorial will be useful and you will find it easy to follow :)

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] add string to the event store of a context only if it doesn't exist

2022-11-06 Thread Risto Vaarandi
hi Sean,

there are several ways to approach this problem and perhaps I will outline
two possible ways below.

Before adding a new ID to the context, one could search all existing IDs in
the context with the SEC 'while' action, looping over all IDs one by one,
and comparing each previously stored ID with the new ID value. However, it
is more efficient to implement that functionality in one 'lcall' action
that would take all stored IDs as an input parameter for a 3-line Perl
function, and do all processing inside that function. Here is an example
rule that illustrates that approach:

type=single
ptype=regexp
pattern=add ID (\d+)
desc=add new ID to perl hash
action=copy CONTEXT %id_list; \
   lcall %ret %id_list $1 -> ( sub { my(@keys) = split(/\n/, $_[0]); \
   my(%hash) = map { $_ => 1 } @keys; \
   return !exists($hash{$_[1]}); } ); \
   if %ret ( add CONTEXT $1 )

In this rule, all IDs that have been previously stored to CONTEXT are
written into the action list variable %id_list, so that IDs are separated
with newlines in that variable. In the Perl function that 'lcall' action
invokes, the list of stored IDs is the first parameter and the new ID value
the second one. Also, the function returns true if the new value is not
present in the list, and false if it is already there. Inside the function,
the list of IDs is turned into a hash table (%hash), and a new value is
searched from this hash table with a key lookup. Finally, the SEC 'if'
action checks the function return value and adds the new ID to CONTEXT only
if the function returned true.
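The membership test inside that Perl function can be expressed in Python like this (a sketch; the function name is an illustrative assumption):

```python
# Python equivalent of the Perl function used in the 'lcall' action:
# check whether a new ID is absent from the newline-separated ID list.

def is_new(id_list, new_id):
    """id_list: IDs separated by newlines, as copied from the context
    event store into %id_list; returns True if new_id is not yet stored."""
    return new_id not in set(id_list.split("\n"))

print(is_new("101\n102\n103", "104"))  # True
print(is_new("101\n102\n103", "102"))  # False
```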

This approach has the drawback that you have to build the Perl hash table
before each attempt to add a new ID to the context. It is much cheaper not
to use the context when you are storing the IDs, but keep them in a Perl
hash during this process, moving them over to a context when you start the
cleanup procedure. Here is an example how this could be done:

type=single
ptype=regexp
pattern=add ID (\d+)
desc=add new ID to perl hash
action=lcall %o $1 -> ( sub { $hash{$_[0]} = 1 } );

type=single
ptype=regexp
pattern=process IDs
desc=process all IDs from hash and clear the hash
action=lcall %id_list -> ( sub { keys(%hash) } ); \
   fill CONTEXT %id_list; \
   lcall %o -> ( sub { %hash = () } )

As you can see, the first rule creates a key with the value of 1 for each
observed ID in the hash table (if the key already exists in the table, this
operation simply overwrites its value, and is thus a no-op). The second
rule implements a cleanup procedure for all stored IDs. First, the Perl
function of the 'lcall' action returns a list of all keys from the hash
table. The keys will be joined into a string, using newlines as separators,
and this string is assigned to the action list variable %id_list. The
'fill' action will then write the value of the %id_list variable into
CONTEXT. Since %id_list holds a multiline string, each line from that
string will become a separate line in the event store of CONTEXT. Finally,
another 'lcall' action will clear the Perl hash table.
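The accumulate-then-flush idea behind the second ruleset can be sketched in Python as follows (illustrative; sorting is added here for deterministic output, whereas Perl's keys() returns keys in arbitrary order):

```python
# Sketch of the second approach: accumulate IDs in a dict (the Perl hash)
# and flush them in one step at cleanup time.

seen = {}

def add_id(new_id):
    seen[new_id] = 1   # overwriting an existing key is effectively a no-op

def flush():
    """Return the deduplicated IDs as a newline-separated string (the form
    in which the 'fill' action would receive them) and clear the store."""
    ids = "\n".join(sorted(seen))
    seen.clear()
    return ids

for i in ["101", "102", "101", "103"]:
    add_id(i)
print(flush())   # prints 101, 102 and 103 on separate lines
```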

With the above approach, the ID values are tracked with the help of the
Perl hash table, and the context is created at a cleanup stage when all IDs
are processed together. Since I don't know the nature of the cleanup
process, the second rule is just setting up CONTEXT for that without any
further action. If the cleanup process involves some Perl code or invoking
an external script, having CONTEXT might not even be needed, but you could
directly pass the %id_list variable to the cleanup code or cleanup script.

Hopefully these examples were able to provide some ideas on how to tackle
the problem you have.

kind regards,
risto


It’s late and my Perl is years rusty. Does anyone already have a routine
> that will do this? Basically I’m using an event store to keep track of a
> list of IDs so I can clean something up. I need a way to add a string to the
> event store only if it’s not already in there.
>
>
>
> Thanks.
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] context w/ input file name embedded

2022-11-05 Thread Risto Vaarandi
hi Sean,

Risto,
>
>
>
> Man I feel like an idiot. That’s what I get for copy / pasteing stuff
> around. I removed the ; in the test setup and it’s working like I thought
> it would.
>
>
>
> My real setup seems to be good now as well.
>
>
>
> Thanks a million.
>

It's great I was able to help (such redundant characters are sometimes very
difficult to spot and I've run into the same issue several times in the
past :)


>
>
> While I have you, one other quick question, can SEC do sub-second
> precision?
>

Unfortunately not: the current code base utilizes the time() function for
all time measurements, which returns seconds since the Epoch (this limits
the precision to full seconds).


>
>
> I’ve used SEC on and off for years so thanks for a great product. It
> always to be the tool I pull out when I have a really complicated problem
> to solve.
>

Many thanks for using it :)

risto


>
>
> Sean
>
>
>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] context w/ input file name embedded

2022-11-05 Thread Risto Vaarandi
hi Sean,
I was quite puzzled why the ruleset you have posted is not working, and
after testing it several times and looking at the rules, I think I found
the reason. When you look at the 'context' field of the second rule, there
is an extra semicolon at the end of the context name (when the rule file is
loaded and parsed, it is treated as a part of the context name). Therefore,
the 'context' field is not checking the presence of the right context.
After removing that semicolon, the second rule started to produce the
matches :-)
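For reference, here is the problematic 'context' field next to its corrected form (a sketch of the fix described above, in SEC rule syntax):

```
# trailing semicolon becomes part of the context name -- never matches:
context=LK_$+{_inputsrc}_$1;

# corrected field:
context=LK_$+{_inputsrc}_$1
```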
kind regards,
risto

Kontakt Sean Hennessey () kirjutas
kuupäeval P, 6. november 2022 kell 02:06:

> I’m trying to do something I think is possible, but it’s not quite working.
> I have a use case where I need to watch one central log file to catch a
> system that is going to fire off a separate process that creates a log I’m
> really interested in. I have the rules that do this working just fine. They
> use the addinput and dropinput actions to pull files in and out.
>
>
>
> My issue comes in watching these other log files. Each one of these files
> is logging events I’m concerned about. Each file has threads. Those threads
> are unique in that file, but not across files. So thread 1 for example is
> going to be in all log files. I’m able to set a context that includes the
> filename and thread id w/ no problem on the rule that is picking up the
> first event I’m concerned about. My issue is then using the context= in the
> next rule to limit it to just that input.
>
>
>
> Here’s some examples I’ve tried:
>
>
>
> type=single
>
> ptype=regexp
>
> pattern=Action one.*Thread="Thread([0-9]+)"
>
> desc=Action one for $+{_inputsrc} thread $1
>
> action=create LK_$+{_inputsrc}_$1 86400;fill LK_$+{_inputsrc}_$1 %t;
>
>
>
> type=single
>
> continue=takenext
>
> ptype=regexp
>
> pattern="Thread([0-9]+)"
>
> context=LK_$+{_inputsrc}_$1;
>
> desc=got a line from $+{_inputsrc} for thread $1 winner winner
> ===LK_$+{_intcontext}_$1===
>
> action=delete LK_$+{_inputsrc}_$1;write - NL %t %var1 %s;
>
>
>
> type=single
>
> continue=takenext
>
> ptype=regexp
>
> pattern="Thread([0-9]+)"
>
> context=LK_inp.text_21
>
> desc=why only this one? got a line from $+{_inputsrc} for thread $1
> +++$+{_intcontext}+++ ===$+{_inputsrc}=== _$1_
>
> action=pop LK_$+{_inputsrc}_$1 %var1;delete LK_$+{_inputsrc}_$1;write - NL
> %t %var1 %s;
>
>
>
>
>
> sec --conf=testing.sec --intevents --intcontexts --nochildterm
> --input=inp.text --debug=6
>
> SEC (Simple Event Correlator) 2.8.2
>
> Reading configuration from testing.sec
>
> 3 rules loaded from testing.sec
>
> No --bufsize command line option or --bufsize=0, setting --bufsize to 1
>
> Opening input file inp.text
>
> Interactive process, SIGINT can't be used for changing the logging level
>
> Creating SEC internal context 'SEC_INTERNAL_EVENT'
>
> Creating SEC internal event 'SEC_STARTUP'
>
> Deleting SEC internal context 'SEC_INTERNAL_EVENT'
>
>
>
>
>
> Now I feed in one line:
>
> echo 'INFO  [2022-11-05 16:24:58,506] Action one Thread="Thread21", "' >>
> inp.text
>
>
>
> I get:
> Creating context 'LK_inp.text_21'
>
> Filling context 'LK_inp.text_21' with event(s) 'Sat Nov  5 19:24:55 2022'
>
>
>
> If I create the sec.dump file I see this as far as what contexts are there:
> List of contexts:
>
> 
>
> Context Name: LK_inp.text_21
>
> Creation Time: Sat Nov  5 19:24:55 2022
>
> Lifetime: 86400 seconds
>
> 1 events associated with context:
>
> Sat Nov  5 19:24:55 2022
>
> 
>
> Total: 1 elements
>
>
>
> I now feed in the other line:
>
> echo 'INFO  [2022-11-05 16:24:58,506] Action two Thread="Thread21", "' >>
> inp.text
>
>
>
> Rule 3 ends up firing and not rule 2:
>
> Pop the last element of context 'LK_inp.text_21' event store into variable
> '%var1'
>
> Variable '%var1' set to 'Sat Nov  5 19:24:55 2022'
>
> Deleting context 'LK_inp.text_21'
>
> Context 'LK_inp.text_21' deleted
>
> Writing event 'NL Sat Nov  5 19:26:25 2022 Sat Nov  5 19:24:55 2022 why
> only this one? got a line from inp.text for thread 21
> +++_FILE_EVENT_inp.text+++ ===inp.text=== _21_' to file '-'
>
> NL Sat Nov  5 19:26:25 2022 Sat Nov  5 19:24:55 2022 why only this one?
> got a line from inp.text for thread 21 +++_FILE_EVENT_inp.text+++
> ===inp.text=== _21_
>
>
>
> Rule 3 is able to delete the context just fine, so I know the
> $+{_inputsrc} is being evaluated correctly. Can it not be evaluated in the
> context= line?
>
>
>
> Can someone guide me w/ a way to get this work?
>
>
>
> Thanks in advance
>
> Sean
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


[Simple-evcorr-users] new features introduced in SEC-2.9.1

2022-05-04 Thread Risto Vaarandi
hi all,

this email provides an introduction to new features in the 2.9.1 version.

Starting from the 2.9.0 version (released last year), EventGroup rules are
supporting event group patterns which allow for matching specific event
sequences within predefined time windows. For example, suppose you want to
match a sequence of events A and B within the window of 60 seconds,
provided that this sequence ends with subsequence A B A. In order to
accomplish this, you could use the 'egpattern' field together with
EventGroup2 rule:

type=EventGroup2
ptype=SubStr
pattern=EVENT_A
ptype2=SubStr
pattern2=EVENT_B
desc=Sequence of A and B that ends with A B A
action=write - %s
egptype=RegExp
egpattern=1 2 1$
window=60

In the above rule, the 'egpattern' field ensures that the last three events
in the event sequence were matched by the 1st, 2nd, and 1st regular
expression pattern respectively.
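The matching idea can be illustrated in Python (a simplified sketch: it assumes the tokens are checked as a space-separated string, as the example 'egpattern=1 2 1$' suggests; the exact internal representation in SEC is not shown here):

```python
import re

# Each matched event contributes the token of the pattern that matched it;
# the RegExp event group pattern is then applied to the token string.
tokens = ["1", "2", "2", "1", "2", "1"]   # e.g. events A B B A B A
egpattern = re.compile(r"1 2 1$")
print(bool(egpattern.search(" ".join(tokens))))  # True: sequence ends A B A
```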

Starting from SEC-2.9.1, one is no longer limited to numeric tokens that
indicate the matching pattern (for example, tokens 1 and 2 like in the
above rule), but it is possible to define custom tokens. This allows for
implementing useful event correlation schemes, even for EventGroup rules
involving a single event pattern. For example, consider the following rule:

type=EventGroup
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=SSH login failures from three different hosts within 1m for user $1
egtoken=$2
egptype=PerlFunc
egpattern=sub { my(%hosts) = map { $_ => 1 } @{$_[1]}; \
return scalar(keys %hosts) >= 3; }
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3

The above rule sends an email to root@localhost if at least three SSH login
failures were observed within 60 seconds for the same user account, so that
login failures originated from three unique client hosts. In this rule, the
'egtoken' field configures the use of client host IP addresses as tokens.
Also, the 'egpattern' field is a Perl function which takes the list of
tokens as its second input parameter ($_[1]), making sure that the list
contains at least three unique elements (IP addresses).
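The uniqueness check performed by that PerlFunc pattern can be rendered in Python as follows (the function name is an illustrative assumption):

```python
# True when the token list (client IP addresses) contains at least
# `minimum` unique values, mirroring the PerlFunc egpattern above.

def enough_unique_hosts(tokens, minimum=3):
    return len(set(tokens)) >= minimum

print(enough_unique_hosts(["10.0.0.1", "10.0.0.2", "10.0.0.1"]))  # False
print(enough_unique_hosts(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))  # True
```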

The above task can be accomplished with the help of contexts (e.g., this
paper https://ristov.github.io/publications/cogsima15-sec-web.pdf describes
one rule example), but the new features offer an opportunity for writing
more compact solutions.

Hopefully the rule examples from this email provided some insights into how
to use the new features and what tasks they allow you to address.

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


[Simple-evcorr-users] SEC-2.9.1 released

2022-05-04 Thread Risto Vaarandi
hi all,

SEC-2.9.1 has been released which is available from the SEC home page and
through the following download link:
https://github.com/simple-evcorr/sec/releases/download/2.9.1/sec-2.9.1.tar.gz

Here is the changelog for the new version:

--- version 2.9.1

* added support for 'egtoken*' fields in EventGroup rules.

* starting from this version, list of event group string tokens is passed
  to PerlFunc and NPerlFunc event group patterns as an additional parameter.


kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Parsing Asterisk log files for downstream reporting - so far so good!

2022-04-01 Thread Risto Vaarandi
hi Jim,

it is great to hear that the rule example was useful! As for supporting the
project, the best way of doing it is to post interesting questions to the
mailing list :) The discussions in the mailing list will be helpful for all
the subscribers, and will also provide new ideas for further development of
SEC.

kind regards,
risto


Risto,
>
> I tried that out and it looks to be doing what it should!
>
> Thank you so much for generously taking your time to help me with this.
> I'm still trying to wrap my head around how contexts and descriptions and
> such connect it all together, but I am learning!
>
> Is there some way I can support the project by way of thanks?
>
> Warm regards and thanks again. Aitäh!
>
> Jim
>
> --
> Jim Van Meggelen
> ClearlyCore Inc.
>
>
>
> +1-416-639-6001 (DID)
> +1-877-253-2716 (Canada)
> +1-866-644-7729 (USA)
> +1-416-425-6111 x6001
> jim.vanmegge...@clearlycore.com
> http://www.clearlycore.com
>
>
> *Asterisk: The Definitive GuideFIFTH EDITION NOW AVAILABLE TO DOWNLOAD:*
> https://cdn.oreillystatic.com/pdf/Asterisk_The_Definitive_Guide.pdf
>

Re: [Simple-evcorr-users] Parsing Asterisk log files for downstream reporting - so far so good!

2022-03-31 Thread Risto Vaarandi
hi Jim,

if you want to match the "ActivationHelp" event and react to the earliest
"User entered" event, provided that these events share the same caller ID,
you could use the following rule:

type=PAIR
desc=IVR caller $4 offered activation or statement inquiry
ptype=RegExp
action= write - $4 activation or statement inquiry offered
pattern=^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])-([0-2][0-9]:[0-5][0-9]:[0-5][0-9]).*VERBOSE\[([0-9]{3,6})\]\[(C-[0-9a-f]{8})\].*Executing
\[(.+)\@(ivr-welcome:[0-9]{1,3})\] (.+)\(\"([A-Z]{3,10})\/(.+)\",
\"ActivationHelp,prompts.*
ptype2=regexp
rem=pattern2=^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])-([0-2][0-9]:[0-5][0-9]:[0-5][0-9]).*VERBOSE\[([0-9]{3,6})\]\[(C-[0-9a-f]{8})\].*
app_read.c: User entered '?(nothing|[0-9]*)'?.*
pattern2=^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])-([0-2][0-9]:[0-5][0-9]:[0-5][0-9]).*VERBOSE\[([0-9]{3,6})\]\[($4)\].*
app_read.c: User entered '?(nothing|[0-9]*)'?.*
desc2=Flexiti IVR - caller selection at activation or statement inquiry
action2=assign %p flexity; \
assign %i -ivr ; \
write %p%i.csv $1,$2,$4,$5 ; closef %p%i.csv ; \
write - $4 ACTIVATION_STATEMENT %p %i $1 User entered: $5
window=10800
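If it helps, the 'pattern' regular expression can be sanity-checked outside
SEC, for instance with Python's re module (whose dialect is close to the Perl
regexes SEC uses). The sketch below joins the wrapped pattern into a single
line; the log line is adapted from the examples in this thread, with a made-up
caller ID (C-0000138a) in the C-xxxxxxxx hexadecimal form the pattern expects:

```python
import re

# The 'pattern' field of the PAIR rule above, joined into a single line.
pattern = (r'^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])'
           r'-([0-2][0-9]:[0-5][0-9]:[0-5][0-9])'
           r'.*VERBOSE\[([0-9]{3,6})\]\[(C-[0-9a-f]{8})\]'
           r'.*Executing \[(.+)@(ivr-welcome:[0-9]{1,3})\] '
           r'(.+)\("([A-Z]{3,10})/(.+)", "ActivationHelp,prompts.*')

# Hypothetical log line; the caller ID C-0000138a is invented for illustration.
event = ('[2022-02-08-06:18:01:] VERBOSE[24858][C-0000138a] pbx.c: Executing '
         '[8005551234@ivr-welcome:34] Read("SIP/CHANNELNAME-000138ac", '
         '"ActivationHelp,prompts/902/063,1,,,15,") in new stack')

m = re.match(pattern, event)
# In SEC terms: $1 = date, $2 = time, $4 = caller ID
print(m.group(1), m.group(2), m.group(4))
```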

Let me also explain the changes I have made in the original version:

1) in the 'pattern' field, I have replaced "flexity-ivr-welcome" with
"ivr-welcome", so that the example events would be properly matched. Also,
I have set %p in the 'action2' field, since this variable was originally
used without a value.

2) the 'desc' field of the rule has been set to the string 'IVR caller $4
offered activation or statement inquiry'. Since the value of the 'desc'
field determines the ID of the event correlation operation, the presence of
the $4 variable (corresponds to the caller ID) in the 'desc' field ensures
that the rule will start a separate Pair operation for each distinct caller
ID. That will prevent situations where there is only one operation that
mistakenly reacts to "ActivationHelp" and "User entered" events for
different caller IDs.

3) in the 'pattern2' field, "\[(C-[0-9a-f]{8})\]" has been replaced with
"\[($4)\]". The use of the $4 variable in the 'pattern2' field allows for
narrowing the regular expression match to the particular caller ID you are
expecting to see. For instance, after seeing the event

[2022-02-08-06:18:01:] VERBOSE[24858][C-4037] pbx.c: Executing
[8005551234@ivr-welcome:34] Read("SIP/CHANNELNAME-000138ac",
"ActivationHelp,prompts/902/063,1,,,15,") in new stack

the Pair operation is started which expects another event matching the
following regular expression (note that $4 has been replaced with the
caller ID from the previous event):

^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])-([0-2][0-9]:[0-5][0-9]:[0-5][0-9]).*VERBOSE\[([0-9]{3,6})\]\[(C\-4037)\].*
app_read.c: User entered '?(nothing|[0-9]*)'?.*

Without this modification to 'pattern2', the regular expression would match
the "User entered" event for *any* caller ID (that is probably not what you
want).
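To make the $4 substitution concrete, here is a small sketch (Python's re
standing in for SEC's Perl regex engine; the events and the caller ID
C-0000138a are hypothetical, and re.escape() merely approximates the escaping
SEC applies to the substituted value):

```python
import re

caller_id = 'C-0000138a'  # as if captured into $4 by the 'pattern' regex

# 'pattern2' with $4 replaced by the captured caller ID.
pattern2 = (r'^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])'
            r'-([0-2][0-9]:[0-5][0-9]:[0-5][0-9])'
            r'.*VERBOSE\[([0-9]{3,6})\]\[(' + re.escape(caller_id) + r')\].*'
            r"app_read.c: User entered '?(nothing|[0-9]*)'?.*")

# Matches the "User entered" event for that caller ID ...
m = re.match(pattern2, '[2022-02-08-06:18:32:] VERBOSE[24858][C-0000138a] '
                       "app_read.c: User entered '3'")
print(m.group(5))  # prints: 3

# ... but not for any other caller ID.
other = re.match(pattern2, '[2022-02-08-06:18:32:] VERBOSE[24858][C-deadbeef] '
                           "app_read.c: User entered '3'")
print(other)  # prints: None
```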

4) I have also set the "window" parameter to 3 hours (10800 seconds) -- if
for some reason the "User entered" event never appears for the given caller
ID, the Pair operation will not linger forever but will rather time out
after 3 hours.

5) If there is a chance of not seeing the "User entered" event, you can
also utilize the following Single rule after the Pair rule for terminating
the hanging Pair operation when the call is ended:

type=Single
ptype=RegExp
pattern=^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])-([0-2][0-9]:[0-5][0-9]:[0-5][0-9]).*VERBOSE\[([0-9]{3,6})\]\[(C-[0-9a-f]{8})\].*pbx\.c:
Spawn extension \(.*\) exited non-zero on.*
desc=End of call for caller $4
action=reset -1 IVR caller $4 offered activation or statement inquiry

Note that the first parameter of the 'reset' action (-1) indicates that an
event correlation operation started by the previous rule (i.e., the Pair
rule) has to be terminated. And the second parameter "IVR caller $4 offered
activation or statement inquiry" provides the ID of this operation (in
other words, whenever a call with some ID has ended, we will be terminating
the Pair operation for this call ID). Also, if this Pair operation does not
exist for the given call ID (maybe because the ActivationHelp event was
never seen for this particular call), the 'reset' action is a no-op.

However, if you are absolutely confident that the "User entered" event
always occurs after "ActivationHelp", you don't need the above Single rule.
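As a toy illustration of these reset semantics (not SEC's actual
implementation, just a sketch of the bookkeeping, with invented names):
operations live in a table keyed by their 'desc'-derived ID, and 'reset'
simply removes the entry if it exists:

```python
# Toy model of SEC operation IDs and the 'reset' action; names are invented.
operations = {}

def start_pair(caller_id):
    # The 'desc' field with $4 expanded becomes the operation ID.
    op_id = f'IVR caller {caller_id} offered activation or statement inquiry'
    operations.setdefault(op_id, {'caller': caller_id})

def reset(caller_id):
    # 'reset -1 IVR caller $4 ...' terminates the operation with that ID;
    # if no such operation exists, it is a no-op.
    op_id = f'IVR caller {caller_id} offered activation or statement inquiry'
    operations.pop(op_id, None)

start_pair('C-0000138a')
start_pair('C-0000138b')
reset('C-0000138a')   # end-of-call for the first caller
reset('C-00000000')   # no operation for this caller: no-op
print(sorted(op['caller'] for op in operations.values()))
```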

Does this solution work for you? I hope I understood the problem
description correctly, and if there are any other questions or something
that needs improvement, feel free to ask.

kind regards,
risto


Risto,
>
> Absolutely.
>
> The following pattern should catch it:
>
> ^.*\[(20[234][0-9]-[0-9]{2}-[0-3][0-9])-([0-2][0-9]:[0-5][0-9]:[0-5][0-9]).*VERBOSE\[([0-9]{3,6})\]\[(C-[0-9a-f]{8})\].*
> pbx.c: Spawn extension \(.*\) exited non-zero on.*
>
>
> Here's a few lines leading up to it 

Re: [Simple-evcorr-users] Parsing Asterisk log files for downstream reporting - so far so good!

2022-03-31 Thread Risto Vaarandi
hi Jim,
can you provide an example of the log event that indicates the end of the
call?
kind regards,
risto

Jim Van Meggelen () wrote on Thu, 31 March 2022 at 15:45:

> Risto,
>
> Thank you for the reply.
>
> You are correct that the C-[0-9a-f]{8} string uniquely identifies each
> call, and is present on all log lines.
>
> We cannot safely assume the maximum call length; some calls could
> reasonably be 30 minutes or even more. We will be able to identify the end
> of call, though, so a pattern/rule can look for that.
>
> I'm very interested in your thoughts. Thanks again.
>
>
> --
> Jim Van Meggelen
> ClearlyCore Inc.
>
>
>
> +1-416-639-6001 (DID)
> +1-877-253-2716 (Canada)
> +1-866-644-7729 (USA)
> +1-416-425-6111 x6001
> jim.vanmegge...@clearlycore.com
> http://www.clearlycore.com
>
>
> *Asterisk: The Definitive GuideFIFTH EDITION NOW AVAILABLE TO DOWNLOAD:*
> https://cdn.oreillystatic.com/pdf/Asterisk_The_Definitive_Guide.pdf
>

Re: [Simple-evcorr-users] Parsing Asterisk log files for downstream reporting - so far so good!

2022-03-31 Thread Risto Vaarandi
hi Jim,

I do have a couple of things in mind that might help address this issue,
but before coming up with any suggestions, may I ask some questions? As I
understand, the phone call is uniquely identified by the numeral that
follows the C character (C-4037 in your example) which is present in
all log messages for that particular call.

However, is there also a specific log message that denotes the end of the
call? If there is no message indicating the end-of-call, can we assume that
the call has ended if there have been no messages for that call ID for the
last 5 or 10 minutes?

kind regards,
risto

Jim Van Meggelen () wrote on Thu, 31 March 2022 at 07:41:

> I have an atypical use for SEC, in which I'm parsing Asterisk log files to
> produce CSV peg counts for downstream usage reporting.
>
> This is working well and I'm loving this tool!
>
> However, I've run into a problem that I cannot seem to find a way around.
>
> The log file produces events that are tied to a unique ID for each
> channel/call (C-), and using that, plus the timestamp, I can
> produce output that is correlated with a call, and chronological. So far so
> good.
>
> However, there's one event that is difficult to work with, because it can
> happen more than once on a typical phone call, needs to correlate with an
> event that happens a few lines prior, but carries no identifying
> correlation.
>
> Here's an example of the output I'm working with (curated for brevity and
> relevance, and specific to a single unique ID; there are actually hundreds
> of lines of output per-call):
>
> [2022-02-08-06:17:53:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-welcome:9] Read("SIP/CHANNELNAME-000138ac",
> "EarlyChoice,prompts/001,1,,0,0.01") in new stack
> [2022-02-08-06:17:53:] VERBOSE[24858][C-4037] app_read.c: Accepting a
> maximum of 1 digits.
> [2022-02-08-06:17:53:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/001.slin' (language 'en')
> [2022-02-08-06:17:55:] VERBOSE[24858][C-4037] app_read.c: User entered
> nothing.
> --
> [2022-02-08-06:17:55:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-welcome:13] Read("SIP/CHANNELNAME-000138ac",
> "EarlyChoice,prompts/001,1,,0,0.01") in new stack
> [2022-02-08-06:17:55:] VERBOSE[24858][C-4037] app_read.c: Accepting a
> maximum of 1 digits.
> [2022-02-08-06:17:55:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/001.slin' (language 'fr')
> [2022-02-08-06:17:58:] VERBOSE[24858][C-4037] app_read.c: User entered
> nothing.
> --
> [2022-02-08-06:17:58:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-welcome:17] Read("SIP/CHANNELNAME-000138ac",
> "LanguageChoice,prompts/002,1,,,0.01") in new stack
> [2022-02-08-06:17:58:] VERBOSE[24858][C-4037] app_read.c: Accepting a
> maximum of 1 digits.
> [2022-02-08-06:17:58:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/002.slin' (language 'en')
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] app_read.c: User entered
> nothing.
> --
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-welcome:21] Read("SIP/CHANNELNAME-000138ac",
> "LanguageChoice,prompts/002,1,,1,15") in new stack
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] app_read.c: Accepting a
> maximum of 1 digits.
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/002.slin' (language 'fr')
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] app_read.c: User entered
> '1'
> --
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-welcome:34] Read("SIP/CHANNELNAME-000138ac",
> "ActivationHelp,prompts/902/063,1,,,15,") in new stack
> [2022-02-08-06:18:01:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/902.slin' (language 'en')
> [2022-02-08-06:18:16:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/063.slin' (language 'en')
> [2022-02-08-06:18:32:] VERBOSE[24858][C-4037] app_read.c: User entered
> '3'
> --
> [2022-02-08-06:19:36:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-MainMenu:2] Read("SIP/CHANNELNAME-000138ac",
> "ListenForZero,prompts/044,1,,,0.01") in new stack
> [2022-02-08-06:19:36:] VERBOSE[24858][C-4037] app_read.c: Accepting a
> maximum of 1 digits.
> [2022-02-08-06:19:36:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/044.slin' (language 'en')
> [2022-02-08-06:19:46:] VERBOSE[24858][C-4037] app_read.c: User entered
> nothing.
> --
> [2022-02-08-06:19:55:] VERBOSE[24858][C-4037] pbx.c: Executing
> [8005551234@ivr-MainMenu:8] Read("SIP/CHANNELNAME-000138ac",
> "ListenForZero,prompts/045,1,,,0.01") in new stack
> [2022-02-08-06:19:55:] VERBOSE[24858][C-4037] app_read.c: Accepting a
> maximum of 1 digits.
> [2022-02-08-06:19:55:] VERBOSE[24858][C-4037] file.c:
>  Playing 'prompts/045.slin' (language 'en')
> [2022-02-08-06:19:57:] VERBOSE[24858][C-4037] app_read.c: User entered
> nothing.
> --
> 

[Simple-evcorr-users] update in the sec rule repository

2021-11-20 Thread Risto Vaarandi
hi all,

just a small note -- the JSON parsing example in the sec rule repository
has been updated:
https://github.com/simple-evcorr/rulesets/tree/master/parsing-json

Instead of the perl JSON wrapper module which will call some backend
module, the updated example directly employs fast and efficient
Cpanel::JSON::XS module (the module has been packaged for major Linux and
BSD distributions and can thus be easily installed).

Due to this update, a new ruleset tarball has also been released:
https://github.com/simple-evcorr/rulesets/releases

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] An assist with a rule

2021-07-08 Thread Risto Vaarandi
>
> Beautiful..thank you SO much Risto.  I love the fact that you not only 
> provide something I can use right away, but took the time to explain how it 
> works.  I learn more about SEC every time I email the list.
>
> Thanks again!
>
> James
>

It is great to hear that the examples from my post were helpful :)
risto


___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] An assist with a rule

2021-07-08 Thread Risto Vaarandi
hi James,

yes, you can employ the EventGroup rule for addressing this task, and
let me provide two slightly different solutions below. The first and
somewhat simpler solution looks like this:

type=EventGroup
ptype=RegExp
pattern=^\d+\.\d+ \S+ ([\d.]+) \d+ ([\d.]+) 445 \d+\.\d+
\\pipe\\lsass netlogon DsrEnumerateDomainTrusts
context=!ATTACKER_$1_DC_$2
count=alias ATTACKER_$1 ATTACKER_$1_DC_$2
init=create ATTACKER_$1
end=delete ATTACKER_$1
desc=Attacker $1 has enumerated domain trust against several domain controllers
action=write - %s
thresh=3
window=30

The regular expression pattern of the rule has been designed to match
events in the format you have described, except the expression assumes
that instead of SameIP and DifferentIP there are IPv4 addresses in the
event. For example:

1625677087.655857 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445
0.017172 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts

Also, it is assumed that the first IP address (10.1.1.1 in the above
event) belongs to the attacker, while the second IP address
(192.168.1.1) belongs to the domain controller. The above EventGroup
rule runs event correlation operations that write alerts to standard
output if the same attacker accesses three different domain
controllers within 30 seconds. Since the 'desc' field of the rule has
a value with $1 variable (IP address of the attacker), the rule runs a
separate event correlation operation for each attacker, with each
operation having its own event counter.

In order to make sure that access to each domain controller from the
same attacker is counted just once by the corresponding operation, the
above rule employs context aliases which refer to the same context
data structure. The use of just one context with multiple alias names
for each attacker eases the garbage collection procedure when event
correlation operation for the attacker is complete. Each time the rule
observes an event for a new attacker, it creates a new event
correlation operation for this attacker, and the initialization action
'create ATTACKER_$1' provided with the 'init' field is executed. This
action will create the context ATTACKER_<attacker ip> that all future
alias names for this attacker will refer to.

When an event for attacker <attacker ip> and domain controller <dc ip>
is observed, the rule will check if the context alias
ATTACKER_<attacker ip>_DC_<dc ip> exists (that check is implemented by
the 'context' field of the rule). If that alias is not present, the
event gets processed, and the event correlation operation that is
executing for attacker <attacker ip> will create a new alias with that
name, with the alias pointing to context ATTACKER_<attacker ip> (the
relevant action is defined in the 'count' field of the rule). For
example, when the event

1625677087.655857 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445
0.017172 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts

is observed, the alias ATTACKER_10.1.1.1_DC_192.168.1.1 gets created
which points to context ATTACKER_10.1.1.1. When the event for the
attacker 10.1.1.1 and domain controller 192.168.1.1 appears again, the
boolean expression !ATTACKER_10.1.1.1_DC_192.168.1.1 provided with the
'context' field evaluates false, and the event is no longer matching
the rule. This ensures that when multiple events for the same domain
controller and attacker combination appear, only the first event is
counted. Therefore, if an event correlation operation is executing for
attacker 10.1.1.1 and its event counter reaches the value of 3, three
events have been observed for attacker 10.1.1.1 for different domain
controller IP addresses. When the event correlation operation for some
attacker IP terminates, the garbage collection action given with the
'end' field is executed, and 'delete ATTACKER_$1' action will remove
the context ATTACKER_<attacker ip> and all its aliases.
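The context/alias bookkeeping above can be mimicked with a few lines of
Python (a toy model, not SEC itself: contexts and aliases are reduced to a
per-attacker set of already-counted domain controllers, and the 30-second
window is omitted for brevity):

```python
# Toy model of the EventGroup rule's counting; the threshold and field
# positions follow the rule above, everything else is simplified.
THRESH = 3
seen_dcs = {}   # attacker IP -> set of DC IPs already counted (the "aliases")
alerts = []

def process(event):
    fields = event.split()
    attacker, dc = fields[2], fields[4]
    dcs = seen_dcs.setdefault(attacker, set())   # "create ATTACKER_<ip>"
    if dc in dcs:                                # alias exists: not counted
        return
    dcs.add(dc)                                  # "alias ... ATTACKER_<ip>_DC_<dc>"
    if len(dcs) == THRESH:
        alerts.append(f'Attacker {attacker} has enumerated domain trust '
                      f'against several domain controllers')
        del seen_dcs[attacker]                   # "delete ATTACKER_<ip>"

events = [
    '1625677087.655857 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445 ...',
    '1625677087.655860 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445 ...',
    '1625677087.744290 C6mmd42niEVcW1djY5 10.1.1.1 59144 192.168.1.2 445 ...',
    '1625677087.776183 CamRzj1Y20KA9cv9Mi 10.1.1.1 59145 192.168.1.3 445 ...',
]
for e in events:
    process(e)
print(alerts)   # one alert: the duplicate DC event was counted only once
```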

In order to illustrate the work of this rule, consider the following
four events for attacker 10.1.1.1 and three different domain
controllers:

1625677087.655857 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445
0.017172 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts
1625677087.655860 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445
0.017172 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts
1625677087.744290 C6mmd42niEVcW1djY5 10.1.1.1 59144 192.168.1.2 445
0.003689 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts
1625677087.776183 CamRzj1Y20KA9cv9Mi 10.1.1.1 59145 192.168.1.3 445
0.003968 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts

Here is the debug log of SEC which illustrates the event correlation process:

1625677087.655857 CAY8g9D3jAKFwhD02 10.1.1.1 59143 192.168.1.1 445
0.017172 \\pipe\\lsass netlogon DsrEnumerateDomainTrusts   <- first
input event
Creating context 'ATTACKER_10.1.1.1'  <- an event correlation
operation starts for attacker 10.1.1.1 and context ATTACKER_10.1.1.1
is created
Creating alias 'ATTACKER_10.1.1.1_DC_192.168.1.1' for context
'ATTACKER_10.1.1.1'  <- a context alias for attacker 10.1.1.1 and
DC 192.168.1.1 is created and event counter of the operation for

Re: [Simple-evcorr-users] reopening inputfile inconsistently fails

2021-05-26 Thread Risto Vaarandi
hi Brian,

let me provide my comments below in inline fashion:

>
> I've seen a log rotation where the input file did not get re-opened, and am 
> working on troubleshooting that.
>
> For the SEC process that failed, sending a SIGUSR2 failed, but sending a 
> SIGABRT worked.
> (both sent as the same user as the process owner)

To clarify the purpose of SIGUSR2 a bit -- this signal has indeed been
designed for handling file rotations, but it only works for SEC
outputs (for example, files created with 'write' action) and SEC log
file (specified with --log command line option). As for handling input
file rotations, it happens automatically and there is no need for
sending a specific signal to SEC. For each input file, its inode
number is monitored and whenever it changes, the input file has been
rotated and will be reopened. As for the SIGABRT signal, it forces SEC
to reopen all input files if --nokeepopen command line option has been
provided, but by default SEC only attempts to open those input files
which are currently in the closed state.

>
> The input file for that process is an NFS mounted read-only backed file 
> system, to which I have no real access for experimentation.
>
> I created a baby SEC config file for testing, and specified a local 
> filesystem, and was unable to recreate the failure.
> I tried using "mv input_orig input_new; touch input_new", and "cp /dev/null 
> >> input_orig".
> I used non-detached mode, as opposed to detached mode for the failing config, 
> though I wouldn't expect that to make a difference.
>
> I'm currently thinking that it might have to do with the NFS mount options, 
> perhaps specifically the locking methods, or maybe the soft vs. hard mount.

Since handling log rotations depends on file inode numbers, I suspect
the file inode number occasionally stays the same after rotation on an
NFS mounted file system. To find out what actually happened, it is
best to look into the SEC log file. Whenever a rotated input file has
been detected, the message "Input file <name> has been recreated" will
be written into the log file before the input file is reopened (if the
input file is truncated without an inode number change, the message
"Input file <name> has been truncated" is logged). If you are
currently not collecting SEC log messages, I'd recommend activating
logging with the --log command line option and in order to keep the
log file smaller, you can exclude debug-level messages with the --debug=5
command line option. Once the issue surfaces again, log messages will
provide a clue what actually happened.

For finding out what information SEC currently has about input files,
you can let SEC generate a dump file with SIGUSR1 signal and then look
into the "Input sources" section in the dump file. Here is an example
fragment from this section:

/var/log/sshd.log (status: Open, type: regular file, read offset: 256,
file size: 256, device/inode: 2065/6449643528, received data: 8123
lines, context: _FILE_EVENT_SSHD)

From the above information, you can see the device and inode numbers
of the input file. With /usr/bin/stat tool you can find out what
device and inode numbers are reported for that file by the NFS server
and if these numbers are the same you can see in the SEC dump file
(these numbers should always be the same, apart from a very small time
frame before SEC handles the rotated file).
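The inode-based rotation detection that SEC relies on can be demonstrated in
a few lines (a sketch using Python's os.stat(); on a local POSIX filesystem
the recreated file must get a new inode while the renamed file still holds
the old one):

```python
import os
import tempfile

def inode(path):
    return os.stat(path).st_ino

d = tempfile.mkdtemp()
log = os.path.join(d, 'app.log')

with open(log, 'w') as f:          # original input file
    f.write('line 1\n')
old_inode = inode(log)

os.rename(log, log + '.1')         # rotation: old file keeps its inode
with open(log, 'w') as f:          # new file appears under the old name
    f.write('line 2\n')

# The inode change is what makes SEC reopen the input file; on a
# misbehaving NFS client this change may be reported late or not at all.
print(inode(log) != old_inode)
```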

>
> The mtab entry (redhat 7.9) for this includes the following options:
>
> foo.ucsd.edu:/remotefilesystem /localfilemountdir nfs 
> ro,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,soft,nolock,proto=udp,timeo=11,retrans=3,sec=sys,mountaddr=AAA.BBB.CCC.DDD,mountvers=3,mountport=4002,mountproto=udp,local_lock=all,addr=AAA.BBB.CCC.DDD
>  0 0
>
>
> The same SEC config running on a Solaris 11 box with the same NFS mounted 
> filesystem, doesn't have this problem.  The mnttab file there has these 
> options:
>
> foo.ucsd.edu:/remotefilesystem /localfilenountdir nfs 
> ro,nodevices,noquota,vers=3,proto=tcp,xattr,zone=ratbert2,sharezone=1,dev=9540001
>1613782895
>
> Ideas anyone?

From different forum posts I found a discussion on the 'actimeo' file
system option which can be used for setting the caching time for file
attributes. If the current value is too large, the NFS client might
cache file attributes for too long and not detect the inode number
change in a timely fashion. But it is just a guess and for
investigating this issue more closely, some experiments with a test
NFS server are needed.

Hope this helps,
risto

>
> --
> Brian Parent
> Information Technology Services Department
> ITS Computing Infrastructure Operations Group
> its-ci-ops-h...@ucsd.edu (team email address for Service Now)
> UC San Diego
> (858) 534-6090
>
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


___
Simple-evcorr-users mailing 

Re: [Simple-evcorr-users] SEC-2.9.0 released

2021-05-24 Thread Risto Vaarandi
hi Jaakko,

thanks a lot for packaging the new version for Debian!!

kind regards,
risto


>
> Hi,
>
> Debian is currently pretty frozen and in the process of releasing a new
> version, so I made the package temporarily available at:
>
> https://liiwi.idle.fi/sec/
>
> I'll remove it from there when it's uploaded to Debian. The package
> also installs cleanly to current stable.
>
> -Jaakko


___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


[Simple-evcorr-users] SEC-2.9.0 released

2021-05-12 Thread Risto Vaarandi
hi all,

SEC-2.9.0 has been released which is available from the SEC home page
(use the following link for direct download:
https://github.com/simple-evcorr/sec/releases/download/2.9.0/sec-2.9.0.tar.gz).

Here is the changelog for the new version:

* added support for 'cmdexec', 'spawnexec', 'cspawnexec', 'pipeexec'
  and 'reportexec' actions.
* added support for 'shell' field in SingleWithScript rules.
* added support for 'egptype' and 'egpattern' fields in EventGroup rules.
* added support for %.sp built-in action list variable.
* added ipv6 support for 'tcpsock' and 'udpsock' actions.
* bugfixes for 'write', 'writen', 'owritecl', 'udgram', 'ustream',
  'udpsock' and 'tcpsock' actions (exceptions from syswrite() and send()
  are now handled, and 'ustream' action no longer blocks on Linux when
  peer backlog queue is full).
* improved socket handling routines.
* improved error reporting for invalid command line arguments.
* starting from this version, a program provided with --timeout-script
  command line option is executed without shell interpretation.
* starting from this version, SEC uses Perl JSON::PP module instead of
  JSON module (JSON::PP is included in the standard Perl installation).


kind regards,
risto




[Simple-evcorr-users] SEC-2.9.alpha2 released

2021-04-06 Thread Risto Vaarandi
hi all,

SEC-2.9.alpha2 has been released and can be downloaded from:
https://github.com/simple-evcorr/sec/releases/download/2.9.alpha2/sec-2.9.alpha2.tar.gz
The download link is also provided on the SEC home page.

Compared to the 2.9.alpha1 version, this version introduces a number
of improvements to socket handling routines.

kind regards,
risto




[Simple-evcorr-users] 20th birthday of SEC

2021-03-23 Thread Risto Vaarandi
hi all,

on March 23 2001, SEC version 1.0 was released into the public domain. I
would like to take the opportunity to thank all SEC users for creative
discussions in this mailing list during the last two decades. I would
also like to thank all people who have suggested new features or
supplied software and documentation fixes. I am especially grateful to
John P. Rouillard for many design proposals and new ideas that are now
part of the SEC code. Finally, my thanks also go to the long-term SEC
package maintainers for their continuous work over more than a
decade -- Jaakko Niemi (Debian and Ubuntu), Stefan Schulze
Frielinghaus (RHEL, CentOS and Fedora), Malcolm Lewis (SLE and
openSUSE), Okan Demirmen (OpenBSD), and all other package maintainers
for platforms I might not be aware of.

Thank you all!

risto




Re: [Simple-evcorr-users] executing multiple actions

2021-03-16 Thread Risto Vaarandi
hi Stuart,

I just saw a post with almost the same question as the previous one
(perhaps it was posted before my answer reached your mailbox), so my
apologies if the information in this email is redundant.

>
> Correction -- this also produces the same error
>
> But this does not:
> # - Radius Auth Failure -
> #
> type=SingleWithSuppress
> ptype=regexp
> pattern=T(\d\d:\d\d:\d\d).*? (.*?) poll-radius.*?Radius auth request against 
> (.*?) failed
> desc=$3 Radius auth request failed
> action=write /home/tocops/.tocpipe ops $1 Radius on $3 failed; 
> action=shellcmd /opt/local/script/send-sms -m %s -s sec -r stuartk

In the above line, the 'action' keyword appears twice, which is the
reason for the syntax error.
To fix the problem, remove the 'action' keyword in front of the second
action (shellcmd) and rewrite the line as:

action=write /home/tocops/.tocpipe ops $1 Radius on $3 failed;
shellcmd /opt/local/script/send-sms -m %s -s sec -r stuartk

> window=60
>
> --sk
>

kind regards,
risto




Re: [Simple-evcorr-users] executing multiple actions

2021-03-16 Thread Risto Vaarandi
hi Stuart,

if you want to specify multiple actions in the 'action' field of the
rule, a semicolon should indeed be used as a separator. However, the
'action' keyword with an equal sign should appear just once, at the
beginning of the rule field definition. Therefore, the example rule
from your post would need one small modification:

type=SingleWithSuppress
ptype=regexp
pattern=T(\d\d:\d\d:\d\d).*? (.*?) poll-radius.*?Radius auth request
against (.*?) failed
desc=$3 Radius auth request failed
action=write /home/tocops/.tocpipe ops $1 Radius on $3 failedwindow=5;
shellcmd /opt/local/script/send-sms -m %s -s sec -r stuartk
window=60

hope this helps,
risto

Stuart Kendrick () wrote on Wed, 17 March 2021 at 00:48:
>
> I am struggling to execute multiple actions.  I don't see mention of how to 
> execute multiple actions in the sec man page 
> http://simple-evcorr.github.io/man.html ... but from this page:
> http://simple-evcorr.sourceforge.net/SEC-tutorial/article.html
> I believed that separating actions with semi-colons would be sufficient
>
>
> But perhaps not
>
>
> This works:
> # - Radius Auth Failure -
> #
> type=SingleWithSuppress
> ptype=regexp
> pattern=T(\d\d:\d\d:\d\d).*? (.*?) poll-radius.*?Radius auth request against 
> (.*?) failed
> desc=$3 Radius auth request failed
> action=shellcmd /opt/local/script/send-sms -m %s -s sec -r stuartk
> window=60
>
> As does this:
> # - Radius Auth Failure -
> #
> type=SingleWithSuppress
> ptype=regexp
> pattern=T(\d\d:\d\d:\d\d).*? (.*?) poll-radius.*?Radius auth request against 
> (.*?) failed
> desc=$3 Radius auth request failed
> action=write /home/tocops/.tocpipe ops $1 Radius on $3 failedwindow=5
> window=60
>
> But this does not:
> # - Radius Auth Failure -
> #
> type=SingleWithSuppress
> ptype=regexp
> pattern=T(\d\d:\d\d:\d\d).*? (.*?) poll-radius.*?Radius auth request against 
> (.*?) failed
> desc=$3 Radius auth request failed
> action=write /home/tocops/.tocpipe ops $1 Radius on $3 failedwindow=5; 
> action=shellcmd /opt/local/script/send-sms -m %s -s sec -r stuartk
> window=60
>
>
> 2021-03-16T15:09:56.146094-07:00 vishnu sec[7941]: Reading configuration from 
> /opt/local/etc/sec/service.conf
> 2021-03-16T15:09:56.146198-07:00 vishnu sec[7941]: Rule in 
> /opt/local/etc/sec/service.conf at line 8: Invalid action 'action=shellcmd 
> /opt/local/script/send-sms -m %s -s sec -r stuartk'
> 2021-03-16T15:09:56.146289-07:00 vishnu sec[7941]: Rule in 
> /opt/local/etc/sec/service.conf at line 8: Invalid action list ' write 
> /home/tocops/.tocpipe ops $1 Radius on $3 failedwindow=5; action=shellcmd 
> /opt/local/script/send-sms -m %s -s sec -r stuartk '
> 2021-03-16T15:09:56.146363-07:00 vishnu sec[7941]: No valid rules found in 
> configuration file /opt/local/etc/sec/service.conf
>
> Is executing multiple actions supported?  If so, do I need more than a 
> semi-colon in terms of syntax?
>
> --sk
>
> Stuart Kendrick
> Allen Institute
>
>




[Simple-evcorr-users] new features in 2.9.alpha1 version

2021-03-13 Thread Risto Vaarandi
hi all,

this email provides a more detailed description of major new features
in SEC-2.9.alpha1.

Firstly, one can use the 'egptype' and 'egpattern' fields in EventGroup
rules to specify an additional event group matching condition on top of
the conventional threshold conditions. The 'egptype' and 'egpattern'
fields define the *event group pattern* which can be a regular
expression, string pattern, or a Perl function. The event group pattern
is matched against the *event group string* which reflects all events
the event correlation operation has seen within its event correlation
window.

For example, consider the EventGroup2 operation which has observed
three events, so that the earliest event has matched its 'pattern'
field, and the following two events its 'pattern2' field. In that
case, the event group string is "1 2 2". The event group string is
matched with the event group pattern only after *all* traditional
numeric threshold conditions have evaluated true.

To illustrate how event group patterns work, consider the following
EventGroup2 rule:

type=EventGroup2
ptype=SubStr
pattern=EVENT_A
thresh=2
ptype2=SubStr
pattern2=EVENT_B
thresh2=2
desc=Sequence of two or more As and Bs with 'A B' at the end
action=write - %s
egptype=RegExp
egpattern=1 2$
window=60

Also, suppose the following events occur, and each event timestamp
reflects the time SEC observes the event:

Mar 10 12:05:31 EVENT_B
Mar 10 12:05:32 EVENT_B
Mar 10 12:05:38 EVENT_A
Mar 10 12:05:39 EVENT_A
Mar 10 12:05:42 EVENT_B

When these events are observed by the above EventGroup2 rule, the rule
starts an event correlation operation at 12:05:31. When the fourth
event appears at 12:05:39, all threshold conditions (thresh=2 and
thresh2=2) become satisfied. Note that without 'egptype' and
'egpattern' rule fields, the operation would execute the 'write'
action. However, since these fields are present, the event group
string "2 2 1 1" is built from the first four events, and this
string is matched against the regular expression 1 2$ (the event group
pattern provided with the 'egpattern' field). Since there is no match,
the operation will *not* execute the 'write' action given with the
'action' field.

When the fifth event appears at 12:05:42, all threshold conditions are
again satisfied, and all observed events produce the following event
group string: "2 2 1 1 2". Since this time the event group string
matches the regular expression given with the 'egpattern' field, the
operation will write the string "Sequence of two or more As and Bs
with 'A B' at the end" to standard output with the 'write' action.

To summarize, the 'egptype' and 'egpattern' fields allow for matching
specific event sequences in a given time window (e.g., one can verify
that events appear in a specific order).

The 2.9.alpha1 version also supports five new actions: 'cmdexec',
'spawnexec', 'cspawnexec', 'pipeexec', and 'reportexec'. These actions
are similar to the 'shellcmd', 'spawn', 'cspawn', 'pipe', and 'report'
actions, but they execute command lines without shell interpretation.
For example, consider the following action definition:

cmdexec rm /tmp/report*

This action will execute the command line 'rm /tmp/report*', but
unlike the 'shellcmd' action, the asterisk is not treated as a file
pattern but just as a file name character. Therefore, the action will
remove the file with the name "/tmp/report*", and not the files
/tmp/report1 and /tmp/report2 if they are present in the /tmp
directory.
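Since SEC is written in Perl, the difference between shell and non-shell execution can be illustrated with Perl's own facilities (a sketch of the general mechanism, not SEC's actual code; the scratch directory name is invented for the demo):

```perl
use strict;
use warnings;

# Scratch directory with two files that the glob report* matches:
my $dir = "/tmp/sec_demo.$$";
mkdir $dir or die "mkdir: $!";
for my $f ("report1", "report2") {
    open my $fh, '>', "$dir/$f" or die "open: $!";
    close $fh;
}

# With shell interpretation (as with 'shellcmd'), the shell expands
# the asterisk into the matching file names:
my $with_shell = `echo $dir/report*`;

# Without shell interpretation (as with 'cmdexec'), the argument
# reaches the program verbatim; list-form open skips the shell:
open my $pipe, '-|', 'echo', "$dir/report*" or die "open: $!";
my $without_shell = <$pipe>;
close $pipe;

print $with_shell;       # both file names, expanded by the shell
print $without_shell;    # the literal argument containing '*'

unlink "$dir/report1", "$dir/report2";
rmdir $dir;
```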

The new actions in 2.9.alpha1 allow external programs to be executed
in a more secure way, avoiding unexpected side effects if shell
metacharacters are injected into command lines. Also, in the new
version the SingleWithScript rule has an additional 'shell' rule field
for running external programs with or without shell interpretation.

Finally, the new version is no longer using the Perl JSON module and
has switched to JSON::PP, since unlike the JSON module, JSON::PP is a
part of the standard Perl distribution. This means that SEC now uses
only standard Perl modules which ship with Perl, and
does not require any additional modules. Since Sys::Syslog and
JSON::PP modules might be missing from some old Perl distributions
(Perl versions 5.8, 5.10 and 5.12 usually don't have them installed by
default), the presence of these modules is not mandatory. If you have
such an old Perl distribution and don't want to install Sys::Syslog
and JSON::PP manually, SEC will simply run with a couple of
non-essential features disabled, producing a warning message if you
attempt to use these features.
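A runtime check for an optional module can be sketched in Perl roughly as follows (an illustration of the general technique, not SEC's actual code):

```perl
use strict;
use warnings;

# Probe for an optional module at runtime; if it is missing, the
# corresponding features can be disabled instead of aborting:
my $have_json = eval { require JSON::PP; 1 } ? 1 : 0;

if ($have_json) {
    # JSON::PP is available, so JSON features can be used:
    print JSON::PP::encode_json({ event => "EVENT_A" }), "\n";
} else {
    warn "JSON::PP not available, JSON features disabled\n";
}
```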

As for other new features and changes in SEC-2.9.alpha1, please see
the changelog of the new version.

kind regards,
risto




[Simple-evcorr-users] SEC-2.9.alpha1 released

2021-03-12 Thread Risto Vaarandi
hi all,

SEC-2.9.alpha1 has been released and is available for download from
the SEC home page (link for direct download:
https://github.com/simple-evcorr/sec/releases/download/2.9.alpha1/sec-2.9.alpha1.tar.gz).

This version is an alpha version of the upcoming 2.9 major release,
and it introduces a number of improvements, including five new actions
and enhancements into EventGroup rule. Trying out the new version and
providing feedback is very much appreciated :)

Here is the changelog for the 2.9.alpha1 version:

* added support for 'cmdexec', 'spawnexec', 'cspawnexec', 'pipeexec'
  and 'reportexec' actions.

* added support for 'shell' field in SingleWithScript rules.

* added support for 'egptype' and 'egpattern' fields in EventGroup rules.

* added support for %.sp built-in action list variable.

* added ipv6 support for 'tcpsock' and 'udpsock' actions.

* bugfixes for 'write', 'writen', 'owritecl', 'udgram', 'ustream',
  'udpsock' and 'tcpsock' actions.

* starting from this version, a program provided with --timeout-script
  command line option is executed without shell interpretation.

* starting from this version, SEC uses Perl JSON::PP module instead of
  JSON module (JSON::PP is included in the standard Perl installation).

kind regards,
risto




[Simple-evcorr-users] an update to SEC FAQ

2021-02-28 Thread Risto Vaarandi
hi all,

for your information, the SEC FAQ has been updated with an example
about matching input lines in UTF-8 and other encodings:
https://simple-evcorr.github.io/FAQ.html#23
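As a rough illustration of the issue that FAQ entry covers, input bytes must be decoded before a regular expression can see characters rather than bytes (a minimal sketch; the sample string is invented, and the FAQ entry itself has the authoritative example):

```perl
use strict;
use warnings;
use Encode qw(decode);

# Raw UTF-8 bytes of the string "kõne lõpp";
# each character õ occupies two bytes:
my $raw = "k\xc3\xb5ne l\xc3\xb5pp";

# Decoding turns the byte string into a character string, so that
# regular expressions and length() operate on characters:
my $line = decode('UTF-8', $raw);

print length($raw), " bytes, ", length($line), " characters\n";
# prints "11 bytes, 9 characters"
```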

kind regards,
risto




Re: [Simple-evcorr-users] Use global variables

2020-12-16 Thread Risto Vaarandi
hi Agustin,

Currently, there are no variables that can be set in one rule and
accessed in *all* fields of other rules.
However, the same action list variable can be accessed in all rules,
although the use of action list variables is limited to action* rule
fields only.
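For illustration, here is a hypothetical sketch of that approach with invented names (the variable %msg1 and the rules are not from the original post); the first rule assigns the variable on the SEC_STARTUP internal event, which requires the --intevents command line option:

```
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=initialize shared action list variables
action=assign %msg1 msg 1

type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=use of %msg1
action=write - $1 %msg1
```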

kind regards,
risto

Agustín Lara Romero () wrote on Wed, 16 December 2020 at 12:19:
>
> Hi Risto,
> Is it possible to use global variables?
> For example:
> VARIABLE1 = msg 1
> VARIABLE2 =  msg 2
>
> and after use these variables in a different rules:
> type=Single
> ptype=RegExp
> pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
> desc=use of VARIABLE1
> action=logonly $1 $VARIABLE1
>
> type=Single
> ptype=RegExp
> pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
> desc=use of VARIABLE2
> action=logonly $1 $VARIABLE2
>
> Thanks you!
>
>




Re: [Simple-evcorr-users] How can I trigger all outstanding context actions on SEC_SHUTDOWN?

2020-12-16 Thread Risto Vaarandi
...one additional note -- the official SEC documentation on the 'while'
action has other relevant examples about processing context event
stores (you can find them at the end of the "ACTIONS, ACTION LISTS AND
ACTION LIST VARIABLES" section of the SEC man page:
https://simple-evcorr.github.io/man.html#lbAI).

kind regards,
risto


>
> hi Penelope,
>
> since 'obsolete' is a SEC action, it can not be called in Perl, but
> you rather need some sort of loop written in the SEC rule language.
> Fortunately, SEC supports the 'while' action that executes an action
> list as long as the given action list variable evaluates true in
> boolean context. That allows you to write a loop for processing a
> context event store, since there is 'getsize' action for finding the
> number of events in the store, and 'shift' (or 'pop') action for
> removing an element from the beginning (or end) of the store. For
> taking advantage of this functionality for your task, you just have to
> write relevant context names into the event store of some context, and
> then process this context with a loop.
>
> Here is an example ruleset that illustrates the idea:
>
> type=Single
> ptype=SubStr
> pattern=SEC_SHUTDOWN
> context=SEC_INTERNAL_EVENT
> desc=Save contexts msg_* into /tmp/report.* on shutdown
> action=lcall %ret -> ( sub { join("\n", grep { /^msg_/ } keys
> %main::context_list) } ); \
>fill BUFFER %ret; getsize %size BUFFER; \
>while %size ( shift BUFFER %name; obsolete %name; getsize %size BUFFER 
> )
>
> type=single
> ptype=regexp
> pattern=create (\S+)
> desc=create the $1 context
> action=create $1 3600 ( report $1 /bin/cat > /tmp/report.$1 )
>
> type=single
> ptype=regexp
> pattern=add (\S+) (.+)
> desc=add string $2 to the $1 context
> action=add $1 $2
>
> The 'lcall' action in the first rule executes the following Perl code:
> join("\n", grep { /^msg_/ } keys %main::context_list)
> This code is matching all context names with the "msg_" prefix and
> joining such names into a multiline string.
> The following 'fill' action splits this multiline string by newline,
> and writes individual context names into the event store of the BUFFER
> context.
> The number of context names in the event store is then established
> with getsize %size BUFFER, and then the 'while' loop gets executed:
> while %size ( shift BUFFER %name; obsolete %name; getsize %size BUFFER)
> Inside the loop, context names are taken from the event store one by
> one, and the 'obsolete' action is called for each context name.
>
> One note of caution -- 'obsolete' triggers the 'report' action which
> forks a separate process, and a forked process has 3 seconds for
> finishing its work before receiving TERM signal from SEC (if the
> process has to run longer, a signal handler must be set up for TERM).
>
> Hopefully the above rule example is useful.
>
> kind regards,
> risto
>
>
>
> sec-user--- via Simple-evcorr-users () wrote on Tue,
> 15 December 2020 at 01:39:
> >
> > Hello!
> >
> > I'm dabbling with SEC, experimenting with adding lines into contexts and 
> > only when the context is finished, decide what to do with it.  Essentially 
> > it's taking a look at the group of log messages emitted by sendmail for 
> > every connection, looking for behaviour that is not consistent with being 
> > an honored guest on the internet, and blocking the source with iptables and 
> > ipset.
> >
> > The problem is that I'm testing with the same input file over and over, but 
> > the 'report' actions aren't running because the entire log file is 
> > processed in less than 10 seconds:
> >
> > sec --conf sendmail.test \
> >   --input /tmp/all.logs \
> >   --fromstart \
> >   --notail \
> >   --bufsize=1 \
> >   --log=- \
> >   --intevents \
> >   --intcontexts \
> >   --debug=50
> >
> > Rather than write some perl to run in the SEC_SHUTDOWN internal event to 
> > write the context buffers to files, I'd really rather just run the 
> > 'obsolete' action on all contexts.  Is there a straightforward way to do 
> > that?
> >
> > type=Single
> > ptype=SubStr
> > pattern=SEC_SHUTDOWN
> > context=SEC_INTERNAL_EVENT
> > desc=Save contexts msg_* into /tmp/report.* on shutdown
> > action=logonly; lcall %ret -> ( sub { my($context); \
> > foreach $context (keys %main::context_list) { obsolete $context; } \
> > } )
> >
> > Mon Dec 14 14:49:34 2020: Code 'CODE(0x560fca302fb8)' runtime error: Can't 
> > locate object method "obsolete" via package "msg_sendmail[4208]" (perhaps 
> > you forgot to load "msg_sendmail[4208]"?) at (eval 9) line 1.
> >
> > For better testing, it would be cool if SEC's idea of the current time 
> > could be derived from the timestamps in the log file instead of wall-clock 
> > time, so that context actions happen at the right time relative to log 
> > messages (rather than 30 seconds after the program ends! :-), but that's 
> > probably a bit too much to ask for.
> >
> > Thanks!
> >
> > --
> >
> > Penelope Fudd
> >
> > 

Re: [Simple-evcorr-users] How can I trigger all outstanding context actions on SEC_SHUTDOWN?

2020-12-15 Thread Risto Vaarandi
>
> For better testing, it would be cool if SEC's idea of the current time could 
> be derived from the timestamps in the log file instead of wall-clock time, so 
> that context actions happen at the right time relative to log messages 
> (rather than 30 seconds after the program ends! :-), but that's probably a 
> bit too much to ask for.
>

Implementing this idea is actually quite complex, since the internal
state of SEC is affected not only by new events in input log files.
There are many state changes which are associated with the system
clock, for example, actions executed by Calendar rules,
PairWithWindow rules, expiring contexts, etc. While some actions might
not necessarily change the state of SEC (e.g., sending an email to
someone does not influence event processing), there are actions which
have an impact on state. For example, if an expiring context creates a
synthetic event, this event might match other rules and trigger a
non-trivial event processing scheme. Also, if a Calendar rule executes
the 'addinput' action, a new input file is opened which might add a
large number of new input events into play, while creating a context
from expiring PairWithWindow operation might disable several
frequently matching rules in the rule base. Therefore, implementing a
simple artificial clock which is incremented according to the
timestamp of each new input event does not allow testing a large part
of the event correlation functionality. To do that, one must
also identify relevant time moments in between event timestamps from
input files, and carefully replay each such time moment. In short, a
proper implementation of an artificial clock is a complex issue, and
requires too much effort for too limited value.

kind regards,
risto




Re: [Simple-evcorr-users] How can I trigger all outstanding context actions on SEC_SHUTDOWN?

2020-12-15 Thread Risto Vaarandi
hi Penelope,

since 'obsolete' is a SEC action, it can not be called in Perl, but
you rather need some sort of loop written in the SEC rule language.
Fortunately, SEC supports the 'while' action that executes an action
list as long as the given action list variable evaluates true in
boolean context. That allows you to write a loop for processing a
context event store, since there is 'getsize' action for finding the
number of events in the store, and 'shift' (or 'pop') action for
removing an element from the beginning (or end) of the store. For
taking advantage of this functionality for your task, you just have to
write relevant context names into the event store of some context, and
then process this context with a loop.

Here is an example ruleset that illustrates the idea:

type=Single
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT
desc=Save contexts msg_* into /tmp/report.* on shutdown
action=lcall %ret -> ( sub { join("\n", grep { /^msg_/ } keys
%main::context_list) } ); \
   fill BUFFER %ret; getsize %size BUFFER; \
   while %size ( shift BUFFER %name; obsolete %name; getsize %size BUFFER )

type=single
ptype=regexp
pattern=create (\S+)
desc=create the $1 context
action=create $1 3600 ( report $1 /bin/cat > /tmp/report.$1 )

type=single
ptype=regexp
pattern=add (\S+) (.+)
desc=add string $2 to the $1 context
action=add $1 $2

The 'lcall' action in the first rule executes the following Perl code:
join("\n", grep { /^msg_/ } keys %main::context_list)
This code is matching all context names with the "msg_" prefix and
joining such names into a multiline string.
The following 'fill' action splits this multiline string by newline,
and writes individual context names into the event store of the BUFFER
context.
The number of context names in the event store is then established
with getsize %size BUFFER, and then the 'while' loop gets executed:
while %size ( shift BUFFER %name; obsolete %name; getsize %size BUFFER)
Inside the loop, context names are taken from the event store one by
one, and the 'obsolete' action is called for each context name.
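The SEC loop above behaves like the following Perl analogue (a sketch for illustration only; the context names are invented, and a print statement stands in for the 'obsolete' action):

```perl
use strict;
use warnings;

# Contents of the BUFFER context event store (invented names):
my @buffer = qw(msg_one msg_two msg_three);

my $size = scalar @buffer;           # getsize %size BUFFER
while ($size) {                      # while %size ( ... )
    my $name = shift @buffer;        # shift BUFFER %name
    print "obsolete $name\n";        # stand-in for the 'obsolete' action
    $size = scalar @buffer;          # getsize %size BUFFER
}
```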

One note of caution -- 'obsolete' triggers the 'report' action which
forks a separate process, and a forked process has 3 seconds for
finishing its work before receiving TERM signal from SEC (if the
process has to run longer, a signal handler must be set up for TERM).

Hopefully the above rule example is useful.

kind regards,
risto



sec-user--- via Simple-evcorr-users () wrote on Tue,
15 December 2020 at 01:39:
>
> Hello!
>
> I'm dabbling with SEC, experimenting with adding lines into contexts and only 
> when the context is finished, decide what to do with it.  Essentially it's 
> taking a look at the group of log messages emitted by sendmail for every 
> connection, looking for behaviour that is not consistent with being an 
> honored guest on the internet, and blocking the source with iptables and 
> ipset.
>
> The problem is that I'm testing with the same input file over and over, but 
> the 'report' actions aren't running because the entire log file is processed 
> in less than 10 seconds:
>
> sec --conf sendmail.test \
>   --input /tmp/all.logs \
>   --fromstart \
>   --notail \
>   --bufsize=1 \
>   --log=- \
>   --intevents \
>   --intcontexts \
>   --debug=50
>
> Rather than write some perl to run in the SEC_SHUTDOWN internal event to 
> write the context buffers to files, I'd really rather just run the 'obsolete' 
> action on all contexts.  Is there a straightforward way to do that?
>
> type=Single
> ptype=SubStr
> pattern=SEC_SHUTDOWN
> context=SEC_INTERNAL_EVENT
> desc=Save contexts msg_* into /tmp/report.* on shutdown
> action=logonly; lcall %ret -> ( sub { my($context); \
> foreach $context (keys %main::context_list) { obsolete $context; } \
> } )
>
> Mon Dec 14 14:49:34 2020: Code 'CODE(0x560fca302fb8)' runtime error: Can't 
> locate object method "obsolete" via package "msg_sendmail[4208]" (perhaps you 
> forgot to load "msg_sendmail[4208]"?) at (eval 9) line 1.
>
> For better testing, it would be cool if SEC's idea of the current time could 
> be derived from the timestamps in the log file instead of wall-clock time, so 
> that context actions happen at the right time relative to log messages 
> (rather than 30 seconds after the program ends! :-), but that's probably a 
> bit too much to ask for.
>
> Thanks!
>
> --
>
> Penelope Fudd
>
> sec-u...@ch.pkts.ca




Re: [Simple-evcorr-users] using variables learned in rule A in rule B's perlfunc: possible?

2020-10-18 Thread Risto Vaarandi
hi Michael,

thanks a lot for sharing examples from your rulebase! I am sure they will
be helpful for people who have to tackle similar tasks in the future, and
will be searching the mailing list for relevant examples.

kind regards,
risto



Risto-
>
>
>
> Thank you for taking time to respond so thoroughly.  Your examples were
> quite clear.  I would not have thought to try to use global variables.
> Ultimately I chose your cached/context approach, which worked great.
>
>
>
> I ended up taking a monolithic config file of nearly 400 hard to
> maintain/organize rules and spread them more logically across 29 (and
> counting) configuration files.  The impetus was needing to re-use some (but
> not all) of the rules in a second network that I admin.
>
>
>
> In an attempt to give back, I included below my general approach in case
> others find it useful.
>
>
>
> Thanks again,
>
> -Michael
>
>
>
> ==/=
>
>
>
> # 000-main.config
>
> # this file will be the same on each network
>
>
>
> type=Single
>
> ptype=SubStr
>
> pattern=SEC_STARTUP
>
> desc=Started SEC
>
> action=assign %a /destinationfile.log
>
>
>
> type=Single
>
> ptype=RegExp
>
> pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+
>
> varmap=SYSLOG; hostname=1
>
> desc=hostname
>
> action=none
>
> continue=TakeNext
>
>
>
> type=Jump
>
> ptype=cached
>
> pattern=SYSLOG
>
> context=$+{hostname} -> ( sub { return 1 if $_[0]; return 0 } )
>
> cfset=001-all
>
> continue=EndMatch
>
>
>
> # This is a catch-all rule to dump to the logfile anything that didn't
> match above..
>
> type=Single
>
> ptype=RegExp
>
> pattern=.*
>
> desc=$0
>
> action=write %a $0
>
>
>
> -/
>
>
>
> #001-all.config
>
> # this file will be different between networks due to differences in
> vendors, device naming standards, etc
>
>
>
> type=Options
>
> joincfset=001-all
>
> procallin=no
>
>
>
> type=Jump
>
> ptype=cached
>
> pattern=SYSLOG
>
> context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^r-/ } )
>
> cfset=050-juniper
>
> continue=EndMatch
>
>
>
> type=Jump
>
> ptype=cached
>
> pattern=SYSLOG
>
> context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^[ts]-/ } )
>
>
>
> # This is a catch-all rule to dump to the logfile anything that doesn't
> match above..
>
> type=Single
>
> ptype=RegExp
>
> pattern=.*
>
> desc=$0
>
> action=write %a $0
>
>
>
> -/
>
>
>
> # 050-juniper.config
>
> # these files will be the same between networks, and where most of the
> rules and re-use will come in.
>
>
>
> type=Options
>
> joincfset=050-juniper
>
> procallin=no
>
>
>
> # near top due to frequency..
>
> type=Jump
>
> ptype=RegExp
>
> pattern=.+
>
> cfset=110-juniper-mx104
>
> continue=TakeNext
>
>
>
> # two examples, I have many stanzas like this for individual JunOS daemons
> [hence many files]
>
> type=Jump
>
> ptype=RegExp
>
> pattern= mgd\[[0-9]+\]:
>
> cfset=150-juniper-mgd
>
> continue=EndMatch
>
>
>
> type=Jump
>
> ptype=RegExp
>
> pattern= rpd\[[0-9]+\]:
>
> cfset=150-juniper-rpd
>
> continue=EndMatch
>
>
>
> # things that don’t match specific daemons
>
> type=Jump
>
> ptype=RegExp
>
> pattern=.+
>
> cfset=100-juniper-nodaemon
>
> continue=TakeNext
>
>
>
> # This is a catch-all rule to dump to the logfile anything that survived
>
> type=Single
>
> ptype=RegExp
>
> pattern=.*
>
> desc=$0
>
> action=write %a $0
>
>
>
> [FIN]
>
>
>
> *From:* risto.vaara...@gmail.com 
> *Sent:* Saturday, October 17, 2020 5:13 PM
> *To:* Michael Hare 
> *Cc:* simple-evcorr-users@lists.sourceforge.net
> *Subject:* Re: [Simple-evcorr-users] using variables learned in rule A in
> rule B's perlfunc: possible?
>
>
>
> hi Michael,
>
>
>
> there are a couple of ways to address this problem. Firstly, instead of
> using sec match variables, one can set up Perl's native variables for
> sharing data between rules. For example, the regular expression pattern of
> the first rule can be easily converted into perlfunc pattern, so that the
> pattern would assign the hostname to Perl global variable $hostname. This
> global variable can then be accessed in perlfunc patterns of other rules.
> Here is an example that illustrates the idea:
>
>
>
> type=Single
> ptype=perlfunc
> pattern=sub { if ($_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) /) \
>   { $hostname = $1; } else { $hostname = ""; } return 1; }
> desc=hostname
> action=none
> continue=TakeNext
>
> type=Jump
> ptype=perlfunc
> pattern=sub { return 1 if $hostname =~ m/^first-use-case/ }
> cfset=rules-for-this-match-1
>
> type=Jump
> ptype=perlfunc
> pattern=sub { return 1 if $hostname =~ m/^second-use-case/ }
> cfset=rules-for-this-match-2
>
>
>
> Since SEC supports caching the results of pattern matching, one could also
> store the matching result from the first rule into cache, and then retrieve
> the result from cache with the 'cached' pattern type. Since this pattern
> type assumes that the name of the cached entry is provided in the 'pattern'
> field, the hostname check with a perl function has 

Re: [Simple-evcorr-users] using variables learned in rule A in rule B's perlfunc: possible?

2020-10-17 Thread Risto Vaarandi
hi Michael,

there are a couple of ways to address this problem. Firstly, instead of
using sec match variables, one can set up Perl's native variables for
sharing data between rules. For example, the regular expression pattern of
the first rule can be easily converted into perlfunc pattern, so that the
pattern would assign the hostname to Perl global variable $hostname. This
global variable can then be accessed in perlfunc patterns of other rules.
Here is an example that illustrates the idea:

type=Single
ptype=perlfunc
pattern=sub { if ($_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) /) \
  { $hostname = $1; } else { $hostname = ""; } return 1; }
desc=hostname
action=none
continue=TakeNext

type=Jump
ptype=perlfunc
pattern=sub { return 1 if $hostname =~ m/^first-use-case/ }
cfset=rules-for-this-match-1

type=Jump
ptype=perlfunc
pattern=sub { return 1 if $hostname =~ m/^second-use-case/ }
cfset=rules-for-this-match-2

Since SEC supports caching the results of pattern matching, one could also
store the matching result from the first rule into cache, and then retrieve
the result from cache with the 'cached' pattern type. Since this pattern
type assumes that the name of the cached entry is provided in the 'pattern'
field, the hostname check with a perl function has to be implemented in a
context expression (the expression is evaluated after the pattern match).

Here is an example which creates an entry named SYSLOG in a pattern match
cache, so that all match variables created in the first rule can be
retrieved in later rules. Note that the entry is created in the 'varmap'
field which also sets up named match variable $+{hostname}. In further
rules, the 'cached' pattern type is used for retrieving the SYSLOG entry
from cache, and creating all match variables from this entry. In order to
check the hostname, the $+{hostname} variable that was originally set in
the first rule is passed into perlfunc patterns in the second and third
rule. Also, if you need to check more than just few match variables in
perlfunc pattern, it is more efficient to pass a reference to the whole
cache entry into the Perl function, so that individual cached match
variables can be accessed through the reference (I have added an additional
fourth rule into the example which illustrates this idea):

type=Single
ptype=RegExp
pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+
varmap=SYSLOG; hostname=1
desc=hostname
action=none
continue=TakeNext

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^first-use-case/ } )
cfset=rules-for-this-match-1

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^second-use-case/ } )
cfset=rules-for-this-match-2

type=Jump
ptype=cached
pattern=SYSLOG
context=SYSLOG :> ( sub { return 1 if $_[0]->{"hostname"} =~ m/^third-use-case/ } )
cfset=rules-for-this-match-3

A small side-note about the first rule -- you can also create the
$+{hostname} variable in the regular expression itself by rewriting it as
follows:
^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (?<hostname>.+?) .+
And in that case, you can rewrite the 'varmap=SYSLOG; hostname=1' statement
in the first rule simply as 'varmap=SYSLOG'.
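Putting those two changes together, the first rule could be written as follows (an untested sketch that keeps the same fields as the original rule, with the named capture moved into the regular expression):

```
type=Single
ptype=RegExp
pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (?<hostname>.+?) .+
varmap=SYSLOG
desc=hostname
action=none
continue=TakeNext
```

The later Jump rules with the 'cached' pattern type would work unchanged, since the SYSLOG cache entry still carries the hostname match variable.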

Hopefully these examples are useful and help you to tackle the problem.

kind regards,
risto



Hi-
>
> I'm sorry to ask what is probably a very basic question, but I have been
> struggling with this for a while (I have perused the manual a lot and the
> mailing list a bit) and could use some guidance.
>
> The short version is: is there a way to take the results of a pattern
> match in one rule and use that value in a perlfunc in another?
>
> More verbosely, at this time I use SEC for network syslog exclusion;
> nothing fancy.  I would like to start using Jump rules based on hostname.
> Hostname is derived from the incoming log line.
>
> I thought I would be clever and use a single rule to determine if there
> was a hostname or not, save it somewhere reusable, and then launch jump
> rules based on that.
>
> something like
>
> type=Single
> ptype=RegExp
> pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+
> varmap= hostname=1
> desc=hostname
> action=assign %r $+{hostname}
> continue=TakeNext
>
> type=Jump
> ptype=perlfunc
> pattern=sub { return 1 if $+{hostname} =~ m/^first-use-case/ }
> cfset=rules-for-this-match-1
>
> type=Jump
> ptype=perlfunc
> pattern=sub { return 1 if $+{hostname} =~ m/^second-use-case/ }
> cfset=rules-for-this-match-2
>
> I know this doesn't work.  I understand that '%r' is not a perl hash, and
> is an action list variable, and that $+{hostname} is undef inside the
> type=Jump rule perlfunc.  I also know that %r is being set correctly, I see
> it in "variables -> r" if I do SIGUSR1 dump.
>
> So is it possible to stash away a variable from one rule and use it in a Jump
> rule like above?  I can work around this easily by using a single rule like
> below, but if I have for example 20 jump permutations, it seems quite
> redundant to 

Re: [Simple-evcorr-users] Reset all rules

2020-10-08 Thread Risto Vaarandi
hi Agustin,

if you want to reset the entire state of SEC (not just event correlation
operations, but also contexts, action list variables and other data), you
can use 'sigemul HUP' action. This action will emulate the reception of the
HUP signal which is used to reset all internal state of SEC.
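As a sketch of triggering this from an event (the event name RESET_EVERYTHING is hypothetical and only for illustration), a Single rule could invoke the action like this:

```
type=Single
ptype=SubStr
pattern=RESET_EVERYTHING
desc=full SEC state reset requested
action=sigemul HUP
```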

However, if you want to reset only the event correlation operations for one
or more rule files, you can use the following trick -- run an external
script (for example, with 'shellcmd' action) that uses 'touch' utility for
updating the timestamps of these rule files (for example, touch -c
/etc/sec/myrules*.sec), and then sends the ABRT signal to SEC process (for
example, kill -ABRT `cat /run/sec.pid`). The ABRT signal forces SEC to
reset event correlation operations for all rule files with updated
modification timestamps, and also forces SEC to reload rules from these
rule files.
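For illustration, the trick could be wired to a triggering event as in the following sketch (the event name, rule file glob and pid file path are hypothetical and depend on the local setup):

```
type=Single
ptype=SubStr
pattern=RESET_RULES
desc=reset event correlation operations for selected rule files
action=shellcmd /bin/sh -c 'touch -c /etc/sec/myrules*.sec; kill -ABRT `cat /run/sec.pid`'
```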

kind regards,
risto

Hi Risto, My name is Agustín,
>
> Is it possible to reset all the rules when an event is received?
>
> Kind regards,
> Agustín
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Multiple Correlation Question

2020-08-06 Thread Risto Vaarandi
hi Suat,

one possible solution for addressing this task is to combine the EventGroup
rule with contexts. Since EventGroup rule allows matching unordered event
groups (e.g., events A, B and C can appear in any order), the purpose of
contexts is to force specific event matching order. The example given below
processes events EVENT_A, EVENT_B and EVENT_C, and for the sake of
simplicity, these events are matched with SubStr patterns. Also, for
simplifying cleanup procedure, the example rule sets up only one context
EVCONT for event correlation, and employs context aliases EVCONT_A and
EVCONT_B for indicating that events A and B have already been observed
(deleting EVCONT would automatically remove both alias names). Since three
events need to be processed, the rule type has been set to EventGroup3:

type=EventGroup3
init=create EVCONT
end=delete EVCONT
slide=reset 0; delete EVCONT
ptype=SubStr
pattern=EVENT_A
context=!EVCONT_A
count=alias EVCONT EVCONT_A
ptype2=SubStr
pattern2=EVENT_B
context2=EVCONT_A && !EVCONT_B
count2=alias EVCONT EVCONT_B
ptype3=SubStr
pattern3=EVENT_C
context3=EVCONT_A && EVCONT_B
desc=sequence A, B and C observed
action=write - %s; reset 0; delete EVCONT
window=600

When the above rule starts an event correlation operation, the operation
creates the context EVCONT (see the 'init' field), and when the operation
ends, EVCONT is deleted (see the 'end' field). Also, the window of the
operation is not sliding and when not all events have been observed by the
end of the 600 second window, the operation terminates itself with 'reset'
action and deletes the EVCONT context (see the 'slide' field). As mentioned
before, the EVCONT context has two alias names -- EVCONT_A indicates that
event A has already been observed, while EVCONT_B manifests the fact that
event B has been seen. The above rule has three patterns ('pattern',
'pattern2' and 'pattern3' fields) and normally, a match against any of
these patterns would start the event correlation operation. However, due to
'context2' and 'context3' fields, events B and C will not match this rule
initially, and only event A can match the rule in the beginning (because
unlike 'context2' and 'context3' fields, the boolean expression in the
'context' field evaluates true). When event A appears, the event
correlation operation is started, and after the event has been processed,
the operation also creates EVCONT_A alias name (see the 'count' field).
Note that the creation of this alias name means that event A no longer
matches this rule (since the boolean expression provided by 'context' field
now evaluates false), but event B is now matching (since boolean expression
in 'context2' field now evaluates true). In other words, the rule is now
expecting to see event B. Similarly, after event B has been observed, the
creation of the alias name EVCONT_B makes the rule to expect event C (see
'count2' and 'context3' fields). Finally, when event C appears, the string
"sequence A, B and C observed" is written to standard output, and the event
correlation operation will terminate itself with 'reset' action and delete
EVCONT context.

The above solution has one drawback -- when several instances of the same
event can appear in the sequence, some sequences might pass unnoticed. For
example, consider the following scenario of four events with timestamps:

12:00:00 event A
12:00:30 event A
12:05:00 event B
12:10:05 event C

The above solution would start an event correlation operation at 12:00:00
which would fail to see the expected sequence by 12:10:00 and thus
terminate. On the other hand, there is a valid sequence in the event stream
that starts from 12:00:30. If you want to handle advanced cases with
repeated events in sequences, a more complex solution is needed.

kind regards,
risto




Kontakt Suat Toksöz () kirjutas kuupäeval N, 6. august
2020 kell 10:19:

> Thanks for the answer. I am looking for window based detection, simple it
> is going to be something like SIEM log correlation. Within 10 min event A,B
> and C must occur and this three event must be in order (first A, then B
> last C)
>
> Thanks
> Suat Toksoz
>
> On Wed, Aug 5, 2020 at 11:58 PM Risto Vaarandi 
> wrote:
>
>> hi Suat,
>>
>> are you interested in some rule examples about detecting event sequences,
>> or are you investigating opportunities for creating a new rule type for
>> matching sequences of events? Many event sequences can be handled by
>> combining existing rules and contexts, so a new rule type might not be
>> needed for the task that you have. To clarify the task a little bit, should
>> the solution apply a sliding window based detection if the entire sequence
>> has not been observed within 10 minutes, or is it acceptable that an
>> incomplete sequence after 10 minutes (say, A and B are present but C is
>> missing) terminates the event correlation scheme

Re: [Simple-evcorr-users] Multiple Correlation Question

2020-08-05 Thread Risto Vaarandi
hi Suat,

are you interested in some rule examples about detecting event sequences,
or are you investigating opportunities for creating a new rule type for
matching sequences of events? Many event sequences can be handled by
combining existing rules and contexts, so a new rule type might not be
needed for the task that you have. To clarify the task a little bit, should
the solution apply a sliding window based detection if the entire sequence
has not been observed within 10 minutes, or is it acceptable that an
incomplete sequence after 10 minutes (say, A and B are present but C is
missing) terminates the event correlation scheme?

kind regards,
risto

Kontakt Suat Toksöz () kirjutas kuupäeval K, 5. august
2020 kell 15:52:

> hi all,
>
> is it possible to have multiple (3,4..) correlation rule on SEC?
>
> For example, If event *A* happens then event *B* happens then event *C*
> happens and all events happen within 10 min.
>
> --
>
> Best regards,
>
> *Suat Toksoz*
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


[Simple-evcorr-users] sec-2.8.3 released

2020-05-02 Thread Risto Vaarandi
hi all,

SEC version 2.8.3 has been released, and here is the change log for the new
version:

* added support for collecting rule performance data, and the --ruleperf
and --noruleperf command line options.

* improved dump file generation in JSON format (some numeric fields that
were reported as JSON strings are now reported as JSON numbers).

The new version can be downloaded from SEC home page (here is the link for
downloading it directly:
https://github.com/simple-evcorr/sec/releases/download/2.8.3/sec-2.8.3.tar.gz
).

Version 2.8.3 introduces the --ruleperf command line option which enables
performance data (CPU time) collection for rules, with performance data
being reported in the dump file. The CPU time reported for the rule does
not only reflect the cost of matching the rule against input events, but
all processing cost. For example, if there is a frequently matching Single
rule, its CPU time also includes the cost of executing rule action list.

Thanks to John Rouillard for suggesting the rule performance profiling
feature!

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] log files existence and accessibility

2020-04-08 Thread Risto Vaarandi
hi Richard,

it is good to hear the rule example is useful for you. Just one small note
-- the execution of 'lcall' action takes place in SEC process which is
single threaded, and custom perl code should not contain blocking function
calls (or anything that would run for a significant amount of time).
Otherwise, the custom code would block the SEC process from doing anything
else.

This note is also relevant for my previous example, since perl 'open()'
function will block if it is invoked for a named pipe and the pipe does not
have a writer. For avoiding such situations, make sure that the file
pattern in the code will match regular files only, or check if the file is
a regular file before attempting to open it. Here is an example
modification to 'lcall' code from my previous post:

@files = grep { -f $_ } glob("*.log");

This code applies perl grep() function to original list of files, in order
to filter out files which are not regular files.

kind regards,
risto

Thank you, Risto, we could try to integrate lcall with calendar into our
> setup. SELinux was just theoretical example, we have no problem with that
> now, I was just seeking for bullet-proof solution.
>
> Richard
>
> st 8. 4. 2020 o 12:05 Risto Vaarandi 
> napísal(a):
>
>> hi Richard,
>>
>> if you want to find input files which can not be opened because of
>> permission issues, and want to conduct all checks specifically from SEC
>> process without forking anything, I would recommend to set up an 'lcall'
>> action that runs all checks from a Perl function. Since this function is
>> executed within SEC process, it will have the same permissions for
>> accessing the file as SEC itself. If you return one or more values from the
>> function, 'lcall' will set the output variable by joining return values
>> into single string, so that newline is acting as a separator between return
>> values. Also, if no values are returned from the function, output variable
>> will be set to perl 'undef'. After collecting the output from the function
>> in this way, you can provide the output variable to 'event' action that
>> creates synthetic event from its content. Note that if multi-line string is
>> submitted to 'event' action, several synthetic events are generated (one
>> for each line). And now that you have synthetic events about files that are
>> not accessible, you can process them further in any way you like (e.g., an
>> e-mail could be sent to admin about input file with wrong permissions).
>>
>> Here is an example rule that executes the input file check once a minute
>> from Calendar rule:
>>
>> type=Calendar
>> time=* * * * *
>> desc=check files
>> action=lcall %events -> ( sub { my(@files, @list, $file); \
>>@files = glob("*.log"); foreach $file (@files) { \
>>  if (!open(FILE, $file)) { push @list, "$file is not accessible"; } else { close(FILE); } \
>>} return @list; } ); \
>>if %events ( event %events )
>>
>> The action list of the Calendar rule consists of two actions -- the
>> 'lcall' action and 'if' action, with 'if' action executing 'event' action
>> conditionally. The 'lcall' action tries to open given input files and
>> verify that each open succeeds. For all files that it could not open,
>> 'lcall' returns a list of messages "$file is not accessible", and the list
>> is assigned to %events variable as a multi-line string (each line in the
>> multi-line string represents one message for some file). If the list is
>> empty, %events variable is set to perl 'undef' value. After 'lcall' action,
>> the 'if' action is executed which verifies that %events is true in perl
>> boolean context (if %events has been previously set to 'undef', that check
>> will fail). If the check succeeds (in other words, we have some messages to
>> report), the 'event' action will generate synthetic events from messages.
>>
>> That is one way how to check input files without forking anything from
>> SEC. However, if the root cause of the problem is related to SELinux, it is
>> probably much better to adjust SELinux configuration (perhaps by changing
>> file contexts), so that the problem would not appear.
>>
>> kind regards,
>> risto
>>
>>
>> Kontakt Richard Ostrochovský () kirjutas
>> kuupäeval E, 6. aprill 2020 kell 17:21:
>>
>>> Hello friends,
>>>
>>> I am thinking about how to monitor not only events from log files, but
>>> also those files existence and accessibility (for user running SEC) - in
>>> cases, where this is considered to be a problem.
>>>
>>> As I saw in 

Re: [Simple-evcorr-users] log files existence and accessibility

2020-04-08 Thread Risto Vaarandi
hi Richard,

if you want to find input files which can not be opened because of
permission issues, and want to conduct all checks specifically from SEC
process without forking anything, I would recommend to set up an 'lcall'
action that runs all checks from a Perl function. Since this function is
executed within SEC process, it will have the same permissions for
accessing the file as SEC itself. If you return one or more values from the
function, 'lcall' will set the output variable by joining return values
into single string, so that newline is acting as a separator between return
values. Also, if no values are returned from the function, output variable
will be set to perl 'undef'. After collecting the output from the function
in this way, you can provide the output variable to 'event' action that
creates synthetic event from its content. Note that if multi-line string is
submitted to 'event' action, several synthetic events are generated (one
for each line). And now that you have synthetic events about files that are
not accessible, you can process them further in any way you like (e.g., an
e-mail could be sent to admin about input file with wrong permissions).

Here is an example rule that executes the input file check once a minute
from Calendar rule:

type=Calendar
time=* * * * *
desc=check files
action=lcall %events -> ( sub { my(@files, @list, $file); \
   @files = glob("*.log"); foreach $file (@files) { \
  if (!open(FILE, $file)) { push @list, "$file is not accessible"; } else { close(FILE); } \
   } return @list; } ); \
   if %events ( event %events )

The action list of the Calendar rule consists of two actions -- the 'lcall'
action and 'if' action, with 'if' action executing 'event' action
conditionally. The 'lcall' action tries to open given input files and
verify that each open succeeds. For all files that it could not open,
'lcall' returns a list of messages "$file is not accessible", and the list
is assigned to %events variable as a multi-line string (each line in the
multi-line string represents one message for some file). If the list is
empty, %events variable is set to perl 'undef' value. After 'lcall' action,
the 'if' action is executed which verifies that %events is true in perl
boolean context (if %events has been previously set to 'undef', that check
will fail). If the check succeeds (in other words, we have some messages to
report), the 'event' action will generate synthetic events from messages.

That is one way how to check input files without forking anything from SEC.
However, if the root cause of the problem is related to SELinux, it is
probably much better to adjust SELinux configuration (perhaps by changing
file contexts), so that the problem would not appear.

kind regards,
risto


Kontakt Richard Ostrochovský () kirjutas
kuupäeval E, 6. aprill 2020 kell 17:21:

> Hello friends,
>
> I am thinking about how to monitor not only events from log files, but
> also those files existence and accessibility (for user running SEC) - in
> cases, where this is considered to be a problem.
>
> As I saw in the past, these were logged into SEC log file, but higher
> debug level was required, so it is not suitable for production.
>
> There are a handful of other options for monitoring it "externally", e.g.
> via some script or other monitoring agent, but this is not an ultimate
> solution, as e.g. SELinux may be configured to allow an external script or
> agent (running under the same user as SEC) to see and open a file for
> reading, but not SEC (a theoretical caveat of such a solution).
>
> So, has somebody some "best practise" for reliable and production-ready
> way, how to "self-monitor" log files being accessed by SEC, if:
>
>- they exist
>- they are accessible by SEC
>
> ?
>
> Thanks in advance.
>
> Richard
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] RV: IP correlation with EventGroup

2020-04-06 Thread Risto Vaarandi
hi John,

Hi Risto:
>
> ...

>
>
>
> >However, if you would like to suppress the output message that is
> generated
> >on 3rd input event and rather generate an output message "Events A , B and
> >C observed for IP 1.1.1.1" on 5th input event, it is not possible to
> >achieve that goal with EventGroup (or any other) rules, since after seeing
> >the 3rd event, it is not possible to know in advance what events will
> >appear in the future. In other words, SEC rules execute actions
> immediately
> >when a first matching set of events has been seen, and it is neither
> >possible to reprocess past events nor postpone actions in the hope of
> >better future match (which might never occur).
>
> I agree SEC has no built in delay mechanism for actions, you can
> create a context that expires in the future and runs an action on
> expiration.
>
> How about creating a delayed reporting context rather than
> writing:
>
>Events A and B observed for IP 1.1.1.1
>
> in EventGroup2, use an action like:
>
>   create report_$1_event_A_and_B 60 (write - %s)
>
> and add the action
>
>   destroy report_$1_event_A_and_B
>
> to EventGroup3.
>

That's another nice way for handling the problem. In my previous post, I
suggested to store all events for the same IP into the same context and
then use some condition to select most relevant event, but this approach
looks more lightweight and easy-to-use.


> This should prevent the reporting of A and B when A, B and C are
> seen. It will delay the reporting of A and B until the window to find
> A, B and C is done. This may not be desirable, but the laws of physics
> won't permit another option.
>
> Is there a variable that records how much time is left in the window?
> If so you can have the report_$1_event_A_and_B expire in that many
> seconds rather than in 60 seconds. (I assume the window expiration
> times of both EventGroups will be the same since they are triggered by
> the same event.)
>

> This is a bit of a hack but I think ti will work to delay the write
> action.
>
> Thoughts?
>

If one wants to query the number of remaining seconds in the event
correlation window, there is no such variable, but there is 'getwpos'
action for finding the beginning of the event correlation window of any
operation (in seconds since Epoch). For example, if one includes the
following action in the rule definition
getwpos %window 0
and this action gets executed by the event correlation operation, then the
%window variable is set to the time when the calling operation started.
Since %u action list variable holds a timestamp for the current moment (in
seconds since Epoch), one can calculate remaining seconds in the 60 second
window as follows:
getwpos %window 0; lcall %size %u %window -> ( sub { 60 - ($_[0] - $_[1]) } )
The above function will find the difference between %u and %window
variables, and subtract the result from 60. If above action list gets
invoked during the last second of operation's lifetime, the function will
return 0, and using this value directly for context lifetime will create a
context with infinite lifetime. To handle this special case, one could
modify the above function to always return a positive value (e.g., 1
instead of 0).
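For example, the delayed reporting context from the earlier suggestion could be created with such a computed lifetime (a sketch assuming a 60 second window; the context name follows the earlier hypothetical example, and the function is modified to return at least 1 second):

```
action=getwpos %window 0; \
       lcall %size %u %window -> ( sub { my $t = 60 - ($_[0] - $_[1]); return $t > 0 ? $t : 1; } ); \
       create report_$1_event_A_and_B %size (write - %s)
```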

kind regards,
risto


>
> Have a great week.
> --
> -- rouilj
> John Rouillard
> ===
> My employers don't acknowledge my existence much less my opinions.
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] RV: IP correlation with EventGroup

2020-04-06 Thread Risto Vaarandi
>
>
> However, if you would like to suppress the output message that is
> generated on 3rd input event and rather generate an output message "Events
> A , B and C observed for IP 1.1.1.1" on 5th input event, it is not possible
> to achieve that goal with EventGroup (or any other) rules, since after
> seeing the 3rd event, it is not possible to know in advance what events
> will appear in the future. In other words, SEC rules execute actions
> immediately when a first matching set of events has been seen, and it is
> neither possible to reprocess past events nor postpone actions in the hope
> of better future match (which might never occur).
>
>
To add one remark here -- it is possible to configure rules to store their
output into a context, and if a context contains more than one event, select
one of these events for reporting after some event aggregation period,
discarding other events. However, in this case reporting will happen with a
delay which might not be acceptable for more critical events.

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] RV: IP correlation with EventGroup

2020-04-06 Thread Risto Vaarandi
hi Agustin,


> Hi Risto,
>
> Thank you very much for your help.
> I have another question related to this problem.
>
> Suppose we have the next entry in less than 60 seconds:
> EVENT_TYPE_A 1.1.1.1 <--- the beginning of input for SEC
> EVENT_TYPE_A 2.2.2.2
> EVENT_TYPE_B 1.1.1.1
> EVENT_TYPE_B 2.2.2.2
> EVENT_TYPE_C 1.1.1.1
> FINISH <--- (FINISH is also an event) the end of input for SEC
>
> We have the following rule:
> Rule 1:
> type=EventGroup2
> ptype=RegExp
> pattern=EVENT_TYPE_A ([\d.]+)
> continue=TakeNext
> ptype2=RegExp
> pattern2=EVENT_TYPE_B ([\d.]+)
> continue2=TakeNext
> desc=Events A and B observed for IP $1 within 60 seconds
> action=logonly Events A and B observed for IP $1
> window=60
>
> Rule 2:
> type=EventGroup3
> ptype=RegExp
> pattern=EVENT_TYPE_A ([\d.]+)
> continue=TakeNext
> ptype2=RegExp
> pattern2=EVENT_TYPE_B ([\d.]+)
> continue2=TakeNext
> ptype3=RegExp
> pattern3=EVENT_TYPE_C ([\d.]+)
> continue3=TakeNext
> desc=Events A, B and C observed for IP $1 within 60 seconds
> action=logonly Events A , B and C observed for IP $1
> window=60
>
> We get the following output:
>  Events A and B observed for IP 1.1.1.1
>  Events A and B observed for IP 2.2.2.2
>  Events A , B and C observed for IP 1.1.1.1
>
> I'm expecting the following output:
>  Events A and B observed for IP 2.2.2.2
>  Events A , B and C observed for IP 1.1.1.1
>
> The idea is to reduce the output.
>

One approach that is relatively easy to implement is the following -- when
the first message for an IP address is generated, a context is created for
this IP address which will prevent further matches by rules for this IP.
Also, in all rule definitions, pattern* fields would be complemented with
context* fields which verify that the context for current IP address does
not exist. For example, in the following ruleset repeated messages for the
same IP address are suppressed for 300 seconds (that's the lifetime of
SUPPRESS_ contexts):

type=EventGroup2
ptype=RegExp
pattern=EVENT_TYPE_A ([\d.]+)
context=!SUPPRESS_$1
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_B ([\d.]+)
context2=!SUPPRESS_$1
continue2=TakeNext
desc=Events A and B observed for IP $1 within 60 seconds
action=write - %s; create SUPPRESS_$1 300
window=60

type=EventGroup3
ptype=RegExp
pattern=EVENT_TYPE_A ([\d.]+)
context=!SUPPRESS_$1
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_B ([\d.]+)
context2=!SUPPRESS_$1
continue2=TakeNext
ptype3=RegExp
pattern3=EVENT_TYPE_C ([\d.]+)
context3=!SUPPRESS_$1
continue3=TakeNext
desc=Events A, B and C observed for IP $1 within 60 seconds
action=write - %s; create SUPPRESS_$1 300
window=60

With these rules, the following output events would be produced for your
example input:
Events A and B observed for IP 1.1.1.1 within 60 seconds
Events A and B observed for IP 2.2.2.2 within 60 seconds

However, if you would like to suppress the output message that is generated
on 3rd input event and rather generate an output message "Events A , B and
C observed for IP 1.1.1.1" on 5th input event, it is not possible to
achieve that goal with EventGroup (or any other) rules, since after seeing
the 3rd event, it is not possible to know in advance what events will
appear in the future. In other words, SEC rules execute actions immediately
when a first matching set of events has been seen, and it is neither
possible to reprocess past events nor postpone actions in the hope of
better future match (which might never occur).

hope this helps,
risto


> Kind regards,
> Agustín
>
>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] IP correlation with EventGroup

2020-04-05 Thread Risto Vaarandi
hi Agustin,

I have tried the rule from your e-mail, and I am able to get the output you
are expecting:

/usr/bin/sec --conf=test4.sec --input=-
SEC (Simple Event Correlator) 2.8.2
Reading configuration from test4.sec
1 rules loaded from test4.sec
No --bufsize command line option or --bufsize=0, setting --bufsize to 1
Opening input file -
Interactive process, SIGINT can't be used for changing the logging level
EVENT_TYPE_A 1.1.1.1 <--- the beginning of input for SEC
EVENT_TYPE_A 2.2.2.2
EVENT_TYPE_B 1.1.1.1
EVENT_TYPE_B 2.2.2.2 <--- the end of input for SEC
Writing event 'Events A and B observed for IP 1.1.1.1 within 60 seconds' to
file '-'
Events A and B observed for IP 1.1.1.1 within 60 seconds
Writing event 'Events A and B observed for IP 2.2.2.2 within 60 seconds' to
file '-'
Events A and B observed for IP 2.2.2.2 within 60 seconds


Are you sure that the events for IP address 2.2.2.2 are separated by at
most 60 seconds? If there is a larger time gap between those two events,
the event correlation operation for 2.2.2.2 will not produce the expected
output.

kind regards,
risto


Hi Risto,
> I'm sorry, I don't think I made myself clear.
> Thanks for your help, but it still doesn't work. Here's the problem:
>
> We have the following rule:
> type=EventGroup2
> ptype=RegExp
> pattern=EVENT_TYPE_A ([\d.]+)
> continue=TakeNext
> ptype2=RegExp
> pattern2=EVENT_TYPE_B ([\d.]+)
> continue2=TakeNext
> desc=Events A and B observed for IP $1 within 60 seconds
> action=write - %s
> window=60
>
>
> And if we have the following events (input):
>
> EVENT_TYPE_A 1.1.1.1
> EVENT_TYPE_A 2.2.2.2
> EVENT_TYPE_B 1.1.1.1
> EVENT_TYPE_B 2.2.2.2
> FINISH
>
>
> I'm expecting the following output:
>
> Events A and B observed for IP 1.1.1.1 within 60 seconds
> Events A and B observed for IP 2.2.2.2 within 60 seconds
>
>
> But I get the following output:
>
> Events A and B observed for IP 1.1.1.1 within 60 seconds
>
>
> In addition (in another example), if the following events were to occur:
>
> EVENT_TYPE_A 1.1.1.1
> EVENT_TYPE_B 2.2.2.2
> FINISH
>
>
> The rule should not be activated because although events A and B have
> occurred, they have not been for the same IP.
>
>
> The main problem is that I want to correlate N different events for the
> same IP, but there can be different IPs.
> I have tried to combine this rule with the creation of contexts containing
> the IP, but I cannot solve it.
>
> Kind regards,
> Agustín
>
> --
>
>
>
> hi Agustin,
>
> and thanks for feedback! Instead of developing one rule which addresses
> all scenarios, it is better to write a separate rule for each case. For
> example, for the first case EVENT_TYPE_A && EVENT_TYPE_B the rule would
> look like this:
>
> type=EventGroup2
> ptype=RegExp
> pattern=EVENT_TYPE_A ([\d.]+)
> continue=TakeNext
> ptype2=RegExp
> pattern2=EVENT_TYPE_B ([\d.]+)
> continue2=TakeNext
> desc=Events A and B observed for IP $1 within 60 seconds
> action=write - %s
> window=60
>
> This rule is able to match events of type A and B and extract an IP
> address from these events. Whichever event occurs first, the rule will
> start an event correlation operation for extracted IP address, and the
> operation will wait for the event of second type to arrive within 60
> seconds. If expected event arrives on time, the message "Events A and B
> observed for IP  within 60 seconds" will be written to standard output.
> Please note that after this message has been generated, the operation will
> continue to run until the end of 60 second window, and further events A and
> B will be silently consumed by the operation until the end of the window.
> If you want to avoid this message suppression, you can change the action
> list of the above rule as follows:
>
> action=write - %s; reset 0
>
> In the above action list, the 'reset' action will terminate the event
> correlation operation that invoked this action list, and message
> suppression will therefore not happen.
>
> In order to address the second scenario EVENT_TYPE_A && EVENT_TYPE_B &&
> EVENT_TYPE_C, you would use the following rule which is similar to the
> previous example:
>
> type=EventGroup3
> ptype=RegExp
> pattern=EVENT_TYPE_A ([\d.]+)
> continue=TakeNext
> ptype2=RegExp
> pattern2=EVENT_TYPE_B ([\d.]+)
> continue2=TakeNext
> ptype3=RegExp
> pattern3=EVENT_TYPE_C ([\d.]+)
> continue3=TakeNext
> desc=Events A, B and C observed for IP $1 within 60 seconds
> action=write - %s
> window=60
>
> And as for the final scenario (EVENT_TYPE_A || EVENT_TYPE_B &&
> EVENT_TYPE_D), everything depends on how to interpret it. Given the
> precedence of logical operators, I would interpret it as EVENT_TYPE_A ||
> (EVENT_TYPE_B && EVENT_TYPE_D), and in that case the following two rules
> would be sufficient:
>
> type=Single
> ptype=RegExp
> pattern=EVENT_TYPE_A ([\d.]+)
> continue=TakeNext
> desc=Event A observed for IP
> action=write - %s
>
> type=EventGroup2
> ptype=RegExp
> pattern=EVENT_TYPE_B ([\d.]+)
> continue=TakeNext
> 

Re: [Simple-evcorr-users] IP correlation with EventGroup

2020-04-05 Thread Risto Vaarandi
hi Agustin,

and thanks for feedback! Instead of developing one rule which addresses all
scenarios, it is better to write a separate rule for each case. For
example, for the first case EVENT_TYPE_A && EVENT_TYPE_B the rule would
look like this:

type=EventGroup2
ptype=RegExp
pattern=EVENT_TYPE_A ([\d.]+)
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_B ([\d.]+)
continue2=TakeNext
desc=Events A and B observed for IP $1 within 60 seconds
action=write - %s
window=60

This rule matches events of type A and B and extracts an IP address from
these events. Whichever event occurs first, the rule will start an event
correlation operation for the extracted IP address, and the operation will
wait for the event of the second type to arrive within 60 seconds. If the
expected event arrives in time, the message "Events A and B observed for IP
 within 60 seconds" will be written to standard output. Please note that
after this message has been generated, the operation will continue to run
until the end of the 60-second window, and further events A and B will be
silently consumed by the operation until the end of the window. If you want
to avoid this message suppression, you can change the action list of the
above rule as follows:

action=write - %s; reset 0

In the above action list, the 'reset' action will terminate the event
correlation operation that invoked this action list, and message
suppression will therefore not happen.
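
For example, with the 'reset 0' variant, a second pair of A and B events
for the same IP address arriving inside the original 60-second window
would produce a second output message instead of being silently consumed
(a hypothetical trace):

EVENT_TYPE_A 1.1.1.1
EVENT_TYPE_B 1.1.1.1  -> Events A and B observed for IP 1.1.1.1 within 60 seconds
EVENT_TYPE_A 1.1.1.1
EVENT_TYPE_B 1.1.1.1  -> Events A and B observed for IP 1.1.1.1 within 60 seconds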

In order to address the second scenario EVENT_TYPE_A && EVENT_TYPE_B &&
EVENT_TYPE_C, you would use the following rule which is similar to the
previous example:

type=EventGroup3
ptype=RegExp
pattern=EVENT_TYPE_A ([\d.]+)
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_B ([\d.]+)
continue2=TakeNext
ptype3=RegExp
pattern3=EVENT_TYPE_C ([\d.]+)
continue3=TakeNext
desc=Events A, B and C observed for IP $1 within 60 seconds
action=write - %s
window=60

And as for the final scenario (EVENT_TYPE_A || EVENT_TYPE_B &&
EVENT_TYPE_D), everything depends on how to interpret it. Given the
precedence of logical operators, I would interpret it as EVENT_TYPE_A ||
(EVENT_TYPE_B && EVENT_TYPE_D), and in that case the following two rules
would be sufficient:

type=Single
ptype=RegExp
pattern=EVENT_TYPE_A ([\d.]+)
continue=TakeNext
desc=Event A observed for IP
action=write - %s

type=EventGroup2
ptype=RegExp
pattern=EVENT_TYPE_B ([\d.]+)
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_D ([\d.]+)
continue2=TakeNext
desc=Events B and D observed for IP $1 within 60 seconds
action=write - %s
window=60

However, if you are actually dealing with the scenario (EVENT_TYPE_A ||
EVENT_TYPE_B) && EVENT_TYPE_D, you could use the following two rules:

type=EventGroup2
ptype=RegExp
pattern=EVENT_TYPE_A ([\d.]+)
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_D ([\d.]+)
continue2=TakeNext
desc=Events A and D observed for IP $1 within 60 seconds
action=write - %s
window=60

type=EventGroup2
ptype=RegExp
pattern=EVENT_TYPE_B ([\d.]+)
continue=TakeNext
ptype2=RegExp
pattern2=EVENT_TYPE_D ([\d.]+)
continue2=TakeNext
desc=Events B and D observed for IP $1 within 60 seconds
action=write - %s
window=60

Finally, please note that I have used 'continue*=TakeNext' statements in
all rule definitions, since they are matching the same set of events and
the statements will allow events to be matched and processed by following
rules in the rule base (I have assumed that all 4 rules are in the same
rule file). The 'continue' statements are also necessary if in addition to
above 4 rules, you have also other rules in the remaining rule file which
need to match events A, B, C and D.

hope this helps,
risto

Agustín Lara Romero () wrote on Sun, 5 April 2020 at 02:58:

> Hi Risto,
>
> Yes, your suppositions in the points 1, 2 and 3 are correct.
> The events can appear in any order.
> The expected time window for this events is 60 seconds
>
> Kind regards,
> Agustín
>
>
> *Cc:* simple-evcorr-users@lists.sourceforge.net <
> simple-evcorr-users@lists.sourceforge.net>
> *Subject:* Re: [Simple-evcorr-users] IP correlation with EventGroup
>
> hi Agustin,
>
> Hi Risto,
> My name is Agustín, I'm working with the SEC and I have a problem that I
> can't solve.
> I have different events such as:
> EVENT_TYPE_A FROM 1.1.1.1
> EVENT_TYPE_A FROM 2.2.2.2
> EVENT_TYPE_B FROM 1.1.1.1
> EVENT_TYPE_B FROM 2.2.2.2
> EVENT_TYPE_C FROM 1.1.1.1
> EVENT_TYPE_C FROM 2.2.2.2
> EVENT_TYPE_D FROM 2.2.2.2
> FINISH
>
> And I want to get SEC to correlate the events for each IP when the FINISH
> event comes in with the following logic:
>
>
>- For each IP:
>   - INPUT (for the same IP): EVENT_TYPE_A && EVENT_TYPE_B
>     OUTPUT: MATCH_1 FOR IP
>   - INPUT (for the same IP): EVENT_TYPE_A && EVENT_TYPE_B && EVENT_TYPE_C
>     OUTPUT: MATCH_2 FOR IP
>   - INPUT (for the same IP): EVENT_TYPE_A || EVENT_TYPE_B && EVENT_TYPE_D
>     OUTPUT: MATCH_3 FOR IP
>
>
>
> Before 

Re: [Simple-evcorr-users] IP correlation with EventGroup

2020-04-04 Thread Risto Vaarandi
hi Agustin,

Hi Risto,
> My name is Agustín, I'm working with the SEC and I have a problem that I
> can't solve.
> I have different events such as:
> EVENT_TYPE_A FROM 1.1.1.1
> EVENT_TYPE_A FROM 2.2.2.2
> EVENT_TYPE_B FROM 1.1.1.1
> EVENT_TYPE_B FROM 2.2.2.2
> EVENT_TYPE_C FROM 1.1.1.1
> EVENT_TYPE_C FROM 2.2.2.2
> EVENT_TYPE_D FROM 2.2.2.2
> FINISH
>
> And I want to get SEC to correlate the events for each IP when the FINISH
> event comes in with the following logic:
>
>
>- For each IP:
>   - INPUT (for the same IP): EVENT_TYPE_A && EVENT_TYPE_B
>     OUTPUT: MATCH_1 FOR IP
>   - INPUT (for the same IP): EVENT_TYPE_A && EVENT_TYPE_B && EVENT_TYPE_C
>     OUTPUT: MATCH_2 FOR IP
>   - INPUT (for the same IP): EVENT_TYPE_A || EVENT_TYPE_B && EVENT_TYPE_D
>     OUTPUT: MATCH_3 FOR IP
>
>
>
Before suggesting anything, I'd like to clarify some details of the problem
you have. Have I understood correctly that you are dealing with the
following three scenarios?

1) if events of type A and type B are observed for the same IP address, you
would like to trigger an action for this IP address,
2) if events of type A, B and C are observed for the same IP address, you
would like to trigger an action for this IP address,
3) if you see either an event of type A, or events of type B and D for the
same IP address, you would like to trigger an action for this IP address.

Also, is the order of events important or can they appear in any order? And
what is the expected time window for these events? (Is it 60 seconds as
your rule example suggests?)

kind regards,
risto


Re: [Simple-evcorr-users] SEC CPU utilization

2020-04-03 Thread Risto Vaarandi
>
> I mentioned as an offhand remark to Risto a profile mode that would
> count not only every rule that led to an action, but every time the
> rule executed its regular expression. Having some sort of profile mode
> (not to be run in production) would help identify these sorts of
> issues.
>
>
I have a question about how to potentially implement this feature -- would
it be useful to calculate the total CPU time spent on processing the rule
in profile mode? That would include not only the CPU time for matching the
pattern against the event, but also the CPU time for evaluating the context
expression and executing the action list. Total CPU time would be more
useful if one is dealing with frequently matching rules (say, ones which
match 80-90% of incoming events) and wants to identify the most expensive
rule. Also, some pattern types like Cached and TValue are known to have a
low computational cost, but they are sometimes used with more complex
context expressions (e.g., involving the execution of custom Perl code).
Total CPU time would make it possible to identify such bottlenecks.
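
The accounting idea can be sketched as follows (a minimal Python analogue,
not SEC's actual Perl implementation; the timed() helper and the rule
identifier are made up for the sketch):

```python
import time

# Hypothetical per-rule CPU accounting: charge the CPU (not wall-clock)
# time of every processing phase -- pattern match, context expression
# evaluation, action list execution -- to the rule that triggered it.
cpu_time = {}  # rule identifier -> accumulated CPU seconds

def timed(rule_id, fn, *args):
    """Run fn(*args) and add the CPU time it consumed to rule_id's total."""
    t0 = time.process_time()
    try:
        return fn(*args)
    finally:
        cpu_time[rule_id] = cpu_time.get(rule_id, 0.0) \
                            + (time.process_time() - t0)

# Usage sketch: every phase of rule processing is wrapped in timed().
result = timed("rules.sec line 1", sum, range(100000))
```

The same counter can then be printed per rule when the dump file is
generated, which is essentially what the per-rule "CPU time" field below
reports.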

I've added some alpha-level code into SEC and here is a small example case
for a rule with badly written regular expression:

type=single
ptype=regexp
pattern=^([0-9]+,?)*[[:punct:]]$
desc=test pattern
action=write - test pattern has matched

The regular expression expects an event consisting of numbers that are
optionally separated by commas, with the sequence of numbers followed by a
single punctuation mark. However, when one provides a very long string
consisting of digits only, without the terminating punctuation mark, the
regular expression engine will attempt to divide the sequence into all
possible sub-sequences during the backtracking phase. Here is a fragment
from the SEC dump file that highlights this issue:

Performance statistics:

Run time: 142 seconds
User time: 63.92 seconds
System time: 0.05 seconds
Child user time: 0 seconds
Child system time: 0 seconds
Processed input lines: 1

Rule usage statistics:


Statistics for the rules from test3.sec
(loaded at Fri Apr  3 18:23:03 2020)

Rule 1 line 1 matched 0 events (test pattern) CPU time: 63.71

As one can see, SEC has processed just one event which has not matched its
only rule, but the unsuccessful matching process took more than a minute.
Of course, it is possible to eliminate the unnecessary backtracking by
using a possessive quantifier:
^([0-9]+,?)*+[[:punct:]]$
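
The blow-up itself is easy to reproduce outside SEC; here is a small
Python sketch of the same pattern (Python's re engine backtracks the same
way; [[:punct:]] is approximated with an explicit character class, and the
input is kept short enough for the failing match to return):

```python
import re
import time

# Digits optionally separated by commas, terminated by one punctuation
# mark; [[:punct:]] approximated with an explicit class for Python's re.
pat = re.compile(r'^([0-9]+,?)*[.,;:!?]$')

assert pat.match('12,34,56.') is not None  # well-formed input matches fast

s = '1' * 20                   # digits only, no terminating punctuation
t0 = time.process_time()
assert pat.match(s) is None    # fails, but only after trying ~2**19 splits
elapsed = time.process_time() - t0
# Each additional digit roughly doubles the failure time; the possessive
# form ([0-9]+,?)*+ (Perl, or Python >= 3.11) avoids the backtracking.
```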

Finding the CPU time spent on processing a rule has one caveat -- the
processing of Jump rules is a recursive process that might involve further
jumps into other rulesets, and the CPU time of a Jump rule would indicate
the cost of the entire recursive process. There is no easy way to subtract
the CPU time of child rules, since they might be invoked by other Jump
rules that have matched different events.

Does the idea outlined in this e-mail sound reasonable? I have also looked
into other implementation ideas, but the current one looks most useful and
involves the least amount of overhead in the code base.

kind regards,
risto


Re: [Simple-evcorr-users] SEC CPU utilization

2020-04-02 Thread Risto Vaarandi
hi Richard,


> We were doing log monitoring migration from HPOM to open-source monitoring
> tool, and using SEC for duplicate events flow reduction before passing to
> monitoring agent, in the manner as HPOM agent with built-in correlations
> was used, so the design of rules and correlations is tributary to how it
> was implemented in HPOM. There were hundreds to thousands of pattern
> conditions in HPOM per host, and the structure of their sections was as
> follows:
>
>- HPOM: suppress unmatched conditions -> SEC: Suppress with NRegExp
>- HPOM: suppress matched conditions -> SEC: Suppress with RegExp
>- HPOM: message conditions (with configured time-based correlations)
>-> SEC: Single with RegExp and GoTo -> duplicate suppress time-based
>correlations, each consisting of 3-4 subsequent rules (Single,
>PairWithWindow, SingleWithSuppress, depending on duplicate suppress
>correlation type)
>
> We decided to automate conversion of HPOM configurations to SEC rules, so
> here was not too much space for conceptual improvements over HPOM concepts
> (e.g. by doing deeper analysis of configurations and actual log traffic),
> and we relied on the premise, that those HPOM configurations are OK, and
> tuned by years of development and operations, so the automated conversion
> was 1:1.
>
> Cca 50 log files per host are of several types (according to message
> structure), but each file was monitored in HPOM independently on each
> other, therefore after 1:1 conversion also in SEC is each file monitoring
> independently, however, there is some maybe uglier "configuration
> redundancy" for log files of the same type, as it was in HPOM. The static
> order of conditions in HPOM is preserved also in generated SEC rules.
>
>
Since I have used HPOM in the past, perhaps I can offer some comments and
advice here. As far as I remember, the HPOM agent does not support a
hierarchical arrangement of log file monitoring policies and rules. You
mentioned that the existing HPOM configuration was converted on a 1:1
basis -- does that mean that SEC is configured to use a number of rule
files, where each rule file corresponds to some HPOM policy, and all rules
are applied against all input messages from the 50 input files? Since such
use of rules is not very efficient, perhaps you could introduce a
hierarchical rule arrangement in the following way. First, assign each
rule file to a configuration file set with an Options rule. For example,
if you have an HPOM policy called "sshd", you can use the following
Options rule in the beginning of the corresponding rule file:

type=Options
joincfset=sshd
procallin=no

Secondly, run SEC with the --intcontexts flag, which will set an internal
context for each input file (e.g., the file /var/log/secure will have the
context _FILE_EVENT_/var/log/secure by default, but you can also set a
custom context name in the command line). Finally, create a special rule
file (e.g., main.sec) which will route messages to relevant rule files.
For example, suppose that the HPOM policies "sshd" and "sudo" have been
used for /var/log/secure, and there are two SEC rule files that contain
Options rules as described above. For handling messages from
/var/log/secure, you can enter the following Jump rule into the main.sec
rule file:

type=Jump
context=[ _FILE_EVENT_/var/log/secure ]
ptype=TValue
pattern=True
desc=direct /var/log/secure events to relevant rules files
cfset=sshd sudo

Such a configuration ensures that messages from /var/log/secure are
matched against relevant rules only, and rule files not associated with
/var/log/secure will not be applied. Also, only the Jump rules from
main.sec are applied against the entire event stream, while rules from
other files receive their input from the Jump rules in main.sec and are
thus applied only to relevant messages. Perhaps you have already applied
this technique, but if not, I would definitely recommend using this
optimization, since it is very likely to reduce the CPU load.
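
To put the pieces together, SEC would then be started along these lines (a
hypothetical invocation; the rule file and input file names are examples,
and the --conf option can be given multiple times):

sec --intcontexts --conf=main.sec --conf=sshd.sec --conf=sudo.sec \
    --input=/var/log/secure --input=/var/log/messages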

hope this helps,
risto


Re: [Simple-evcorr-users] action-list checking if log file is already open by SEC

2020-04-02 Thread Risto Vaarandi
hi Richard,

let me provide a few comments here. You mentioned setting up an
associative array (Perl hash) that contains input file names; contexts are
actually implemented as a Perl hash in SEC code as well. Checking for the
presence of a context involves a search for a key in a hash, which is very
fast in Perl. Therefore, if you are concerned about search speed, using
your own hash would not offer significant performance improvements.
However, using a custom hash has one important advantage -- the internal
hash of SEC contexts has to be scanned periodically (by default once a
second), in order to detect expired contexts and execute their action
lists. If you store input file names in a custom hash, you don't have that
housekeeping overhead.

For estimating the impact of that overhead, you could run a couple of
tests with a larger number of contexts in your environment. I have just
done a couple of quick experiments on my 4-year-old laptop (Intel Core i5)
which involve creating N contexts and letting SEC run without any other
input for about 15-20 minutes, measuring the consumed CPU time at the end
of the experiment. With 1,000 contexts, the CPU was utilized by 0.4%, with
10,000 contexts the utilization was 2.2%, and with 100,000 contexts the
utilization reached 16.3%. Given those figures, I would say that if you
have only a few thousand contexts, housekeeping should not consume too
much of your CPU time. But if you want to employ contexts for different
purposes like implementing a large blacklist of IP addresses, a custom
Perl hash is probably a better way to go (and it would be easier to search
it from PerlFunc patterns).
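
To illustrate the last point, a custom hash can be maintained and searched
without any contexts (a sketch only; the %main::blacklist hash and the
event formats are made up for this example):

type=Single
ptype=RegExp
pattern=^blacklist add ([\d.]+)
desc=add $1 to the blacklist hash
action=lcall %o $1 -> ( sub { $main::blacklist{$_[0]} = 1 } )

type=Single
ptype=PerlFunc
pattern=sub { $_[0] =~ /^login from ([\d.]+)/ && \
              exists($main::blacklist{$1}) ? ($1) : 0 }
desc=login from blacklisted IP $1
action=write - login attempt from blacklisted IP $1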

hope this helps,
risto

Richard Ostrochovský () wrote on Thu, 2 April 2020 at 01:18:

> Hi Risto,
>
> thank you for the solution. There is also concern about potential
> performance impact in case of e.g. thousands of files being added by
> addinput with creating extra context for each of them.
>
> Another way could be e.g. maintaining (associative) array in action list,
> with keys of paths to addinput files, reduced by dropinput, similarly as
> you suggest with context creation.
>
> And maybe the most efficient, having built-in way, how to access this list
> of open files, without custom implementation of it.
>
> How do you see these potential options in comparison by performance?
>
> Richard
>
> On Thu, 20 Feb 2020 at 21:23, Risto Vaarandi wrote:
>
>> hi Richard,
>>
>> I think this scenario is best addressed by creating a relevant SEC
>> context when 'addinput' action is called. In fact, handling such scenarios
>> is one of the purposes of contexts, and here is an example rule which
>> illustrates this idea:
>>
>> type=single
>> ptype=regexp
>> pattern=start monitoring (\S+)
>> context=!FILE_$1_MONITORED
>> desc=add $1 to list of inputs
>> action=addinput $1; create FILE_$1_MONITORED
>>
>> Whenever "start monitoring " event appears, the rule will match
>> only if context FILE__MONITORED does not exist. If rule matches,
>> it executes 'addinput' action for the given file and creates the context,
>> in order to manifest the fact that 'addinput' has already been executed for
>> the given file. Also, as you can see from the above rule, the presence of
>> the context for a file will prevent the execution of 'addinput' again for
>> this file. In order to keep contexts in sync with files that are monitored,
>> the context for a file should be deleted when 'dropinput' action is
>> executed for it.
>>
>> Note that when HUP signal is received, SEC will stop monitoring input
>> files set up with 'addinput' action. However, on receiving HUP signal SEC
>> will also drop all its contexts, so there is no need to take any extra
>> steps in that case.
>>
>> Hope this helps,
>> risto
>>
>> Richard Ostrochovský () wrote on Thu, 20 February 2020 at 21:43:
>>
>>> Hello Risto and friends,
>>>
>>> having mechanism for dynamic opening (addinput) and closing (dropinput)
>>> files, I would like to be able to check, if the file is already opened,
>>> before trying to open it again, to avoid it. That way I would like to
>>> eliminate this error message from SEC log (present also in debug level 3):
>>>
>>> Dynamic input file '/path/to/file.log' already exists in the list of
>>> inputs, can't add
>>>
>>> This information is present in sec.dump, but maybe there exists more
>>> instant and straightforward way how to achieve it (without parsing
>>> intermediary files).
>>>
>>> Thank you.
>>>
>>> Richard
>>> 

[Simple-evcorr-users] updates to SEC FAQ

2020-04-01 Thread Risto Vaarandi
hi all,

SEC FAQ has received couple of updates:
*) Q24 (https://simple-evcorr.github.io/FAQ.html#24) that describes the use
of 'addinput' and 'dropinput' actions has been updated with a second
example about tracking log files with timestamps in file names,
*) new entry Q27 (https://simple-evcorr.github.io/FAQ.html#27) has been
added that provides an example about receiving input events with socat tool
from TCP and UDP ports.

kind regards,
risto


Re: [Simple-evcorr-users] Interesting command line behaviour

2020-03-04 Thread Risto Vaarandi
hi James,

you are observing this behavior because the --detach option involves
changing the working directory to the root directory (that's a standard
part of turning the process into a daemon). Actually, when you look at the
debug messages from SEC, there is also a message about the directory
change in the terminal. Since the input file and rule file names are
relative, they will not be found in /, and to fix this issue, absolute
paths have to be used with the --detach option.

hope this helps,
risto

James Lay () wrote on Wed, 4 March 2020 at 21:58:

> Hey all,
>
> So... I'm testing some stuff. When I have an entry with logonly, things
> work fine.  I'm trying to run sec against a file and have the pipe
> action work, but the pipe command is echoed to the screen.  As I was
> testing I noticed something odd... in a dir I have sec-test.conf and
> tabbedfile.  Command:
>
> sec --conf=sec-test.conf --input=tabbedfile
> SEC (Simple Event Correlator) 2.8.2
> Reading configuration from sec-test.conf
> 4 rules loaded from sec-test.conf
> No --bufsize command line option or --bufsize=0, setting --bufsize to 1
> Opening input file tabbedfile
> Interactive process, SIGINT can't be used for changing the logging level
>
> however when I try --detach I get:
>
> sec --conf=sec-test.conf  --input=tabbedfile --detach
> SEC (Simple Event Correlator) 2.8.2
> Changing working directory to /
> Reading configuration from sec-test.conf
> Can't open configuration file sec-test.conf (No such file or directory)
> No --bufsize command line option or --bufsize=0, setting --bufsize to 1
> Opening input file tabbedfile
> Input file tabbedfile does not exist!
>
> No matter where in the command I put --detach, tabbedfile suddenly is not
> found.  Odd behaviour.  Thank you.
>
> James
>
>
>


Re: [Simple-evcorr-users] action-list checking if log file is already open by SEC

2020-02-20 Thread Risto Vaarandi
hi Richard,

I think this scenario is best addressed by creating a relevant SEC context
when 'addinput' action is called. In fact, handling such scenarios is one
of the purposes of contexts, and here is an example rule which illustrates
this idea:

type=single
ptype=regexp
pattern=start monitoring (\S+)
context=!FILE_$1_MONITORED
desc=add $1 to list of inputs
action=addinput $1; create FILE_$1_MONITORED

Whenever "start monitoring " event appears, the rule will match
only if context FILE__MONITORED does not exist. If rule matches,
it executes 'addinput' action for the given file and creates the context,
in order to manifest the fact that 'addinput' has already been executed for
the given file. Also, as you can see from the above rule, the presence of
the context for a file will prevent the execution of 'addinput' again for
this file. In order to keep contexts in sync with files that are monitored,
the context for a file should be deleted when 'dropinput' action is
executed for it.

Note that when HUP signal is received, SEC will stop monitoring input files
set up with 'addinput' action. However, on receiving HUP signal SEC will
also drop all its contexts, so there is no need to take any extra steps in
that case.
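
For completeness, a symmetric rule could close the file and delete the
context again (a sketch following the same convention; the "stop
monitoring" trigger event is hypothetical):

type=single
ptype=regexp
pattern=stop monitoring (\S+)
context=FILE_$1_MONITORED
desc=remove $1 from the list of inputs
action=dropinput $1; delete FILE_$1_MONITORED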

Hope this helps,
risto

Richard Ostrochovský () wrote on Thu, 20 February 2020 at 21:43:

> Hello Risto and friends,
>
> having mechanism for dynamic opening (addinput) and closing (dropinput)
> files, I would like to be able to check, if the file is already opened,
> before trying to open it again, to avoid it. That way I would like to
> eliminate this error message from SEC log (present also in debug level 3):
>
> Dynamic input file '/path/to/file.log' already exists in the list of
> inputs, can't add
>
> This information is present in sec.dump, but maybe there exists more
> instant and straightforward way how to achieve it (without parsing
> intermediary files).
>
> Thank you.
>
> Richard
>


Re: [Simple-evcorr-users] How to introduce new match variable

2020-02-19 Thread Risto Vaarandi
hi Dusan,

you can find my comments below:

>
> I have tried to add a new variable using the “context” field and the :>
> operator, and also using the “lcall” action, but no luck.
> Any idea how to achieve this?
>
> This is what I have produced so far:
>
> Config file: dusko.sec
> 
> rem=Rule 1
> type=Single
> ptype=RegExp
> pattern=^(?<EVENT>\S+) (?<SEVERITY>\S+)$
> varmap=MY_EVENT
> continue=TakeNext
> desc=Parsing Event
> action=write - R1: Parsing event: $+{EVENT} $+{SEVERITY}
>
> rem=Rule 2
> type=Single
> ptype=Cached
> pattern=MY_EVENT
> context=MY_EVENT :> ( sub { return $_[0]->{"NEW"} = "new_entry"; } )
> desc=Introducing new variable
> action=lcall %o MY_EVENT -> ( sub { $_[0]->{"NEW"} = "value" } ); \
> write - R2: NEW = $+{NEW}
>

Rule #2 does not have the expected effect, since SEC rule matching
involves several steps in the following order:
1) the pattern is matched against an incoming event
2) if the pattern matched the event, match variable values are collected
for substitutions (e.g., substitutions in the 'context' field of the rule)
3) the context expression of the rule (provided with the 'context' field)
is evaluated

If any new match variables are created during step 3, they are not used
for substitutions within the current rule, since the set of match
variables and their values was fixed during the previous step. However,
the match variables would be visible in the following rules. In order to
make a variable visible immediately in the current rule, you can enclose
the context expression in square brackets [ ], which means that the
context expression is evaluated *before* the pattern match (in other
words, step 3 is now taken before step 1). For example:

rem=Rule 2
type=Single
ptype=Cached
pattern=MY_EVENT
context=[ MY_EVENT :> ( sub { return $_[0]->{"NEW"} = "new_entry"; } ) ]
desc=Introducing new variable
action=write - R2: NEW = $+{NEW}

The use of the [ ] operator involves one caveat -- since match variables
(e.g., $1 or $2) are produced by the pattern match, they will not have any
values yet when the context expression is evaluated, and are therefore not
substituted. However, this is not a problem for the above rule, since its
context expression contains no references to match variables (such as $1
or $+{NEW}).

>
> Also if I want to replace “->” with “:>” for lcall action:
> action=lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } ); \
> write - R2: NEW = $+{NEW}
>
> I got compilation error:
> Rule in ./dusko.sec at line 10: Eval '{"NEW"} = "value" } )' didn't
return a code reference: syntax error at (eval 9) line 1, near "} ="
> Unmatched right curly bracket at (eval 9) line 1, at end of line
> Rule in ./dusko.sec at line 10: Invalid action list ' lcall %o MY_EVENT
:> ( sub { $_[0]->{"NEW"} = "value" } ); write - R2: NEW = $+{NEW} '

This is because the :> operator for the 'lcall' action was introduced in
sec-2.8.0 and is not supported by previous versions (such as sec-2.7.X).
When I tried your rule with sec-2.8.2, everything worked fine, but testing
it with sec-2.7.12 produced the same error message. Therefore I suspect
that you have an earlier version than 2.8.0, and would recommend upgrading
to 2.8.2 (the latest version). But with the above workaround, you would
not need the 'lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } )'
action anyway.

Hope this helps,
risto

>
> Thanks for any help,
> Dusan
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] how not to keep monitored files permanently open (not only) on NFS

2020-02-07 Thread Risto Vaarandi
hi Richard,

In this context I am also curious, what would be the effect of using
> --check-timeout / --poll-timeout, if the log file will be closed or remain
> open during timeout... I am trying to find a way, how to use SEC in "close
> after read" mode - used to use this mode in previous log event correlation
> solution, because keeping log files "always open" causes described problem
> with their deletion (by external archivation script) on NFS...
>
> From SEC manual: "Each input file is tracked both by its name and i-node,
> and input file rotations are handled seamlessly. If the input file is
> recreated or truncated, SEC will reopen it and process its content from the
> beginning. If the input file is removed (i.e., there is just an i-node left
> without a name), SEC will keep the i-node open and wait for the input file
> recreation."
>
> Maybe it would be sufficient having an option to (immediately?) close
> (re)moved file, instead of keeping original i-node open until its
> recreation in its original location.
>
>
This behavior is intentional and necessary in order not to miss events
that are written into the input file. For example, consider the following
situation:
1) process X is running and writing its events into a log file which is
monitored by SEC
2) a log rotation tool (e.g., logrotate) deletes the log file
3) the log rotation tool sends a signal to process X, forcing the process
to reopen the log file (this step recreates the log file on disk)
Note that after step 2 we have a situation where process X is still
writing into the nameless file and could log additional events that SEC
needs to process. Therefore, closing the log file immediately, without
waiting for the new log file to appear on disk, involves the risk of
missing events. That risk increases with custom log rotation scripts which
might involve a larger time gap between steps 2 and 3. One could also
imagine other similar scenarios, like accidental removal of a log file
from disk, and that is the reason why SEC does not close the log file when
its name disappears from the directory tree.
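The effect of keeping a handle to a deleted file can be demonstrated
outside SEC. The following Python sketch (an illustration for POSIX
systems, not SEC code) shows that events written into a nameless file
remain readable through a handle that was opened before the deletion:

```python
import os
import tempfile

# Create a log file and open it twice: one handle simulates the writer
# (process X), the other simulates the reader (SEC).
fd, path = tempfile.mkstemp()
os.close(fd)
writer = open(path, "w")
reader = open(path, "r")

writer.write("event 1\n")
writer.flush()

os.remove(path)  # the file is now nameless, but both handles stay valid

writer.write("event 2\n")  # the writer can still log events
writer.flush()

data = reader.read()  # the reader still sees everything, including event 2
writer.close()
reader.close()
```

Closing the reader at the moment of deletion would have lost "event 2",
which is exactly the risk described in the rotation scenario above.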

Hope this helps,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] how not to keep monitored files permanently open (not only) on NFS

2020-02-04 Thread Risto Vaarandi
hi Richard,

I have never used SEC for monitoring files on NFS file systems, but I can
provide a few short comments on how input files are handled. After SEC has
successfully opened an input file, it is kept open permanently. When the
input file is removed or renamed, it is still kept open (at that point the
file handle points to disk blocks that no longer have a name). When a new
file appears on disk with the same name as the original input file, SEC
closes the file handle pointing to the nameless disk blocks and opens the
new input file, processing it from the beginning. However, this operation
is atomic, and the input file will never show as "Closed" in the dump
file. The status "Closed" in the dump file usually indicates that SEC was
not able to open the input file when it was started or restarted with a
signal (a common situation when the file does not exist or there is no
permission to open it), but it can also indicate that SEC has closed the
file due to an IO error when reading from it (that should normally not
happen and usually means a serious disk issue).

In order to find out why the input file is in a closed state, I would
recommend checking the SEC log (you can set it up with the --log command
line option). For example, if the input file did not exist when SEC was
started or restarted with a signal, the following lines should appear in
the log:

Opening input file test.log
Input file test.log does not exist!

Also, if a file is closed due to an IO error, there is a relevant message
in the log, for example: Input file test.log IO error (Input/output
error), closing the file.

Hope these comments are helpful,
risto


Hi Risto and friends,
>
> I am unsure about one conceptual question about how SEC keeps open
> monitored files.
>
> Using SEC as systemd service, when files stored in NFS (opened via
> addinput) being watched by SEC are moved elsewhere, and then their removal
> is tried, NFS persistently keeps .nfsNUMBER files in existence, unable to
> remove whole folder of them. This functionality of NFS is described e.g.
> here:
> https://www.ibm.com/su
> pport/pages/what-are-nfs-files-accumulate-and-why-cant-they-be-deleted-even-after-stopping-cognos-8
> 
>
>
> It seems, that SEC running as service (not re-opening and closing each
> monitored file in each "run") is keeping watched files permanently open,
> and it is not desirable in such setup.
>
> But: when I look into the dump, some files have "status: Open" and some
> "status: Closed", so maybe this my assumption about log files permanently
> opened by SEC is not correct - I am confused.
>
> How it is in reality? Has somebody experience with described behaviour
> (SEC+NFS, but it could arise also in other setups) and how to overcome it?
>
> Thank you.
>
> Richard
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] SEC rules performance monitoring and tuning

2020-02-04 Thread Risto Vaarandi
hi Richard,

That's an interesting question. Dump files in JSON format have been
supported only across a few recent versions (from 2.8.0 to 2.8.2) and
don't have a long history behind them, but so far their format has stayed
the same. As for dump files in text format, there have been a number of
changes since early versions of SEC, but most of these changes have been
related to adding sections with new data into the dump file, not changing
the format of existing information. I did a quick test with version 2.6.2
(released in 2012) and version 2.8.2 (the most recent version), and the
only difference which can influence parsing is how rule usage statistics
are represented. In version 2.6.2, a single space character is used for
separating words in each printed line, and rule usage statistics look as
follows:

Statistics for the rules from test.sec
(loaded at Tue Feb  4 17:54:34 2020)

Rule 1 at line 1 (test) has matched 1 events
Rule 2 at line 7 (test) has matched 0 events
Rule 3 at line 16 (test) has matched 0 events
Rule 4 at line 23 (test) has matched 0 events

However, in version 2.8.2 the numerals in rule usage statistics are
space-padded on the left, and the width of each numeric field is
determined by the number of characters in the largest numeral (that format
change was introduced in 2013 for readability reasons:
https://sourceforge.net/p/simple-evcorr/mailman/message/30185995/). For
example, if you have three rules that have matched 1, 823 and 7865 times,
the first numeral is printed as "   1", the second one as " 823", and the
third one as "7865". Here is an example of rule usage statistics for
version 2.8.2:

Statistics for the rules from test.sec
(loaded at Tue Feb  4 17:53:30 2020)

Rule 1 line  1 matched 1 events (test)
Rule 2 line  7 matched 0 events (test)
Rule 3 line 16 matched 0 events (test)
Rule 4 line 23 matched 0 events (test)
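A monitoring script that has to survive both layouts can normalize the
statistics lines with two regular expressions. The sketch below is in
Python; parse_stat_line is a hypothetical helper, and both patterns are
inferred from the two sample listings above rather than taken from SEC
documentation:

```python
import re

# 2.6.2 format: Rule 1 at line 1 (test) has matched 1 events
OLD_FMT = re.compile(r"Rule (\d+) at line (\d+) \((.+)\) has matched (\d+) events")
# 2.8.2 format: Rule 1 line  1 matched 1 events (test)  (numerals space-padded)
NEW_FMT = re.compile(r"Rule (\d+) line\s+(\d+) matched\s+(\d+) events \((.+)\)")

def parse_stat_line(line):
    """Return (rule, line_number, rule_desc, match_count), or None if the
    line is not a rule usage statistics line in either format."""
    m = OLD_FMT.match(line)
    if m:
        rule, lineno, desc, count = m.groups()
        return int(rule), int(lineno), desc, int(count)
    m = NEW_FMT.match(line)
    if m:
        rule, lineno, count, desc = m.groups()
        return int(rule), int(lineno), desc, int(count)
    return None
```

The \s+ in the second pattern absorbs the left padding, so the same
parser works regardless of how wide the numeric fields are.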

The other differences between dump files involve some extra data that is
printed by version 2.8.2 (for example, the effective user and group IDs of
the SEC process). So one can say that the format of the textual dump file
has not gone through major changes during the last 7-8 years, and also,
there are no plans for such changes in upcoming versions.

kind regards,
risto


Hi Risto,
>
> thank you for positive answer - dumping period in minutes in enough, not
> needed many times per second. And also for useful tips - I already noticed
> JSON option, just considering, that default is maybe more universally
> usable, because extra modules (JSON.pm) may not be installed or installable
> on any systems (not only due technical reasons, but also security policies
> - more code auditing needed). Maybe the question also could be, how
> frequent are changes of dump file formats during SEC development, which
> could break monitoring implemented for particular version of SEC.
>
> Richard
>
> On Tue, 28 Jan 2020 at 12:40, Risto Vaarandi
> wrote:
>
>> hi Richard,
>>
>> as I understand from your post, you would like to create SEC dump files
>> periodically, in order to monitor performance of SEC based on these dump
>> files. Let me first provide some comments on performance related question.
>> Essentially, creation of dump file involves a pass over all major internal
>> data structures, so that summaries about internal data structures can be
>> written into dump file. In fact, SEC traverses all internal data structures
>> for housekeeping purposes anyway once a second (you can change this
>> interval with --cleantime command line option). Therefore, if you re-create
>> the dump file after reasonable time intervals (say, after couple of
>> minutes), it wouldn't noticeably increase CPU consumption (things would
>> become different if dumps would be generated many times a second).
>>
>> For production use, I would provide couple of recommendations. Firstly,
>> you could consider generating dump files in JSON format which might make
>> their parsing easier. That feature was requested couple of years ago
>> specifically for monitoring purposes (
>> https://sourceforge.net/p/simple-evcorr/mailman/message/36334142/), and
>> you can activate JSON format for dump file with --dumpfjson command line
>> option. You could also consider using --dumpfts option which adds numeric
>> timestamp suffix to dump file names (e.g., /tmp/sec.dump.1580210466). SEC
>> does not overwrite the dump file if it already exists, so having timestamps
>> in file names avoids the need for dump file rotation (you would still need
>> to delete old dump files from time to time, though).
>>
>> Hopefully this information is helpful.
>>
>> kind regards,
>> risto

Re: [Simple-evcorr-users] SEC rules performance monitoring and tuning

2020-01-28 Thread Risto Vaarandi
hi Richard,

as I understand from your post, you would like to create SEC dump files
periodically, in order to monitor the performance of SEC based on these
dump files. Let me first provide some comments on the performance-related
question. Essentially, creation of a dump file involves a pass over all
major internal data structures, so that summaries about them can be
written into the dump file. In fact, SEC traverses all internal data
structures for housekeeping purposes once a second anyway (you can change
this interval with the --cleantime command line option). Therefore, if you
re-create the dump file after reasonable time intervals (say, every couple
of minutes), it won't noticeably increase CPU consumption (things would
become different if dumps were generated many times a second).

For production use, I would offer a couple of recommendations. Firstly,
you could consider generating dump files in JSON format, which might make
their parsing easier. That feature was requested a couple of years ago
specifically for monitoring purposes (
https://sourceforge.net/p/simple-evcorr/mailman/message/36334142/), and
you can activate JSON format for the dump file with the --dumpfjson
command line option. You could also consider using the --dumpfts option
which adds a numeric timestamp suffix to dump file names (e.g.,
/tmp/sec.dump.1580210466). SEC does not overwrite the dump file if it
already exists, so having timestamps in file names avoids the need for
dump file rotation (you would still need to delete old dump files from
time to time, though).
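Since old timestamped dump files have to be cleaned up by hand, a small
helper along these lines could be scheduled from cron (a Python sketch;
purge_old_dumps is a hypothetical function, and the /tmp/sec.dump prefix
and seven-day retention period are assumptions to adjust for your setup):

```python
import glob
import os
import time

def purge_old_dumps(pattern="/tmp/sec.dump.*", max_age_seconds=7 * 86400):
    """Delete dump files whose numeric timestamp suffix (--dumpfts naming)
    is older than max_age_seconds; return the list of removed paths."""
    now = time.time()
    removed = []
    for path in glob.glob(pattern):
        suffix = path.rsplit(".", 1)[-1]
        # only touch files whose suffix is a pure epoch timestamp
        if suffix.isdigit() and now - int(suffix) > max_age_seconds:
            os.remove(path)
            removed.append(path)
    return removed
```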

Hopefully this information is helpful.

kind regards,
risto

Richard Ostrochovský () wrote on Mon, 27 January 2020 at 22:04:

> Greetings, friends,
>
> there is an idea to monitor high-volume event flows (per rules or log
> files, e.g. in bytes per time unit) from dump files from *periodic
> dumping*.
>
> The question is, if this is recommended for *production use* - answer
> depends at least on if dump creation somehow slows down or stops SEC
> processing, or it has not significant overhead (also for thousands of
> rules, complex configurations).
>
> Are there some best practises for (or experiences with) performance
> self-monitoring and tuning of SEC?
>
> Thank you.
>
> Richard
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] SEC + AI (machine learning)

2020-01-23 Thread Risto Vaarandi
hi Richard,

Next step would be integrating AI (machine learning) with SEC somehow, so
> that user won't need to configure correlations statically, but they would
> configure and self-optimize automatically. (There still could be some input
> needed from the user, but system would be also able to react on changing
> log traffic, and self-evolve.)
>
> Something like ELK+AI is usable in the log monitoring area.
>
> Maybe some integration with MXNet?
>
> http://blogs.perl.org/users/sergey_kolychev/2017/02/machine-learning-in-perl.html
>
> Does anybody have any experience in this area, to explain some more or
> less theoretical or practical setup of AI-generated SEC rules? (I am pretty
> sure, that this is out of scope of SEC itself, and SEC would'nt know, that
> AI is dynamically generating its rules on the background and probably
> nobody has working solution, but maybe we could invent something together.)
>
>
Machine learning is a very wide area with a large number of different
methods and algorithms around. These methods and algorithms are usually
divided into two large classes:
*) supervised algorithms which assume that you provide labeled data for
learning (for example, a log file where some messages are labeled as
"normal" and some messages as "system_fault"), so that the algorithm can
learn from labeled examples how to distinguish normal messages from errors
(note that in this simplified example, only two labels were used, but in
more complex cases you could have more labels in play)
*) unsupervised algorithms which are able to distinguish anomalous or
abnormal messages without any previous training with labeled data
So my first question is -- what is your actual setup, and do you have the
opportunity of using training data for supervised methods, or are
unsupervised methods a better choice? After answering this question, you
can start studying the most promising methods more closely.

Secondly, what is your actual goal? Do you want to:
1) detect an individual anomalous message or a time frame containing
anomalous messages from event logs,
2) produce a warning if the number of messages from specific class (e.g.
login failures) per N minutes increases suddenly to an unexpectedly large
value,
3) use some tool for (semi)automated mining of new SEC rules,
4) something else?

For achieving the first goal, there is no silver bullet, but perhaps I can
provide a few pointers to relevant research papers (note that there are
many other papers in this area):
https://ieeexplore.ieee.org/document/4781208
https://ieeexplore.ieee.org/document/7367332
https://dl.acm.org/doi/10.1145/3133956.3134015

For achieving the second goal, you could consider using time series
analysis methods. You could begin with a very simple moving-average-based
method like the one described here:
https://machinelearnings.co/data-science-tricks-simple-anomaly-detection-for-metrics-with-a-weekly-pattern-2e236970d77
or you could employ more complex forecasting methods (before starting, it
is probably a good idea to read this book on forecasting:
https://otexts.com/fpp2/)
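A minimal version of such a moving-average check can be sketched in a few
lines (illustrative Python only; the window size and the three-standard-
deviation threshold are arbitrary choices, not taken from the linked
article):

```python
def is_anomalous(history, new_value, window=10, threshold=3.0):
    """Flag new_value if it deviates from the moving average of the last
    `window` counts by more than `threshold` standard deviations."""
    recent = list(history)[-window:]
    if len(recent) < window:
        return False  # not enough history to judge
    mean = sum(recent) / window
    variance = sum((x - mean) ** 2 for x in recent) / window
    stddev = variance ** 0.5
    return abs(new_value - mean) > threshold * stddev
```

Fed with per-minute counts of some message class (e.g., login failures),
such a function would flag a sudden jump well above the recent baseline.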

If you want to mine new rules or knowledge for SEC (or for other tools)
from event logs, I have actually done some previous research in this
domain. Perhaps I can point you to a log mining utility called LogCluster
(https://ristov.github.io/logcluster/) which allows for mining line
patterns and outliers from textual event logs. Also, a couple of years
ago, an experimental system was created which used LogCluster in a fully
automated way for creating SEC Suppress rules, where these rules were
essentially matching normal (expected) messages. Any message not matching
these rules was considered an anomaly and was logged separately for manual
review. Here is the paper that provides an overview of this system:
https://ristov.github.io/publications/noms18-log-anomaly-web.pdf

Hopefully these pointers will offer you some guidance on what your precise
research question could be, and what the most promising avenue for
continuing is. My apologies if my answer raises new questions, but machine
learning is a very wide area with a large number of methods for many
different goals.

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] using original message in TValue rule action

2020-01-20 Thread Risto Vaarandi
hi Richard,

there are several pattern types like TValue and SubStr which have been
designed for fast matching and which do not support match variables
(including $0). Handling of match variables involves additional
computational cost, since after a successful match, all variables in the
rule definition have to be substituted with values from the match. So if
you want to get the entire line that reaches a given rule, the simplest
solution is to use the $0 variable with the RegExp pattern .? or .
Since you also mentioned the idea of defining a special action list
variable (%e), such a variable would involve the following subtle issue.
Unlike match variables, action list variables get substituted not
immediately after the match, but at the moment when the action list gets
executed. However, there are rules where action list execution can happen
much later than the match against the rule -- just consider the 'action'
field of the PairWithWindow rule, where execution is triggered by reading
the system clock, not by a pattern match. A similar issue comes up with
the Calendar rule, which doesn't have a pattern at all, or with action
lists of SEC contexts when contexts expire. If the action list variable
were defined as "input line currently under processing", such a variable
would not make much sense when action list execution is triggered by the
system clock. For this reason, using $0 is a better solution, since it is
substituted immediately after a successful pattern match.
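As an illustration, the TValue rule from the question could be rewritten
along these lines (a sketch only; the context, desc and action fields are
copied from the rule in the question, and the RegExp pattern . makes $0
available at the cost of a regular expression match -- .? would also
accept empty lines):

```text
type=Single
ptype=RegExp
pattern=.
context=SVC_:tmp::home:user:somelog.log#MULTI-LINE && \
        SVC_:tmp::home:user:somelog.log#MULTI-LINE_MESSAGE
desc=SVC_:tmp::home:user:somelog.log#MULTI-LINE_MESSAGE lines filter
action=add (SVC_:tmp::home:user:somelog.log#MULTI-LINE_MESSAGE) $0
```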

kind regards,
risto

Kontakt Richard Ostrochovský () kirjutas
kuupäeval E, 20. jaanuar 2020 kell 17:40:

> Hello,
>
> I was find out the answer in manual and also archive of this forum, but
> without success, and the question seems very basic to me, so I assume 2 (3)
> possible alternative answers:
>
> - it is so easy, that I will bang my head
> - it is not possible at all (in current version of SEC)
> - (RegExp .* is equally efficient as TValue)
>
> Assuming, that using TValue instead of RegExp or any other rule type in
> cases, where I don't need filtering of or extraction from log messages, is
> most computing power efficient, I am trying to find out a straightforward
> way, how to use the original event text in event action of TValue rule.
>
> $0 seems not to be working for TValue (I understand, that it is
> RegExp-specific) in rule like this:
>
> type=Single
> ptype=TValue
> pattern=TRUE
> context=SVC_:tmp::home:user:somelog.log#MULTI-LINE &&
> SVC_:tmp::home:user:somelog.log #MULTI-LINE_MESSAGE
> desc= SVC_:tmp::home:user:somelog.log #MULTI-LINE_MESSAGE lines filter
> action=add ( SVC_:tmp::home:user:somelog.log #MULTI-LINE_MESSAGE) $0
>
> $0 literally is added to context in this case.
>
> ("#" is meant to be part of the context name, not any kind of comment.)
>
> Does somebody have any advice, how to use original event text in
> TValue-type rule, without "compromising" the performance? (Assuming, that
> the easiest solution, the replacement of TValue with RegExp nad TRUE with
> .* would do the job, but won't be as fast as TValue.) Maybe new predefined
> variable could be available (e.g. %e as event) independently on rule type.
>
> Thank you in advance.
>
> Richard
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Public Docker image for SEC

2019-12-16 Thread Risto Vaarandi
hi Andres,
so far, the official sec distribution has not had a docker image, since
sec is packaged for common Linux and BSD distributions, and it doesn't
have many dependencies (just standard Perl is needed, without any exotic
modules). That has made sec very easy to deploy.
I had a quick look into the repository, and it seems that the container
runs sec as a process connected to rsyslog, with rsyslog acting as an
event collector for sec. Is my understanding correct? If so, is the
container targeted at collecting logs in a small trusted network segment
where encryption is not needed? I noticed that rsyslog has been configured
to receive logs in plain text, hence the question.
kind regards,
risto

Andres Pihlak () wrote on Mon, 16 December 2019 at 12:40:

> Hello,
>
> I had a need for SEC docker container because it makes life much easier.
> Unfortunately, there isn't public image for that so I created it myself.
> Furthermore, I added CI pipeline to build those docker images. Repository
> is here: https://github.com/apihlak/SEC. Is there plan to add official
> Dockerfile and image to simple-evcorr repository or is there any
> suggestions to make my repository better to help out community?
>
> Regards,
> Andres
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] RegExp modifiers

2019-12-09 Thread Risto Vaarandi
hi Richard,

Richard Ostrochovský () wrote on Mon, 9 December 2019 at 01:57:

> Hello colleagues,
>
> I was searching for the answer here:
> https://simple-evcorr.github.io/man.html
> https://sourceforge.net/p/simple-evcorr/mailman/simple-evcorr-users/
> and haven't found the answer, so I'am putting new question here:
>
> Does SEC in pattern= parameters support RegExp modifiers (
> https://perldoc.perl.org/perlre.html#Modifiers) somehow?
>

If you enclose a regular expression within /.../, SEC does not treat the
slashes as separators but rather as parts of the regular expression;
therefore you can't provide modifiers at the end of the regular expression
after /. However, Perl regular expressions allow modifiers to be provided
with the (?modifiers) construct. For example, the following pattern
matches the string "test" in a case insensitive way:
pattern=(?i)test
In addition, you can use such modifiers anywhere in the regular
expression, which makes them more flexible than modifiers after /. For
example, the following pattern matches the strings "test" and "tesT":
pattern=tes(?i)t

In SEC FAQ, there is also a short discussion on this topic:
https://simple-evcorr.github.io/FAQ.html#13)
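The same inline-modifier syntax can be tried out quickly in Python's re
module, which borrows it from Perl (one caveat of this illustration:
recent Python versions accept a whole-pattern (?i) only at the very start,
so the mid-pattern Perl form tes(?i)t is written here with the scoped
variant (?i:...) instead):

```python
import re

# whole-pattern case insensitivity, equivalent to pattern=(?i)test
assert re.match(r"(?i)test", "TEST")

# scoped case insensitivity: only the trailing "t" is case insensitive,
# so "test" and "tesT" match but "TEST" does not
assert re.match(r"tes(?i:t)", "test")
assert re.match(r"tes(?i:t)", "tesT")
assert re.match(r"tes(?i:t)", "TEST") is None
```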


> E.g. modifiers /x or /xx allow writing more readable expressions by
> ignoring unescaped whitespaces (implies possible multi-line regular
> expressions). It could be practical in case of more complex expressions, to
> let them being typed more legibly. Some simpler example:
>
> pattern=/\
> ^\s*([A-Z]\s+)?\
> (?\
>(\
>   ([\[\d\-\.\:\s\]]*[\d\]]) |\
>   (\
>  (Mon|Tue|Wed|Thu|Fri|Sat|Sun) \s+\
>  (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \s+ \d+ \s+
> \d\d:\d\d:\d\d \s+ ([A-Z]+\s+)?\d\d\d\d\
>   )\
>)\
> ) (?.*)/x
>
>
It is a tricky question, since the SEC configuration file format natively
allows regular expressions to be provided on multiple lines without the
use of the (?x) modifier. If any line in a rule definition ends with a
backslash, the following line is appended to the current line and the
backslash is removed during configuration file parsing. For example, the
following two pattern definitions are equivalent:

pattern=test: \
(\S+) \
(\S+)$

pattern=test: (\S+) (\S+)$

However, it is important to remember that SEC converts multi-line rule
fields into single-line format before any other processing, and that
includes compiling regular expressions. In other words, if you consider
the first multi-line regular expression pattern definition above, SEC
actually sees it as "test: (\S+) (\S+)$" when it compiles this
expression. This introduces the following caveat -- when using the (?x)
modifier for introducing a comment into a multi-line regular expression,
the expression is converted into single-line format before it is compiled
and (?x) takes effect, and therefore the comment will unexpectedly run
until the end of the regular expression. Consider the following example:

pattern=(?x)test:\
# this is a comment \
(\S+)$

Internally, this definition is first converted to single line format:
pattern=(?x)test:# this is a comment (\S+)$
However, this means that without the comment the expression looks like
(?x)test:
which is not what we want.

To address this issue, you can limit the scope of comments with (?#...)
constructs that don't require (?x). For example:

pattern=test:\
(?# this is a comment )\
(\S+)$

During configuration file parsing this expression is converted into
"test:(?# this is a comment )(\S+)$", and after dropping the comment it
becomes "test:(\S+)$" as we expect.
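The conversion steps above can be reproduced with a short simulation
(Python used for illustration; join_continuations is a hypothetical helper
mimicking SEC's backslash splicing, not SEC code; Python's re also
understands the (?#...) comment construct):

```python
import re

def join_continuations(rule_text):
    # SEC appends the next line and drops the backslash during parsing
    return rule_text.replace("\\\n", "")

multi_line = "test:\\\n(?# this is a comment )\\\n(\\S+)$"
pattern = join_continuations(multi_line)

# the spliced pattern matches as expected, with the (?#...) comment ignored
assert pattern == "test:(?# this is a comment )(\\S+)$"
assert re.match(pattern, "test:foo")
```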

Hope this helps,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] generate SEC configurations automatically

2019-12-02 Thread Risto Vaarandi
>
>
> This SEC admin, as I see it, is still also from user perspective, tightly
> bound with SEC technology and user needs to know, how SEC rules and their
> compositions work internally. I am imagining something, that is
> technology-agnostic, just describing logic of correlations, and from that
> "formal description" generating SEC (or something other,
> technology-agnostic means, that it may not to be dependent on SEC or other
> specific technology, user needs just "what", but not "how"). Therefore
> configuration DB should not operate with SEC-specific entities, but rather
> something more general. Maybe something, which could be described with
> something like EMML (Event Management Markup Language - never used that, I
> don't know it, I just found that it exists).
>
> Levels of formal description of correlation types:
>
> - *user (admin) level* (higher-level, less detailed) - containing enough
> information to configure and use described correlation type
>
> - *developer level* (lower-level, more detailed) - containing enough
> information, to generate lower-level code (SEC configurations) from formal
> description (from configuration DB, maybe with support of something like
> EMML)
>
> Hope this helps to undertand what I wanted to say :).
>

Developing another event correlation language and the tools to support it
sounds like a separate major software development project to me.
Before embarking on that, have you considered using configuration
management frameworks (e.g., Salt or Ansible)? Among other things, they
allow you to create configuration files in an automated fashion after the
user has set some generic parameters. It might not be exactly what you
have envisioned, but it is definitely a cheaper workaround than rolling
out an entirely new solution.

kind regards,
risto


> Richard
>
>>
>> hope this helps,
>> risto
>>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] generate SEC configurations automatically

2019-12-01 Thread Risto Vaarandi
hi Richard and John,

Hi Richard:
>
> In message
> ,
> Richard_Ostrochovsk writes:
> >this post loosely follows this one:
> >https://sourceforge.net/p/simple-evcorr/mailman/message/36867007/.
> >
> >Being monitoring consultant and developer, I have
> >an idea to hide complexity of SEC configurations,
> >and still to allow to configure it also for
> >"regular" administrators without any developer or
> >SEC background.
>
> Being a "regular" administrator, I claim admins
> that can't program/use programming methods won't
> be admins much longer. If the companies you work
> for are depending on (rapidly turning over) junior
> admins to administer something as important as
> monitoring and correlation you have a difficult
> job ahead of you.
>
> Knowing regular expressions at the very least is
> required to use SEC.


I agree with John's opinion. Also, regular expressions are not something
specific to SEC; they are used by many other event log processing and
network monitoring tools such as syslog-ng, rsyslog, suricata, etc. While
learning regular expressions requires some time from a newcomer, it is
difficult to see how a monitoring engineer/admin would manage without this
knowledge. I would also argue that creating more advanced event
correlation schemes (not only for SEC, but for any other event correlation
tool) definitely requires at least some development skills, since one
essentially has to write down event processing algorithms.

Having said that, there are certainly ways to improve future versions of
SEC -- for example, re-evaluating input file patterns after short time
intervals (a suggestion from one of the previous posts) is worth
considering, and I've written it down as an idea for the next version.

Getting performance info out
> of SEC is better, but still
> difficult. E.G. finding and fixing expensive/poor
> regular expressions can result in a significant
> improvement of performance/throughput along with
> hierarchical structuring of the rulesets.
>
> >Imagined concept illustration:
> >
> >[configuration DB] -> [generator(s)] -> [SEC configurations]
> >
> >The punch line is, that user won't need to know
> >anything about SEC, but will need to understand
> >logic of correlations employed, and their
> >parameters
>
> I assume you mean regular expressions, threholds,
> actions and commands etc.
>
> >(configuration DB may have some kind of GUI). In
> >the background, higher-level correlations will be
> >translated to respective SEC rules.
>

It is certainly possible to create such a configuration DB for a number of
well known scenarios. For example, one could set up a database for common
Cisco fault messages (maybe using examples from here:
https://github.com/simple-evcorr/rulesets/blob/master/cisco/cisco-syslog.sec),
and use it for automated creation of some specific rules, where the end
user can change a few simpler parameters (e.g., time windows and thresholds
of event correlation rules). However, it is also important to understand
the limitations of this approach -- whenever new event processing
algorithms need to be written for new scenarios and event log types,
someone with developer skills needs to step in. If you want to move the
entire development work into a GUI, maybe the secadmin package that John
has discussed below will provide some ideas?
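
To make the generator idea more concrete, here is a minimal sketch of such
a translation step (hypothetical code, not part of any existing package):
it fills a SingleWithThreshold rule template from a few user-supplied
parameters, and the pattern and parameter values below are invented purely
for illustration.

```perl
#!/usr/bin/perl -w
# Hypothetical sketch of a rule generator: fills a SingleWithThreshold
# rule template from user-supplied parameters. The pattern and the
# parameter values below are invented for illustration only.
use strict;

sub generate_rule {
    my ($pattern, $window, $thresh) = @_;
    # each SEC rule field goes on its own line in the output
    return "type=SingleWithThreshold\n"
         . "ptype=RegExp\n"
         . "pattern=$pattern\n"
         . "desc=threshold rule for $pattern\n"
         . "action=logonly\n"
         . "window=$window\n"
         . "thresh=$thresh\n";
}

# a configuration DB front end would supply these values
print generate_rule('sshd\[\d+\]: Failed password for (\S+)', 60, 5);
```

A real generator would read the parameters from the configuration DB and
write one such block per scenario into a .sec file.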


> There was a web interface referenced back in 2007
> at:
>
>
> https://sourceforge.net/p/simple-evcorr/mailman/simple-evcorr-users/thread/op.tnxvfselqmj61d%40genius/#msg6200380
>
> The URLs are dead, but I have a copy of secadmin
> from 2009. I have put it up at:
>
>   https://www.cs.umb.edu/~rouilj/sec/secadmin.tar.gz
>
> It doesn't use a back end db IIRC but it was
> supposed to provide some guidance on creating the
> correlation rules. Note that it would have to be
> updated as new rules have been added since it was
> developed. Also I think it only supported the basic
> sec correlations.
>
> Risto do you remember this?
>

After checking the link, I do recollect this post, but I didn't have time
to closely study the package at the time. However, I downloaded it today
and looked into it on a test virtual machine. The package is essentially a
CGI script written for the Apache web server, and despite being rolled out
in 2006, I found it working on the CentOS 7 platform after a few minor
tweaks. The package offers a web based GUI for editing SEC rules, where
most rule fields can be defined in text boxes, while values for some fields
(e.g., rule type) can be selected from pull-down menus. I think it's a nice
package for editing rules via a web based interface, but one definitely has
to update it for newer SEC versions (since the CGI script was created in
November 2006, it works with 2.4.X versions, while the most recent
sec-2.8.X contains many improvements over 2.4.X).

hope this helps,
risto

Re: [Simple-evcorr-users] "multi-line" and multi-file logs - out of box

2019-11-28 Thread Risto Vaarandi
hi Richard,

just one followup thought -- have you considered sec native multi-line
patterns such as RegexpN for handling multi-line logs? Of course, there are
scenarios where the value of N (max number of lines in a multi-line event)
can be very large and is difficult to predict, and for such cases an event
converter (like the one from the previous post) is probably the best
approach. However, if you know the value of N and events can be matched
with regular expressions, RegexpN pattern type can be used for this task.
For example, if events have the brace separated format described in the
previous post, and events can contain up to 20 lines, one could utilize the
following Regexp20 pattern for matching:

type=Single
ptype=RegExp20
pattern=(?s)^(?:.+\n)?\{\n(.+\n)?\}$
desc=match a multiline event between braces
action=write - $1

Also, if you want to convert such multi-line events into a single-line
format with builtin features, sec 'rewrite' action allows for that. In the
following example, the first rule takes the multi-line data between braces
and replaces each newline with a space character, and resulting single-line
string (with a prefix "Converted event: ") is used for overwriting sec
event buffer. The second rule is now able to match such converted events:

type=Single
ptype=RegExp20
pattern=(?s)^(?:.+\n)?\{\n(?:(.+)\n)?\}$
continue=TakeNext
desc=convert a multiline event between braces to single-line format
action=lcall %ret $1 -> ( sub { my($t) = $_[0]; $t =~ s/\n/ /g; return $t;
} ); \
   rewrite 20 Converted event: %ret

type=Single
ptype=RegExp
pattern=Converted event: (.*)
desc=match any event
action=write - $1

Maybe the above examples are helpful for getting additional insight into
different ways of processing multi-line events.

kind regards,
risto


hi Richard,
>>>
>> ...
>>
>>> In the current code base, identifying the end of each line is done with
>>> a simple search for newline character. The newline is searched not with a
>>> regular expression, but rather with index() function which is much faster.
>>> It is of course possible to change the code, so that a regular expression
>>> pattern is utilized instead, but that would introduce a noticeable
>>> performance penalty. For example, I made a couple of quick tests with
>>> replacing the index() function with a regular expression that identifies
>>> the newline separator, and when testing modified sec code against log files
>>> of 4-5 million events, cpu time consumption increased by 25%.
>>>
>>
>> Hmm, this is interesting. A philosophical question came to my mind:
>> could this penalty be decreased (optimized) by doing the replacement of
>> these regular newline characters ("\n") and the matching of "line"
>> endings with a regexp through rules (or by some other, more external
>> way) before further processing by subsequent rules, instead of a
>> potential built-in feature (used optionally on particular logfiles)?
>>
>>
> Perhaps I can add a few thoughts here. Since the number of multi-line
> formats is essentially infinite, converting a multi-line format into a
> single-line representation externally (i.e., outside sec) offers the most
> flexibility. For instance, in many cases there is no delimiter as such
> between messages, but beginning and end of the message contain different
> character sequences that are part of the message. In addition, any lines
> that are not between valid beginning and end should be discarded. It is
> clear that using one regular expression for matching delimiters is not
> addressing this scenario properly. Also, one can imagine many other
> multi-line formats, and coming up with a single builtin approach for all of
> them is not possible. On the other hand, a custom external converter allows
> for addressing a given event format exactly as we like. For example,
> suppose we are dealing with the following format, where multi-line event
> starts with a lone opening brace on a separate line, and ends with a lone
> closing brace:
>
> {
>   line1
>   line2
>   ...
> }
>
> For converting such events into a single line format, the following simple
> wrapper could be utilized (written in 10 minutes):
>
> #!/usr/bin/perl -w
> # the name of this wrapper is test.pl
>
> if (scalar(@ARGV) != 1) { die "Usage: $0 <file>\n"; }
> $file = $ARGV[0];
> if (!open(FILE, "tail -F $file |")) { die "Can't start tail for $file\n"; }
> $| = 1;
>
> while (<FILE>) {
>   chomp;
>   if (/^{$/) { $message = $_; }
>   elsif (/^}$/ && defined($message)) {
> $message .= $_;
> print $message, "\n";
> $message = undef;
>   }
>   elsif (defined($message)) {
> $message .= $_;
>   }
> }
>
> If this wrapper is then started from sec with 'spawn' or 'cspawn' action,
> multi-line events from monitored file will appear as single-line synthetic
> events for sec. For example:
>
> type=Single
> ptype=RegExp
> pattern=^(?:SEC_STARTUP|SEC_RESTART)$
> context=SEC_INTERNAL_EVENT
> desc=fork the converter when sec is started or restarted
> action=spawn ./test.pl my.log

Re: [Simple-evcorr-users] "multi-line" and multi-file logs - out of box

2019-11-27 Thread Risto Vaarandi
hi Richard,

Risto, thank you for your pre-analysis about multi-lines with regexp, and
> also for suggestions about multi-files yet more sophisticated solution.
>
> My comments are also inline:
>
> On Wed, 27 Nov 2019 at 15:07, Risto Vaarandi
> wrote:
>
>> hi Richard,
>>
> ...
>
>> In the current code base, identifying the end of each line is done with a
>> simple search for newline character. The newline is searched not with a
>> regular expression, but rather with index() function which is much faster.
>> It is of course possible to change the code, so that a regular expression
>> pattern is utilized instead, but that would introduce a noticeable
>> performance penalty. For example, I made a couple of quick tests with
>> replacing the index() function with a regular expression that identifies
>> the newline separator, and when testing modified sec code against log files
>> of 4-5 million events, cpu time consumption increased by 25%.
>>
>
> Hmm, this is interesting. A philosophical question came to my mind:
> could this penalty be decreased (optimized) by doing the replacement of
> these regular newline characters ("\n") and the matching of "line"
> endings with a regexp through rules (or by some other, more external
> way) before further processing by subsequent rules, instead of a
> potential built-in feature (used optionally on particular logfiles)?
>
>
Perhaps I can add a few thoughts here. Since the number of multi-line
formats is essentially infinite, converting a multi-line format into a
single-line representation externally (i.e., outside sec) offers the most
flexibility. For instance, in many cases there is no delimiter as such
between messages, but
beginning and end of the message contain different character sequences that
are part of the message. In addition, any lines that are not between valid
beginning and end should be discarded. It is clear that using one regular
expression for matching delimiters is not addressing this scenario
properly. Also, one can imagine many other multi-line formats, and coming
up with a single builtin approach for all of them is not possible. On the
other hand, a custom external converter allows for addressing a given event
format exactly as we like. For example, suppose we are dealing with the
following format, where multi-line event starts with a lone opening brace
on a separate line, and ends with a lone closing brace:

{
  line1
  line2
  ...
}

For converting such events into a single line format, the following simple
wrapper could be utilized (written in 10 minutes):

#!/usr/bin/perl -w
# the name of this wrapper is test.pl

if (scalar(@ARGV) != 1) { die "Usage: $0 <file>\n"; }
$file = $ARGV[0];
if (!open(FILE, "tail -F $file |")) { die "Can't start tail for $file\n"; }
$| = 1;

while (<FILE>) {
  chomp;
  if (/^{$/) { $message = $_; }
  elsif (/^}$/ && defined($message)) {
$message .= $_;
print $message, "\n";
$message = undef;
  }
  elsif (defined($message)) {
$message .= $_;
  }
}

If this wrapper is then started from sec with 'spawn' or 'cspawn' action,
multi-line events from monitored file will appear as single-line synthetic
events for sec. For example:

type=Single
ptype=RegExp
pattern=^(?:SEC_STARTUP|SEC_RESTART)$
context=SEC_INTERNAL_EVENT
desc=fork the converter when sec is started or restarted
action=spawn ./test.pl my.log

type=Single
ptype=RegExp
pattern=\{importantmessage\}
desc=test
action=write - important message was received

The second rule fires if the following 4-line event is written into my.log:

{
important
message
}

My apologies if the above example is a bit laconic, but hopefully it
conveys the overall idea of how to set up an event converter. Writing a
suitable converter often does not take that much time, plus you get
something which is tailored exactly to your needs :)

kind regards,
risto


Re: [Simple-evcorr-users] "multi-line" and multi-file logs - out of box

2019-11-27 Thread Risto Vaarandi
hi Richard,

these are interesting questions and you can find my comments inline:

Hello guys,
>
> ...

>
> My question is, if you see, how some of this things could be accomplished
> in more generic way, without special configurations of correlation rules.
> It would be great having SEC supporting such use cases "out of box", e.g.
> by:
>
> - having configurable line delimiter pattern (regular expression)
>

In the current code base, identifying the end of each line is done with a
simple search for the newline character. The newline is searched for not
with a regular expression, but rather with the index() function, which is
much faster. It is of course possible to change the code so that a regular
expression pattern is utilized instead, but that would introduce a
noticeable performance penalty. For example, I made a couple of quick
tests, replacing the index() function with a regular expression that
identifies the newline separator, and when testing the modified sec code
against log files of 4-5 million events, cpu time consumption increased by
25%.

Introducing a custom delimiter also raises another important question --
should it also be used for actions that utilize newline as a natural
separator? For example, 'add' and 'event' actions split data into events by
newline (if present in the data), 'spawn' action assumes that events from a
child process are separated by newlines, etc. Should all these actions be
changed to use the new separator?

Given the performance penalty and other delimiter related questions, this
idea needs careful thinking before implementing it in the code. (Before
moving forward, it would be also interesting to know how many users would
see this idea worth implementing.)


> - accepting wildcard pattern as specification of input log file, to
> "monitor them all" (also dynamically adding newly created files matching
> wildcard and removing disappeared)
>

It would be easier to implement that functionality, since input file
patterns are re-evaluated on some signals, and in principle it is possible
to invoke similar code after short time periods (e.g., once in 5 seconds).
However, sec-2.8.X has 'addinput' and 'dropinput' actions which offer more
general interface for dynamically adding and dropping inputs. For example,
it is possible to start an external script with 'spawn' action which can
detect input files not just with wildcard match but also more advanced
criteria, and generate synthetic events like NEWFILE_<filepath> for input
files that need opening. These synthetic events can be easily captured by a
which invokes 'addinput' action for relevant files. I acknowledge that this
functionality is somewhat different from providing wildcards in command
line and requires writing your own script, but you can actually do more
advanced things here.
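
As an illustration of the approach described above, the following rule
sketch (with an invented event name and file path) captures a synthetic
NEWFILE_<filepath> event emitted by an external monitoring script and opens
the reported file:

```
# Hypothetical sketch: an external script started with 'spawn' emits
# events like NEWFILE_/var/log/app1.log, and this rule opens the file
type=Single
ptype=RegExp
pattern=^NEWFILE_(\S+)$
desc=open dynamically detected input file $1
action=addinput $1
```

A matching counterpart rule with 'dropinput' could handle events for files
that have disappeared.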

kind regards,
risto


> I don't have clue, how hard would be implementation of such things
> directly in SEC (maybe question to Risto?), or if do you see also other,
> more straightforward, solutions, without bringing more complexity to SEC
> rules, I would be grateful for your know-how sharing.
>
> Have a nice days.
>
> Richard
>
>


Re: [Simple-evcorr-users] Pass variable to second rule - json pattern

2019-11-26 Thread Risto Vaarandi
hi Andres,

the %user action list variable gets indeed overwritten if multiple
deployments for different services are ongoing simultaneously. However, you
can utilize the DEPLOY_STARTED_<service> context for storing the user name
for the given service (provided that you are not using this context already
for storing other event data). Saving the user name into the event store of
the context can be accomplished with the 'add' action, for example:

add DEPLOY_STARTED_$+{service} $+{user}

If you augment your first rule with this action, the rule might look as
follows:

type=Single
ptype=PerlFunc
pattern=sub { if ($_[0] =~ /({.+\"status\":\"started\".+})/) { \
  return SecJson::json2matchvar($1); } return 0; }
context=!DEPLOY_STARTED_$+{service}
continue=TakeNext
desc=Started deploy of service $+{service} by user $+{user}
action=create DEPLOY_STARTED_$+{service} 1800; add
DEPLOY_STARTED_$+{service} $+{user}

For retrieving the user name from the event store of the service context,
the 'copy' action can be harnessed in the second rule. For example, the
following modification to the rule will get the user name from context and
assign it to the %user action list variable, and %user variable is then
employed by the 'write' action:

type=Single
ptype=PerlFunc
pattern=sub { if ($_[0] =~ /({ .+\"message":\".+\",
\"service_name\":\".+\", \"region\":\".+\" })/) { \
  return SecJson::json2matchvar($1); } return 0; }
context=DEPLOY_STARTED_$+{service}
continue=TakeNext
desc=Container was killed by service
action=copy DEPLOY_STARTED_$+{service} %user; \
   write $+{service_name}_$+{region}_%user

A small side note -- if the DEPLOY_STARTED_<service> context is not used
anywhere else in your rule base and its only purpose is to connect the
above two rules together, you could also consider deleting the context with
the 'delete' action in the second rule (but that will entirely depend on
the nature of your rule base).
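
For instance, the second rule's action could be extended as follows to
drop the context once the user name has been retrieved (a sketch, under
the assumption that no other rule needs the context):

```
action=copy DEPLOY_STARTED_$+{service} %user; \
       delete DEPLOY_STARTED_$+{service}; \
       write $+{service_name}_$+{region}_%user
```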

Hope this helps,
risto

Andres Pihlak () wrote on Tue, 26 November 2019 at 15:05:

> Hello,
>
> I have rule that creates context if software deploy is started. The json
> message also consists user variable which I like to pass on to second rule
> that do not have this user in the pattern. Tried multiple solutions but
> unfortunately I was unable to do that.
>
> * First rule creates context DEPLOY_STARTED_myservice. And I want to pass
> $+{user} variable to other rule. I tried to pass this argument with "assign
> %user $+{user}" but this raises problem that if the same time other deploy
> is started then this user variable is overwritten and second rule gets
> wrong username.
> type=Single
> ptype=PerlFunc
> pattern=sub { if ($_[0] =~ /({.+\"status\":\"started\".+})/) { \
>   return SecJson::json2matchvar($1); } return 0; }
> context=!DEPLOY_STARTED_$+{service}
> continue=TakeNext
> desc=Started deploy of service $+{service} by user $+{user}
> action=create DEPLOY_STARTED_$+{service} 1800
>
> * Second rule waits that context DEPLOY_STARTED_myservice is created and I
> need to pass user parameter to this rule. User do not exists on the pattern.
> type=Single
> ptype=PerlFunc
> pattern=sub { if ($_[0] =~ /({ .+\"message":\".+\",
> \"service_name\":\".+\", \"region\":\".+\" })/) { \
>   return SecJson::json2matchvar($1); } return 0; }
> context=DEPLOY_STARTED_$+{service}
> continue=TakeNext
> desc=Container was killed by service
> action=write $+{service_name}_$+{region}_$+{user};
>
> Regards,
> Andres
>


Re: [Simple-evcorr-users] Close input file if there are now events after specified duration

2019-10-04 Thread Risto Vaarandi
hi Alberto,

if the input file has been provided to SEC with the --input command line
option, there is currently no way to close it. (It is only possible to
increase the status polling time frame for input files with --check-timeout
option, in order to reduce the system load if there is a large number of
infrequently modified inputs.)

However, if the input file has been set up with 'addinput' action, it is
possible to execute 'dropinput' action for this file which closes it and
removes from the list of inputs. There are two ways how to potentially
trigger the execution of 'dropinput' action. Firstly, one could use a
context for closing the file, where context lifetime is extended on each
event that appears in the input file, and 'dropinput' gets executed when
the context expires. A slightly different context-based example has been
provided in SEC FAQ, where a file is kept open for a specific date and
closed when the day ends:
http://simple-evcorr.github.io/FAQ.html#24
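
A rough sketch of the context-based variant might look as follows (file
name, lifetime and context name are invented for illustration; the 'set'
action resets the context lifetime on each event from the file):

```
# Hypothetical sketch: keep /var/log/app.log open only while events arrive.
# The rule that opens the file also creates an expiration context which
# executes 'dropinput' when it expires.
type=Single
ptype=RegExp
pattern=^OPENFILE /var/log/app\.log$
desc=open file and arm the expiration timer
action=addinput /var/log/app.log; \
       create APP_LOG_ACTIVE 300 (dropinput /var/log/app.log)

# every matching event extends the context lifetime by another 300 seconds
type=Single
ptype=RegExp
pattern=.
context=APP_LOG_ACTIVE
continue=TakeNext
desc=extend the lifetime of APP_LOG_ACTIVE
action=set APP_LOG_ACTIVE 300
```

In a real rule base the second rule would be narrowed so that only events
from this particular input file extend the context.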

There is also another approach -- with --input-timeout=N and
--timeout-script=S options, one can execute an external script S if no
events have been observed in the input file during the last N seconds. From
that script, it is possible to write a message to control file or FIFO
which is monitored by SEC, and have a rule which matches that message and
executes 'dropinput' action. (The same control file could be utilized for
other purposes, such as reloading configuration and log rotation:
http://simple-evcorr.github.io/FAQ.html#26.)

kind regards,
risto



Alberto Corton Padilla () wrote on Fri, 4 October 2019 at 16:58:

> Hi,
>
> Is there a way to close an input file if no new lines have been written
> for a specified duration?
>
> Regards,
>
>
>


Re: [Simple-evcorr-users] Accessing A Perl Hash From Pattern1 In Pattern 2

2019-10-04 Thread Risto Vaarandi
hi David,

I would second Rock's recommendation to use regular expressions in the
Pair rule. Firstly, the two PerlFunc patterns implement only regular
expression matching and there isn't anything additional (such as arithmetic
operations) which would require the use of Perl. Therefore, it is easier to
express these patterns as RegExp patterns.

There is another major benefit -- if you use RegExp patterns in Pair or
PairWithWindow rules, you can include match variables (e.g., $1, $2) in
'pattern2' field which are substituted with values when 'pattern' field
matches. On the other hand, you can *not* use match variables inside Perl
code given with PerlFunc patterns, since this code is compiled only once
when SEC starts. During event processing, the same already compiled code is
executed for pattern matching purposes, and no substitution of match
variables can take place (substitution would change the code and would thus
require recompiling).
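
The difference can be seen in a small Pair rule sketch (with invented
events), where the $1 captured by 'pattern' is substituted into 'pattern2'
when the operation starts -- something that would not work if both
patterns were PerlFunc code:

```
type=Pair
ptype=RegExp
pattern=user (\S+) logged in
desc=session of user $1 started
action=logonly
ptype2=RegExp
pattern2=user $1 logged out
desc2=session of user $1 ended
action2=logonly
```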

One small sidenote -- if you would like to use named match variables in
RegExp patterns, regular expressions support them natively with the
(?<name>...) construct in the beginning of each capture group. For example,
the following expression will set the $+{user}, $+{global_address} and
$+{local_address} variables:

User <(?<user>[^\s]+)>.+IP
<(?<global_address>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})>.+IPv4
Address <(?<local_address>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})>

kind regards,
risto

David Thomas () wrote on Thu, 3 October 2019 at 22:37:

> I'm running into an issue with a correlation I'm trying to implement and
> I'm hoping you can help.
>
> Event 1 happens when a user logs into a vpn.  It has the user's name the
> global address and the local address assigned by the vpn.
> Event 2 happens when the user logs off the vpn.  It has the user's name,
> the global address, the duration and amount of traffic.
>
> My objective is to get the local address from event 1 and combine it with
> the information from event 2.
>
> I'm using a hash to get the name and both addresses from event 1.  Then in
> pattern 2 I reference that to see if the user name and global address match
> and add the local address from the hash.  What I'm trying now is below.
>
> I'm getting messages from action2 tcp sock so it seems like I'm matching
> the pattern but the values of the hash keys that come from pattern 1 are
> empty.
>
> Here is an example of what I'm getting:
> VPN Disconnect - User="" Global Address="" Local Address=""
> Duration="0h:03m:07s" Xmit Bytes="1689622 Rcv Bytes="34370"
>
> Here is the .sec file I'm currently using.  I'm hoping someone can point
> out what I'm doing wrong.  Thanks!
>
> type=pair
> ptype=PerlFunc
> pattern=sub { my(%var); \
> if ($_[0] !~ /User <([^\s]+)>.+IP
> <([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>.+IPv4 Address
> <([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})>/) { return 0; } \
> $var{"user"} = $1; \
> $var{"global_address"} = $2; \
> $var{"local_address"} = $3; \
> return \%var; }
> desc=Get Name - Global Address - Local Address
> action=tcpsock 10.3.0.85:514 LzEC VPN Address Mapping - User="$+{user}" -
> Global Address ="$+{global_address}" - Local Address =
> "$+{local_address}"%{.nl};
> ptype2=PerlFunc
> pattern2=sub { my(%var); \
> if ($_[0] !~ /Username = $+{user}.+IP =
> $+{global_address}.+Duration: ([0-9]{1,2}h:[0-9]{1,2}m:[0-9]{1,2}s).+xmt:
> ([0-9]+).+rcv: ([0-9]+)/) { return 0; } \
> $var{"duration"} = $1; \
> $var{"xmit_bytes"} = $2; \
> $var{"rcv_bytes"} = $3; \
> return \%var; }
> desc2=Add Local Address To Disconnect Message
> action2=tcpsock 10.3.0.85:514 LzEC VPN Disconnect - User="$+{user}"
> Global Address="$+{global_address}" Local Address="$+{local_address}"
> Duration="$+{duration}" Xmit Bytes="$+{xmit_bytes} Rcv
> Bytes="$+{rcv_bytes}"%{.nl};
>
>
>


Re: [Simple-evcorr-users] Maintaining events while modifying rules

2019-09-09 Thread Risto Vaarandi
hi David,

restarting SEC means that all event correlation operations executed by the
previous instance will be lost. Although it is possible to write custom
rules for storing SEC contexts to disk at shutdown and load them at restart
(e.g., see http://simple-evcorr.github.io/FAQ.html#15), there is no easy
way for doing that with event correlation operations. For example, if the
user deletes some rules before restart, SEC might load operations which no
longer have a parent rule.

However, if you don't restart SEC but rather send it the ABRT signal, it
will check the modification times of all rule files, and reload rules only
from files which have been modified (note that all previous event
correlation operations associated with modified files are terminated). This
means that if you keep critical rules in separate rule files which you
don't modify, ABRT signal does not destroy their event correlation state
after you have modified other files in your rule base.

For example, suppose you have two rule files test.sec and test2.sec with
the following content, where test.sec contains a critical rule:

# test.sec -- critical rules that don't change

type=Pair
ptype=RegExp
pattern=test (\d+)
desc=test no $1 started
action=logonly
ptype2=SubStr
pattern2=end $1
desc2=test no $1 ended
action2=logonly

# test2.sec -- rules under development

type=SingleWithThreshold
ptype=regexp
pattern=myevent (\d+)
desc=count instances of myevent $1
action=logonly
window=3600
thresh=3

Suppose SEC receives the following four input events:

test 1
test 2
myevent 1
myevent 2

The first two events match the Pair rule from test.sec which creates two
event correlation operations, with one of them waiting for event 'end 1'
and the other one for event 'end 2'. On the other hand, the last two events
match the SingleWithThreshold rule from test2.sec which creates two
counting operations. If you change test2.sec now and send ABRT signal to
SEC process, you should see the following debug messages from SEC which
indicate that two SingleWithThreshold operations were terminated:

SIGABRT received: soft restart of SEC
Reading configuration from test2.sec
...
Terminating all event correlation operations started from test2.sec, number
of operations: 2

However, since test.sec was not modified, its content is not reloaded and
Pair event correlation operations will continue to run. For example, if
event 'end 1' would arrive now, one of the operations would match it and
execute the 'logonly' action, producing the following message: "test no 1
ended".

Hope this example was helpful,
risto

David Thomas () wrote on Mon, 9 September 2019 at 18:13:

> I'm about to implement an SEC rule that will be fairly critical to our
> business.  It is a 'Pair' rule and at any time I may have multiple events
> that have matched pattern 1 and are waiting for pattern 2.
>
>  But I have a number of other use cases for SEC that I'm eager to
> implement.  If at all possible I'd like to do this in a way that maintains
> the events waiting for pattern 2.
>
> Currently I have SEC installed in a docker container and when I modify or
> add a rule (which I keep in separate .sec files) I restart the docker.
> This runs a command:
> /usr/bin/perl -w /usr/local/bin/sec --log=/dev/stdout --debug=5
> --input=/var/log/logzilla/sec.log --input=/var/log/logzilla/sec/*.log
> --conf=/etc/logzilla/sec/*.sec
>
> Will this lose the events that are waiting for pattern 2?
>
> If so is there an alternative way to add additional rules (or modify
> existing rules) that will keep those events?
>
> Thanks.
>
>


Re: [Simple-evcorr-users] (no subject)

2019-09-04 Thread Risto Vaarandi
hi Santhosh,

is my understanding correct that you would like to match an IP address with
a regular expression, and perform a lookup into %ioc hash table within the
same regular expression? I need to study the documentation before
suggesting how this could be done, but I would advise against this
approach. Firstly, if you embed code in the regular expression, it would
become a lot more complex and less readable. And secondly, matching such an
expression against input events might become more expensive. Therefore, it
is better to perform hash table lookups in a separate code block after a
successful regular expression match. Since IP addresses can only appear in
two locations in your input events, you could rewrite the PerlFunc pattern
to extract both IP addresses and have two statements for checking their
presence in %ioc hash table. For example:

type=Single
ptype=PerlFunc
pattern=sub { if ($_[0] !~ /ASA-\S+: Teardown \S+ connection \d+ for
outside: ([\d.]+).* identity: ([\d.]+)/) { return 0; } \
if (exists($ioc{$1})) { return ($1, $ioc{$1}); } \
if (exists($ioc{$2})) { return ($2, $ioc{$2}); } \
return 0; }
desc=Connection to IP address $1 with IoC information $2
action=pipe 'Log matched IoC:%{.nl}IoC: $2%{.nl}Log: $0' /bin/mail
mail...@example.com

If both IP addresses are present in the %ioc hash table, the above rule
will return the IP address after the "outside:" field along with its IoC
information.
Also, if you have other input events with different formats which might
contain an arbitrary number of IP addresses, you can include a loop in the
PerlFunc pattern for extracting IP addresses iteratively with a regular
expression. If you are interested in a relevant example, please let me know
and I will post it to the mailing list.
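
For reference, such an iterative extraction could look roughly like the
following standalone Perl sketch (outside SEC, with an invented %ioc
entry): a global //g match in a while loop visits every IP-like token on
the line in turn.

```perl
#!/usr/bin/perl -w
# Standalone sketch: extract every IPv4-like token from an event line
# with an iterative global regexp match, and look each one up in the
# %ioc hash table (the IoC entry below is invented for illustration).
use strict;

my %ioc = ('187.163.222.244' => '187.163.222.244:465 - emotet');

sub find_iocs {
    my ($line) = @_;
    my @hits;
    # the /g modifier advances through the string on each iteration,
    # so the loop body runs once per IP-like token found
    while ($line =~ /(\d{1,3}(?:\.\d{1,3}){3})/g) {
        push @hits, $ioc{$1} if exists $ioc{$1};
    }
    return @hits;
}

my @res = find_iocs('Teardown UDP connection for outside: ' .
                    '187.163.222.244/465 to identity: 172.18.124.136/161');
print map { "$_\n" } @res;
```

Running the sketch prints the single matching IoC entry for the first
address, while the second address produces no hit.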

kind regards,
risto

Santhosh Kumar () wrote on Wed, 4 September 2019 at 05:50:

> Hello Risto
>
> Though this query is not related, I just got curious about using the
> variable in a pattern directly instead of matching later.
>
> Log: "ASA-6-302016: Teardown UDP connection 806353 for outside:
> 187.189.195.208/24057 to
>
> identity: 172.18.124.136/161 duration 0:02:01 bytes 313"
>
> As per the rule, the IPv4 address is extracted to variable $1 and matched
> against the %ioc hash table. Instead, is it possible to match the IoC IPs
> with any part of the log (either in outside:([\d.]+) or in identity:([\d.]+))?
>
> Regards,
>
> san
>
>
>
> On Tue, Sep 3, 2019 at 12:47 PM Santhosh Kumar 
> wrote:
>
>> Hello risto
>>
>> I ran the tests with real logs. Suggested method works exactly as
>> expected.
>>
>> This resolves many of my other queries. Thank you for prompt response.
>>
>> Regards,
>> Santhosh
>>
>>
>> On Fri, Aug 30, 2019, 20:59 Risto Vaarandi 
>> wrote:
>>
>>> hi Santhosh,
>>>
>>> since your task involves not only matching IP addresses against a
>>> blacklist but also includes reporting IoC information for a bad IP address,
>>> I would recommend loading IoC data from file into a Perl hash which allows
>>> for quick lookups. The example ruleset below uses a global hash %ioc which
>>> is a global variable and can thus be accessed from all rules:
>>>
>>> type=Single
>>> ptype=RegExp
>>> pattern=^(SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)$
>>> context=SEC_INTERNAL_EVENT
>>> desc=load IoC information on SEC startup and HUP or ABRT signals
>>> action=lcall %count -> ( sub { %ioc = (); \
>>>    if (!open(FILE, "/home/risto/IOC_data_proposal.txt")) { return
>>> -1; } \
>>>    while (<FILE>) { if (/^\s*(([\d.]+):\d+\s*-.*)/) { \
>>>  $ioc{$2} = $1; } \
>>>    } close(FILE); return scalar keys %ioc; } ); \
>>>logonly Loaded %count entries from IoC data file
>>>
>>> type=Single
>>> ptype=PerlFunc
>>> pattern=sub { if ($_[0] !~ /ASA-\S+: Teardown \S+ connection \d+ for
>>> outside: ([\d.]+)/) { return 0; } \
>>> if (!exists($ioc{$1})) { return 0; } return ($1, $ioc{$1}); }
>>> desc=Connection to IP address $1 with IoC information $2
>>> action=pipe 'Log matched IoC:%{.nl}IoC: $2%{.nl}Log: $0' /bin/mail
>>> some...@example.com
>>>
>>> The first rule loads IoC information from file into %ioc hash table
>>> whenever SEC is started or HUP or ABRT signal is received by SEC. IP
>>> addresses serve as keys of the hash table, while each value is an entire
>>> line from the IoC file. For example, if the file contains the following two
>>> lines
>>>
>>> 187.163.222.244:465 - emotet
>>> 187.1

Re: [Simple-evcorr-users] spawn timeout and exit code

2019-09-03 Thread Risto Vaarandi
hi Pedro,

a followup to your question -- instead of using a Perl based wrapper that
was described in the previous post, there is a simpler solution. If you are
running SEC on a recent Linux distribution, you can use the 'timeout'
command line tool which allows for limiting the run time of a command line
and ending it with a user defined signal. For example, consider the
following 'spawn' action that uses 'timeout' tool for executing /bin/myprog
and terminating it with signal 15 (TERM) if /bin/myprog is still running
after 10 seconds:

action=spawn ( /usr/bin/timeout -s 15 10 /bin/myprog; /bin/echo -e "\nExit
code: $?" )

If 'spawn' is defined in this way, all lines from standard output of
/bin/myprog will be captured by SEC as synthetic events. Also, after
/bin/myprog exits or is terminated by the 'timeout' tool, a synthetic event
"Exit code: <n>" will be generated for reporting the exit code of /bin/myprog.
Note that if /bin/myprog timed out, the 'timeout' tool returns exit code
124. To better distinguish these synthetic events from other similar
events, you can use the 'cspawn' action with the SEC --intcontexts
command line option for generating events with a custom context. For
example, if you have the following action

action=cspawn TESTPROG (/usr/bin/timeout -s 15 10 /bin/myprog; /bin/echo -e
"\nExit code: $?")

then the following rule will match the synthetic event with the exit code
of /bin/myprog:

type=Single
ptype=RegExp
pattern=^Exit code: (\d+)$
context=TESTPROG
desc=catch the exit code of command
action=logonly Command has terminated with exit code $1

I have also updated the relevant FAQ entry with an example involving the
'timeout' tool: http://simple-evcorr.github.io/FAQ.html#20

kind regards,
risto


Risto Vaarandi () wrote on Tue,
30 July 2019 at 00:50:

> hi Pedro,
>
> these are interesting questions. As for fetching the exit code from spawn,
> SEC does that and produces a warning message if it is non-zero, but there
> is no equivalent to bash $? variable. Firstly, command lines are executed
> asynchronously by spawn, and secondly, many such command lines may be
> running simultaneously. It is therefore difficult to tell which command's
> exit code $? currently holds. For addressing this issue, it is
> probably best to write a simple wrapper script around the program that
> needs execution, and pass the exit code value to SEC as a synthetic event
> (just like program's standard output is passed). For example, the following
> simple script creates a "TEST_EXIT_<exit code>" synthetic event:
>
> #!/bin/bash
>
> /bin/false
> echo TEST_EXIT_$?
>
> However, running a child process from SEC for at most given number of
> seconds is a trickier issue. Although the spawn action does not have
> parameters for setting this timeout, this task can again be handled with a
> wrapper script, and there is a relevant example in the SEC FAQ:
> http://simple-evcorr.github.io/FAQ.html#20
>
 ...

>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] (no subject)

2019-08-30 Thread Risto Vaarandi
hi Santhosh,

since your task involves not only matching IP addresses against a blacklist
but also includes reporting IoC information for a bad IP address, I would
recommend loading IoC data from file into a Perl hash which allows for
quick lookups. The example ruleset below uses a global hash %ioc which is a
global variable and can thus be accessed from all rules:

type=Single
ptype=RegExp
pattern=^(SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)$
context=SEC_INTERNAL_EVENT
desc=load IoC information on SEC startup and HUP or ABRT signals
action=lcall %count -> ( sub { %ioc = (); \
   if (!open(FILE, "/home/risto/IOC_data_proposal.txt")) { return -1; } \
   while (<FILE>) { if (/^\s*(([\d.]+):\d+\s*-.*)/) { \
 $ioc{$2} = $1; } \
   } close(FILE); return scalar keys %ioc; } ); \
   logonly Loaded %count entries from IoC data file

type=Single
ptype=PerlFunc
pattern=sub { if ($_[0] !~ /ASA-\S+: Teardown \S+ connection \d+ for
outside: ([\d.]+)/) { return 0; } \
if (!exists($ioc{$1})) { return 0; } return ($1, $ioc{$1}); }
desc=Connection to IP address $1 with IoC information $2
action=pipe 'Log matched IoC:%{.nl}IoC: $2%{.nl}Log: $0' /bin/mail
some...@example.com

The first rule loads IoC information from file into %ioc hash table
whenever SEC is started or HUP or ABRT signal is received by SEC. IP
addresses serve as keys of the hash table, while each value is an entire
line from the IoC file. For example, if the file contains the following two
lines

187.163.222.244:465 - emotet
187.189.195.208:8443 - emotet

the %ioc hash table will contain the following mappings (keys and values
are separated by ->):

187.163.222.244 -> 187.163.222.244:465 - emotet
187.189.195.208 -> 187.189.195.208:8443 - emotet

Currently, the first rule assumes that IoC file is in the same format as
you described in your e-mail, and the rule uses regular expression
^\s*(([\d.]+):\d+\s*-.*) for parsing the file and extracting relevant
information. Should the format of the file change, this regular expression
needs to be adjusted accordingly. Also, the rule finds the number of
entries loaded from IoC file, stores it in %count action list variable and
logs a debug message with this value into SEC log file. If the rule was
unable to open the file, the value -1 is logged which is useful for
troubleshooting purposes.

The second rule uses PerlFunc pattern for matching incoming ASA firewall
events and first verifies that incoming event matches the regular expression
ASA-\S+: Teardown \S+ connection \d+ for outside: ([\d.]+)
If there is a match, IP address of remote host is extracted and assigned to
$1 variable, and %ioc hash table is looked up for IoC information for that
IP address. If lookup is successful, PerlFunc pattern returns a list with
two elements (IP address, IoC info) which are mapped to match variables $1
and $2 by SEC ($0 variable will hold the entire matching event log line).
The match variables are then used by 'pipe' action for sending an e-mail to
relevant mailbox.

Hope this helps,
risto

Kontakt Santhosh Kumar () kirjutas kuupäeval R,
30. august 2019 kell 06:37:

> Hello
>
> Thanks for the brief..!!
>
> In my case, udgram created an event loop; however, the tcpsock
> functionality achieved the goal.
>
> Regarding second scenario, the goal is to match the IP address and notify.
>
> "ASA-6-302016: Teardown UDP connection 806353 for outside: 187.189.195.208/24057 to
> identity: 172.18.124.136/161 duration 0:02:01 bytes 313"
>
> List of IoCs stored in a file (IOC_data_proposal.txt) as below. The rule is
> supposed to match the IP address from the file (IOC_data_proposal.txt), though
> the file contains other information like port numbers and names of the
> malware, etc. which are not part of the log.
>
> The reason for not removing the extra data (port, IoC name) from the file
> (IOC_data_proposal.txt) is that it is required for the notification.
>
> 
>
> IOC_data_proposal.txt
>
> 187.163.222.244:465 - emotet
>
> 187.189.195.208:8443 - emotet
>
> 188.166.253.46:8080 - emotet
>
> 189.209.217.49:80  - heartbleed
>
> 
>
>
> Expected mail Output:
>
>
> Log Matched IOC,
>
> IOC: 187.189.195.208:8443 - emotet
>
> Log: "ASA-6-302016: Teardown UDP connection 806353 for outside:
> 187.189.195.208/24057 to
>
> identity: 172.18.124.136/161 duration 0:02:01 bytes 313"
>
>
> regards,
> Santhosh
>
> On Wed, Aug 28, 2019 at 4:18 AM Risto Vaarandi 
> wrote:
>
>> hi Santhosh,
>>
>> Santhosh Kumar () wrote on
>> Tue, 27 August 2019 at 04:55:
>>
>>> Hello Risto
>>>
>>>
>>> I’ve been running tests on SE

Re: [Simple-evcorr-users] Help calling perl to get hostname

2019-08-29 Thread Risto Vaarandi
hi Clayton,

Also, for completeness, here’s what worked for our user.
>
> This rule detects a BGP routing tunnel going down. The rule then waits for
> a matching "Up" event from the same host with the same neighbor IP. Once
> detected, it then measures the total time the route was down and generates
> a new event.
>
> In the rule, this user’s company also uses the hostname to extract the
> location ID (they have stores across the US) and the BGP Tunnel ID.
>
> The hostname reverse lookups are something like 1234lo5678.foo.com
>
> These generated events then go into a LogZilla trigger to kick off an
> automation. Cool stuff.
>

>
> Sample events:
>
> %BGP-5-ADJCHANGE: neighbor 10.1.0.1 Down BGP Notification received
>
> %BGP-5-ADJCHANGE: neighbor 10.1.0.1 Up
>
>
>
> type=Single
>
> ptype=SubStr
>
> pattern=SEC_STARTUP
>
> context=SEC_INTERNAL_EVENT
>
> continue=TakeNext
>
> desc=Load the Socket module and terminate if it is not found
>
> action=eval %ret (require Socket); \
>
>if %ret ( logonly Socket module loaded ) else ( eval %o exit(1) )
>
>
>
> type=Pair
>
> ptype=RegExp
>
> continue=dontcont
>
> pattern=neighbor \*?(\d+\.\d+\.\d+\.\d+) Down
>
> varmap= nip=1
>
> desc=BGP Neighbor $1 Down
>
> action=lcall %hostname $1 -> ( sub { my $host = scalar
> gethostbyaddr(Socket::inet_aton($_[0]), Socket::AF_INET); if
> (!defined($host)) { $host = $_[0]; } return $host; } ); \
>
> eval %storenumber ( if ("%hostname" =~ /(\d+)lo/) { "$$1"; } else {
> "NA"; } ); \
>
> eval %tunnel ( if ("%hostname" =~ /lo(\d+)/) { "$$1"; } else { "NA"; }
> ); \
>
> eval %TS (time()); \
>
> tcpsock 10.1.0.1:514 SEC BGP Neighbor status="Down"
> hostname="%hostname" store="%storenumber" tunnel="%tunnel"
> ECRule="01-bgp-flap-detection" ECRulenum="1"%{.nl};
>
> ptype2=RegExp
>
> pattern2=neighbor $+{nip} Up
>
> desc2=BGP Neighbor $1 Up
>
> action2=eval %TT ( time() - %TS ); \
>
> tcpsock 10.1.0.1:514 SEC BGP Neighbor status="Up"
> hostname="%hostname" store="%storenumber" tunnel="%tunnel" downtime="%TT"
> ECRule="01-bgp-flap-detection" ECRulenum="2"%{.nl}
>
>
thanks for sharing the ruleset with the mailing list!

I have one small side note -- is it possible to have several "neighbor <ip>
Down" error conditions for different IP addresses that are open in
parallel? If so, each "neighbor <ip> Down" event will overwrite the values
of %hostname, %storenumber and other action list variables, and when error
conditions are resolved, most recent values are reported for all
conditions. For resolving this issue, it is best to store those values in
match variables, since they are unique for each event correlation
operation. Storing all relevant data in match variables has one additional
benefit -- all event processing takes place in event matching pattern
(albeit more complex than a regular expression), and rest of the rule
definition becomes more simple.

So here is another version of this ruleset:

type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=Load the Socket module and terminate if it is not found
action=eval %ret (require Socket); \
   if %ret ( logonly Socket module loaded ) else ( eval %o exit(1) )

type=Pair
ptype=PerlFunc
pattern=sub { my(%var); \
if ($_[0] !~ /neighbor \*?(\d+\.\d+\.\d+\.\d+) Down/) { return 0; }
\
$var{"nip"} = $1; $var{"ts"} = time(); \
$var{"hostname"} = scalar
gethostbyaddr(Socket::inet_aton($var{"nip"}), Socket::AF_INET); \
if (!defined($var{"hostname"})) { $var{"hostname"} = $var{"nip"} };
\
$var{"storenumber"} = "NA"; $var{"tunnel"} = "NA"; \
if ($var{"hostname"} =~ /(\d+)lo/) { $var{"storenumber"} = $1; } \
if ($var{"hostname"} =~ /lo(\d+)/) { $var{"tunnel"} = $1; } \
return \%var; }
desc=BGP Neighbor $+{nip} Down
action=tcpsock 10.1.0.1:514 SEC BGP Neighbor status="Down"
hostname="$+{hostname}" store="$+{storenumber}" tunnel="$+{tunnel}"
ECRule="01-bgp-flap-detection" ECRulenum="1"%{.nl};
ptype2=RegExp
pattern2=neighbor $+{nip} Up
desc2=BGP Neighbor %+{nip} Up
action2=lcall %TT %+{ts} -> ( sub { time() - $_[0] } ); \
tcpsock 10.1.0.1:514 SEC BGP Neighbor status="Up"
hostname="%+{hostname}" store="%+{storenumber}" tunnel="%+{tunnel}"
downtime="%TT" ECRule="01-bgp-flap-detection" ECRulenum="2"%{.nl}


This ruleset uses a PerlFunc pattern for not only matching the "neighbor <ip>
Down" event and setting the $+{nip} match variable, but it consolidates all
Perl code fragments from the rule into a single function which also sets
$+{hostname}, $+{storenumber}, $+{tunnel} and $+{ts} match variables. The
match variables are created by storing all data in a Perl hash, and
returning a reference to this hash from the pattern function (that's a
standard way for creating named match variables in a PerlFunc pattern).
This rule definition is able to handle simultaneous error conditions for
several IP addresses, and also uses the same naming scheme for all relevant
data which is perhaps more clear. (The only 

Re: [Simple-evcorr-users] Help calling perl to get hostname

2019-08-28 Thread Risto Vaarandi
hi Clayton,

for testing purposes, I have separated the problematic part of the rule
into a simple Single rule. As you can see, the rule calls the 'eval' action
for resolving an IP address and then writes it to standard output with
'write' action:

type=single
ptype=regexp
pattern=neighbor \*?(\d+\.\d+\.\d+\.\d+) Down
desc=BGP Neighbor $1 Down
action=eval %hostname ( $line = `perl -MSocket -E "say scalar
gethostbyaddr(inet_aton(\"$1\"), AF_INET)"`; \
   if ($line !~ //) { "$$1"; } else { "$1"; } ); \
   write - hostname: %hostname

The code fragment provided for 'eval' action has two issues, and I'll try
to explain them below. The first issue is related to interpretation of the
command line
perl -MSocket -E "say scalar gethostbyaddr(inet_aton(\"$1\"), AF_INET)"
At first glance, this command line appears to work when one tries it with a
specific IP address -- for example, when executing
perl -MSocket -E "say scalar gethostbyaddr(inet_aton(\"127.0.0.1\"),
AF_INET)"
in a terminal window, the command line returns string "localhost" as
expected.

However, when sec calls the 'eval' action which essentially means executing
this code as a Perl program, this code gets interpreted and backslashes in
front of " symbols are removed. Therefore, the command line which gets
forked from 'eval' action is actually
perl -MSocket -E "say scalar gethostbyaddr(inet_aton("127.0.0.1"), AF_INET)"
This command line gets interpreted by shell before it gets executed by
separate Perl process, and as a result, the Perl process will see the
following code:
say scalar gethostbyaddr(inet_aton(127.0.0.1), AF_INET)
However, since the parameter for inet_aton() is not a string, the function
fails.

For addressing this issue, you might use a pair of two backslashes for
masking, for example:
$line = `perl -MSocket -E "say scalar gethostbyaddr(inet_aton(\\"$1\\"),
AF_INET)"`
As an alternative, you could also enclose the entire say statement in
apostrophes which disables interpretation for it:
$line = `perl -MSocket -E 'say scalar gethostbyaddr(inet_aton("$1"),
AF_INET)'`

The other issue is related to the following code fragment:
if ($line !~ //) { "$$1"; } else { "$1"; }
Firstly, the regular expression // matches an empty string which can be
found in any input string, and therefore the check ($line !~ //) is always
false. As a result, the 'eval' action would always execute the else-branch
and return an IP address. Also, the regular expression // does not contain
any capture groups and therefore reference $$1 can't possibly return a
value. For addressing this issue, the code fragment could be rewritten as:
if ($line =~ /^(\S+)/) { "$$1"; } else { "$1"; }

After introducing the changes above, the rule would return a hostname as it
should. However, the rule is still sub-optimal, since it essentially first
compiles and executes Perl code, in order to fork another process for
compiling and executing another piece of Perl code. Below is a much more
efficient alternative which loads perl Socket module when sec starts up,
and resolves IP addresses to hostnames via fast 'lcall' action that avoids
expensive code compilation before each execution:

type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
continue=TakeNext
desc=Load the Socket module and terminate if it is not found
action=eval %ret (require Socket); \
   if %ret ( logonly Socket module loaded ) else ( eval %o exit(1) )

type=single
ptype=regexp
pattern=neighbor \*?(\d+\.\d+\.\d+\.\d+) Down
desc=BGP Neighbor $1 Down
action=lcall %hostname $1 -> ( sub { my $host = scalar
gethostbyaddr(Socket::inet_aton($_[0]), Socket::AF_INET); if
(!defined($host)) { $host = $_[0]; } return $host; } ); \
   write - hostname: %hostname

All the work is done by anonymous function
sub { my $host = scalar gethostbyaddr(Socket::inet_aton($_[0]),
Socket::AF_INET); if (!defined($host)) { $host = $_[0]; } return $host; }
which resolves IP address parameter to hostname with gethostbyaddr(), and
returns the IP address itself if it does not resolve.

Hopefully these examples are helpful,
risto

Clayton Dukes () wrote on Wed,
28 August 2019 at 08:25:

> Hi folks,
>
> I have a user with a rule that is trying to use perl gethostbyaddr but it
> doesn’t seem to be returning anything.
>
> Can someone point out what’s wrong here?
>
> I added the `write` statement at the end and the file just writes the IP,
> not the reverse lookup hostname.
>
>
>
> # Tracks BGP tunnel downtime
>
> type=pair
>
> ptype=regexp
>
> continue=dontcont
>
> pattern=neighbor \*?(\d+\.\d+\.\d+\.\d+) Down
>
> desc=BGP Neighbor $1 Down
>
> action=eval %hostname ( $line = `perl -MSocket -E "say scalar
> gethostbyaddr(inet_aton(\"$1\"), AF_INET)"`; \
>
> if ($line !~ //) { "$$1"; } else { "$1"; } ); \
>
> eval %storenumber ( if ("%hostname" =~ /lo/) { "$$1"; } else { "NA"; }
> ); \
>
> eval %tunnel ( if ("%hostname" =~ /lo(\d+)/) { "$$1"; } else { "NA"; }
> ); \
>
> eval %TS (time()); 

Re: [Simple-evcorr-users] (no subject)

2019-08-27 Thread Risto Vaarandi
hi Santhosh,

Santhosh Kumar () wrote on Tue,
27 August 2019 at 04:55:

> Hello Risto
>
>
> I’ve been running tests on SEC for a while and stuck with below points.
> I’m not familiar with Perl though I tried to find a solution from sec mail
> bucket but no luck, please suggest if this can be achieved with high
> performance,
>
>
>
>1. I could see a log drops when I tested with the event rate of 15000
>logs/sec. A simple SEC rule to receive and forward all the logs to a
>destination. The output shows relatively less number of logs. This also
>increases the cpu usage from 0.3% to 45%
>
> 
>
> Type=single
>
> Ptype=regexp
>
> Pattern=([.\d]+)
>
> Desc=$1
>
> Action=pipe $0 nc syslog101 514
>
> 
>

The above rule is very inefficient, since 'pipe' action pipes the value of
$0 to "nc syslog101 514" command and then closes the pipe, forcing the nc
command to terminate. In other words, each time the above rule matches an
event, new command is forked, and if you have 15000 events per second, the
rule attempts to fork 15000 processes each second. This imposes
considerable load on the system and if you send in events at a high rate,
the rule might easily exhaust your system resources.

Instead of forking nc on each matching event, I would recommend to utilize
'tcpsock' action which transmits events over a single TCP socket (since you
haven't used -u flag with nc tool, I assume that your syslog server listens
on port 514/tcp). For example, consider the following rule (the rule
terminates each transmitted line with the newline):

type=single
ptype=regexp
pattern=([.\d]+)
desc=test event $1
action=tcpsock syslog101:514 $0%.nl

If your syslog server speaks BSD syslog protocol (
https://tools.ietf.org/html/rfc3164), and incoming events are not in that
format, you could use sec builtin action list variables for formatting the
event and providing fields that syslog server expects (such as timestamp).
For example, the following rule transmits each event line over TCP in BSD
syslog format with priority 14 (facility of "user" and severity of "info"),
with hostname "myhost", with program name "myprog", and using newline as a
separator between messages:

type=single
ptype=regexp
pattern=([.\d]+)
desc=test event $1
action=tcpsock syslog101:514 <14>%.monstr %.mdaystr %.hmsstr myhost myprog:
$0%.nl
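For reference, the BSD syslog framing produced by the rule above can be assembled by hand; a small sketch (hostname "myhost" and tag "myprog" are placeholders, as in the rule):

```shell
# Assemble an RFC 3164-style line: <PRI>Mmm dd hh:mm:ss host tag: msg
pri=14                        # facility user (1)*8 + severity info (6)
ts=$(date '+%b %e %H:%M:%S')  # RFC 3164 timestamp, e.g. "Sep  3 12:47:05"
echo "<${pri}>${ts} myhost myprog: test event"
```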

Finally, as David suggested, you can also pass messages to local syslog
server via /dev/log socket, and let the local syslog server handle the
messages (note that unlike for 'tcpsock' in previous example, there is no
need for hostname and terminating newline for 'udgram' action):

type=single
ptype=regexp
pattern=([.\d]+)
desc=test event $1
action=udgram /dev/log <14>%.monstr %.mdaystr %.hmsstr myprog: $0

If your local syslog server is rsyslog, you could have the following
rsyslog rule for forwarding messages:

if $programname == "myprog" then @@syslog101:514

As you can see, there are several ways for achieving your goal, and
hopefully above examples are helpful for selecting the most convenient
solution.


>
>1. On a different scenario, I was interested to match the logs with
>list of IOC’s. Here i was trying to mail the detected log along with IOC
>name. I could achieve it to certain level as mentioned in example but no
>luck with this cases, "Split IP's from the IOC file and use it on the
>“pattern” to match IP from logs"
>
> 
>
> IOC_data_proposal.txt
>
> 187.163.222.244:465 - emotet
>
> 187.189.195.208:8443 - emotet
>
> 188.166.253.46:8080 - emotet
>
> 189.209.217.49:80  - heartbleed
>
> 
>
> Please check and share some insights.
>

I am not sure I fully understood what exactly you want to achieve here. Can
you provide some examples of input events and what output you would like to
generate on each match?

kind regards,
risto



>
>
>
> Eg: I currently tested below case and its working fine as this is a
> straight forward IOC matches.
>
> 
>
> #Current Rule for matching IOC:
>
> type=Single
>
> ptype=RegExp
>
> pattern=(?:SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)
>
> desc=load IOC data
>
> action=logonly; delete IP; create IP; \
>
>lcall %iocevents -> (sub{scalar `cat 
> /usr/local/bin/sec-rules/ioc_data.txt`});
> \
>
>cevent IOC_IP 0 %iocevents;
>
>
>
> type=Single
>
> ptype=RegExp
>
> pattern=.
>
> context=IOC_IP
>
> desc=create an entry
>
> action=logonly; alias IOC IOC_$0
>
>
>
> type=Single
>
> ptype=regexp
>
> context=IOC_$2
>
> pattern= syslog.*hostname=([\w\-\d]+).*IP=([\d\.]+)
>
> desc=Matched host & ip: $2 && $3
>
> action=pipe '$0' mail -s '%s' 'test123.gmail.com'
>
>
>
> IOC_data.txt
>
> 187.163.222.244
>
> 187.189.195.208
>
> 188.166.253.46
>
> 189.209.217.49
>
> 187.163.222.244
>
> 187.189.195.208
>
> 188.166.253.46
>
> 189.209.217.49
>
> 
>
>
>
> Regards,
>
> san
>

Re: [Simple-evcorr-users] spawn timeout and exit code

2019-07-29 Thread Risto Vaarandi
hi Pedro,

these are interesting questions. As for fetching the exit code from spawn,
SEC does that and produces a warning message if it is non-zero, but there
is no equivalent to bash $? variable. Firstly, command lines are executed
asynchronously by spawn, and secondly, many such command lines may be
running simultaneously. It is therefore difficult to tell which command's
exit code $? currently holds. For addressing this issue, it is
probably best to write a simple wrapper script around the program that
needs execution, and pass the exit code value to SEC as a synthetic event
(just like program's standard output is passed). For example, the following
simple script creates a "TEST_EXIT_<exit code>" synthetic event:

#!/bin/bash

/bin/false
echo TEST_EXIT_$?

However, running a child process from SEC for at most given number of
seconds is a trickier issue. Although the spawn action does not have
parameters for setting this timeout, this task can again be handled with a
wrapper script, and there is a relevant example in the SEC FAQ:
http://simple-evcorr.github.io/FAQ.html#20
The example wrapper is universal and works not only for spawn but also
other actions that fork child processes from SEC. Also, in addition to
limiting the run time of child processes, one can define a signal which is
used for terminating the child process.

However, since in your case you also need to capture the standard output
and exit code of the command that was started by spawn, I have modified the
example wrapper from FAQ a bit:

#!/usr/bin/perl -w
#
# wrapper.pl

if (scalar(@ARGV) < 3) { exit(1); }
$int = shift @ARGV;
$sig = shift @ARGV;

$cmd = join(" ", @ARGV);

$SIG{TERM} = sub { $term{$$} = 1; };

if (!pipe(READ, WRITE)) { exit(1); }

$pid = fork();

if ($pid == -1) {
   exit(1);
} elsif ($pid == 0) {
   $SIG{TERM} = 'DEFAULT';
   if (exists($term{$$})) { exit(0); }
   close(READ);
   if (!open(STDOUT, ">&WRITE")) { exit(1); }
   exec($cmd);
   exit(1);
} else {
   $SIG{TERM} = sub { kill TERM, $pid; exit(0); };
   if (exists($term{$$})) { kill TERM, $pid; exit(0); };
   $SIG{ALRM} = sub { kill $sig, $pid;
  print "Command $cmd timed out\n";
  exit(0); };
   alarm($int);
   close(WRITE);
   @lines = <READ>;
   chomp(@lines);
   waitpid($pid, 0);
   $exitcode = $? >> 8;
   print "Command $cmd output: ", join(" ", @lines), "\n";
   print "Command $cmd exit code: $exitcode\n";
   exit($exitcode);
}

The first parameter of the wrapper is timeout in seconds, the second
parameter the number of the signal which is sent to child process on
timeout expiration, and the rest of the parameters define the command line
to be executed. The wrapper forks the command line as a child process,
acting as an intermediary between SEC and command line. The wrapper sets a
timer for itself with the alarm() system call, delivering the signal to
child when the timer expires, and reporting child standard output and exit
code back to SEC if the child process finishes before timeout. Child
standard output is collected through a pipe and reported in two lines, with
the first line representing the entire standard output and the second line the
exit code of the child. If SEC is shut down, the wrapper also forwards the
TERM signal received from SEC to child process, so that the command line
would not stay in the process table after SEC has finished.

Here is an example ruleset that utilizes the wrapper for starting command
lines with spawn, and collecting their outputs and exit codes:

type=Single
ptype=RegExp
pattern=start (\d+) (.+)
desc=allow child process to run for $1 seconds and terminate it with TERM
(15) signal
action=spawn ./wrapper.pl $1 15 $2

type=Single
ptype=RegExp2
pattern=Command (.+) output: (.*)\nCommand \1 exit code: (\d+)
desc=Catch the output and exit code of the child process
action=write - command $1, output $2, exit code $3

type=Single
ptype=RegExp
pattern=Command (.+) timed out
desc=Child process has timed out
action=write - command $1 has timed out

The first rule runs a command line via wrapper, e.g., if line "start 1
sleep 60" is provided to SEC, command line "sleep 60" is allowed to run for
1 second. Since this command line runs for 1 minute, it gets terminated
after 1 second with TERM signal (signal number 15), and wrapper generates
the following synthetic event "Command sleep 60 timed out". This event is
matched by the third rule which writes the string "command sleep 60 has
timed out" to standard output. On the other hand, if line "start 5
/bin/date" is provided to SEC, /bin/date is allowed to run for 5 seconds
which is more than enough for successful completion. Therefore, the wrapper
reports back output and exit code from /bin/date which are matched by the
second rule, and a string similar to the following gets written to standard
output:
"command /bin/date, output Tue Jul 30 00:44:16 EEST 2019, exit code 0"
(the second rule utilizes the RegExp2 pattern for matching two consecutive
synthetic events received 

Re: [Simple-evcorr-users] EVENTGROUP RULE and match variables

2019-06-28 Thread Risto Vaarandi
hi Jia,

thanks for an interesting question! SEC match variables are set to new
values after each pattern match and they don't have any persistence over
several matches. Since there is only one capture group in each regular
expression of your example rule, all four patterns are setting the $1
variable, and the previous value of $1 is not accessible.

However, there is a way to address this issue with 'count' fields of
EventGroup rule which allow for executing a custom action on every pattern
match. If you would store the current value of $1 variable into a
persistent object like a SEC context, the value could be retrieved later.
For example, consider the following rule example which is using four
contexts P1,...,P4 for this purpose:

type=EventGroup4
init=create P1; create P2; create P3; create P4
end=delete P1; delete P2; delete P3; delete P4
ptype=RegExp
pattern=(\d+)A
count=fill P1 $1
ptype2=RegExp
pattern2=(\d+)B
count2=fill P2 $1
ptype3=RegExp
pattern3=(\d+)C
count3=fill P3 $1
ptype4=RegExp
pattern4=(\d+)D
count4=fill P4 $1
desc=test
action=copy P1 %p1; copy P2 %p2; copy P3 %p3; copy P4 %p4; \
   write - %p1, %p2, %p3, %p4 at %t.
window=10

The 'init' and 'end' fields of the rule will create and delete four
contexts P1, P2, P3 and P4 for holding the values of $1 match variable.
Contexts are created when the counting operation starts, and deletion
happens when the operation finishes its work. There are also four count*
fields in rule definition which store the value of $1 into corresponding
context with 'fill' action. Finally, in the action field the values will be
retrieved from contexts with 'copy' actions and written to standard output.

One can of course shorten this rule example and use %p1,...,%p4 action list
variables only, dropping contexts P1,...,P4 altogether. In that case,
'init' and 'end' fields are not necessary and count* fields would look like
this: count=assign %p1 $1. Also, there wouldn't be a need for 'copy'
actions. Unfortunately, if the 'desc' field contains match variables and
the rule can run several event correlation operations simultaneously, the
same set of four variables would be used by all simultaneously running
operations which is probably not what you want. However, since context
names are allowed to contain match variables, you can utilize variables
which are unique for each operation for naming the contexts. For example,
if $2 holds the IP address and the 'desc' field is defined as "desc=test
for IP $2", the rule runs a separate operation for each IP address. For
keeping the data for each operation separate and safe from overwriting by
another operation, you can simply use contexts P1_$2, P2_$2, P3_$2 and
P4_$2.
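Putting that advice together, here is a hedged sketch of the per-operation variant; the input format "<number><letter> <ip>" and all field values are invented for illustration:

```
type=EventGroup4
init=create P1_$2; create P2_$2; create P3_$2; create P4_$2
end=delete P1_$2; delete P2_$2; delete P3_$2; delete P4_$2
ptype=RegExp
pattern=(\d+)A (\d+\.\d+\.\d+\.\d+)
count=fill P1_$2 $1
ptype2=RegExp
pattern2=(\d+)B (\d+\.\d+\.\d+\.\d+)
count2=fill P2_$2 $1
ptype3=RegExp
pattern3=(\d+)C (\d+\.\d+\.\d+\.\d+)
count3=fill P3_$2 $1
ptype4=RegExp
pattern4=(\d+)D (\d+\.\d+\.\d+\.\d+)
count4=fill P4_$2 $1
desc=test for IP $2
action=copy P1_$2 %p1; copy P2_$2 %p2; copy P3_$2 %p3; copy P4_$2 %p4; \
   write - %p1, %p2, %p3, %p4 for $2
window=10
```

Since the context names embed $2, each per-IP operation reads and writes its own set of contexts, so concurrent operations do not overwrite each other's values.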

One final note -- each pattern in EventGroup rule can match several times,
and if that happens, above rule example stores the *last* value for given
pattern. For example, if you have the following events within 10 seconds:
4C
5D
1A
2A
3B

the rule would produce the following output:
2, 3, 4, 5 at Fri Jun 28 19:11:51 2019.

If you would like to store the *first* value which the pattern has seen
during the lifetime of the operation, you could use the following 'count'
field:
count=getsize %o P1; if %o ( none ) else ( fill P1 $1 )
However, since the event correlation window of the EventGroup rule is
sliding, the stored value might originate from an event which is already
outside the window when the operation terminates (you will never have this
issue if you store the last values like the example rule does).

I hope my answer was helpful,
risto

Xinying Sun () wrote on Fri, 28 June 2019 at 17:25:

> Hello SEC Users,
> I have a question about match variables of EVENTGROUP RULE. For example:
>
> type=EventGroup4
> ptype=RegExp
> pattern=(\d+)A
> ptype2=RegExp
> pattern2=(\d+)B
> ptype3=RegExp
> pattern3=(\d+)C
> ptype4=RegExp
> pattern4=(\d+)D
> desc=test
> action=write - $1, $2, $3, $4 at %t.
> window=10
>
> Then input:
> 1A
> 2B
> 3C
> 4D
> It can output 1,2,3,4 at...
>
> Is there any method to modify the example so that it matches every
> pattern's ( ) content?
> Thanks,
> Jia
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


[Simple-evcorr-users] sec-2.8.2 released

2019-06-02 Thread Risto Vaarandi
hi all,

today, SEC-2.8.2 has been released which can be downloaded from:
https://github.com/simple-evcorr/sec/releases/download/2.8.2/sec-2.8.2.tar.gz

Here is the changelog for the new version:

--- version 2.8.2

* added support for 'varset' action.

* fixed a bug where reference to $:{cacheentry:varname} match variable
  for non-existing pattern match cache entry would create an empty entry.

The new 'varset' action is similar to the 'varset' context expression
operator, and checks if a given pattern match cache entry exists. The 2.8.2
version also includes documentation updates discussed in the mailing list
after the release of the previous version.

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Correlation Upon Aggregation

2019-05-13 Thread Risto Vaarandi
hi Santhosh,

since you are using a SingleWithSuppress rule for aggregation, is my
understanding correct that the term "aggregation" means generating a syslog
message on the first matching event, suppressing the following matching
events for 300 seconds? If so, you don't need the PairWithWindow rule
but can accomplish your task with a SingleWithSuppress rule that you
already have in your rulebase. All you need to do is to set up a file which
contains IP addresses of interest, and load it when SEC starts or the file
is updated. Here is a simple ruleset that implements this task:

type=Single
ptype=RegExp
pattern=^(?:SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)$
context=SEC_INTERNAL_EVENT
desc=load blacklist of bad IP addresses
action=delete BADIP; create BADIP; \
   lcall %o -> ( sub { $mtime = (stat("/tmp/badip.txt"))[9] } ); \
   cspawn BadIp cat /tmp/badip.txt

type=Calendar
time=* * * * *
context= -> ( sub { my($temp) = (stat("/tmp/badip.txt"))[9]; \
if (!defined($temp)) { return 0; } \
if (!defined($mtime) || $temp != $mtime) \
  { $mtime = $temp; return 1; } \
return 0; } )
desc=reload updated blacklist of bad IP addresses
action=delete BADIP; create BADIP; cspawn BadIp cat /tmp/badip.txt

type=Single
ptype=RegExp
pattern=^\s*((?:\d{1,3}\.){3}\d{1,3})\s*$
context=BadIp
desc=set up a blacklist entry for IP address $1
action=alias BADIP BADIP_$1


Note that for reading the file with blacklist entries, I have used the
'cspawn' action, since it is more efficient and simpler than the
combination of 'lcall' and 'cevent' in your ruleset.
Also, I have included an additional Calendar rule which checks the
blacklist file once a minute, and reloads the blacklist if the file
modification time has changed (the modification time is memorized in the
Perl global variable $mtime).
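The modification-time check performed by the Calendar rule can be
illustrated with a standalone Python sketch (an illustration only, not SEC
code; the temporary file stands in for /tmp/badip.txt):

```python
import os
import tempfile
import time

class ReloadChecker:
    """Remember a file's modification time and report when it changes,
    mirroring the $mtime logic of the Calendar rule above."""

    def __init__(self, path):
        self.path = path
        self.mtime = None   # corresponds to the undefined $mtime at startup

    def needs_reload(self):
        try:
            current = os.stat(self.path).st_mtime
        except FileNotFoundError:
            return False                 # file missing: nothing to reload
        if self.mtime is None or current != self.mtime:
            self.mtime = current         # memorize the new modification time
            return True
        return False

# demo with a temporary blacklist file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("10.0.0.1\n")
    path = f.name

checker = ReloadChecker(path)
results = [checker.needs_reload(), checker.needs_reload()]
os.utime(path, (time.time() + 10, time.time() + 10))   # simulate an update
results.append(checker.needs_reload())
os.unlink(path)
print(results)   # [True, False, True]
```

The first call reports a reload (the file is seen for the first time), the
second reports nothing, and the third fires again after the simulated
update, just as the Calendar rule reloads only on a changed mtime.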

Once the blacklist has been loaded, you could use the following
SingleWithSuppress rules for reacting to the first IDS event observed for a
specific combination of source IP address and attack name, suppressing the
following events for the same combination for 300 seconds:

type=SingleWithSuppress
ptype=regexp
pattern=IDS.*src=([\d.]+).*attack_name=(\S+)
context=BADIP_$1
desc=Security Alert $2 for blacklisted IP $1
action=udpsock syslog01:514 <13>%.monstr %.mdaystr %.hmsstr myhost sec: %s
window=300

type=SingleWithSuppress
ptype=regexp
pattern=IDS.*src=([\d.]+).*attack_name=(\S+)
context=!BADIP_$1
desc=Security Alert $2 for non-blacklisted IP $1
action=udpsock syslog01:514 <13>%.monstr %.mdaystr %.hmsstr myhost sec: %s
window=300

Note that the 'pipe' action in your rule example has invalid syntax, since
there is no pipe (|) symbol between the event string and external command
line. Also, it is inefficient to fork a process each time an event needs to
be sent to central syslog server, and 'udpsock' action is a much better
alternative since it only sets up a single UDP socket for talking to server
(documentation of 'udpsock' action actually contains an example of
communicating with remote syslog server, and I have used it in above rules).
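For reference, the <13> prefix in the rules above is the syslog priority
value, computed as facility * 8 + severity (RFC 3164): facility 1 (user)
and severity 5 (notice) give 13. A small Python illustration (not part of
SEC; the timestamp, host name and server name are placeholders):

```python
import socket

def syslog_pri(facility, severity):
    """RFC 3164 PRI value: facility * 8 + severity."""
    return facility * 8 + severity

def format_syslog(msg, timestamp, host, tag, facility=1, severity=5):
    # classic BSD syslog layout: <PRI>TIMESTAMP HOST TAG: MSG
    return "<%d>%s %s %s: %s" % (syslog_pri(facility, severity),
                                 timestamp, host, tag, msg)

# facility 1 (user) and severity 5 (notice) yield PRI 13, as in the rules
line = format_syslog("Security Alert probe for blacklisted IP 10.0.0.1",
                     "May 13 13:56:00", "myhost", "sec")
print(line)

# Delivery in the style of 'udpsock' is a single UDP datagram (the server
# name is a placeholder, so this part is left commented out):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(line.encode(), ("syslog01", 514))
```

Keeping one UDP socket open, as 'udpsock' does, avoids forking a process
per message.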

I hope that above examples are helpful.

kind regards,
risto

Santhosh Kumar () wrote on Mon, 13 May 2019 at 13:56:

> Hi Risto
>
>
>
> Greetings..!!
>
>
>
> I would like to get your suggestions on event correlation upon
> aggregation. Below rule aggregate events with whitelisting criteria.
>
>
>
> ---
>
> type=Single
>
> ptype=RegExp
>
> pattern=(?:SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)
>
> desc=load blacklist
>
> action=logonly; delete WL; create WL; \
>
> lcall %events -> (sub{scalar `cat
> /usr/local/bin/sec-rules/whitelist.txt`}); \
>
> cevent Whitelist 0 %events
>
>
>
> type=Single
>
> ptype=RegExp
>
> pattern=.
>
> context=Whitelist
>
> desc=create a whitelist entry
>
> action=logonly; alias WL WL_$0
>
>
>
> type=SingleWithSuppress
>
> ptype=regexp
>
> context=!WL_$2
>
> pattern=IDS.*dst=([\d\.]+).*attack_name=([\w\:\-\/\.\()\s]+)
>
> desc=Suppressed $2 Security Alert towards $1
>
> action= pipe '<5>$0' | nc syslog01 514
>
> window=300
>
> ---
>
>
>
> Now will "pairwithwindow" rule on top this helps me to achieve correlation
> based on Dst. IP($1) field from IDS logs with Threat Intel IP(which is
> stored in a file).
>
>
>
> Conditions to meet are,
>
> Condition 1: Need to forward Aggregated + Correlated log to external
> syslog server.
>
> Condition 2: If Correlation is not matching, Just Aggregated log should be
> forwarded to external syslog server.
>
>
>
> Please suggest me with best practices.
>
>
>
> Regards,
>
> Santhosh S
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Question on rule

2019-03-18 Thread Risto Vaarandi
hi James,

yes, you can specify several actions in the 'action' field of each rule --
all you have to do is to separate actions with a semicolon. For example,
the following 'pipe' and 'shellcmd' actions will issue an email alert and
generate a syslog message with /usr/bin/logger:

action = pipe '%s' /usr/local/bin/sendEmail; shellcmd /usr/bin/logger -p
daemon.err -t sec 'Alert fired by sec: %s'

kind regards,
risto

James Lay () wrote on Mon, 18 March 2019 at 16:37:

> Last question on this...I usually add an email alert using sendemail
> like so to my rules:
>
> action = pipe '%s' /usr/local/bin/sendEmail
>
> Can I add a second action to this?  Thank you!
>
> James
>
> On 2019-03-18 08:11, James Lay wrote:
> > Wow thanks so much RistoI love the way you actually explain what's
> > going on...really appreciate it!
> >
> > James
> >
> > On 2019-03-16 05:36, Risto Vaarandi wrote:
> >> hi James,
> >>
> >> for addressing this problem, you could try the following EventGroup
> >> rule:
> >>
> >> type=EventGroup
> >> ptype=RegExp
> >>
> pattern=^\S+\s+\S+\s+((?:\d{1,3}\.){3}\d{1,3})\s+\d+\s+(?:\d{1,3}\.){3}\d{1,3}\s+88\s+AS\s+(\S+)\s+(\S+)\s+F\s+KDC_ERR_PREAUTH_FAILED
> >> context=!WORKSTATION_$1_LOGIN_FAILURE_$2  &&  !ALERT_ISSUED_FOR_$1
> >> count=create WORKSTATION_$1_LOGIN_FAILURE_$2 60
> >> desc=Workstation IP $1 has failed to login with three different user
> >> accounts
> >> action=write - %s; create ALERT_ISSUED_FOR_$1 3600
> >> window=60
> >> thresh=3
> >>
> >> The regular expression of this EventGroup rule will set $1 match
> >> variable to IP address of the workstation and $2 to user account. I
> >> have assumed that the user account is provided by "user/domain" field
> >> in your example event. The EventGroup rule matches login failure
> >> events and starts event counting operations for these events, so that
> >> there is a separate operation for each workstation IP address (because
> >> 'desc' field of the rule contains $1 variable).
> >>
> >> After the regular expression has matched a login failure event, the
> >> 'context' field of the rule makes sure that the context
> >> WORKSTATION_<IP>_LOGIN_FAILURE_<username> does not exist. Note that
> >> the presence of this context indicates that the login failure event
> >> for the given user account and workstation IP has already been counted
> >> during the last 60 seconds. If this context does not exist, event
> >> matches the rule and will be counted by the operation that runs for
> >> the given workstation IP address. After the event has been counted,
> >> the context WORKSTATION_<IP>_LOGIN_FAILURE_<username> will be
> >> created for 60 seconds (see the 'count' field of the rule) which
> >> prevents login failure event for the same workstation and user account
> >> counted twice in the window of 60 seconds. Due to the use of these
> >> contexts, a counting operation which runs for some workstation IP
> >> address can only observe 3 events within 60 seconds if three user
> >> accounts for these events are *all* different.
> >>
> >> Finally, after a counting operation has issued an alarm (see the
> >> 'action' field), the rule also sets up a context ALERT_ISSUED_FOR_<IP>
> >> for 1 hour. The purpose of this context is to suppress repeated alarms
> >> if the workstation continues to probe user accounts after initial
> >> alarm. Without this context, you might get a new alarm about the same
> >> workstation after each 60 seconds, while the ALERT_ISSUED_FOR_<IP> context
> >> suppresses such repeated alarms for 1 hour.
> >>
> >> In order to illustrate how the EventGroup rule works, suppose the
> >> following five events appear for workstations 10.1.1.1 and 10.1.1.2
> >> [1]:
> >>
> >> 2019-03-15T10:50:30-0600CAnW7i1DW5gZp7c6Vd  10.1.1.1
> >> 62469   192.168.1.1  88  AS  bob/mydomain
> >> krbtgt/domain   F   KDC_ERR_PREAUTH_FAILED  -
> >> 2037-09-12T20:48:05-0600   -   T   T   -
> >> 2019-03-15T10:50:31-0600CAnW7i1DW5gZp7c6Vd  10.1.1.1
> >> 62471   192.168.1.1  88  AS  alice/mydomain
> >> krbtgt/domain   F   KDC_ERR_PREAUTH_FAILED  -
> >> 2037-09-12T20:48:05-0600   -   T   T   -
> >> 2019-03-15T10:50:32-0600CAnW7i1DW5gZp7c6Vd  10.1.1.1
> >> 62472   192.168.1.1  88  AS  alice/mydomain
> >> krbtgt/domain   F   K

Re: [Simple-evcorr-users] Question on rule

2019-03-16 Thread Risto Vaarandi
hi James,

for addressing this problem, you could try the following EventGroup rule:

type=EventGroup
ptype=RegExp
pattern=^\S+\s+\S+\s+((?:\d{1,3}\.){3}\d{1,3})\s+\d+\s+(?:\d{1,3}\.){3}\d{1,3}\s+88\s+AS\s+(\S+)\s+(\S+)\s+F\s+KDC_ERR_PREAUTH_FAILED
context=!WORKSTATION_$1_LOGIN_FAILURE_$2  &&  !ALERT_ISSUED_FOR_$1
count=create WORKSTATION_$1_LOGIN_FAILURE_$2 60
desc=Workstation IP $1 has failed to login with three different user
accounts
action=write - %s; create ALERT_ISSUED_FOR_$1 3600
window=60
thresh=3

The regular expression of this EventGroup rule will set $1 match variable
to IP address of the workstation and $2 to user account. I have assumed
that the user account is provided by "user/domain" field in your example
event. The EventGroup rule matches login failure events and starts event
counting operations for these events, so that there is a separate operation
for each workstation IP address (because the 'desc' field of the rule
contains the $1 variable).

After the regular expression has matched a login failure event, the
'context' field of the rule makes sure that the context
WORKSTATION_<IP>_LOGIN_FAILURE_<username> does not exist. Note that the
presence of this context indicates that the login failure event for the
given user account and workstation IP has already been counted during the
last 60 seconds. If this context does not exist, the event matches the rule
and will be counted by the operation that runs for the given workstation IP
address. After the event has been counted, the context
WORKSTATION_<IP>_LOGIN_FAILURE_<username> will be created for 60 seconds
(see the 'count' field of the rule), which prevents a login failure event
for the same workstation and user account from being counted twice in the
window of 60 seconds. Due to the use of these contexts, a counting
operation which runs for some workstation IP address can only observe 3
events within 60 seconds if the three user accounts for these events are
*all* different.

Finally, after a counting operation has issued an alarm (see the 'action'
field), the rule also sets up a context ALERT_ISSUED_FOR_<IP> for 1 hour.
The purpose of this context is to suppress repeated alarms if the
workstation continues to probe user accounts after the initial alarm.
Without this context, you might get a new alarm about the same workstation
after each 60 seconds, while the ALERT_ISSUED_FOR_<IP> context suppresses
such repeated alarms for 1 hour.

In order to illustrate how the EventGroup rule works, suppose the following
five events appear for workstations 10.1.1.1 and 10.1.1.2:

2019-03-15T10:50:30-0600  CAnW7i1DW5gZp7c6Vd  10.1.1.1  62469  192.168.1.1  88  AS  bob/mydomain  krbtgt/domain  F  KDC_ERR_PREAUTH_FAILED  -  2037-09-12T20:48:05-0600  -  T  T  -
2019-03-15T10:50:31-0600  CAnW7i1DW5gZp7c6Vd  10.1.1.1  62471  192.168.1.1  88  AS  alice/mydomain  krbtgt/domain  F  KDC_ERR_PREAUTH_FAILED  -  2037-09-12T20:48:05-0600  -  T  T  -
2019-03-15T10:50:32-0600  CAnW7i1DW5gZp7c6Vd  10.1.1.1  62472  192.168.1.1  88  AS  alice/mydomain  krbtgt/domain  F  KDC_ERR_PREAUTH_FAILED  -  2037-09-12T20:48:05-0600  -  T  T  -
2019-03-15T10:50:34-0600  CAnW7i1DW5gZp7c6Vd  10.1.1.2  41916  192.168.1.1  88  AS  bob/mydomain  krbtgt/domain  F  KDC_ERR_PREAUTH_FAILED  -  2037-09-12T20:48:05-0600  -  T  T  -
2019-03-15T10:50:38-0600  CAnW7i1DW5gZp7c6Vd  10.1.1.1  62473  192.168.1.1  88  AS  donald/mydomain  krbtgt/domain  F  KDC_ERR_PREAUTH_FAILED  -  2037-09-12T20:48:05-0600  -  T  T  -

After seeing these events, the EventGroup rule would start two counting
operations for workstations 10.1.1.1 and 10.1.1.2 (operation for 10.1.1.1
is started at 10:50:30 and operation for 10.1.1.2 at 10:50:34). Also, the
operation which runs for 10.1.1.1 would fire an alarm when the fifth event
appears (at 10:50:38), since at that point the operation has seen login
failure events for three distinct user accounts bob/mydomain,
alice/mydomain and donald/mydomain. Note that the operation would not count
the third event, since alice/mydomain has already been observed during the
last 60 seconds (see the second event). Also, the fourth event is
irrelevant for the counting operation which runs for 10.1.1.1, since the
workstation IP address is different from 10.1.1.1 in this event.
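
The counting logic described above can be mimicked with a short, simplified
Python simulation (an illustration only; it models the per-(IP, user)
suppression contexts and the threshold, but omits the 1-hour
ALERT_ISSUED_FOR_<IP> context and sec's exact sliding-window bookkeeping):

```python
# Simplified simulation of the EventGroup rule above: count login failures
# per workstation IP, ignoring repeats of the same (IP, user) pair within
# the 60 second window, and raise an alert on the 3rd distinct account.
WINDOW = 60
THRESH = 3

seen = {}     # (ip, user) -> expiry time of the suppression context
counts = {}   # ip -> timestamps of counted events
alerts = []   # (time, ip) tuples for fired alarms

def feed(t, ip, user):
    # skip the event if this (ip, user) pair was already counted recently
    if seen.get((ip, user), 0) > t:
        return
    seen[(ip, user)] = t + WINDOW
    times = [x for x in counts.get(ip, []) if x > t - WINDOW]
    times.append(t)
    counts[ip] = times
    if len(times) >= THRESH:
        alerts.append((t, ip))
        counts[ip] = []

events = [(30, "10.1.1.1", "bob"), (31, "10.1.1.1", "alice"),
          (32, "10.1.1.1", "alice"), (34, "10.1.1.2", "bob"),
          (38, "10.1.1.1", "donald")]
for t, ip, user in events:
    feed(t, ip, user)

print(alerts)   # the alarm fires at t=38 for 10.1.1.1 only
```

As in the walkthrough, the repeated alice event at t=32 is not counted, and
the single event for 10.1.1.2 never reaches the threshold.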

I hope this example is helpful.

kind regards,
risto

James Lay () wrote on Fri, 15 March 2019 at 22:13:

> So I have a log file that logs user login attempts made to a domain
> controller like so (bro/zeek):
>
> 2019-03-15T10:50:30-0600CAnW7i1DW5gZp7c6Vd  cx.x.x.x
> 62469   sx.x.x.x  88  AS  user/domainkrbtgt/domain
> F   KDC_ERR_PREAUTH_FAILED  -   2037-09-12T20:48:05-0600
> -   T   T   -
>
> In looking at the man page at:  http://simple-evcorr.github.io/man.html
> 

[Simple-evcorr-users] an update to sec FAQ

2019-01-30 Thread Risto Vaarandi
hi all,

sec FAQ has been updated with a new entry about setting up a control file
or fifo for issuing commands to sec:
http://simple-evcorr.github.io/FAQ.html#26
The rule example under the new entry should be particularly useful if the
OS platform does not support signals natively (e.g., Windows).

kind regards,
risto
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] list of suppress events

2018-12-20 Thread Risto Vaarandi
hi Eli,

all data structures created by the user are kept in a dedicated namespace
main::SEC, in order to avoid clashes with variables in sec code. That
applies to all Perl code written by the user (such as code used for
PerlFunc patterns, lcall actions, etc.).

As a side note -- if there is a special need to access sec internal data
structures, it can be done by referring to main:: namespace. The "Perl
Integration" section of the official documentation (
http://simple-evcorr.github.io/man.html#lbBB) contains more detailed
discussion on namespaces with a relevant example.
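
As a small illustration (a hypothetical sketch, not taken from the official
documentation), the $counter variable in the following rule actually lives
as $main::SEC::counter, so it cannot clash with a variable of the same name
inside sec itself:

type=Single
ptype=RegExp
pattern=namespace demo
desc=illustrate the main::SEC namespace
action=lcall %o -> ( sub { return ++$counter; } ); \
       logonly counter is now %o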

Hope this helps,
risto



Kagan, Eli () wrote on Thu, 20 December 2018 at 20:38:

> Thanks Risto. I’ll give it a try.
>
>
>
> Are you keeping user defined perl variables in a separate namespace?
>
>
>
> -- ek
>
>
>
> *From:* Risto Vaarandi 
> *Sent:* Tuesday, December 18, 2018 1:45 PM
> *To:* Kagan, Eli 
> *Cc:* simple-evcorr-users@lists.sourceforge.net
> *Subject:* Re: [Simple-evcorr-users] list of suppress events
>
>
>
> hi Eli,
>
>
>
> if you would like to have regular expressions stored in an external file
> and load them at startup and restarts, you could use the following ruleset.
> The first rule loads patterns from a file when sec is started or has
> received HUP or ABRT signal. The rule assumes that each line contains a
> regular expression, compiles these expressions and stores them into the
> array @plist.  The second example rule compares an input line with regular
> expression patterns from @plist, producing a match if any of the patterns
> matches:
>
>
>
> type=Single
> ptype=RegExp
> pattern=^(SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)$
> context=SEC_INTERNAL_EVENT
> desc=load suppress patterns
> action=lcall %o -> ( sub { @plist = (); \
>  if (!open(FILE, "patterns.txt")) { return 0; } \
>  my(@lines) = <FILE>; close(FILE); chomp(@lines); \
>  @plist = map { qr/$_/ } @lines; return scalar(@plist); } ); \
>if %o ( logonly %o patterns loaded ) else ( logonly No patterns
> loaded )
>
>
> type=Suppress
> ptype=PerlFunc
> pattern=sub { foreach $p (@plist) { if ($_[0] =~ $p) { return 1; } }
> return 0; }
>
>
>
> Since both the PerlFunc pattern and regular expressions are compiled
> before usage, their evaluation does not involve any extra overhead. In
> fact, when comparing the performance of a list of five regular expression
> patterns against five Suppress rules on 1 million non-matching input events
> on my laptop, I observed an external list being about 10% faster. However,
> the actual performance of both approaches depends heavily on input events
> and patterns, so I would recommend to benchmark the above ruleset vs
> Suppress rules on your log data.
>
>
>
> kind regards,
>
> risto
>
>
>
>
>
>
>
Kagan, Eli () wrote on Tue, 18 December 2018 at 19:34:
>
> Howdy,
>
>
>
> I’d like to have a separate file containing a list of regex patterns to
> suppress. That is, instead of creating a multitude of Suppress events for
> each pattern I would like to have a PerlFunc Suppress rule that would use an
> external list. Ideally that list should be loaded at startup.
>
>
>
> Is there a simple way to create something like that and if so what the
> performance impact would be versus generating individual suppress rules
> with a config script?
>
>
>
> Thanks,
>
> Eli
>
>
>
> DXC Technology Company -- This message is transmitted to you by or on
> behalf of DXC Technology Company or one of its affiliates. It is intended
> exclusively for the addressee. The substance of this message, along with
> any attachments, may contain proprietary, confidential or privileged
> information or information that is otherwise legally exempt from
> disclosure. Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient of this message, you are
> not authorized to read, print, retain, copy or disseminate any part of this
> message. If you have received this message in error, please destroy and
> delete all copies and notify the sender by return e-mail. Regardless of
> content, this e-mail shall not operate to bind DXC Technology Company or
> any of its affiliates to any order or other contract unless pursuant to
> explicit written agreement or government initiative expressly permitting
> the use of e-mail for such purpose. --.
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] list of suppress events

2018-12-18 Thread Risto Vaarandi
hi Eli,

if you would like to have regular expressions stored in an external file
and load them at startup and restarts, you could use the following ruleset.
The first rule loads patterns from a file when sec is started or has
received HUP or ABRT signal. The rule assumes that each line contains a
regular expression, compiles these expressions and stores them into the
array @plist.  The second example rule compares an input line with regular
expression patterns from @plist, producing a match if any of the patterns
matches:

type=Single
ptype=RegExp
pattern=^(SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART)$
context=SEC_INTERNAL_EVENT
desc=load suppress patterns
action=lcall %o -> ( sub { @plist = (); \
 if (!open(FILE, "patterns.txt")) { return 0; } \
 my(@lines) = <FILE>; close(FILE); chomp(@lines); \
 @plist = map { qr/$_/ } @lines; return scalar(@plist); } ); \
   if %o ( logonly %o patterns loaded ) else ( logonly No patterns
loaded )


type=Suppress
ptype=PerlFunc
pattern=sub { foreach $p (@plist) { if ($_[0] =~ $p) { return 1; } } return
0; }

Since both the PerlFunc pattern and regular expressions are compiled before
usage, their evaluation does not involve any extra overhead. In fact, when
comparing the performance of a list of five regular expression patterns
against five Suppress rules on 1 million non-matching input events on my
laptop, I observed an external list being about 10% faster. However, the
actual performance of both approaches depends heavily on input events and
patterns, so I would recommend to benchmark the above ruleset vs Suppress
rules on your log data.
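
The same pattern-list idea can be sketched in Python for illustration (the
patterns here are made-up examples; in the ruleset above they would come
from patterns.txt):

```python
import re

# Precompile the suppression patterns once, like qr// in the Perl code.
pattern_lines = [r"^DEBUG", r"heartbeat", r"\bnoise\b"]
plist = [re.compile(p) for p in pattern_lines]

def suppress(line):
    """Return True if any precompiled pattern matches the line,
    mirroring the PerlFunc pattern of the Suppress rule."""
    return any(p.search(line) for p in plist)

print(suppress("DEBUG: cache refreshed"))   # True  (matches ^DEBUG)
print(suppress("real alert"))               # False (no pattern matches)
```

As with the Perl version, compiling once and looping over the compiled
objects avoids recompiling the expressions for every input line.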

kind regards,
risto



Kagan, Eli () wrote on Tue, 18 December 2018 at 19:34:

> Howdy,
>
>
>
> I’d like to have a separate file containing a list of regex patterns to
> suppress. That is, instead of creating a multitude of Suppress events for
> each pattern I would like to have a PerlFunc Suppress rule that would use an
> external list. Ideally that list should be loaded at startup.
>
>
>
> Is there a simple way to create something like that and if so what the
> performance impact would be versus generating individual suppress rules
> with a config script?
>
>
>
> Thanks,
>
> Eli
>
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] help

2018-12-06 Thread Risto Vaarandi
hi Graeme,
your posting is apparently empty -- can you re-post your question?
risto

Graeme Danielson () wrote on Thu, 6 December 2018 at 05:05:

>
>
>
>
> -- Graeme Danielson tel:+64-21-611345  UTC+13
>
>
>
> Good planets are hard to find - please think of the environment before you
> print this email.
> 
> CAUTION - This message may contain privileged and confidential
> information intended only for the use of the addressee named above.
> If you are not the intended recipient of this message you are hereby
> notified that any use, dissemination, distribution or reproduction
> of this message is prohibited. If you have received this message in
> error please notify Air New Zealand immediately. Any views expressed
> in this message are those of the individual sender and may not
> necessarily reflect the views of Air New Zealand.
> _
> For more information on the Air New Zealand Group, visit us online
> at http://www.airnewzealand.com
> _
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] SingleWithThreshold reference current input line

2018-11-08 Thread Risto Vaarandi
hi Dusan,

the problem lies in the fact that when a SingleWithThreshold rule starts a
counting operation, match variables in the 'action' field receive their
values from the first event which triggered that operation (that is done to
stay consistent with the substitution of variables in other fields, where
values from the first event have to be used). In order to solve this
issue, the best solution is to employ EventGroup rule instead of
SingleWithThreshold, since EventGroup is a more general counting rule that
supports a number of useful extensions.

One such extension is support for the 'count' field which allows for
executing action(s) on each matching event. Unlike 'action' field, match
variables in 'count' field are set from *each* matching event. For example,
consider the following rule:

type=EventGroup
ptype=RegExp
pattern=.
desc=count any event
count=assign %lastline $0
action=write - %lastline
thresh=3
window=60

After each matching event, the action list variable %lastline is set to
the current event, and when the third matching event is observed in the 60
second time window, this event is written to standard output. Unlike match
variables in the 'action' field, action list variables like %lastline are
always substituted at action list execution time, so %lastline will hold
the value of the last matching line.

For employing this technique for your ruleset, EventGroup rule could be
used in the following fashion:

rem=Parse My Event
type=Single
ptype=RegExp
pattern=^\S+ (?<EVENT>\S+)
varmap=MY_EVENT
continue=TakeNext
desc=Parse Event
action=none

rem=Rule1
type=EventGroup
ptype=Cached
pattern=MY_EVENT
desc=Rule1 $+{EVENT}
count=assign %lastline $0
action=write - %lastline
window=60
thresh=2

When submitting three example events to this ruleset, the following output
should be displayed:

Assigning '2018-11-11T00:00:01+00:00 Event1' to variable '%lastline'
Assigning '2018-11-11T00:00:02+00:00 Event1' to variable '%lastline'
Writing event '2018-11-11T00:00:02+00:00 Event1' to file '-'
2018-11-11T00:00:02+00:00 Event1 <--- second event that was written to
standard output
Assigning '2018-11-11T00:00:03+00:00 Event1' to variable '%lastline'

Hope this helps,
risto


Dusan Sovic () wrote on Thu, 8 November 2018 at 16:11:

> Hello SEC Users,
>
> I am using a SingleWithThreshold rule to process timestamped input
> events. I want to take an action after the 2nd event occurrence within 60
> seconds.
> The problem I have is that after the second event matches, the action is
> taken and the event ($0) is written to the output, but it uses the
> timestamp of the first received event (the one which started the
> correlation operation).
> In the output I would like to see the *timestamp* of the second event or,
> more generally, the whole input message of the second event as is.
>
> Let me demonstrate this on example:
>
> Config File: ccr.sec
>
> rem=Parse My Event
> type=Single
> ptype=RegExp
> pattern=^\S+ (?<EVENT>\S+)
> varmap=MY_EVENT
> continue=TakeNext
> desc=Parse Event
> action=none
>
> rem=Rule1
> type=SingleWithThreshold
> ptype=Cached
> pattern=MY_EVENT
> desc=Rule1 $+{EVENT}
> action=write - $0
> window=60
> thresh=2
>
> Run sec: sec -conf=./ccr.sec -input=-
>
> Input following line:
> 2018-11-11T00:00:01+00:00 Event1
> 2018-11-11T00:00:02+00:00 Event1
> 2018-11-11T00:00:03+00:00 Event1
>
> Output action:
> Writing event '2018-11-11T00:00:01+00:00 Event1' to file '-'
>
> What I want to achieve / see:
> Writing event '2018-11-11T00:00:02+00:00 Event1' to file '-'
>
> Thanks,
> Dusan
>
>
>
> ___
> Simple-evcorr-users mailing list
> Simple-evcorr-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
___
Simple-evcorr-users mailing list
Simple-evcorr-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users


Re: [Simple-evcorr-users] Suppress rule and continue field support

2018-10-14 Thread Risto Vaarandi
>
> I think it will be beneficial to highlight this fact in Rule Types ->
>> Suppress paragraph that other “techniques” have to be used to achieve
>> suppression across multiple configuration files.
>>
>
> that's a good idea -- I will modify the documentation accordingly.
> risto
>
> I like idea to add this as separate FAQ entry.
>>
>>
>>
>

Documentation updates have been committed to github, and the changes are
now visible in online documentation:
http://simple-evcorr.github.io/man.html

In addition, FAQ has been updated with a new entry:
http://simple-evcorr.github.io/FAQ.html#25

kind regards,
risto


Thank you,
>>
>> Dusan
>> --
>> *From:* Risto Vaarandi 
>> *Sent:* Friday, 12 October 2018 11:25
>> *To:* dusan.so...@hotmail.sk
>> *Cc:* simple-evcorr-users@lists.sourceforge.net
>> *Subject:* Re: [Simple-evcorr-users] Suppress rule and continue field
>> support
>>
>>
>> Hi Risto,
>>
>> Thank you very much for your suggestions and explanation.
>> Must agree with your arguments that introducing "continue" field option
>> support into "Suppress" rule will introduce some confusions.
>> I have never thought that this type of suppression logic can be achieved
>> by "Jump" rule. Thanks for bringing it up.
>>
>>
>> hi Dusan,
>>
>> it is my bad that I forgot to mention this opportunity in the man page,
>> but I'll incorporate it in the documentation for the next version. Also,
>> this topic would actually make a good FAQ entry.
>>
>>
>> Thanks for this great piece of software and I really appreciate your
>> support and help.
>>
>>
>> I am happy that you like sec and have found it useful :-)
>>
>> kind regards,
>> risto
>>
>>
>> Dusan
>>
>> --
>> *Od:* Risto Vaarandi 
>> *Odoslané:* štvrtok, 11. októbra 2018 22:47
>> *Komu:* dusan.so...@hotmail.sk
>> *Kópia:* simple-evcorr-users@lists.sourceforge.net
>> *Predmet:* Re: [Simple-evcorr-users] Suppress rule and continue field
>> support
>>
>>
>> Hello SEC Users,
>>
>>
>>
>>
>> hi Dusan,
>>
>> Based on the SEC documentation, *Suppress* rules don’t support the
>> “continue” field like other rules do.
>>
>> My understanding is that if a Suppress rule matches an event, the search
>> for matching rules ends in the *current* configuration file.
>>
>>
>> That's correct, the Suppress rule works indeed within the scope of the
>> current rule file only.
>>
>>
>>
>> Let’s consider this simple example with two config files:
>>
>>
>>
>> Config file: 01.sec
>>
>>
>>
>> type=Suppress
>>
>> ptype=RegExp
>>
>> pattern=foo
>>
>> desc=$0
>>
>>
>>
>> Config file: 02.sec
>>
>>
>>
>> type=Single
>>
>> ptype=RegExp
>>
>> pattern=foo
>>
>> continue=EndMatch
>>
>> desc=$0
>>
>> action=write - foo matched
>>
>>
>>
>> If you launch sec:
>>
>> sec -conf=./*.sec -input=-
>>
>>
>>
>> And put “foo” in the input:
>>
>> *  In the first configuration file, the Suppress rule matches "foo" and
>> processing switches to the next configuration file.
>>
>> *  In the second configuration file, the Single rule matches "foo" and the
>> action is taken.
>>
>>
>>
>> I think it would be beneficial if the "Suppress" rule supported the
>> *continue* field, so I could specify that I don't want to continue
>> matching in *all* subsequent configuration files.
>>
>>
>> That approach has one downside -- it would introduce confusion among
>> users, since some values for "continue" like "TakeNext" and "GoTo" imply
>> that the matching process will not stop but rather proceed from another
>> rule. In sec code, it is of course possible to either ignore such values or
>> report an error, but from the design perspective it isn't the best
>> solution. I would rather favor the creation of a separate rule type (e.g.,
>> Stop instead of Suppress) -- but if you consider already existing rule
>> types, the Jump rule actually meets your needs.
>>
>> The whole purpose of this rule is to actually control the rule processing
>> flow, and you can utilize it for continuing processing in another rule
>> file. When you omit the "cfset" field from Jump, the Jump rule can also be
>> used for controlling the rule processing order within a single rule file,
>> provided that the "continue" field has been set to "GoTo".

Re: [Simple-evcorr-users] Suppress rule and continue field support

2018-10-12 Thread Risto Vaarandi
> Hi Risto,
>
>
>

hi Dusan,

I think it would be beneficial to highlight in the Rule Types ->
> Suppress paragraph that other “techniques” have to be used to achieve
> suppression across multiple configuration files.
>

that's a good idea -- I will modify the documentation accordingly.
risto

I like the idea of adding this as a separate FAQ entry.
>
>
>
> Thank you,
>
> Dusan
> --
> *Od:* Risto Vaarandi 
> *Odoslané:* piatok, 12. októbra 2018 11:25
> *Komu:* dusan.so...@hotmail.sk
> *Kópia:* simple-evcorr-users@lists.sourceforge.net
> *Predmet:* Re: [Simple-evcorr-users] Suppress rule and continue field
> support
>
>
> Hi Risto,
>
> Thank you very much for your suggestions and explanation.
> I must agree with your argument that introducing "continue" field
> support into the "Suppress" rule would introduce some confusion.
> I have never thought that this type of suppression logic could be achieved
> with the "Jump" rule. Thanks for bringing it up.
>
>
> hi Dusan,
>
> it is my bad that I forgot to mention this opportunity in the man page,
> but I'll incorporate it in the documentation for the next version. Also,
> this topic would actually make a good FAQ entry.
>
>
> Thanks for this great piece of software and I really appreciate your
> support and help.
>
>
> I am happy that you like sec and have found it useful :-)
>
> kind regards,
> risto
>
>
> Dusan
>
> --
> *Od:* Risto Vaarandi 
> *Odoslané:* štvrtok, 11. októbra 2018 22:47
> *Komu:* dusan.so...@hotmail.sk
> *Kópia:* simple-evcorr-users@lists.sourceforge.net
> *Predmet:* Re: [Simple-evcorr-users] Suppress rule and continue field
> support
>
>
> Hello SEC Users,
>
>
>
>
> hi Dusan,
>
> Based on the SEC documentation, *Suppress* rules don’t support the
> “continue” field like other rules do.
>
> My understanding is that if a Suppress rule matches an event, the search
> for matching rules ends in the *current* configuration file.
>
>
> That's correct, the Suppress rule works indeed within the scope of the
> current rule file only.
>
>
>
> Let’s consider this simple example with two config files:
>
>
>
> Config file: 01.sec
>
>
>
> type=Suppress
>
> ptype=RegExp
>
> pattern=foo
>
> desc=$0
>
>
>
> Config file: 02.sec
>
>
>
> type=Single
>
> ptype=RegExp
>
> pattern=foo
>
> continue=EndMatch
>
> desc=$0
>
> action=write - foo matched
>
>
>
> If you launch sec:
>
> sec -conf=./*.sec -input=-
>
>
>
> And put “foo” in the input:
>
> *  In the first configuration file, the Suppress rule matches "foo" and
> processing switches to the next configuration file.
>
> *  In the second configuration file, the Single rule matches "foo" and the
> action is taken.
>
>
>
> I think it would be beneficial if the "Suppress" rule supported the
> *continue* field, so I could specify that I don't want to continue
> matching in *all* subsequent configuration files.
>
>
> That approach has one downside -- it would introduce confusion among
> users, since some values for "continue" like "TakeNext" and "GoTo" imply
> that the matching process will not stop but rather proceed from another
> rule. In sec code, it is of course possible to either ignore such values or
> report an error, but from the design perspective it isn't the best
> solution. I would rather favor the creation of a separate rule type (e.g.,
> Stop instead of Suppress) -- but if you consider already existing rule
> types, the Jump rule actually meets your needs.
>
> The whole purpose of this rule is to actually control the rule processing
> flow, and you can utilize it for continuing processing in another rule
> file. When you omit the "cfset" field from Jump, the Jump rule can also be
> used for controlling the rule processing order within a single rule file,
> provided that the "continue" field has been set to "GoTo". Also, without
> "cfset" field and "continue" omitted (or set to "DontCont"), Jump is
> identical to Suppress. Both of those special cases have been described in
> the documentation for the Jump rule.
>
> However, if you omit "cfset" and set "continue" to "EndMatch", the Jump
> rule will do exactly what you want. For example, the following rule will
> end processing for all events which contain the substring "test":
>
> type=Jump
> ptype=substr
> pattern=test
> continue=endmatch
>
> I acknowledge that the documentation should also explicitly describe this
> particular case, in order to make it obvious to the reader.

Re: [Simple-evcorr-users] Suppress rule and continue field support

2018-10-12 Thread Risto Vaarandi
> Hi Risto,
>
> Thank you very much for your suggestions and explanation.
> I must agree with your argument that introducing "continue" field
> support into the "Suppress" rule would introduce some confusion.
> I have never thought that this type of suppression logic could be achieved
> with the "Jump" rule. Thanks for bringing it up.
>

hi Dusan,

it is my bad that I forgot to mention this opportunity in the man page, but
I'll incorporate it in the documentation for the next version. Also, this
topic would actually make a good FAQ entry.


> Thanks for this great piece of software and I really appreciate your
> support and help.
>

I am happy that you like sec and have found it useful :-)

kind regards,
risto


> Dusan
>
> --
> *Od:* Risto Vaarandi 
> *Odoslané:* štvrtok, 11. októbra 2018 22:47
> *Komu:* dusan.so...@hotmail.sk
> *Kópia:* simple-evcorr-users@lists.sourceforge.net
> *Predmet:* Re: [Simple-evcorr-users] Suppress rule and continue field
> support
>
>
> Hello SEC Users,
>
>
>
>
> hi Dusan,
>
> Based on the SEC documentation, *Suppress* rules don’t support the
> “continue” field like other rules do.
>
> My understanding is that if a Suppress rule matches an event, the search
> for matching rules ends in the *current* configuration file.
>
>
> That's correct, the Suppress rule works indeed within the scope of the
> current rule file only.
>
>
>
> Let’s consider this simple example with two config files:
>
>
>
> Config file: 01.sec
>
>
>
> type=Suppress
>
> ptype=RegExp
>
> pattern=foo
>
> desc=$0
>
>
>
> Config file: 02.sec
>
>
>
> type=Single
>
> ptype=RegExp
>
> pattern=foo
>
> continue=EndMatch
>
> desc=$0
>
> action=write - foo matched
>
>
>
> If you launch sec:
>
> sec -conf=./*.sec -input=-
>
>
>
> And put “foo” in the input:
>
> *  In the first configuration file, the Suppress rule matches "foo" and
> processing switches to the next configuration file.
>
> *  In the second configuration file, the Single rule matches "foo" and the
> action is taken.
>
>
>
> I think it would be beneficial if the "Suppress" rule supported the
> *continue* field, so I could specify that I don't want to continue
> matching in *all* subsequent configuration files.
>
>
> That approach has one downside -- it would introduce confusion among
> users, since some values for "continue" like "TakeNext" and "GoTo" imply
> that the matching process will not stop but rather proceed from another
> rule. In sec code, it is of course possible to either ignore such values or
> report an error, but from the design perspective it isn't the best
> solution. I would rather favor the creation of a separate rule type (e.g.,
> Stop instead of Suppress) -- but if you consider already existing rule
> types, the Jump rule actually meets your needs.
>
> The whole purpose of this rule is to actually control the rule processing
> flow, and you can utilize it for continuing processing in another rule
> file. When you omit the "cfset" field from Jump, the Jump rule can also be
> used for controlling the rule processing order within a single rule file,
> provided that the "continue" field has been set to "GoTo". Also, without
> "cfset" field and "continue" omitted (or set to "DontCont"), Jump is
> identical to Suppress. Both of those special cases have been described in
> the documentation for the Jump rule.
>
> However, if you omit "cfset" and set "continue" to "EndMatch", the Jump
> rule will do exactly what you want. For example, the following rule will
> end processing for all events which contain the substring "test":
>
> type=Jump
> ptype=substr
> pattern=test
> continue=endmatch
>
> I acknowledge that the documentation should also explicitly describe this
> particular case, in order to make it obvious to the reader.
>
>
>
>
> I know that I can achieve this by replacing the "Suppress" rule in the
> 01.sec configuration file with a "Single" rule that takes no action and
> defines continue=EndMatch.
>
>
> One of the drawbacks of this approach is the need to set the "action"
> field to "none", while it would be more convenient not to define that field
> at all. However, with the Jump rule you would not have this issue. Also,
> since Jump is specifically designed for exercising control over rule
> processing (and Suppress rule is actually a special case of Jump),  the use
> of Jump would be quite appropriate here.
>
> hope that helps,
> risto
>
>
>
> Thanks,
> Dusan
>
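For readers skimming this thread, the rule-processing semantics described above (a Suppress rule ends matching only within its own rule file, while a Jump rule with continue=EndMatch ends matching across all rule files) can be sketched as a small simulation. The following Python is a hypothetical illustration only, not SEC code; the rule and event representations are invented, and plain substring matching stands in for SEC's ptype/pattern handling.

```python
# Toy model of SEC's rule matching for one input event (illustration only).
# Suppress stops matching in the current rule file; a matching rule with
# continue=EndMatch stops matching across all rule files.

def process_event(event, config_files):
    """Return the list of actions triggered for a single input event."""
    actions = []
    for rules in config_files:                # rule files are tried in order
        for rule in rules:
            if rule["pattern"] not in event:  # substring match (cf. ptype=SubStr)
                continue                      # rule does not match, try next rule
            if rule["type"] == "Suppress":
                break                         # ends matching in *this* file only
            if rule["type"] == "Single":
                actions.append(rule["action"])
            if rule.get("continue") == "EndMatch":
                return actions                # ends matching in *all* files
    return actions

# Dusan's example: 01.sec suppresses "foo", yet the rule in 02.sec still fires.
suppress_setup = [
    [{"type": "Suppress", "pattern": "foo"}],                       # 01.sec
    [{"type": "Single", "pattern": "foo", "continue": "EndMatch",   # 02.sec
      "action": "write - foo matched"}],
]
print(process_event("foo", suppress_setup))   # ['write - foo matched']

# Risto's suggestion: Jump with continue=EndMatch stops matching in all files.
jump_setup = [
    [{"type": "Jump", "pattern": "foo", "continue": "EndMatch"}],   # 01.sec
    suppress_setup[1],                                              # 02.sec
]
print(process_event("foo", jump_setup))       # []
```

Under this toy model, the Suppress setup still triggers the action in 02.sec, while the Jump variant silences the event everywhere, matching the behaviour the thread describes.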

