Olivier,

I tested this and it works just fine.  

Not sure why you decided to change the rule level to 8 -- it should be 
either 0 to drop and not log, or 15 to log.  The reason is to ensure that 
the rule doesn't get overridden by another matching rule.  Level 0 always 
trumps other rules no matter what other rule levels are matched, while 
level 15 is the highest and should not get overridden unless it has a 
matching child rule at the same level.  Making it level 8 opens up the 
possibility that it gets overridden by a higher-level rule.  

Here's how I explain it in my own documentation.  *This explanation may or 
may not make sense to you, but it works for the way my mind operates.  :-)*

*Rule Order / Hierarchy* – The order in which rules are evaluated can seem 
somewhat complex:  

1. When a rule matches a log record, if it has no children then that is the 
final rule match.  Otherwise, the child rules of that rule are evaluated.  

2. Child rules are evaluated in *order of descending severity level*, with 
the exception that *level-zero child rules are looked at first*. 

3. Once a child rule matches, *none of the other child rules* of the same 
parent will be considered.  Instead, analysis drops down to checking the 
child rules of the child that just matched. 

4. This process continues until a rule matches that has no children or no 
matching children. 

5. When *multiple children* of the *same severity level* are involved, they 
are *evaluated in load order* (the order the rule files are loaded and the 
order the rules appear in the rule files).
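
The traversal above can be sketched in a few lines of Python.  This is only 
an illustrative model of the evaluation order, not OSSEC's actual internals; 
the rule dictionaries and the evaluate() helper are hypothetical.

```python
# Illustrative sketch of the child-rule evaluation order described above.
# The rule structure and evaluate() helper are hypothetical, not OSSEC code.

def evaluate(rules, record):
    """Return the final matching rule for a log record, or None."""
    # Level-0 children are checked first, then the rest in descending
    # severity; sorted() is stable, so equal levels keep load order.
    ordered = sorted(rules, key=lambda r: (r["level"] != 0, -r["level"]))
    for rule in ordered:
        if rule["match"](record):
            # Siblings are no longer considered; analysis drops down
            # to the children of the rule that just matched.
            child = evaluate(rule.get("children", []), record)
            return child if child is not None else rule
    return None
```

For example, a level-0 child that matches the scanner's IP wins over a 
level-15 sibling, because level-zero children are always evaluated first.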


This is what worked for me:

*ossec.conf:*
  <!-- Default Log and Email Alert Levels  -->
  <alerts>
    <log_alert_level>1</log_alert_level>
    <email_alert_level>7</email_alert_level>
  </alerts>


*local_rules.xml:* You could set this to <if_level>1</if_level> to match all 
alerts generated, but since you were concerned about the emails I just 
matched at the email_alert_level set in ossec.conf, which is 7.  You could 
also set the rule level to "0" instead of "15".  Setting it to 15 means it 
will still log the alert but not email it (due to the no_email_alert 
option), while setting it to 0 will not log any matches.
<group name="global,override,">
  <rule id="110000" level="15">
    <if_level>7</if_level>
    <options>no_email_alert</options>
    <srcip>192.168.1.100</srcip>
    <description>Ignoring all alerts triggered by our scanner</description>
  </rule>
</group>


*ossec-logtest -v*
2020/03/11 09:03:48 ossec-testrule: INFO: Reading decoder file etc/
asa_decoder.xml.
2020/03/11 09:03:48 ossec-testrule: INFO: Reading decoder file etc/decoder.
xml.
2020/03/11 09:03:48 ossec-testrule: INFO: Reading the lists file: 
'etc/lists/nessus_cloud.whitelist'
2020/03/11 09:03:48 ossec-testrule: INFO: Started (pid: 20120).
ossec-testrule: Type one log per line.


Mar 10 20:00:13 a-sv-prd-oss-01 sshd[39101]: Bad protocol version 
identification '\026\003\001\003\241\001' from 192.168.1.100 port 36632




**Phase 1: Completed pre-decoding.
       full event: 'Mar 10 20:00:13 a-sv-prd-oss-01 sshd[39101]: Bad 
protocol version identification '\026\003\001\003\241\001' from 
192.168.1.100 port 36632'
       hostname: 'a-sv-prd-oss-01'
       program_name: 'sshd'
       log: 'Bad protocol version identification '\026\003\001\003\241\001' 
from 192.168.1.100 port 36632'


**Phase 2: Completed decoding.
       decoder: 'sshd'
       srcip: '192.168.1.100'
       srcport: '36632'


**Rule debugging:
    Trying rule: 1 - Generic template for all syslog rules.
       *Rule 1 matched.
       *Trying child rules.
    Trying rule: 5500 - Grouping of the pam_unix rules.
    Trying rule: 5556 - unix_chkpwd grouping.
    Trying rule: 5700 - SSHD messages grouped.
       *Rule 5700 matched.
       *Trying child rules.
    Trying rule: 5709 - Useless SSHD message without an user/ip and context.
    Trying rule: 5711 - Useless/Duplicated SSHD message without a user/ip.
    Trying rule: 5721 - System disconnected from sshd.
    Trying rule: 5722 - ssh connection closed.
    Trying rule: 5723 - SSHD key error.
    Trying rule: 5724 - SSHD key error.
    Trying rule: 5725 - Host ungracefully disconnected.
    Trying rule: 5727 - Attempt to start sshd when something already bound 
to the port.
    Trying rule: 5729 - Debug message.
    Trying rule: 5732 - Possible port forwarding failure.
    Trying rule: 5733 - User entered incorrect password.
    Trying rule: 5734 - sshd could not load one or more host keys.
    Trying rule: 5735 - Failed write due to one host disappearing.
    Trying rule: 5736 - Connection reset or aborted.
    Trying rule: 5750 - sshd could not negotiate with client.
    Trying rule: 5756 - sshd subsystem request failed.
    Trying rule: 100101 - Ignoring rules triggered by Nessus scanning server
    Trying rule: 110000 - Ignoring all alerts triggered by our scanner
       *Rule 110000 matched.


**Phase 3: Completed filtering (rules).
       Rule id: '110000'
       Level: '0'
       Description: 'Ignoring all alerts triggered by our scanner'


Now if I set the rule to level="8" like you have, it still works for me 
just fine.  Since you're using a separate rule file and both rules 5701 and 
110000 are set to the same level, my guess is that it's the order in which 
your rules are loading - but I can't say for sure.  Simply set your rule to 
either level="0" or level="15" and you should be fine.

- Bruce



On Tuesday, March 10, 2020 at 4:32:04 PM UTC-4, Olivier Ragain wrote:
>
> Hi,
> I've changed:
>   <alerts>
>     <log_alert_level>1</log_alert_level>
>     <email_alert_level>8</email_alert_level>
>   </alerts>
> and I've changed the rule to:
> <group name="global,override,">
>         <rule id="110000" level="8">
>                 <if_level>7</if_level>
>                 <options>no_email_alert</options>
>                 <srcip>*my_ip_scanner*</srcip>
>                 <description>Ignoring all alerts triggered by our 
> scanner</description>
>         </rule>
> </group>
> ----
> The alert is triggered by: Mar 10 20:00:13 a-sv-prd-oss-01 sshd[39101]: 
> Bad protocol version identification '\026\003\001\003\241\001' from 
> *my_scanner_ip* port 36632 
> ---- 
> The result of the test is:
> **Phase 1: Completed pre-decoding.
>        full event: 'Mar 10 20:00:13 a-sv-prd-oss-01 sshd[39101]: Bad 
> protocol version identification '\026\003\001\003\241\001' from 
> my_scanner_ip port 36632'
>        hostname: 'a-sv-prd-oss-01'
>        program_name: 'sshd'
>        log: 'Bad protocol version identification 
> '\026\003\001\003\241\001' from my_scanner_ip port 36632'
>
> **Phase 2: Completed decoding.
>        decoder: 'sshd'
>
> **Rule debugging:
>     Trying rule: 1 - Generic template for all syslog rules.
>        *Rule 1 matched.
>        *Trying child rules.
>     Trying rule: 5500 - Grouping of the pam_unix rules.
>     Trying rule: 5700 - SSHD messages grouped.
>        *Rule 5700 matched.
>        *Trying child rules.
>     Trying rule: 5709 - Useless SSHD message without an user/ip and 
> context.
>     Trying rule: 5711 - Useless/Duplicated SSHD message without a user/ip.
>     Trying rule: 5721 - System disconnected from sshd.
>     Trying rule: 5722 - ssh connection closed.
>     Trying rule: 5723 - SSHD key error.
>     Trying rule: 5724 - SSHD key error.
>     Trying rule: 5725 - Host ungracefully disconnected.
>     Trying rule: 5727 - Attempt to start sshd when something already bound 
> to the port.
>     Trying rule: 5729 - Debug message.
>     Trying rule: 5732 - Possible port forwarding failure.
>     Trying rule: 5733 - User entered incorrect password.
>     Trying rule: 5734 - sshd could not load one or more host keys.
>     Trying rule: 5735 - Failed write due to one host disappearing.
>     Trying rule: 5736 - Connection reset or aborted.
>     Trying rule: 5707 - OpenSSH challenge-response exploit.
>     Trying rule: 5701 - Possible attack on the ssh server (or version 
> gathering).
>        *Rule 5701 matched.
>        *Trying child rules.
>     Trying rule: 110000 - Ignoring all alerts triggered by our scanner
>
> **Phase 3: Completed filtering (rules).
>        Rule id: '5701'
>        Level: '8'
>        Description: 'Possible attack on the ssh server (or version 
> gathering).'
> **Alert to be generated.
>
> ------
>
>
> So that rule is being triggered, but since OSSEC does not decode the 
> source IP, my override rule doesn't match it?
>
> Should I just add a regular expression to the above rule so that it 
> matches on either the source IP or the log text?
>
> Thanks
>
>
> On Tue, Mar 10, 2020 at 2:37 PM Bruce Westbrook <[email protected]> wrote:
>
>> Since you created the rule as level="0", the 
>> <options>no_email_alert</options> line doesn't matter and you can leave 
>> it out.  When you set a rule to level 0 it doesn't log or alert anyway, 
>> plus level 0 doesn't get overridden by a higher level rule so that's not 
>> the issue.
>>
>> Check the alert level of the alerts you're getting.  If they are lower 
>> than 7 then you have your OSSEC config alerting on lower level rules.  
>> You'll just have to modify the rule's <if_level> to match your global 
>> alert level.  If you want to drop and not log anything you could just set 
>> it as <if_level>0</if_level>.
>>
>> To test the rule, copy the content of the alert, and on your OSSEC server 
>> execute:
>> ossec-logtest -v
>>
>> ...then paste the alert content and hit [ENTER].  The output will walk 
>> through the rules it is checking against.  Use this output to help 
>> troubleshoot.
>>
>> - Bruce
>>
>>
>> On Tuesday, March 10, 2020 at 12:34:41 PM UTC-4, Olivier Ragain wrote:
>>>
>>> Hi,
>>> I've configured OSSEC to load rules from a custom folder to avoid having 
>>> to touch any of the other files and to facilitate updates.  Some rules 
>>> in that custom folder work properly.
>>> So I've added the following in a custom rule file:
>>> ----
>>> <group name="global,override,">
>>>         <rule id="110000" level="0">
>>>                 <if_level>7</if_level>
>>>                 <options>no_email_alert</options>
>>>                 <srcip>my_scanner_ip</srcip>
>>>                 <description>Ignoring all alerts triggered by our 
>>> scanner</description>
>>>         </rule>
>>> </group>
>>> ---
>>> Unfortunately I am still getting alerts by email.  How can I test that 
>>> rule via the tester?
>>> Thanks
>>>
>>>
>>> On Thu, Mar 5, 2020 at 10:30 AM 'Binet, Valere (NIH/NIA/IRP) [C]' via 
>>> ossec-list <[email protected]> wrote:
>>>
>>>> The whitelist works with active response. If you have OSSEC blocking 
>>>> misbehaving IPs on your firewall, you definitely have to whitelist the 
>>>> scanner IP. Past experience with one scanner I won’t promote here has 
>>>> shown 
>>>> that you might have to also whitelist its FQDN.
>>>>
>>>> If you just want to stop the deluge of emails, a local rule as shown by 
>>>> Bruce is the way to go.
>>>>
>>>>  
>>>>
>>>> Valère Binet
>>>>
>>>>  
>>>>
>>>> *From: *Bruce Westbrook <[email protected]>
>>>> *Reply-To: *"[email protected]" <[email protected]>
>>>> *Date: *Thursday, March 5, 2020 at 9:04 AM
>>>> *To: *ossec-list <[email protected]>
>>>> *Subject: *[ossec-list] Re: Whitelisting the IP of an internal 
>>>> vulnerability scanner
>>>>
>>>>  
>>>>
>>>> Oops -- I made a typo.  The second example should be 
>>>> <if_level>7</if_level> too, not level 1.   
>>>>
>>>>  
>>>>
>>>> You can use level 1 but that will ignore everything from the source IP 
>>>> and not log anything at all.
>>>>
>>>>  
>>>>
>>>>
>>>>
>>>> On Thursday, March 5, 2020 at 8:59:59 AM UTC-5, Bruce Westbrook wrote: 
>>>>
>>>> Morning,
>>>>
>>>>  
>>>>
>>>> Couple of ways to do this for just a single IP address.  It depends on 
>>>> whether you just want to skip the email alerts but still keep alerts in 
>>>> your database, or if you want to ignore them completely.
>>>>
>>>>  
>>>>
>>>> Examples assume you have your email alerts set to level 7 or above.  
>>>> Note that <if_level> matches rules at the given level or anything above it.
>>>>
>>>>  
>>>>
>>>> To skip emails but still keep the alert data:
>>>>
>>>>  
>>>>
>>>>   <rule id="100101" level="15">
>>>>     <if_level>7</if_level>
>>>>     <options>no_email_alert</options>
>>>>     <srcip>10.10.10.10</srcip>
>>>>     <description>Do not send emails for our scanner alerts
>>>> </description>
>>>>   </rule>
>>>>
>>>>  
>>>>
>>>>  
>>>>
>>>>  
>>>>
>>>> To ignore all rule matches completely, set your rule to level 0:
>>>>
>>>>  
>>>>
>>>>   <rule id="100101" level="0">
>>>>     <if_level>1</if_level>
>>>>     <srcip>10.10.10.10</srcip>
>>>>     <description>Ignoring all alerts triggered by our scanner
>>>> </description>
>>>>   </rule>
>>>>
>>>>  
>>>>
>>>>  
>>>>
>>>> Personally I use the second example, which ignores sending any alerts 
>>>> and doesn't even log them, but still logs any non-email events (levels 
>>>> 1-6) 
>>>> so I can still prove to an auditor that the scans are actually running 
>>>> against various hosts (some auditors want multiple proof points like that).
>>>>
>>>>  
>>>>
>>>> Hope that helps! 
>>>>
>>>> - Bruce
>>>>
>>>>
>>>> On Thursday, March 5, 2020 at 8:42:01 AM UTC-5, Olivier Ragain wrote: 
>>>>
>>>> Good morning,
>>>>
>>>> I've been trying to whitelist the IP of my scanner so that I never get 
>>>> notifications from it and alerts for it are ignored.
>>>>
>>>> I've tried adding it to the whitelist in the ossec configuration file 
>>>> (as I understand it, that configuration is not used for notification 
>>>> whitelisting).
>>>>
>>>> I've tried adding it as a list and then adding the list to the ossec 
>>>> configuration.
>>>>
>>>>  
>>>>
>>>> So, what is the best way to whitelist a scanner IP so that nothing 
>>>> sends email for it? Do I need to create a custom rule that matches all 
>>>> rule 
>>>> IDs and the IP of the scanner host to disable email notifications?
>>>>
>>>> Thanks
>>>>
>>>> -- 
>>>>
>>>> --- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "ossec-list" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to [email protected].
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/ossec-list/85801125-b8d7-471b-869c-adea3d36cf2e%40googlegroups.com
>>>>  
>>>> <https://groups.google.com/d/msgid/ossec-list/85801125-b8d7-471b-869c-adea3d36cf2e%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>> .
>>>>
>>>>
>>>
>>>
>>> -- 
>>> Olivier
>>> instream
>>> manager IT Ops and Security
>>> 100-5335 Canotek Rd
>>> Ottawa ON  K1J9L4
>>> Phone: 855-521-1121, 126 
>>> Mobile: 343-777-0403
>>>
>>
>
>
> -- 
> Olivier
> instream
> manager IT Ops and Security
> 100-5335 Canotek Rd
> Ottawa ON  K1J9L4
> Phone: 855-521-1121, 126 
> Mobile: 343-777-0403
>

