On 11/20/2017 11:33 PM, Matthias Leisi wrote:
In addition to server-side blocking, would it make sense for sa-update to 
rate-limit itself?

— Matthias

Sent from my iPhone
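A client-side limit could be as simple as a wrapper script that checks a stamp file before invoking sa-update. This is just a sketch of the idea, not anything sa-update ships; the stamp path and one-hour interval are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical client-side rate limit for sa-update: only run if the stamp
# file is missing or older than the interval. Not a feature of sa-update
# itself; the stamp path and interval below are made up.

# Return success (and refresh the stamp) only when at least $2 seconds
# have passed since the stamp file was last touched.
should_run() {
    stamp="$1"; interval="$2"
    now=$(date +%s)
    # GNU stat shown here; on BSD/macOS the equivalent is: stat -f %m
    last=$(stat -c %Y "$stamp" 2>/dev/null || echo 0)
    [ $((now - last)) -ge "$interval" ] && touch "$stamp"
}

if should_run /tmp/sa-update.stamp 3600; then
    echo "interval elapsed: would run sa-update here"
else
    echo "sa-update ran within the last hour; skipping" >&2
fi
```

Running the wrapper from cron every few minutes would then still produce at most one real update fetch per hour.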

Am 21.11.2017 um 03:53 schrieb Kevin A. McGrail <kevin.mcgr...@mcgrail.com>:

On 11/20/2017 7:17 PM, Dave Jones wrote:
Could we use something like mod_evasive to limit any IP connecting more than 3 
times (one batch of ruleset files) an hour?  SA instances behind NAT'd IPs could 
have a legitimate reason for more than 2x hits per day.
I'd like to keep it simpler for now.  The abuse hasn't been too bad.

I've put them on notice on the users@ list, and I'm going to look at adding more 
information, such as a unique id, to sa-update's wget/curl call so we can 
identify NAT'ing.

There may be some abusers in the future that we would want to permanently block 
with a centralized .htaccess file that gets distributed with the normal rsync 
pulls by each mirror.
Agreed.  Let's keep an eye on things.

So, from the last 3.8 million GETs, here are the top 14 IPs

(grep GET sa-update.pccc.com-access_log | awk -F" " '{ print $1 }' | sort | 
uniq -c | sort -n -r | head -n 14)

  964649 52.169.9.191 (Machine we already had taken care of)
   71273 176.61.138.136
   40397 41.76.211.56
   22535 108.163.197.66
   21100 108.61.28.10
   21037 79.137.36.178
   20270 149.56.17.151
   19826 91.204.24.253
   18141 178.32.88.139
   18003 207.210.201.60
   14037 158.69.200.153
   12539 78.229.96.116
   12525 37.221.192.173
   11568 45.77.52.43
Here are the top 10 IPs that seem to be running sa-update or a curl script most 
frequently:

41.76.211.56 (sa-update/svn917659/3.3.2 every 5 minutes)
108.61.28.10 (sa-update/svn917659/3.3.2 every 15 minutes)
202.191.60.145 (curl/7.19.7 every minute rotating mirrors)
202.191.60.146 (curl/7.19.7 every minute rotating mirrors)
108.163.197.66 (sa-update/svn917659/3.3.2 every 5 minutes)
208.74.121.106 (NAT'd IP? curl/7.29.0 & curl/7.19.7)
91.204.24.253 (NAT'd IP? various user agents)
207.210.201.60
78.110.96.3
190.0.150.3

--

I set up and tested mod_evasive yesterday.  It's OK, but I got inconsistent results: the thresholds were roughly 10x what I expected, and once a client finally hit them, only some of its requests got a 403 response.  It appears the thresholds and 403 responses are tracked per httpd child process.

I just set up fail2ban with an http-get-dos jail and it's working as expected:

# fail2ban-client status http-get-dos
Status for the jail: http-get-dos
|- Filter
|  |- Currently failed:    2578
|  |- Total failed:    7216
|  `- File list:    /var/log/httpd/sa-update.ena.com-access_log
`- Actions
   |- Currently banned:    7
   |- Total banned:    7
   `- Banned IP list:    207.170.241.2 108.163.197.66 41.76.211.56 207.210.201.60 108.61.28.10 78.110.96.3 95.128.113.141

Those IPs above are in my top 10 and are trying to download the same tar.gz every 1 to 15 minutes.  Fail2ban is doing its thing and dropping the port 80 requests now for an hour.

Here are the settings I am testing out on my two CentOS mirrors:

# cat /etc/fail2ban/jail.d/http-get-dos.conf

==========================

[http-get-dos]
enabled = true
port = http
filter = http-get-dos
logpath  = /var/log/httpd/sa-update.ena.com-access_log
maxretry = 10
findtime = 3600
bantime = 3600
ignoreip = <your local IP here>
action = iptables[name=HTTP, port=http, protocol=tcp]

# cat /etc/fail2ban/filter.d/http-get-dos.conf

===========================

# Fail2Ban configuration file
[Definition]

# Option: failregex
# Note: This regex will match any GET entry in your logs, so basically all
#       valid and invalid entries are a match.  You should carefully tune
#       maxretry and findtime in the jail configuration to avoid false
#       positives.
failregex = ^<HOST> -.*"(GET|POST).*

# Option: ignoreregex
ignoreregex =
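As a quick sanity check of that failregex outside fail2ban, an equivalent grep can be run against a couple of sample log lines. This is an approximation: fail2ban uses Python regexes and substitutes <HOST> with its own host-matching pattern, and here an IPv4-only stand-in and made-up log lines are used:

```shell
#!/bin/sh
# Approximate the fail2ban filter with grep -E. HOST is an IPv4-only
# stand-in for fail2ban's <HOST> tag; the log lines are invented samples.
HOST='[0-9.]+'
REGEX="^${HOST} -.*\"(GET|POST).*"

printf '%s\n' \
  '41.76.211.56 - - [21/Nov/2017:10:00:01 -0500] "GET /1817104.tar.gz HTTP/1.1" 200 12345' \
  '41.76.211.56 - - [21/Nov/2017:10:00:02 -0500] "HEAD /MIRRORED.BY HTTP/1.1" 200 0' \
  | grep -Ec "$REGEX"
# the GET line matches, the HEAD line does not (prints 1)
```

For the real thing, fail2ban ships a `fail2ban-regex` tool that tests a filter file directly against an access log.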

--

Dave

