Hi All,

I was looking through my access logs the other day and noticed
several attempts to hit my server looking for exploitable URLs:
/w00tw00t.at.ISC.SANS.DFind:)
//includes/general.js
//zencart/includes/general.js
//admin/includes/general.js
//zen/includes/general.js
//cart/includes/general.js
//ZenCart/includes/general.js
//roundcube/
//webmail/
...and so on.
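Spotting these in the log is easy enough with a pattern match. Something
like this is what I have in mind (the pattern list below is just a sketch
built from the paths above, not exhaustive):

```python
import re

# Fragments taken from the probe URLs seen in the access log above;
# this list is illustrative and would need extending over time.
SUSPICIOUS = re.compile(
    r'(w00tw00t'
    r'|/includes/general\.js'
    r'|/roundcube/'
    r'|/webmail/)',
    re.IGNORECASE)

def looks_like_probe(path):
    """Return True if the requested path matches a known exploit probe."""
    return SUSPICIOUS.search(path) is not None
```

In web.py this could run in a processor over web.ctx.path before the
request reaches any handler.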

I thought it might be fun to identify which IP the requests were
coming from and ban that IP for a while.  Not permanently, but for
an increasingly long period per infraction.  This would shut out
robot IPs for, say, 10 minutes the first time, 20 the second, and
so on.  Hopefully, that would keep any further requests from
cluttering up the logs.
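The bookkeeping side of that is simple enough to sketch. Something like
this (names and the linear 10/20/30-minute schedule are just my first
guess at it):

```python
import time

class BanList(object):
    """Track offending IPs and ban them for increasingly long periods."""

    def __init__(self, base_minutes=10):
        self.base = base_minutes * 60   # base ban length, in seconds
        self.strikes = {}               # ip -> number of infractions so far
        self.banned_until = {}          # ip -> timestamp when the ban lifts

    def record_infraction(self, ip, now=None):
        now = time.time() if now is None else now
        self.strikes[ip] = self.strikes.get(ip, 0) + 1
        # 10 minutes for the first strike, 20 for the second, and so on.
        self.banned_until[ip] = now + self.base * self.strikes[ip]

    def is_banned(self, ip, now=None):
        now = time.time() if now is None else now
        return self.banned_until.get(ip, 0) > now
```

The missing piece is what to do with an IP once it is flagged, which is
really the firewall question below.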

My first thought was to use hosts.deny as a quick and dirty hack,
but that doesn't work here: hosts.deny only applies to services that
go through TCP wrappers (libwrap), and the web server is listening
on its port directly.

Has anyone else implemented this already?  Did you use a particular
firewall solution?
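My vague plan, absent a better suggestion, is to drive iptables from the
app: insert a DROP rule when an IP earns a ban and delete it when the ban
expires.  A sketch of the commands involved (this needs root, and the
helper names are mine):

```python
import subprocess

def iptables_ban_cmd(ip):
    # Insert a rule at the top of INPUT dropping all traffic from this IP.
    return ['iptables', '-I', 'INPUT', '-s', ip, '-j', 'DROP']

def iptables_unban_cmd(ip):
    # Delete the matching rule again once the ban period is over.
    return ['iptables', '-D', 'INPUT', '-s', ip, '-j', 'DROP']

def run_rule(cmd):
    # Requires root (or an appropriately privileged sudo/setuid wrapper).
    return subprocess.call(cmd)
```

Whether it's wise to let the web process touch the firewall directly, or
better to hand the IP off to a separate privileged daemon, is part of
what I'm asking.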

For the record, I'm running web.py on a bog-standard Ubuntu 8.04
install (kernel 2.6.24-23-xen, Python 2.5.2).

-Ken
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"web.py" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/webpy?hl=en
-~----------~----~----~----~------~----~------~--~---