AFAIK most large backbone routers out there don't support application-layer
filtering, e.g. filtering based on what type of HTTP request it is or what is
requested. Too much CPU overhead, methinks.

Some examples: if the user has a dynamically assigned IP address, the next
person assigned that IP who hits any site subscribing to the realtime web
blackhole list (let's call it the RWBL) will see a polite message saying "this
IP has been used for a hack attempt" (with an explanation of how to get it
unblocked) and will hopefully report it to their ISP. If the user has a static
IP, then either their server was hacked or they are the hacker. Either way the
effect is similar - the user will either stop hacking (or patch their server) or
risk being permanently banned from surfing any site subscribing to the RWBL.

Getting off the blackhole list would be a similar process to getting off the
current mail RBL: send a request explaining the cause of the hack attempt, with
assurances that a remedy is in place or will be shortly.

Any suggestions on where to implement this in the server to ensure minimal
reconfiguration and minimal impact on existing mod_perl handlers? It needs to be
able to block a request based on the contents of a text file or the type of
request, and chuck out an explanation page. It also needs to be able to append
hack attempts to the text file when the IP is not listed. The text file can be
stored in the server root somewhere (like robots.txt) and gathered once daily by
the central system. The logic the central system uses to ban IPs can be
something like 'if more than X hack attempts have been logged by different
servers from a particular IP, it's banned'. Perhaps X can be 7.
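
For what it's worth, here's a minimal sketch of the server side as I imagine it,
assuming a mod_perl 1.x PerlAccessHandler (the package name, file paths and
unblock URL are all made up):

package Apache::RWBL;
# Minimal sketch of an RWBL access handler for mod_perl 1.x.
# Package name, file locations and the unblock URL are hypothetical.

use strict;
use Apache::Constants qw(OK FORBIDDEN);

# Ban list downloaded daily from the central system: one IP per line.
my $banned_file = '/usr/local/apache/conf/rwbl-banned.txt';

sub handler {
    my $r  = shift;
    my $ip = $r->connection->remote_ip;

    open my $fh, '<', $banned_file or return OK;  # fail open if the list is missing
    while (my $line = <$fh>) {
        chomp $line;
        next unless $line eq $ip;
        close $fh;
        # Chuck out the polite explanation page instead of the default 403.
        $r->custom_response(FORBIDDEN,
            qq{This IP has been used for a hack attempt. }
          . qq{See http://rwbl.example.org/unblock for how to get it removed.});
        return FORBIDDEN;
    }
    close $fh;
    return OK;
}

1;

Wiring it in is then one line in httpd.conf (PerlAccessHandler Apache::RWBL),
so existing content handlers stay untouched.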
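
On the central side, the ban rule could be this simple (the one-report-per-line
"server<TAB>ip" input format is just an assumption):

#!/usr/bin/perl -w
# Sketch of the central ban rule: ban an IP once more than $X
# *different* servers have reported it.
use strict;

my $X = 7;
my %reporters;    # ip => { server_name => 1, ... }

while (my $line = <STDIN>) {
    chomp $line;
    my ($server, $ip) = split /\t/, $line;
    next unless defined $ip;
    $reporters{$ip}{$server} = 1;
}

# Emit the ban list, one IP per line, for servers to download.
for my $ip (sort keys %reporters) {
    print "$ip\n" if keys %{ $reporters{$ip} } > $X;
}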

A list of banned request URIs could also be made available for download, for use
by the RWBL checker running on each server out there. That would let us adapt to
new worms or exploits.
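
The checker could treat that download as one pattern per line and test each
request URI against it; a sketch to slot into the handler above (the pattern
file name is made up):

# The downloaded list is assumed to hold one regex per line,
# trusted since it comes from the central system.
my $patterns_file = '/usr/local/apache/conf/rwbl-uris.txt';

sub is_hack_attempt {
    my ($uri) = @_;
    open my $fh, '<', $patterns_file or return 0;
    while (my $pat = <$fh>) {
        chomp $pat;
        next unless length $pat;
        if ($uri =~ /$pat/) {
            close $fh;
            return 1;
        }
    }
    close $fh;
    return 0;
}

On a match, the handler would append "timestamp IP URI" to the attempts file
(under flock, since several Apache children may log at once) before returning
the explanation page.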

David Young wrote:

> From: Mark Maunder <[EMAIL PROTECTED]>
> > Perhaps we should just keep a central database of where the attempts are
> > coming from.
> > We could even extend it to work like the RBL - connects are not allowed from
> > IP's that have attempted the exploit
>
> Would that really help anything? The traffic would still be reaching your
> server and clogging up the net, the only difference is that you'd be
> returning an access denied response rather than a 404.
>
> What would really help is if all the ISPs out there put filters on their
> routers to catch these requests as close to their source as possible.

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]

