> Hi.
> Long and thoughtful post. Thanks.

I just hope it helps move the discussion forward.

> Say you have a botnet composed of 100 bots, and you want (collectively) to
> have them scan 100,000 hosts in total, each one for 30 known "buggy" URLs.
> These 30 URLs are unrelated to each other; each one of them probes for some
> known vulnerability, but it is not so that if one URL results in a 404, any
> of the others would, or vice-versa.
> So this is 3,000,000 URLs to try, no matter what.
> And say that by a stroke of bad luck, all of these 100,000 hosts have been
> well-configured, and that none of these buggy URLs corresponds to a real
> resource on any of these servers.
> Then no matter how you distribute this by bot or in time, collectively and
> in elapsed time, it is going to cost the botnet :
> - 3,000,000 X 10 ms (~ 8.3 hours) if each of these URLs responds by a 404
> within 10 ms
> - 3,000,000 X 1000 ms (~ 833 hours) if each of these URLs responds by a 404
> delayed by 1 s

So if a bot sends a request for http://server/, it will presumably get
a 302 response back redirecting to, say, http://server/index.html, and
to use your figures let's say this takes 10 ms - call this
goodResponseTime. Now the bot sends a request to the server for
http://server/manager/html. If the server has implemented "delay 404"
(as it seems to have been christened), the server will delay the
response for, say, 1 s. The scanner writers can simply abort the
connection after 2*goodResponseTime (or 3*goodResponseTime if they
want to reduce false positives). Perhaps spider the links in the good
page returned initially to get a feel for the average response time
over, say, 10 valid calls, then start making the probing calls.
Simply abort any that take "too long" and carry on to the next host
and/or the next URL on the same host.
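
Rough sketch of what I mean, in Java - just an illustration, where
goodResponseTime is measured off the front page and the probe URL
list is entirely made up:

import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class ProbeSketch {
    public static void main(String[] args) throws Exception {
        // Measure a "good" response time from a page we know exists.
        long start = System.currentTimeMillis();
        HttpURLConnection base =
                (HttpURLConnection) new URL("http://server/").openConnection();
        base.getResponseCode();
        long goodResponseTime = System.currentTimeMillis() - start;
        base.disconnect();

        // Probe the buggy URLs, but give up after ~2x the good response
        // time (floored so a very fast server doesn't give a 0 = infinite
        // timeout).
        int budget = (int) Math.max(2 * goodResponseTime, 50);
        String[] probeUrls = { "http://server/manager/html" };  // made-up list
        for (String u : probeUrls) {
            HttpURLConnection probe =
                    (HttpURLConnection) new URL(u).openConnection();
            probe.setConnectTimeout(budget);
            probe.setReadTimeout(budget);
            try {
                // Got an answer within the budget: record the status code.
                System.out.println(u + " -> " + probe.getResponseCode());
            } catch (SocketTimeoutException e) {
                // Took "too long": assume a delayed 404 and move on.
                System.out.println(u + " -> too slow, skipping");
            } finally {
                probe.disconnect();   // abort and release the connection
            }
        }
    }
}

The point being that the delay only costs the scanner its timeout
budget per URL, not the full configured delay.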

Incidentally, someone suggested that the work to delay the response
could be farmed off to a side-kick thread. It is true that this would
minimize CPU overhead on the server end. However, at the low OS level
you are still keeping a socket open for a second (or whatever the 404
delay is configured to be). If scanners use the above technique, they
will end up creating, say, 30 connections to the server, each of which
then has to stay open for 1 second. 30 additional connections won't
bring the server down, but it is still consuming more resources than
normal. Enough concurrent scanners and the server will suffer a DoS. A
few pages with bad links that return 404 - maybe due to
misconfiguration - and Google bots and their friends querying the
site could kick off the DoS.
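
For what it's worth, here is a minimal sketch of what the side-kick
approach might look like with a Servlet 3.0 async servlet (purely
hypothetical - Delayed404Servlet, DELAY_MS and resourceExists() are
all made up, and I'm not claiming this is how anyone proposed to
implement it). It frees the container thread, but the connection is
only released when complete() is called at the end of the delay:

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/*", asyncSupported = true)
public class Delayed404Servlet extends HttpServlet {

    private static final long DELAY_MS = 1000;  // the configured 404 delay

    // The "side-kick": one scheduler thread instead of parking a
    // container thread per delayed 404.
    private final ScheduledExecutorService sideKick =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        if (resourceExists(req)) {
            // Serve the real resource as usual (omitted).
            return;
        }
        final AsyncContext ctx = req.startAsync();  // container thread freed here
        sideKick.schedule(new Runnable() {
            public void run() {
                try {
                    ((HttpServletResponse) ctx.getResponse())
                            .sendError(HttpServletResponse.SC_NOT_FOUND);
                } catch (IOException ignored) {
                    // client may already have aborted the connection
                } finally {
                    ctx.complete();  // only now is the connection released
                }
            }
        }, DELAY_MS, TimeUnit.MILLISECONDS);
    }

    private boolean resourceExists(HttpServletRequest req) {
        return false;  // placeholder for the real lookup
    }
}

So the CPU and thread cost is indeed tiny, but each delayed 404 still
ties up a connection for the full DELAY_MS, which is exactly the
resource the scanners (or Google bots hitting dead links) pile up.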

> As for the other points in your post : you are right of course, in
> everything that you say about how to better configure your server to avoid
> vulnerabilities.
> And I am sure that everyone following this thread is now busy scanning his
> own servers and implementing these tips.

I guess the point I was trying to make is that the whole idea is to
make the default install as secure as possible. Then the sensible
steps listed in http://tomcat.apache.org/tomcat-7.0-doc/security-howto.html
would not be necessary, because they would already be part of the
default install. Then if people want to open the server up, that's
their problem. I know this means it may be harder for noobs to get
started with Tomcat. It is a fine line to walk.

> But my point is that, over the WWW at large (*), I am willing to bet that
> less than 20% are, and will ever be, so carefully configured and verified.
> And that is not going to change.

Agreed. What we really need is a big carrot and a big stick to
encourage people to Do The Right Thing™.

Chris
