Hi again,

Tracking down periods of unusual modperl overload, I've
found it is usually caused by someone using an aggressive
site mirror tool of some kind.

The Stonehenge Throttle module (a lifesaver) was useful
for catching the really evil ones that masquerade as a real
browser.. although the version I grabbed did need to be
tweaked: when you get hit really hard, deciding that yes,
it is that spider again, involved a long read loop over a
rapidly growing fingerprint of doom, to the point where
that check was taking quite a long time per hit! (some
real nasty ones can hit you with 1000s of requests per
minute!)

Also, sleeping to delay the reader as it reached the
soft limit was bad news for modperl (it ties up a whole
child for the duration of the sleep).

So I changed it to be more brutal about the number of
requests per time frame and bytes read per time frame, and
also to black-list the md5 of the IP/useragent combination
for longer when either limit is exceeded (a rough sketch of
the idea is below). Matching on the IP/useragent combo
rather than just the IP is necessary to avoid blocking a
big proxy sitting on one IP, as used in some large
companies and some telco ISPs.
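
Roughly, the idea looks like this. This is only a minimal
sketch, not the patched module itself; the key scheme,
window size and limits are illustrative, and in real
modperl use the counters would need to live in something
shared (a dbm file or similar), since each child has its
own memory:

  use Digest::MD5 qw(md5_hex);

  my %hits;        # key => [ request count, bytes, window start ]
  my %blacklist;   # key => time the ban expires

  sub too_greedy {
      my ($ip, $ua, $bytes) = @_;

      # IP alone would also catch everyone behind a big proxy,
      # hence hashing the IP/useragent combination
      my $key = md5_hex("$ip:$ua");
      my $now = time;

      return 1 if ($blacklist{$key} || 0) > $now;

      my $rec = $hits{$key} ||= [0, 0, $now];
      @$rec = (0, 0, $now) if $now - $rec->[2] > 60;   # 60 second window
      $rec->[0]++;
      $rec->[1] += $bytes;

      # over either limit: blacklist the key for an hour
      if ($rec->[0] > 120 or $rec->[1] > 2_000_000) {
          $blacklist{$key} = $now + 3600;
          return 1;
      }
      return 0;
  }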

In filtering error_logs over time, I've assembled a list
of nasties that have triggered the throttle repeatedly.

The trouble is, the throttle can take some time to
wake up, which can still floor your server for very
short periods..
So I also simply outright ban these user agents (a minimal
handler sketch follows below):

(EmailSiphon)|(LinkWalker)|(WebCapture)|(w3mir)|
(WebZIP)|(Teleport Pro)|(PortalBSpider)|(Extractor)|
(Offline Explorer)|(WebCopier)|(NetAttache)|(iSiloWeb)|
(eCatch)|(ecila)|(WebStripper)|(Oxxbot)|(MuscatFerret)|
(AVSearch)|(MSIECrawler)|(SuperBot 2.4)

Nasty little collection huh..

MSIECrawler is particularly annoying. I think that is
what you get when somebody uses one of the Bill Gates IE5
"ideas": save for offline viewing, or something.
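
For the outright ban, something along these lines as a
modperl access handler should do it (the package name is
made up; the regex is just the list above joined together):

  package My::BanAgents;
  use strict;
  use Apache::Constants qw(OK FORBIDDEN);

  # the same nasty list as above, joined into one regex
  my $bad = join '|',
      'EmailSiphon',      'LinkWalker',   'WebCapture',    'w3mir',
      'WebZIP',           'Teleport Pro', 'PortalBSpider', 'Extractor',
      'Offline Explorer', 'WebCopier',    'NetAttache',    'iSiloWeb',
      'eCatch',           'ecila',        'WebStripper',   'Oxxbot',
      'MuscatFerret',     'AVSearch',     'MSIECrawler',   'SuperBot 2.4';

  sub handler {
      my $r = shift;
      my $ua = $r->header_in('User-Agent') || '';
      return FORBIDDEN if $ua =~ /$bad/;
      return OK;
  }
  1;

  # httpd.conf:
  #   PerlAccessHandler My::BanAgents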

Anyway.. hope this is helpful next time your modperl
server gets so busy you have to wait 10 seconds just to
get a server-status URL to return.

This also made me think that perhaps it would be nice
to design a setup that reserved 1 or 2 modperl processes
for serving (say) the home page. That way, when the site
gets jammed up, at least new visitors get a reasonably
fast home page to look at (perhaps including an alert
warning of slow response lower down). That is better than
them coming in from a news article or search engine and
getting no response at all.
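
With the usual front-end proxy / modperl back-end split,
one way it could perhaps be approximated is a second, tiny
back-end reserved just for the home page. The ports here
are made up, and this only catches the home page when it is
requested as /index.html (mapping the bare / would take a
mod_rewrite proxy rule):

  # tiny second modperl back-end reserved for the home page
  Listen 8082
  MaxClients 2

  # front-end: most specific ProxyPass first, so the home page
  # goes to the reserved pool and everything else to the main one
  ProxyPass /index.html http://localhost:8082/index.html
  ProxyPass /           http://localhost:8081/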

It would also be nice for mod_proxy to have a better
way of controlling the timeout on fetching from the backend,
and the page to show in case a timeout occurs.. has anyone
done something here? Then after 10 seconds (say) mod_proxy
could show a pretty page explaining that due to the awesome
success of your product/service, the website is busy and
please try again very soon :-) [we should be so lucky].
At the moment, what happens under load is that mod_proxy
seems to queue the request up (via the tcp listen queue)..
the user might give up and press stop or reload (mod_proxy
does not seem to notice this) and thus queues up another
request via another front end, and pretty soon there is a
10 second page backlog for everyone and loads of useless
requests filling the queue..
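
Assuming a newer Apache whose mod_proxy understands
ProxyTimeout, something along these lines might get part of
the way there (/busy.html being a static page served by the
front end itself, so it still works when the back end is
jammed):

  # give up on the back-end after 10 seconds
  ProxyTimeout 10

  # the failure comes back as a 502 or 504 depending on the
  # version, so cover both with a static "please try again" page
  ErrorDocument 502 /busy.html
  ErrorDocument 504 /busy.html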

-Justin
