My Apache log files show that I'm getting two or more of those long-URL attacks every day, and access_log grows to over 4 MB in just a week, even though there are fewer than ten valid accesses on any given day. So I'm going to write a daily script to compress-and-rotate. (4 MB compresses easily to less than 40 KB, since it's mostly those stupid attacks.) That's no big deal, I think, although I may pop up with specific questions on that later on.
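
The daily job will probably amount to no more than this. (A sketch in C, since that's the language already in play here; the /var/log/httpd paths and the apachectl graceful restart are assumptions about a stock Apache setup, not anything tested.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        char rotated[256];
        char cmd[512];
        time_t now = time(NULL);

        /* date-stamped name, e.g. /var/log/httpd/access_log.20010815 */
        strftime(rotated, sizeof rotated,
                 "/var/log/httpd/access_log.%Y%m%d", localtime(&now));

        if (rename("/var/log/httpd/access_log", rotated) != 0) {
            perror("rename");
            return 1;
        }

        /* make Apache reopen its logs, then compress the old one */
        system("apachectl graceful");
        snprintf(cmd, sizeof cmd, "gzip %s", rotated);
        return system(cmd);
    }

Run once a day from cron, that should keep access_log down to the 40 KB range.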

(I assume either the rootkit is dead stupid or my ADSL modem is fooling it. I suppose I should check myself on Netcraft sometime.)

I also built a little C filter to get the attacks out of the way. I used C because all the vectors are around 32 KB, and it's easy enough to just use a 128 MB input buffer and look at the length. (It's an interactive tool, so an unexpectedly long line will at most kill my shell.)
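
The filter boils down to something like this. (A from-memory sketch, not the exact code, and with a much smaller buffer than the real one.)

    #include <stdio.h>
    #include <string.h>

    /* comfortably larger than the ~32 KB attack vectors */
    #define MAXLINE (128 * 1024)

    int main(void)
    {
        static char line[MAXLINE];

        while (fgets(line, sizeof line, stdin) != NULL) {
            size_t len = strlen(line);

            /* buffer filled with no newline: the line is longer
               than MAXLINE, so treat it as an attack and drop it */
            if (len == sizeof line - 1 && line[len - 1] != '\n') {
                int c;
                while ((c = getchar()) != EOF && c != '\n')
                    ;                  /* swallow the rest */
                continue;
            }
            fputs(line, stdout);
        }
        return 0;
    }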

Someone told me, I think, that Perl's input buffer adjusts to lines this insanely long. Does having to reallocate the buffer slow the input down much?
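
My understanding, worth checking, is that a growable buffer normally doubles when it fills, so the copying cost amortizes out to roughly a constant per byte; even a 32 KB line would only cost a handful of reallocs. The usual idiom, sketched in C, on the assumption Perl does something similar internally:

    #include <stdio.h>
    #include <stdlib.h>

    /* read one line of any length; caller frees it; NULL on EOF */
    static char *read_long_line(FILE *fp)
    {
        size_t cap = 256, len = 0;
        char *buf = malloc(cap), *tmp;
        int c;

        if (buf == NULL)
            return NULL;
        while ((c = getc(fp)) != EOF) {
            if (len + 1 >= cap) {      /* full: double it */
                tmp = realloc(buf, cap *= 2);
                if (tmp == NULL) {
                    free(buf);
                    return NULL;
                }
                buf = tmp;
            }
            buf[len++] = (char)c;
            if (c == '\n')
                break;
        }
        if (len == 0) {                /* EOF with nothing read */
            free(buf);
            return NULL;
        }
        buf[len] = '\0';
        return buf;
    }

    int main(void)
    {
        char *line;
        while ((line = read_long_line(stdin)) != NULL) {
            fputs(line, stdout);
            free(line);
        }
        return 0;
    }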

The reason I'm asking is that, for now, my filter just kills the long lines. I'm thinking some visible run-length encoding could make those easier to see, as well as easy to see around. (Not sure what I want to see in them.)
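
Something along these lines is what I mean by visible run-length marking. (Just a sketch; the run threshold of four and the c[xN] marker format are placeholders, not anything decided.)

    #include <stdio.h>

    int main(void)
    {
        int c, prev = EOF;
        long run = 0;

        while ((c = getchar()) != EOF) {
            if (c == prev) {
                run++;
                continue;
            }
            if (run >= 4)
                printf("%c[x%ld]", prev, run);  /* visible marker */
            else
                while (run-- > 0)
                    putchar(prev);
            prev = c;
            run = 1;
        }
        /* flush the final run */
        if (run >= 4)
            printf("%c[x%ld]", prev, run);
        else
            while (run-- > 0)
                putchar(prev);
        return 0;
    }

That would shrink a 32 KB run of NNNN... to a few characters while still showing what it was.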

(And sometime I'd like to build an error page script that would dump 64K from /random back at the zombie. But I have more important things to do first.)
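
(The skeleton would presumably be no more than this as a CGI program. A sketch only; I've substituted /dev/urandom so the read never blocks, and the content type is a guess.)

    #include <stdio.h>

    int main(void)
    {
        char buf[4096];
        size_t total = 0, n;
        FILE *rnd = fopen("/dev/urandom", "rb");

        if (rnd == NULL)
            return 1;

        /* minimal CGI header, then 64 KB of noise */
        printf("Content-Type: application/octet-stream\r\n\r\n");

        while (total < 64 * 1024 &&
               (n = fread(buf, 1, sizeof buf, rnd)) > 0) {
            fwrite(buf, 1, n, stdout);
            total += n;
        }
        fclose(rnd);
        return 0;
    }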

--
Joel Rees
Getting involved in the neighbor's family squabbles is dangerous,
but if the abusive partner has a habit of shooting through his/her roof
the guy who lives upstairs is in a bit of a catch-22.

