Re: [EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-18 Thread Joe Orton
On Thu, Aug 18, 2005 at 02:48:26PM -0400, George Adams wrote:
 Joe, I just wanted to thank you again.  The byterange patch you gave me 
 worked just beautifully.

Great, thanks for the feedback.  I've proposed this for backport to the 
2.0.x branch now so it should show up in a 2.0.x release eventually, 
pending review.

joe

-
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-17 Thread George Adams

 Joe Are these all simple static files, or is /out/ handled by some CGI
 Joe script etc?

 Joe, you're right - they do get passed through a Perl script for
 processing.  However, unless I'm mistaken, I don't THINK the following
 code would produce the kind of problems I'm seeing:

OK, no, it's not your code at fault, it's a bug in httpd.  You can apply
this patch: http://people.apache.org/~jorton/ap_byterange.diff and I
guess I should really submit this for backport to 2.0.x.



Joe, thanks for the patch.  I'll apply it and see if it helps.

One last followup question, though.  It seems like there must be tons of 
sites in the world doing what I'm doing - serving a large number of 
downloads.  And probably most of those sites are running Apache, and 
probably a lot of them are using Apache 2.0.x.  How is it that they don't 
seem to have the same problem?  If this bug has survived in Apache 2 this 
long, it must be fairly obscure.  Is there some unique set of circumstances 
that is causing this bug to affect only me and a few others, and not a large 
number of other Apache servers?


Thanks again.






Re: [EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-17 Thread Joe Orton
On Wed, Aug 17, 2005 at 12:12:05PM -0400, George Adams wrote:
  Joe Are these all simple static files, or is /out/ handled by some CGI
  Joe script etc?
 
  Joe, you're right - they do get passed through a Perl script for
  processing.  However, unless I'm mistaken, I don't THINK the following
  code would produce the kind of problems I'm seeing:
 
 OK, no, it's not your code at fault, it's a bug in httpd.  You can apply
 this patch: http://people.apache.org/~jorton/ap_byterange.diff and I
 guess I should really submit this for backport to 2.0.x.
 
 
 Joe, thanks for the patch.  I'll apply it and see if it helps.
 
 One last followup question, though.  It seems like there must be tons of 
 sites in the world doing what I'm doing - serving a large number of 
 downloads.  And probably most of those sites are running Apache, and 
 probably a lot of them are using Apache 2.0.x.  How is it that they don't 
 seem to have the same problem?  If this bug has survived in Apache 2 this 
 long, it must be fairly obscure.  Is there some unique set of circumstances 
 that is causing this bug to affect only me and a few others, and not a 
 large number of other Apache servers?

The bug only triggers with:

- a CGI/... script which generates a large response
- a user pointing a download accelerator (or suchlike) at said script.

and it has been reported two times on this list in as many weeks - so 
not that uncommon I guess.
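
(By way of illustration - assuming a plain curl command line rather than any 
particular accelerator product, and a made-up hostname - the failing pattern 
is just several parallel partial requests against the same script-generated 
URL, e.g.:

  curl -r 0-4194303       -o part1 http://your.server/out/388.mp3 &
  curl -r 4194304-8388607 -o part2 http://your.server/out/388.mp3 &
  curl -r 8388608-        -o part3 http://your.server/out/388.mp3 &
  wait

each -r sends a Range header, which is what exercises the byterange code 
that the patch touches.)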

joe




Re: [EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-15 Thread Joe Orton
On Mon, Aug 15, 2005 at 11:00:02AM -0400, George Adams wrote:
 Thanks, Joe and Jon for your helpful thoughts regarding my Apache
 memory problem.  Here's some more information:
 
 Joe  1-0 15823 W 0.00 1742573500 GET /out/388.mp3
 Joe  2-0 15824 W 0.00 1742573499 GET /out/238.mp3
 Joe
 Joe Are these all simple static files, or is /out/ handled by some CGI
 Joe script etc?
 
 Joe, you're right - they do get passed through a Perl script for
 processing.  However, unless I'm mistaken, I don't THINK the following
 code would produce the kind of problems I'm seeing:

OK, no, it's not your code at fault, it's a bug in httpd.  You can apply 
this patch: http://people.apache.org/~jorton/ap_byterange.diff and I 
guess I should really submit this for backport to 2.0.x.

joe




Re: [EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-11 Thread Jon Snow
George,

I have something similar...

I have been debugging an issue where I have seen processes growing to 800Mb 
on a forward proxy configuration using the worker model. Perhaps 
interestingly, on reverse proxy configurations I occasionally get 100% CPU 
states as well. What I have noticed is that these are almost always 
associated with sockets stuck in the TCP CLOSE_WAIT state. What is netstat 
saying on your system when the problems occur?
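
(For comparison, something along these lines is what I have been using to 
check, assuming a Linux-style netstat; adjust the process name if your 
binary is not called apache2:

  netstat -tan | grep CLOSE_WAIT | wc -l
  netstat -tanp 2>/dev/null | grep apache2 | awk '{print $6}' | sort | uniq -c

the first just counts sockets sitting in CLOSE_WAIT, the second summarises 
the TCP states of the sockets owned by apache2 processes.)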

Just today I had a win where I was able to truss a process on a forward 
proxy that was reading data from a server but had nowhere to write it, as 
the socket to the client was closed/half-closed. The error returned (EPIPE) 
was not being caught by apache, so there was a continual cycle of read 
followed by failed write. While this particular process did not show a large 
increase in memory, CPU or CLOSE_WAIT states, it did verify something I had 
suspected for a long time: that the apache code is not checking its socket 
calls at some point (in this case writev), and/or does not catch a close on 
the socket. This trace was on an ftp connection, so the results may be 
different for an http connection, e.g. in memory usage.

I will be discussing these issues on the dev mailing list in the near future  
but it would be good to see if we are seeing anything in common first.

My hunch (which is a very long one) has been for a while now that when the 
client breaks the connection, the process somehow misses the close and 
continues to read but cannot write, as verified by the above. It appears 
that in some instances the memory consumption may be caused by the buffering 
of data, or by bad memory management triggered by these socket issues: I 
have noticed that the larger the download file (such as an iso image), the 
larger the process grows. As the processes handle additional connections the 
memory is not freed, so they keep growing. At this point it is speculation 
on my part, but I consistently see these odd connection states in 
conjunction with high CPU or memory usage, so something is going on. I have 
noticed that processes always creep up in size over time, so I believe there 
may be memory leaks, but the large memory consumption itself may be caused 
by weird socket states.

If your clients are downloading 18Mb files over slow links they may keep 
retrying the connection, breaking the original and therefore leaving you 
with multiple connections to the same file from the same client. If there is 
a memory leak under half-close conditions each process grows, and per your 
configuration a process will handle 5000 requests before it is cycled.

But why does this not appear to affect many other people? Firewalls, 
perhaps: do you have any network or host based firewalls which may be 
preventing proper shutdown of connections? If so, do they drop or reject 
packets? Which firewalls are they? I work in an Internet gateway 
environment, so I have firewalls all over the place and have added them as a 
variable to my list of possibilities.

Your 20 concurrent connections are limited by MaxClients. I assume you are 
keeping this small because of the size the processes are growing to, as off 
the top of my head you should be able to get to approx 175-200 using prefork 
with 1Gb of memory. I would have thought this limit would max out pretty 
quickly with many 18Mb downloads, as they will take time.
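
(That rough figure just assumes something in the region of 5Mb resident per 
child: 1024Mb / 5Mb is about 200 children. With your children currently 
showing anywhere from roughly 9Mb to 89Mb resident in top, the realistic 
ceiling is obviously much lower.)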

As a workaround you may try lowering MaxRequestsPerChild, to turn over 
processes which may be affected by memory leakage, and raising MaxClients to 
handle more concurrent connections. Say initially MaxClients 150 and 
MaxRequestsPerChild 100, or more aggressively 10. This will produce more CPU 
overhead from forking processes, but modern CPUs are pretty fast. Or go for 
a threaded model such as worker and you should be able to get 10-15 times as 
many concurrent connections (based on proxy configurations - I have never 
used apache as a web server). But another model may simply have the same 
issues if the problem is socket related. Funnily enough, I was considering 
going to a prefork model to eliminate the possibility of threading and mutex 
issues - I won't be doing that for a while.
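
As a rough sketch, the workaround above would look something like this in 
your existing prefork block (the 150/100 figures are only the starting 
points suggested here, not tested values):

<IfModule prefork.c>
   StartServers          5
   MinSpareServers       5
   MaxSpareServers      10
   MaxClients          150
   MaxRequestsPerChild 100
</IfModule>

Keep an eye on the resident size per child so that 150 of them actually fit 
in your 1Gb of RAM.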

This may not help but I would be interested in whether there are similarities 
in network states or hardware etc. to what I have.

Regards,
Jon

On Wednesday 10 August 2005 01:05, George Adams wrote:
 I read an earlier thread on memory consumption (http://tinyurl.com/bly4d),
 which may be related to my problem... but because of some differences, I'm
 not so sure.  Any help would be appreciated!

 I have an Apache 2.0.54 server on a Gentoo Linux (2.6.11) box which has
 1Gig RAM and an additional 1Gig swap space.  The server handles a lot of
 people downloading sermons from our church website (which are no larger
 than 18Meg MP3 files), but I can't figure out how to keep the server from
 running out of memory.

 Here's my Apache2 prefork configuration:
 
 <IfModule prefork.c>
 StartServers 

Re: [EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-10 Thread Joe Orton
On Tue, Aug 09, 2005 at 11:05:49AM -0400, George Adams wrote:
 I have an Apache 2.0.54 server on a Gentoo Linux (2.6.11) box which has 
 1Gig RAM and an additional 1Gig swap space.  The server handles a lot of 
 people downloading sermons from our church website (which are no larger 
 than 18Meg MP3 files), but I can't figure out how to keep the server from 
 running out of memory.
...
 And here's what the Apache /server-status URL showed earlier today (I had 
 just restarted the server, but it immediately filled up with download 
 requests, all from the same guy, apparently using a download accelerator 
 judging by the duplicate requests):
 
 Srv   PID    M  CPU   Req          Request
 0-0   15822  W  0.48  0            GET /out/181.mp3 HTTP/1.1
 1-0   15823  W  0.00  1742573500   GET /out/388.mp3 HTTP/1.1
 2-0   15824  W  0.00  1742573499   GET /out/238.mp3 HTTP/1.1

Are these all simple static files, or is /out/ handled by some CGI 
script etc?

...
 15853 apache    18   0 98.9m  53m 2000 S  0.0  5.3   0:00.51 apache2

If, when this happens, you can capture the output of e.g. strace -p 15853 
as root, that might help.
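
(For example, assuming a stock strace, something like

  strace -f -tt -p 15853 -o /tmp/httpd-15853.trace

against one of the ballooning children: -f follows any forked children, -tt 
adds timestamps, and -o writes the trace to a file you can attach or 
summarise.)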

joe




[EMAIL PROTECTED] Why does Apache use up all my memory?

2005-08-09 Thread George Adams
I read an earlier thread on memory consumption (http://tinyurl.com/bly4d), 
which may be related to my problem... but because of some differences, I'm 
not so sure.  Any help would be appreciated!


I have an Apache 2.0.54 server on a Gentoo Linux (2.6.11) box which has 1Gig 
RAM and an additional 1Gig swap space.  The server handles a lot of people 
downloading sermons from our church website (which are no larger than 18Meg 
MP3 files), but I can't figure out how to keep the server from running out 
of memory.


Here's my Apache2 prefork configuration:

<IfModule prefork.c>
   StartServers 5
   MinSpareServers  5
   MaxSpareServers 10
   MaxClients  20
   MaxRequestsPerChild  5000
</IfModule>


And here's what the Apache /server-status URL showed earlier today (I had 
just restarted the server, but it immediately filled up with download 
requests, all from the same guy, apparently using a download accelerator 
judging by the duplicate requests):


Srv   PID    M  CPU   Req          Request
0-0   15822  W  0.48  0            GET /out/181.mp3 HTTP/1.1
1-0   15823  W  0.00  1742573500   GET /out/388.mp3 HTTP/1.1
2-0   15824  W  0.00  1742573499   GET /out/238.mp3 HTTP/1.1
3-0   15825  W  0.00  1742573499   GET /out/504.mp3 HTTP/1.1
4-0   15826  W  0.00  1742573496   GET /out/388.mp3 HTTP/1.1
5-0   15832  W  0.00  1742572495   GET /out/801.mp3 HTTP/1.1
6-0   15834  W  0.00  1742571493   GET /out/504.mp3 HTTP/1.1
7-0   15835  W  0.00  1742571489   GET /out/504.mp3 HTTP/1.1
8-0   15838  W  0.00  1742570476   GET /out/388.mp3 HTTP/1.1
9-0   15839  W  0.00  1742570484   GET /out/504.mp3 HTTP/1.1
10-0  15840  W  0.60  0            GET /out/238.mp3 HTTP/1.1
11-0  15841  W  0.00  1742570477   GET /out/388.mp3 HTTP/1.1
12-0  15846  W  0.25  0            GET /out/181.mp3 HTTP/1.1
13-0  15847  W  0.00  1742569347   GET /out/181.mp3 HTTP/1.1
14-0  15848  W  0.00  1742568761   GET /out/801.mp3 HTTP/1.1
15-0  15849  W  0.00  1742568761   GET /out/801.mp3 HTTP/1.1
16-0  15852  W  0.19  0            GET /out/181.mp3 HTTP/1.1
17-0  15853  W  0.17  0            GET /out/801.mp3 HTTP/1.1
18-0  15854  W  0.22  0            GET /out/504.mp3 HTTP/1.1
19-0  15855  W  0.28  0            GET /server-status HTTP/1.1


And here's a portion of what top showed at the same time:

top - 18:09:59 up 64 days,  7:08,  3 users, load avg: 21.62, 10.57, 4.70
Tasks: 154 total,   1 running, 143 sleeping,   1 stopped,   9 zombie
Cpu(s):  0.8% us,  2.3% sy, 0.0% ni, 0.0% id, 96.3% wa, 0.3% hi, 0.2% si
Mem:   1034276k total,  1021772k used,    12504k free,     6004k buffers
Swap:  1030316k total,   985832k used,    44484k free,    83812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15846 apache    16   0  132m  89m 1968 S  0.3  8.9  0:01.46 apache2
15840 apache    17   0  130m  83m 2008 D  0.0  8.3  0:00.90 apache2
15849 apache    16   0  120m  82m 1968 S  0.3  8.1  0:01.02 apache2
15852 apache    16   0  120m  81m 1968 S  0.3  8.1  0:00.91 apache2
15848 apache    16   0  109m  73m 2008 S  0.3  7.2  0:00.85 apache2
15855 apache    16   0  107m  70m 2076 D  0.3  7.0  0:00.76 apache2
15822 apache    17   0  179m  55m 1968 D  0.3  5.5  0:00.88 apache2
15854 apache    16   0 98024  55m 1968 D  0.0  5.5  0:00.58 apache2
15853 apache    18   0 98.9m  53m 2000 S  0.0  5.3  0:00.51 apache2
15847 apache    17   0 86884  52m 1968 D  0.0  5.2  0:00.42 apache2
15841 apache    17   0  110m  36m 1964 D  0.3  3.6  0:00.64 apache2
15826 apache    17   0  173m  20m 1968 D  0.0  2.0  0:00.57 apache2
15825 apache    16   0 97.7m  19m 1968 D  0.0  1.9  0:00.36 apache2
15834 apache    16   0  117m  14m 1968 D  0.3  1.5  0:00.42 apache2
15839 apache    17   0  115m  12m 1968 D  0.0  1.2  0:00.40 apache2
15838 apache    15   0  182m  12m 1968 D  0.0  1.2  0:00.59 apache2
15823 apache    16   0  180m  11m 1968 D  0.0  1.1  0:00.65 apache2
15824 apache    15   0  103m 9980 1968 D  0.0  1.0  0:00.27 apache2
15832 apache    16   0  116m 9112 1968 D  0.0  0.9  0:00.29 apache2
15835 apache    16   0  162m 8844 1968 D  0.0  0.9  0:00.41 apache2
(everything else listed on top below this was less than 0.5 for %MEM)


The memory usage swelled very fast as the download requests came in, and 
based on previous experience, the server would have slowed to a crawl and 
possibly crashed as it tried to save itself, if I hadn't run "killall 
apache2" at this point.


So it seems like this guy's 19 download requests are enough to pretty much 
exhaust my 1 Gig of physical RAM and 1 Gig of swap space.  That just