Hello Jeff,
thanks a lot for your insightful reply!
Jeff Trawick wrote:
> On Tue, Feb 2, 2010 at 8:59 AM, Sandro Tosi <sandro.t...@register.it> wrote:
>> Hello,
>> we have a rather busy Apache web server (~200-300 concurrent connections).
>> There are times when Apache is really slow at letting clients connect to it.
>> For example, with curl, I see:
>>
>> 02:05:17.885074 == Info: About to connect() to IP_ADDRESS port 80 (#0)
>> 02:05:17.885280 == Info: Trying IP_ADDRESS...
>> 02:05:20.898748 == Info: connected
>> 02:05:20.898785 == Info: Connected to IP_ADDRESS (IP_ADDRESS) port 80 (#0)
>> ...
>> 02:05:20.917068 == Info: Closing connection #0
>>
>> and
>>
>> 02:06:53.098230 == Info: About to connect() to IP_ADDRESS port 80 (#0)
>> 02:06:53.099272 == Info: Trying IP_ADDRESS...
>> 02:07:02.111596 == Info: connected
>> 02:07:02.111636 == Info: Connected to IP_ADDRESS (IP_ADDRESS) port 80 (#0)
>> 02:07:02.111731 => Send header, 222 bytes (0xde)
>> ...
>> 02:07:02.422093 == Info: Closing connection #0
>>
>> As you can see, there is a 3-second delay (first example) and a 9-second
>> delay (second example) between contacting the Apache server and the
>> connection actually being accepted. The delay is always either 3 or 9
>> seconds, which is quite weird and seems to indicate some sort of
>> retry+backoff (3 secs, then 3x2 more secs for 9 secs total, and so on)
>> implemented in some Apache layer.
> Getting connected and sending the first bytes on a connection is
> handled by the TCP layer and below; Apache doesn't have to get
> involved yet to allow that to happen.
Ok, now I see.
> Apache does give the TCP layer a hint about how to behave in this part
> of the request cycle -- ListenBacklog. You could see if increasing
> that setting in Apache convinces the TCP layer on the Apache side to
> accept more connections even before Apache wakes up to process them.
We'll give it a look for sure. We looked it up in the docs and the
ListenBacklog entry is quite "short": we're on Linux; do you have any
suggestions or links with more info on it?
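To make the question concrete, this is the kind of change we had in mind; the
ListenBacklog value is only a guess, and the two sysctls are the Linux knobs
that, as far as we understand, also cap the effective backlog, so take this as
an untested sketch:

# httpd.conf: ask the TCP layer for a deeper accept queue
ListenBacklog 1024

# Linux side: the kernel caps the backlog passed to listen(2),
# so the corresponding limits probably need raising as well
sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.tcp_max_syn_backlog=2048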
> (A TCP trace would presumably confirm that getting the connect
> handshake completed in a timely manner is the issue. It might also
> show something totally unanticipated.)
Any hint on how to obtain that TCP trace? tcpdump on the client (easy), or
on the server (almost impossible due to the high traffic)?
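For the server side, this is roughly what we were thinking of trying; the
interface name and client IP are placeholders, and the idea of capturing only
TCP headers / only SYN segments to keep the overhead down is our assumption,
not something we have tested under production load:

# capture only TCP headers (-s 96) of SYN segments on port 80,
# writing them to a file for offline analysis
tcpdump -ni eth0 -s 96 -w /tmp/syn.pcap 'port 80 and tcp[tcpflags] & tcp-syn != 0'

# or restrict the capture to a single test client to keep the volume down
tcpdump -ni eth0 -s 0 -w /tmp/client.pcap 'port 80 and host CLIENT_IP'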
>> StartServers 200
>> MinSpareServers 150
>> MaxSpareServers 300
> Given your comment about 200-300 typical active connections, I guess
> this is ok if your traffic isn't at all spiky.
We also tried to increase those values a bit:

StartServers 300
MinSpareServers 200
MaxSpareServers 400

but it seems to slow things down (probably due to the higher number of
processes to handle); we'll probably revert to the settings above.
>> ServerLimit 2000
>> MaxClients 2000
> definitely not a problem here
Yeah, we barely reach 1.5 * StartServers processes, so we're definitely
under those limits.
>> MaxRequestsPerChild 100
> This will cause httpd processes to exit relatively quickly and might
> occasionally increase the amount of time before Apache is able to
> accept a new connection.
> It is very hard to say for sure whether this is the cause here; I guess
> increasing MaxSpareServers could mitigate the potential for temporary
> windows where a lot more Apache processes have to be created at once due
> to exiting processes. (I wonder how MinSpare../MaxSpare../MaxRequests..
> interact with your actual load.)
> Anyway, MaxRequestsPerChild rarely needs to be set so low; that would
> be needed only with barely working application code running inside
> Apache. Unless you have a real reason to set it so low, either
> disable it (0) or set it very high (at least some tens of thousands
> for the prefork MPM).
Ah. We were under the false impression that setting it to a low value helps
Apache run on "fresh" processes, and hence run better, but we underestimated
the kill & spawn time.
We are running Apache as a reverse proxy for a JBoss instance, so we can
expect almost no memory leaks; we'll try either 0 or a very high value, along
the lines of the snippet below.
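Just to be explicit about what we plan to test (the exact recycle count is a
guess on our part):

# disable process recycling entirely
MaxRequestsPerChild 0

# or keep some recycling, but only after a large number of requests
# MaxRequestsPerChild 20000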
>> KeepAlive off
> This results in more connections required for real browser traffic,
> exacerbating any connection problem.
We don't have "real browser" traffic: we receive just one request (for a
single resource) from each client, so it's a sort of "one shot" traffic
pattern, not a continuous flow of requests from the same client. Do you think
it's OK to keep KeepAlive Off for that kind of traffic? If not, we'd try
something like the snippet below.
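If keep-alive does turn out to matter even for our one-shot clients, this is
the kind of conservative setting we'd experiment with (the timeout is only a
guess, chosen low so idle connections don't tie up prefork children for long):

KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100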
Thanks & Regards,
Sandro