knitti wrote:
On 12/11/07, Daniel Ouellet <[EMAIL PROTECTED]> wrote:
[... snipped away a lot ...]
There is a lot that can be done; however, when you reach this level, one
answer doesn't fit all and it really depends on your setup.

Hope this helps answer your question.

It's not me having the problem, but I would like to understand it.

I understand that, but you did ask a valid question about the state of the socket connection and I tried to answer it. It wasn't directed at the previous guy who can't search on Google and asked for advice but refused a very valid answer. Sorry if you feel I confused the two, but I didn't. It may not have been obvious in my writing, however.

AFAIK, HTTP keep-alives have nothing to do with it. If the socket is in
CLOSE_WAIT, the TCP connection can't be reused: the client has sent its
FIN and the server has ACKed it, but the server hasn't yet sent its own
FIN, i.e. closed its end of the socket.

Well, actually it does under normal operation. If you get a connection from a user and have keep-alive set up, the socket stays open to speed up the next request from the same user: the same connection is reused instead of establishing a new one, but at the same time that socket is held open and is not available for the next user. So if you have a longer keep-alive set up in httpd, you will reach CLOSE_WAIT later rather than sooner compared to a shorter keep-alive.

What I am trying to explain, maybe not as well as I would like, is the combined impact of PF, httpd and the net.inet.tcp.xxx sysctl settings. They all interact with each other in some ways, and as I also said, none of them should be looked at in isolation from the others.

Just as an example: if you set keep-alive to 2 minutes instead of the 15-second default, you will use many more sockets and may well end up running out of them, all depending on traffic obviously.

Now, is keep-alive in httpd the only party responsible for sockets sitting in CLOSE_WAIT? No, it is not. But it does play a role in how many of them you end up with, and in how many sockets remain available.
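
To make that concrete, the keep-alive knobs live in httpd.conf (on OpenBSD the bundled Apache keeps its config under /var/www/conf). Something like this; the values in the comments are only the stock defaults as far as I recall, so treat them as assumptions and check your own config:

grep -i keepalive /var/www/conf/httpd.conf
# KeepAlive On               <- reuse the connection for further requests
# KeepAliveTimeout 15        <- seconds the idle socket is held open
# MaxKeepAliveRequests 100   <- requests served per connection before closing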

What's important here is that the number of TCP/IP sockets sitting in the CLOSE_WAIT state cannot exceed the maximum number of TCP/IP connections the web server, httpd in this case, is allowed to hold open.

netstat -an can show you the state of the various sockets, or, for a more limited display:

netstat -an | grep WAIT
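
If you want a quick count per state instead of the full listing, something along these lines works with the base tools (the state is the last column of the tcp lines; adjust the pattern if your netstat output differs):

netstat -an -p tcp | awk '/^tcp/ {print $NF}' | sort | uniq -c | sort -rn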

I can imagine some possibilities for why this happens (some might
not be valid due to my lack of knowledge):
- the server didn't clean up its socket, so it stays there until the
process dies eventually

It will clean it up eventually, or it may be forced with some directive in httpd about the usage; I can't recall right this instant and I would need to look. I may be confusing two things here as well, but it might be possible to do it; I'm not sure. I wonder if net.inet.tcp.keepidle, or something similar, wouldn't actually affect it here. I would think so, but I could be wrong.

I think the CLOSE_WAIT state and how long a socket stays in it are a function of the OS stack, not of the application itself, in this case httpd. I could be wrong here and I would love for someone to correct me if I do not understand that properly. But my understanding is that this is controlled by the OS, not the application itself, other than the keep-alive, obviously, in this case.
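
For the record, the sysctl values I had in mind can be inspected like this. Whether they actually help with CLOSE_WAIT is exactly the open question above, and note that on OpenBSD these are, as far as I know, counted in half-second ticks rather than seconds; the value in the second line is only an example, not a recommendation:

sysctl net.inet.tcp.keepidle net.inet.tcp.keepintvl net.inet.tcp.keepinittime
# e.g. sysctl net.inet.tcp.keepidle=14400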

- the server does this to keep its socket (that I don't know: can
a socket be reused in any state?)

No, it can't. See above. As far as the web traffic is concerned here, you are limited by the MaxClients directive in httpd anyway. You sure can raise it above the compiled-in ceiling of 256 if you change that in the include file and recompile, but again, it should only be done on very busy servers.
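
If you want to see that compiled-in ceiling for yourself, something like the following should do; the paths are for the Apache 1.3 sources as shipped with OpenBSD, so take them as an assumption and adjust for your own tree, and the MaxClients value shown is just the stock default:

grep HARD_SERVER_LIMIT /usr/src/usr.sbin/httpd/src/include/httpd.h
# #define HARD_SERVER_LIMIT 256
grep -i maxclients /var/www/conf/httpd.conf
# MaxClients 150             <- can be raised, but not past HARD_SERVER_LIMIT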

btw: I might be going off topic here, but I think it applies to
OpenBSD's httpd. I won't send any further mail to this thread
if you tell me to shut up.

I didn't do any such thing. The original poster, however, should/may take the advice, or drop it. (;>

I actually find it interesting, not the original subject, but where it was/is going.

Daniel
