Hi André,

André Warnier wrote:
feedly team wrote:
[...]
using netstat, I see a moderate number (~80) of Tomcat's sockets in
the CLOSE_WAIT state, not sure if this is relevant.

Approximately, because I am not sure I have really understood this yet: a TCP CLOSE_WAIT state happens when the writing side of a TCP connection has finished writing and (nicely) closes its side of the socket to indicate that fact, but the reading side of the connection does not read what is left in the buffers, so there is still some data unread in the pipeline, and the reading side never closes the socket. And now I'm stuck in my explanation, because I am not sure which side is seeing the CLOSE_WAIT... ;-)
I think that you are describing one condition in which you can see a
CLOSE_WAIT, but there are many others. I also think that the condition
you describe applies when the CLOSE_WAIT is observed at the receiving
end of a communication, but it is possible for a socket to be in this
state when it has sent data as well, though of course there will then
be no outstanding data to send.

More generally, CLOSE_WAIT is the state in which a socket is left AFTER
the "other end" says it is finished and BEFORE the application using
the socket actually closes it. The "WAIT" refers to the operating
system waiting for the application to finish using the socket.

I think a socket can be in a CLOSE_WAIT state without there being any
further data to read or write - literally just waiting for the calling
application to close it.
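
To make that concrete, here is a minimal sketch (the port number and the sleep are arbitrary, purely for illustration). Run it, connect and then disconnect with telnet, and while the sleep is running "netstat -an" should show the accepted socket sitting in CLOSE_WAIT, even though all data has been read:

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9090); // arbitrary port
        // In another terminal: telnet localhost 9090, then quit.
        Socket socket = server.accept();
        InputStream in = socket.getInputStream();
        while (in.read() != -1) {
            // drain whatever the peer sends
        }
        // read() returned -1: the peer has closed its side. From here on,
        // and until we call close(), the OS reports this socket as
        // CLOSE_WAIT - it is waiting for us, the application.
        Thread.sleep(60000); // window in which CLOSE_WAIT is visible
        socket.close();      // only now does the socket leave CLOSE_WAIT
    }
}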

Having written socket-handling code for both Java and C++ on a variety
of platforms, I don't think there is any particular reason why Java
should be better or worse (in fact, writing code which uses sockets in
Java is generally pretty easy). I suspect that your observations may be
affected by "local conditions", e.g. one application is badly written
but represents a lot of your network activity, so its behavior is
predominant in conditioning your thinking. Or not! :)

regards

Alan Chaney



But anyway, it indicates a problem somewhere in one of the two applications, my guess being the reading one. It should do more reads to exhaust the remaining data, get an end-of-stream, and then close its side of the connection, but it never does. There is apparently no timeout for that, so the OS can never get rid of the socket, which probably leaves a bunch of things hanging around and consuming resources that they shouldn't. On one of our systems, I have occasionally seen this grow to the point where the system seemed unable to accept new connections. Whether that has any bearing on your particular issue, I don't know, but it certainly indicates a logic problem somewhere.
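
For what it's worth, here is a sketch of what I mean the reading application should be doing (the class and method names are mine, purely illustrative): drain the stream until end-of-stream, with a read timeout as a safety net, and close the socket unconditionally in a finally block, so it can never be left hanging in CLOSE_WAIT:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class DrainAndClose {
    static void handle(Socket socket) throws IOException {
        try {
            // Bound each read, so a stalled peer cannot hang us forever.
            socket.setSoTimeout(30000);
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) {
                // consume (or process) whatever data is still in the pipe
            }
            // read() returned -1: the peer has closed its side.
        } finally {
            socket.close(); // always close, even on an exception or timeout
        }
    }
}

The finally block is the important part: whether we reach end-of-stream, time out, or hit an error, close() is called and the socket leaves CLOSE_WAIT.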

There is quite a bit to be found on the subject via Google, of unequal quality.
If someone has a more rigorous explanation, please go ahead.

I will still add a purely subjective and personal note: from my own experience, this phenomenon seems to happen more frequently with Java applications than with others, so I would guess that there might be something in the Java handling of sockets that makes it a bit harder to write correct socket-handling code.
A Java expert may want to comment on that too.

