Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
Does no one have any clues on this issue? I've got about 3000 connections hanging around in CLOSE_WAIT now. Especially that 1 byte sitting in the receive buffer keeps me puzzled.

> Hi, I have the following problem with mod_jk from tomcat-connectors
> (1.2.5 - 1.2.8), including 1.2.9 (from CVS).
>
> Environment: Apache 2.0.52 (forking model); the server OS is Linux
> 2.6.10-1.760_FC3smp (Fedora Core 3); mod_jk 1.2.9 (others tested as
> well).
>
> After a while I get sockets stuck in CLOSE_WAIT state, and netstat
> shows 1 byte in the receive queue for these sockets. tcpdump shows
> that the backend (Jetty) half-closes the connection with a FIN. That
> FIN is ACKed by the mod_jk machine, but the connection is not closed
> (no FIN is sent).
>
> Sample netstat output:
>
> ...
> tcp        1      0 192.168.100.1:51003   192.168.170.8:32511   CLOSE_WAIT
> tcp        1      0 192.168.100.1:53875   192.168.170.8:12522   CLOSE_WAIT
> tcp        1      0 192.168.100.1:53619   192.168.170.8:12521   CLOSE_WAIT
> ...
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
Michael Stiller wrote:
> Does no one have any clues on this issue? I've got about 3000
> connections hanging around in CLOSE_WAIT now. Especially that 1 byte
> sitting in the receive buffer keeps me puzzled.

Did you try the latest CVS HEAD? It contains the hard socket close by disabling lingering. Furthermore, try setting the socket_timeout for the worker. Also, you did not mention which OS you are using. Is it SuSE 9 by any chance?

Regards,
Mladen.
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
On Mon, 2005-02-21 at 11:19 +0100, Mladen Turk wrote:
> Michael Stiller wrote:
>> Does no one have any clues on this issue? I've got about 3000
>> connections hanging around in CLOSE_WAIT now. Especially that 1 byte
>> sitting in the receive buffer keeps me puzzled.
>
> Did you try the latest CVS HEAD?

I tried something I checked out from CVS last Friday. The version is tomcat-connectors 1.2.9. The OS is Fedora Core 3.

> It contains the hard socket close by disabling lingering.

Where can I learn about the hard close patch? Maybe a pointer to the source file?

> Furthermore, try setting the socket_timeout for the worker.

You mean something like this?

worker.proc2111.socket_timeout=10
worker.proc2111.recycle_timeout=2
worker.proc2111.cachesize=1
worker.proc2111.cache_timeout=2

Already tried it, with no result so far.

Cheers,
-Michael
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
Michael Stiller wrote:
> I tried something I checked out from CVS last Friday.

Use a more recent one :)

> The version is tomcat-connectors 1.2.9. The OS is Fedora Core 3.

Seems that I missed the OS.

>> It contains the hard socket close by disabling lingering.
> Where can I learn about the hard close patch? Maybe a pointer to the
> source file?

http://cvs.apache.org/viewcvs.cgi/jakarta-tomcat-connectors/jk/native/common/jk_connect.c?rev=1.44&view=log

> You mean something like this?
> worker.proc2111.socket_timeout=10
> worker.proc2111.recycle_timeout=2
> worker.proc2111.cachesize=1
> worker.proc2111.cache_timeout=2

First, a two-second recycle is far too small. It should be at least higher than the socket_timeout. I mean, you have a 10-second timeout and a 2-second recycle!? Second, there is no need for cache_timeout on prefork, since you have only one cached worker (the default), so you don't need cachesize either.

Mladen.
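For readers wondering what a "hard close by disabling lingering" looks like in practice, here is a minimal C sketch of the general technique (an illustration only, not the actual jk_connect.c code; the function name is made up):

#include <sys/socket.h>
#include <unistd.h>

/* Hard-close a TCP socket. With l_onoff = 1 and l_linger = 0, close()
 * tears the connection down immediately with an RST instead of the
 * normal FIN handshake, so the socket cannot hang around half-closed. */
static int hard_close(int sock)
{
    struct linger li;
    li.l_onoff  = 1;   /* enable SO_LINGER              */
    li.l_linger = 0;   /* zero timeout: abortive close  */
    setsockopt(sock, SOL_SOCKET, SO_LINGER, &li, sizeof(li));
    return close(sock);
}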
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
On Mon, 2005-02-21 at 11:39 +0100, Mladen Turk wrote:
>> I tried something I checked out from CVS last Friday.
> Use a more recent one :)

OK, I'm running a fresh CVS tree now. The problem is *still* there, but it seems there are fewer sockets hanging around at the moment.

> First, a two-second recycle is far too small. It should be at least
> higher than the socket_timeout. I mean, you have a 10-second timeout
> and a 2-second recycle!? Second, there is no need for cache_timeout
> on prefork, since you have only one cached worker (the default), so
> you don't need cachesize either.

OK, fixed that. The config is now:

...
worker.proc2111.port=12111
worker.proc2111.lbfactor=1
worker.proc2111.local_worker=1
worker.proc2111.socket_timeout=5
worker.proc2111.recycle_timeout=10
...

Regards,
-Michael
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
Michael Stiller wrote:
> OK, I'm running a fresh CVS tree now. The problem is *still* there,
> but it seems there are fewer sockets hanging around at the moment.

OK, we are getting somewhere :)

> OK, fixed that. The config is now:
> ...
> worker.proc2111.port=12111
> worker.proc2111.lbfactor=1
> worker.proc2111.local_worker=1
> worker.proc2111.socket_timeout=5
> worker.proc2111.recycle_timeout=10
> ...

Did you try commenting out the recycle_timeout? Also, what are you using for testing? ab, or...?

Regards,
Mladen.
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
> Did you try commenting out the recycle_timeout?

OK, will do.

> Also, what are you using for testing? ab, or...?

Eh, hm, it is in production now ;) So we use the clients for testing. 8) It's part of a 10-machine cluster. Just checking without the recycle_timeout now.

Regards,
Michael
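Concretely, "checking without the recycle_timeout" would presumably mean the same worker block as above with that one line commented out:

worker.proc2111.port=12111
worker.proc2111.lbfactor=1
worker.proc2111.local_worker=1
worker.proc2111.socket_timeout=5
# worker.proc2111.recycle_timeout=10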
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
Michael Stiller wrote:
>> Also, what are you using for testing? ab, or...?
> Eh, hm, it is in production now ;) So we use the clients for
> testing. 8)

You are really brave :)

> Just checking without the recycle_timeout now.

What happens if you issue 'apachectl restart'? Can you set 'JkLogLevel trace' and post me the log? I'm really interested :)

Mladen.
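For anyone wanting to reproduce this, trace logging is enabled with two mod_jk directives in the Apache configuration; the log file path below is only an example:

# httpd.conf (or the included mod_jk config file)
JkLogFile  /var/log/httpd/mod_jk.log
JkLogLevel trace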
mod_jk CLOSE_WAIT state and 1 byte recv buffer
Hi, I have the following problem with mod_jk from tomcat-connectors (1.2.5 - 1.2.8), including 1.2.9 (from CVS).

Environment: Apache 2.0.52 (forking model); the server OS is Linux 2.6.10-1.760_FC3smp (Fedora Core 3); mod_jk 1.2.9 (others tested as well).

After a while I get sockets stuck in CLOSE_WAIT state, and netstat shows 1 byte in the receive queue for these sockets. tcpdump shows that the backend (Jetty) half-closes the connection with a FIN. That FIN is ACKed by the mod_jk machine, but the connection is not closed (no FIN is sent).

Sample netstat output:

...
tcp        1      0 192.168.100.1:51003   192.168.170.8:32511   CLOSE_WAIT
tcp        1      0 192.168.100.1:53875   192.168.170.8:12522   CLOSE_WAIT
tcp        1      0 192.168.100.1:53619   192.168.170.8:12521   CLOSE_WAIT
...

Are there any known issues? Where do I start debugging this? What information is missing?

TIA,
Michael
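Some background on the symptom itself: CLOSE_WAIT means the remote end has sent its FIN but the local application has never called close(), so the socket stays in that state indefinitely, and any unread data remains counted in the receive queue. (On Linux the FIN itself occupies a sequence number and is included in netstat's Recv-Q accounting, so a CLOSE_WAIT socket can show Recv-Q = 1 even with no application data pending, which may be all the mysterious "1 byte" is.) The situation can be reproduced with a minimal self-contained C sketch; the loopback setup and 60-second sleep are just for the demonstration:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);

    /* "Backend" side: listen on a kernel-chosen loopback port. */
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 1);
    getsockname(lsock, (struct sockaddr *)&addr, &len);

    /* "mod_jk" side: connect to the backend. */
    int client = socket(AF_INET, SOCK_STREAM, 0);
    connect(client, (struct sockaddr *)&addr, sizeof(addr));
    int backend = accept(lsock, NULL, NULL);

    write(backend, "X", 1);   /* the 1 byte that ends up in Recv-Q */
    close(backend);           /* backend closes its end: sends FIN */

    /* The client neither reads the byte nor closes its socket, so it
     * now sits in CLOSE_WAIT with a nonzero Recv-Q, just like the
     * mod_jk sockets in the netstat output above. */
    printf("run 'netstat -tn' and look for CLOSE_WAIT on port %d\n",
           ntohs(addr.sin_port));
    sleep(60);
    return 0;
}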
Re: mod_jk CLOSE_WAIT state and 1 byte recv buffer
Hello,

I have a similar (the same?) problem, where I currently do not know what's really the root cause.

I was monitoring my webapp (Apache 2.0.52, Tomcat 5.0.28, mod_jk2; later Tomcat 5.5.7 with mod_jk 1.2.8) on port 80 (Apache/mod_jk) by requesting a given URL with curl. Sometimes the URL was not reachable; the output of curl then was:

curl: (6) name lookup timed out

The netstat output was sometimes OK (connections in state ESTABLISHED), but sometimes it looked like the following:

tcp        1      0 127.0.0.1:50195       127.0.0.1:8009        CLOSE_WAIT   12109/httpd
tcp        1      0 127.0.0.1:50203       127.0.0.1:8009        CLOSE_WAIT   12113/httpd
tcp        1      0 127.0.0.1:50228       127.0.0.1:8009        CLOSE_WAIT   12108/httpd
tcp        1      0 127.0.0.1:50215       127.0.0.1:8009        CLOSE_WAIT   12111/httpd
tcp        0      0 127.0.0.1:50258       127.0.0.1:8009        ESTABLISHED  12112/httpd
tcp        0      0 127.0.0.1:50262       127.0.0.1:8009        ESTABLISHED  3268/httpd
tcp        0      0 127.0.0.1:50240       127.0.0.1:8009        ESTABLISHED  12110/httpd
tcp        0      0 127.0.0.1:50244       127.0.0.1:8009        ESTABLISHED  13759/httpd
tcp        0      0 127.0.0.1:50249       127.0.0.1:8009        ESTABLISHED  12114/httpd
tcp        0      0 127.0.0.1:50254       127.0.0.1:8009        ESTABLISHED  12115/httpd
tcp        0      0 :::127.0.0.1:50238    :::127.0.0.1:3306     ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:50237    :::127.0.0.1:3306     ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:8009     :::127.0.0.1:50262    ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:8009     :::127.0.0.1:50258    ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:8009     :::127.0.0.1:50254    ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:8009     :::127.0.0.1:50249    ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:8009     :::127.0.0.1:50244    ESTABLISHED  12562/java
tcp        0      0 :::127.0.0.1:8009     :::127.0.0.1:50240    ESTABLISHED  12562/java

Then I added the hostname of the requested URL to /etc/hosts, and since then everything has been OK. Adding the hostname to /etc/hosts was my last action; after that I did no further debugging.

My environment:
2.6.10-1.760_FC3
tomcat5-5.5.7-2jpp
httpd-2.0.52-3.1
mod_jk-ap20-1.2.8-1jpp

Cheers,
Martin

On Fri, 2005-02-18 at 15:49 +0100, Michael Stiller wrote:
> Hi, I have the following problem with mod_jk from tomcat-connectors
> (1.2.5 - 1.2.8), including 1.2.9 (from CVS). After a while I get
> sockets stuck in CLOSE_WAIT state, and netstat shows 1 byte in the
> receive queue for these sockets.
> [...]

--
Martin Grotzke
Hohenesch 38, 22765 Hamburg
Tel. +49 (0) 40.39905668
Mobil +49 (0) 170.9365656
E-Mail: [EMAIL PROTECTED]
Online: http://www.javakaffee.de
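Worth noting: curl's exit code 6 means the hostname could not be resolved at all, so the failure Martin saw was in DNS rather than in Apache or mod_jk itself, which is why pinning the name locally helped. The fix amounts to one line in /etc/hosts; the address and hostname below are placeholders:

# /etc/hosts -- hypothetical entry; resolves the monitored hostname
# locally so the check no longer depends on the DNS server
192.0.2.10    www.example.com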