RE: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
> From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
> Subject: Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD

What part of "do not top-post" do you not understand?

> The Application port is configured in the catalina.properties file
>
> HTTP_PORT=8030
> JVM_ROUTE=dl360x3805.8030

Those are not tags that mean anything to Tomcat. If your application is using port 8030 on its own, it's your application's responsibility to clean up after itself properly.

- Chuck

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers.

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi,

The Application port is configured in the catalina.properties file:

# String cache configuration.
tomcat.util.buf.StringCache.byte.enabled=true
#tomcat.util.buf.StringCache.char.enabled=true
#tomcat.util.buf.StringCache.trainThreshold=50
#tomcat.util.buf.StringCache.cacheSize=5000
SHUTDOWN_PORT=-1
HTTP_PORT=8030
JVM_ROUTE=dl360x3805.8030

With regard to the HTTPD configuration, the members are configured in another file (balancer.conf), which is included in httpd.conf:

Include /etc/httpd/conf/balancer.conf

BalancerMember http://dl360x3806:8030/custcare_cmax/view/services retry=60 route=dl360x3806.8030

Regards,
Adhavan.M

On Thu, May 11, 2017 at 9:06 PM, Christopher Schultz <ch...@christopherschultz.net> wrote:
> Adhavan,
>
> On 5/11/17 11:32 AM, Adhavan Mathiyalagan wrote:
> > 8030 is the port where the application is running.
>
> Port 8030 appears nowhere in your configuration. Not in server.xml
> (where you used ${HTTP_PORT}, which could plausibly be 8030) and not
> in httpd.conf -- where you specify all port numbers for mod_proxy_http
> and none of them were port 8030.
> -chris
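Properties set in catalina.properties become system properties that server.xml can reference; whether HTTP_PORT is actually wired to a connector depends on a server.xml that was never posted in this thread. A hypothetical fragment showing how such a reference would look:

```xml
<!-- Hypothetical fragment (the actual server.xml was not posted):
     Tomcat substitutes ${HTTP_PORT} from catalina.properties at
     startup, so a connector listens on 8030 only if a reference
     like this exists. -->
<Connector port="${HTTP_PORT}"
           protocol="HTTP/1.1"
           redirectPort="8443" />
```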
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 11.05.2017 17:32, Adhavan Mathiyalagan wrote:
> Hi,
>
> 8030 is the port where the application is running.

/What/ application ? Is that a stand-alone application ?

For Tomcat, I cannot say, because it is not clear below what value ${HTTP_PORT} has. But from your front-end balancer, it looks like it is forwarding to a series of ports, none of which are 8030.

And please stop top-posting.
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/11/17 11:32 AM, Adhavan Mathiyalagan wrote:
> 8030 is the port where the application is running.

Port 8030 appears nowhere in your configuration. Not in server.xml (where you used ${HTTP_PORT}, which could plausibly be 8030) and not in httpd.conf -- where you specify all port numbers for mod_proxy_http and none of them were port 8030.

-chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/11/17 10:57 AM, Adhavan Mathiyalagan wrote:
> *Tomcat Configuration*
>
> HTTP/1.1 and APR
>
> connectionTimeout="2"
> redirectPort="8443" maxHttpHeaderSize="8192" />

Okay, so you have a number of defaults taking effect, including:

  maxConnections="8192"
  maxKeepAliveRequests="100"
  maxThreads="200"

... and you have no <Executor> configured, so the default executor will be used, which performs no reduction of threads when they are idle.

> *HTTPD Configuration*
>
> Timeout 60
> KeepAlive Off

Wow, really?

> MaxKeepAliveRequests 100
> KeepAliveTimeout 15

That's ... confusing. Why do you disable KeepAlives, then configure certain aspects of KeepAlive?

> StartServers 256
> MinSpareServers 100
> MaxSpareServers 500
> ServerLimit 2000
> MaxClients 2000
> MaxRequestsPerChild 4000
>
> StartServers 4
> MaxClients 300
> MinSpareThreads 25
> MaxSpareThreads 75
> ThreadsPerChild 25
> MaxRequestsPerChild 0

Which MPM is actually in use? prefork or worker? For prefork, each of your httpd instances can generate 2000 simultaneous connections to Tomcat. For the worker MPM, each of your httpd instances can generate 300 simultaneous connections to Tomcat.

Tomcat is configured to handle a maximum of 200 simultaneous requests, so you are very likely to have a situation where your web server is handling far more load than Tomcat can. If you have a fairly high percentage of web-server-only requests, then this is probably okay. But if the overwhelming majority of requests to the web server need to be proxied over to your application server, then you are going to have problems. The problem gets worse if you have more than one httpd instance.

> ServerName *
> Timeout 300

You have conflicting settings for "Timeout". You may want to review those.
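To make the capacity mismatch concrete, here is a sketch of a connector sized closer to what the front end can generate. The values are illustrative only, not taken from the poster's configuration:

```xml
<!-- Illustrative sketch: raising maxThreads toward the proxy's
     worst-case simultaneous connection count (300 per worker-MPM
     httpd instance in the posted config) reduces the chance of
     requests queuing on accepted-but-unserved connections. -->
<Connector port="${HTTP_PORT}"
           protocol="HTTP/1.1"
           maxThreads="400"
           maxConnections="8192"
           maxKeepAliveRequests="100" />
```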
> ProxyPreserveHost On
> ProxyRequests Off
>
> <Proxy balancer://wsiservices>
> BalancerMember http://dl360x3799:8011/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8011
> BalancerMember http://dl360x3799:8012/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8012
> ProxySet stickysession=JSESSIONID
> ProxySet lbmethod=byrequests

Odd to have only "hot standbys" in a cluster.

> SetEnv force-proxy-request-1.0 1

This is a horrible idea for performance, mostly because it disables HTTP KeepAlives. Why are you doing this?

> SetEnv proxy-nokeepalive 1

This is a horrible idea for performance. Why are you disabling KeepAlive for proxied requests?

-chris
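A minimal sketch of the httpd-side change being suggested, assuming HTTP/1.1 keep-alive between httpd and the Tomcat backends is actually wanted:

```apache
# Sketch: allow persistent connections and let mod_proxy speak
# HTTP/1.1 to the backends instead of forcing HTTP/1.0.
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

# Delete both of these lines; they force HTTP/1.0 and disable
# connection reuse on every proxied request:
#   SetEnv force-proxy-request-1.0 1
#   SetEnv proxy-nokeepalive 1
```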
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi,

8030 is the port where the application is running.

Regards,
Adhavan.M

On Thu, May 11, 2017 at 8:53 PM, André Warnier (tomcat) wrote:
> Hi.
> Your netstat screenshot showed the CLOSE_WAIT connections on port 8030,
> like :
>
> tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT
>
> But I do not see any mention of port 8030 in your configs above. So what
> is listening there ?
> ("netstat --tcp -aopn" would show this)
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 11.05.2017 16:57, Adhavan Mathiyalagan wrote:
> Hi Chris,
>
> *Tomcat Configuration*
> ...
> *HTTPD Configuration*
> ...

Hi.
Your netstat screenshot showed the CLOSE_WAIT connections on port 8030, like :

tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT

But I do not see any mention of port 8030 in your configs above. So what is listening there ?
("netstat --tcp -aopn" would show this)
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi Chris,

*Tomcat Configuration*

HTTP/1.1 and APR

${catalina.base}/conf/web.xml

*HTTPD Configuration*

ServerTokens OS
ServerRoot "/etc/httpd"

PidFile run/httpd.pid

Timeout 60
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15

StartServers 256
MinSpareServers 100
MaxSpareServers 500
ServerLimit 2000
MaxClients 2000
MaxRequestsPerChild 4000

StartServers 4
MaxClients 300
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0

ServerName *
Timeout 300
ProxyPreserveHost On
ProxyRequests Off

BalancerMember http://dl360x3799:8011/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8011
BalancerMember http://dl360x3799:8012/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8012
ProxySet stickysession=JSESSIONID
ProxySet lbmethod=byrequests

ProxyPass /custcare_cmax/view/services balancer://wsiservices
ProxyPassReverse /custcare_cmax/view/services balancer://wsiservices
ProxyPass /admx_ecms/view/services balancer://wsiservices
ProxyPassReverse /admx_ecms/view/services balancer://wsiservices

BalancerMember http://dl360x3806:8035/custcare_cmax/services/ws_cma3 retry=60 route=dl360x3806.8035
BalancerMember http://dl360x3806:8036/custcare_cmax/services/ws_cma3 retry=60 route=dl360x3806.8036
ProxySet stickysession=JSESSIONID
ProxySet lbmethod=byrequests

ProxyPass /custcare_cmax/services/ws_cma3 balancer://wsiinstances
ProxyPassReverse /custcare_cmax/services/ws_cma3 balancer://wsiinstances
ProxyPass /admx_ecms/services/ws_cma3 balancer://wsiinstances
ProxyPassReverse /admx_ecms/services/ws_cma3 balancer://wsiinstances

BalancerMember http://dl360x3799:8011/admx_ecms retry=60 status=+H route=dl360x3799.8011
BalancerMember http://dl360x3799:8012/admx_ecms retry=60 status=+H route=dl360x3799.8012
ProxySet stickysession=JSESSIONID
ProxySet lbmethod=byrequests

ProxyPass /admx_ecms balancer://admxcluster
ProxyPassReverse /admx_ecms balancer://admxcluster

BalancerMember http://dl360x3799:8021/custcare_cmax retry=60 status=+H route=dl360x3799.8021
BalancerMember http://dl360x3799:8022/custcare_cmax retry=60 status=+H route=dl360x3799.8022
BalancerMember http://dl360x3806:8035/custcare_cmax retry=60 route=dl360x3806.8035
BalancerMember http://dl360x3806:8036/custcare_cmax retry=60 route=dl360x3806.8036
ProxySet stickysession=JSESSIONID
ProxySet lbmethod=byrequests

ProxyPass /custcare_cmax balancer://cmaxcluster
ProxyPassReverse /custcare_cmax balancer://cmaxcluster

BalancerMember http://dl360x3805:8089/mx route=dl360x3806.8089
ProxySet stickysession=JSESSIONID
ProxySet lbmethod=byrequests

ProxyPass /mx balancer://mxcluster
ProxyPassReverse /mx balancer://mxcluster

SetHandler balancer-manager

SetHandler server-status

ExtendedStatus On
TraceEnable Off
SetEnv force-proxy-request-1.0 1
SetEnv proxy-nokeepalive 1

On Thu, May 11, 2017 at 7:20 PM, Christopher Schultz <ch...@christopherschultz.net> wrote:
> Adhavan,
>
> On 5/11/17 9:30 AM, Adhavan Mathiyalagan wrote:
> > The connections in the CLOSE_WAIT are owned by the Application
> > /Tomcat process.
>
> Okay. Can you please post your configuration on both httpd and Tomcat
> sides? If it's not clear from your configuration, please tell us which
> type of connector you are using (e.g. AJP/HTTP and BIO/NIO/APR).
> -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi Chris,

The netstat output below shows the CLOSE_WAIT connections:

tcp      509    0   :::10.61.137.49:8030   :::10.61.137.47:60903   CLOSE_WAIT
tcp      491    0   :::10.61.137.49:8030   :::10.61.137.47:24856   CLOSE_WAIT
tcp      360    0   :::10.61.137.49:8030   :::10.61.137.47:12328   CLOSE_WAIT
tcp      511    0   :::10.61.137.49:8030   :::10.61.137.47:24710   CLOSE_WAIT
tcp      479    0   :::10.61.137.49:8030   :::10.61.137.47:33175   CLOSE_WAIT
tcp      361    0   :::10.61.137.49:8030   :::10.61.137.47:58084   CLOSE_WAIT
tcp      531    0   :::10.61.137.49:8030   :::10.61.137.47:42030   CLOSE_WAIT
tcp      971    0   :::10.61.137.49:8030   :::10.61.137.47:17692   CLOSE_WAIT
tcp      361    0   :::10.61.137.49:8030   :::10.61.137.47:60303   CLOSE_WAIT

10.61.137.49 -> Application IP
10.61.137.47 -> Load balancer IP

Regards,
Adhavan.M

On Thu, May 11, 2017 at 7:06 PM, André Warnier (tomcat) wrote:

> On 11.05.2017 15:30, Adhavan Mathiyalagan wrote:
>> Hi Chris,
>>
>> The connections in the CLOSE_WAIT are owned by the Application/Tomcat
>> process.
>
> Can you provide an example output of the "netstat" command that shows
> such connections? (not all, just some)
> (copy and paste it right here) ->
>
>> Regards,
>> Adhavan.M
>>
>> On Thu, May 11, 2017 at 6:53 PM, Christopher Schultz <
>> ch...@christopherschultz.net> wrote:
>>
>>> Adhavan,
>>>
>>> On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
>>> > Team,
>>> >
>>> > Tomcat version: 8.0.18
>>> > Apache HTTPD version: 2.2
>>> >
>>> > There are a lot of CLOSE_WAIT connections being created at the
>>> > application (Tomcat) when traffic is routed through the Apache HTTPD
>>> > load balancer to the application running in the Tomcat container.
>>> > This leads to slowness on the port where the application is running,
>>> > and eventually the application is not accessible through that port.
>>>
>>> Please clarify: are the connections in the CLOSE_WAIT state owned by
>>> the httpd process or the Tomcat process?
>>> > In case of the traffic directly reaching the application port without
>>> > HTTPD (load balancer), there are no CLOSE_WAIT connections created and
>>> > the application can handle the load seamlessly.
>>>
>>> -chris
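As an aside, netstat output like the above can be tallied per local endpoint with a short awk pipeline. A non-authoritative sketch; the field positions assume Linux netstat's six-column TCP lines, and the sample data mirrors the output quoted above:

```shell
# Count CLOSE_WAIT sockets per local address:port.
netstat_sample='tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT
tcp 491 0 :::10.61.137.49:8030 :::10.61.137.47:24856 CLOSE_WAIT
tcp 0 0 :::10.61.137.49:8030 :::10.61.137.47:50000 ESTABLISHED'

printf '%s\n' "$netstat_sample" |
  awk '$6 == "CLOSE_WAIT" { n[$4]++ } END { for (a in n) print a, n[a] }'
# → :::10.61.137.49:8030 2
```

Against a live system the input would come from `netstat -ant` (or `ss -tan`, whose column layout differs).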
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Dear André,

Oops - yes, I confused the two. FIN ;)

Guido

On 11.05.2017 13:37, André Warnier (tomcat) wrote:
> I believe that the explanation given below by Guido is incorrect and
> misleading, as it seems to confuse CLOSE_WAIT with TIME_WAIT.
> See: TCP/IP State Transition Diagram (RFC 793)
>
> CLOSE-WAIT represents waiting for a connection termination request from
> the local user.
>
> TIME-WAIT represents waiting for enough time to pass to be sure the
> remote TCP received the acknowledgment of its connection termination
> request.
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/11/17 9:30 AM, Adhavan Mathiyalagan wrote:
> The connections in the CLOSE_WAIT are owned by the Application
> /Tomcat process.

Okay. Can you please post your configuration on both the httpd and Tomcat
sides? If it's not clear from your configuration, please tell us which
type of connector you are using (e.g. AJP/HTTP and BIO/NIO/APR).

-chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 11.05.2017 15:30, Adhavan Mathiyalagan wrote:
> Hi Chris,
>
> The connections in the CLOSE_WAIT are owned by the Application/Tomcat
> process.

Can you provide an example output of the "netstat" command that shows such
connections? (not all, just some)
(copy and paste it right here) ->

> Regards,
> Adhavan.M
>
> On Thu, May 11, 2017 at 6:53 PM, Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
>> Adhavan,
>>
>> On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
>> > Team,
>> >
>> > Tomcat version: 8.0.18
>> > Apache HTTPD version: 2.2
>> >
>> > There are a lot of CLOSE_WAIT connections being created at the
>> > application (Tomcat) when traffic is routed through the Apache HTTPD
>> > load balancer to the application running in the Tomcat container.
>> > This leads to slowness on the port where the application is running,
>> > and eventually the application is not accessible through that port.
>>
>> Please clarify: are the connections in the CLOSE_WAIT state owned by the
>> httpd process or the Tomcat process?
>>
>> > In case of the traffic directly reaching the application port without
>> > HTTPD (load balancer), there are no CLOSE_WAIT connections created and
>> > the application can handle the load seamlessly.
>> -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi Chris,

The connections in the CLOSE_WAIT are owned by the Application/Tomcat
process.

Regards,
Adhavan.M

On Thu, May 11, 2017 at 6:53 PM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Adhavan,
>
> On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
> > Team,
> >
> > Tomcat version: 8.0.18
> > Apache HTTPD version: 2.2
> >
> > There are a lot of CLOSE_WAIT connections being created at the
> > application (Tomcat) when traffic is routed through the Apache HTTPD
> > load balancer to the application running in the Tomcat container.
> > This leads to slowness on the port where the application is running,
> > and eventually the application is not accessible through that port.
>
> Please clarify: are the connections in the CLOSE_WAIT state owned by the
> httpd process or the Tomcat process?
>
> > In case of the traffic directly reaching the application port without
> > HTTPD (load balancer), there are no CLOSE_WAIT connections created and
> > the application can handle the load seamlessly.
> -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
> Team,
>
> Tomcat version: 8.0.18
> Apache HTTPD version: 2.2
>
> There are a lot of CLOSE_WAIT connections being created at the
> application (Tomcat) when traffic is routed through the Apache HTTPD
> load balancer to the application running in the Tomcat container.
> This leads to slowness on the port where the application is running,
> and eventually the application is not accessible through that port.

Please clarify: are the connections in the CLOSE_WAIT state owned by the
httpd process or the Tomcat process?

> In case of the traffic directly reaching the application port without
> HTTPD (load balancer), there are no CLOSE_WAIT connections created and
> the application can handle the load seamlessly.

-chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
I believe that the explanation given below by Guido is incorrect and
misleading, as it seems to confuse CLOSE_WAIT with TIME_WAIT.
See: TCP/IP State Transition Diagram (RFC 793)

CLOSE-WAIT represents waiting for a connection termination request from the
local user.

TIME-WAIT represents waiting for enough time to pass to be sure the remote
TCP received the acknowledgment of its connection termination request.

Thus, CLOSE_WAIT is a /normal/ state of a TCP/IP connection. There is no
timeout for it that can be set by any TCP/IP parameter. Basically it means:
the remote client has closed this connection, and the local OS is waiting
for the local application to also close its side of the connection. And the
local OS is going to wait - for an indefinite amount of time - until that
happens (or until the process which still has this connection open exits).
And in this case, the process which has this connection open is the JVM
which runs Tomcat (which by definition never exits until you terminate
Tomcat).

Many connections in the CLOSE_WAIT state mean, in most cases, that the
application running under Tomcat is not closing its sockets properly.
(This can happen in some "devious" ways that are not easy to diagnose
immediately.)

Try the following: when you notice a high number of connections in the
CLOSE_WAIT state, force the JVM which runs Tomcat to do a major garbage
collection. (I do this using jmxsh, but there are several other ways to do
it.) Then check how many CLOSE_WAIT connections are still there.

On 11.05.2017 11:03, Adhavan Mathiyalagan wrote:
> Thanks Guido!
>
> On Thu, May 11, 2017 at 12:02 PM, Jäkel, Guido wrote:
>> Dear Adhavan,
>>
>> I think this is quite normal: the browser clients "in front" will reuse
>> connections (using keep-alive at the TCP level), but an in-between load
>> balancer may not work or be configured that way, and will open a new
>> connection for each request to the backend.
>>
>> Then you'll see a lot of sockets in the TCP/IP close-down workflow
>> between the load balancer and the backend server. Please recall that in
>> TCP/IP, even the port of a "well-closed connection" will be held for
>> some time to handle late (duplicate) packets. Think about a duplicated,
>> delayed RST packet - this should not close the next connection on this
>> port.
>>
>> Because this situation is very unlikely or even impossible on a local
>> area network, you may adjust the TCP stack settings of your server to
>> use much lower protection times (on the order of seconds), and also
>> adjust others. On Linux, you may also expand the range of ports used
>> for connections.
>>
>> BTW: If you have a dedicated stateful packet-inspecting firewall
>> between your LB and the server, you also have to take a look at this.
>>
>> Said that, one more cent about the protocol between the LB and Tomcat:
>> I don't know about HTTP, but if you use AJP (with mod_jk) you may
>> configure it to keep and reuse connections to the Tomcat backend(s).
>>
>> Guido
>>
>> >-----Original Message-----
>> >From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
>> >Sent: Wednesday, May 10, 2017 6:32 PM
>> >To: Tomcat Users List
>> >Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
>> >
>> >Team,
>> >
>> >Tomcat version: 8.0.18
>> >Apache HTTPD version: 2.2
>> >
>> >There are a lot of CLOSE_WAIT connections being created at the
>> >application (Tomcat) when traffic is routed through the Apache HTTPD
>> >load balancer to the application running in the Tomcat container.
>> >This leads to slowness on the port where the application is running,
>> >and eventually the application is not accessible through that port.
>> >
>> >In case of the traffic directly reaching the application port without
>> >HTTPD (load balancer), there are no CLOSE_WAIT connections created and
>> >the application can handle the load seamlessly.
>> >
>> >Thanks in advance for the support.
>> >
>> >Regards,
>> >Adhavan.M
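André's before/after-GC check can be scripted in a few lines of shell. A hedged sketch: the PID is hypothetical, and the live commands (which assume a JDK's `jcmd` and iproute2's `ss` on the PATH) are left as comments, with the counting helper exercised on canned input instead:

```shell
# Count sockets in CLOSE_WAIT from netstat/ss-style text on stdin.
count_close_wait() {
  grep -c 'CLOSE_WAIT'
}

# Live usage (illustrative):
#   PID=12345                        # hypothetical Tomcat JVM pid
#   before=$(ss -tan | count_close_wait)
#   jcmd "$PID" GC.run               # force a major collection
#   after=$(ss -tan | count_close_wait)
#   echo "CLOSE_WAIT: before=$before after=$after"

printf 'tcp 1 0 a b CLOSE_WAIT\ntcp 0 0 a b LISTEN\n' | count_close_wait
# → 1
```

If the count drops sharply after the forced collection, sockets were being closed only by finalizers during GC - a strong hint that application code is leaking them rather than closing them explicitly.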
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Thanks Guido!

On Thu, May 11, 2017 at 12:02 PM, Jäkel, Guido wrote:
> Dear Adhavan,
>
> I think this is quite normal: the browser clients "in front" will reuse
> connections (using keep-alive at the TCP level), but an in-between load
> balancer may not work or be configured that way, and will open a new
> connection for each request to the backend.
>
> Then you'll see a lot of sockets in the TCP/IP close-down workflow
> between the load balancer and the backend server. Please recall that in
> TCP/IP, even the port of a "well-closed connection" will be held for some
> time to handle late (duplicate) packets. Think about a duplicated,
> delayed RST packet - this should not close the next connection on this
> port.
>
> Because this situation is very unlikely or even impossible on a local
> area network, you may adjust the TCP stack settings of your server to use
> much lower protection times (on the order of seconds), and also adjust
> others. On Linux, you may also expand the range of ports used for
> connections.
>
> BTW: If you have a dedicated stateful packet-inspecting firewall between
> your LB and the server, you also have to take a look at this.
>
> Said that, one more cent about the protocol between the LB and Tomcat:
> I don't know about HTTP, but if you use AJP (with mod_jk) you may
> configure it to keep and reuse connections to the Tomcat backend(s).
>
> Guido
>
> >-----Original Message-----
> >From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
> >Sent: Wednesday, May 10, 2017 6:32 PM
> >To: Tomcat Users List
> >Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
> >
> >Team,
> >
> >Tomcat version: 8.0.18
> >Apache HTTPD version: 2.2
> >
> >There are a lot of CLOSE_WAIT connections being created at the
> >application (Tomcat) when traffic is routed through the Apache HTTPD
> >load balancer to the application running in the Tomcat container.
> >This leads to slowness on the port where the application is running,
> >and eventually the application is not accessible through that port.
> >
> >In case of the traffic directly reaching the application port without
> >HTTPD (load balancer), there are no CLOSE_WAIT connections created and
> >the application can handle the load seamlessly.
> >
> >Thanks in advance for the support.
> >
> >Regards,
> >Adhavan.M
RE: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Dear Adhavan,

I think this is quite normal: the browser clients "in front" will reuse
connections (using keep-alive at the TCP level), but an in-between load
balancer may not work or be configured that way, and will open a new
connection for each request to the backend.

Then you'll see a lot of sockets in the TCP/IP close-down workflow between
the load balancer and the backend server. Please recall that in TCP/IP,
even the port of a "well-closed connection" will be held for some time to
handle late (duplicate) packets. Think about a duplicated, delayed RST
packet - this should not close the next connection on this port.

Because this situation is very unlikely or even impossible on a local area
network, you may adjust the TCP stack settings of your server to use much
lower protection times (on the order of seconds), and also adjust others.
On Linux, you may also expand the range of ports used for connections.

BTW: If you have a dedicated stateful packet-inspecting firewall between
your LB and the server, you also have to take a look at this.

Said that, one more cent about the protocol between the LB and Tomcat:
I don't know about HTTP, but if you use AJP (with mod_jk) you may configure
it to keep and reuse connections to the Tomcat backend(s).

Guido

>-----Original Message-----
>From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
>Sent: Wednesday, May 10, 2017 6:32 PM
>To: Tomcat Users List
>Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
>
>Team,
>
>Tomcat version: 8.0.18
>Apache HTTPD version: 2.2
>
>There are a lot of CLOSE_WAIT connections being created at the
>application (Tomcat) when traffic is routed through the Apache HTTPD
>load balancer to the application running in the Tomcat container.
>This leads to slowness on the port where the application is running,
>and eventually the application is not accessible through that port.
>
>In case of the traffic directly reaching the application port without
>HTTPD (load balancer), there are no CLOSE_WAIT connections created and
>the application can handle the load seamlessly.
>
>Thanks in advance for the support.
>
>Regards,
>Adhavan.M
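On Linux, the TCP-stack adjustments Guido describes would be sysctl settings roughly like the following. The values are illustrative, not recommendations - and note that, as André points out elsewhere in the thread, these knobs govern TIME_WAIT and FIN-WAIT hold times, not the CLOSE_WAIT state that is the actual symptom here:

```
# /etc/sysctl.d/99-tcp-tuning.conf  (illustrative values)
net.ipv4.tcp_fin_timeout = 15          # shorten the FIN-WAIT-2 hold time (seconds)
net.ipv4.tcp_tw_reuse = 1              # allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.ip_local_port_range = 10240 65535   # widen the ephemeral port range
```

Such a file would be applied with `sysctl --system` (or `sysctl -p <file>`).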
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 10/05/17 17:32, Adhavan Mathiyalagan wrote:
> Team,
>
> Tomcat version: 8.0.18

That is over two years old. Have you considered updating?

> Apache HTTPD version: 2.2
>
> There are a lot of CLOSE_WAIT connections being created at the
> application (Tomcat) when traffic is routed through the Apache HTTPD
> load balancer to the application running in the Tomcat container.
> This leads to slowness on the port where the application is running,
> and eventually the application is not accessible through that port.
>
> In case of the traffic directly reaching the application port without
> HTTPD (load balancer), there are no CLOSE_WAIT connections created and
> the application can handle the load seamlessly.
>
> Thanks in advance for the support.

Relevant configuration settings please.

Mark