Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-20 Thread Lisa Woodring
Chris,

On Thu, Nov 20, 2014 at 3:16 PM, Christopher Schultz
 wrote:
>
> Lisa,
>
> On 11/19/14 1:36 PM, Lisa Woodring wrote:
>> On Tue, Nov 18, 2014 at 2:43 PM, Christopher Schultz
>>  wrote:
>>> -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
>>>
>>> Lisa,
>>>
>>> On 11/18/14 11:52 AM, Lisa Woodring wrote:
>>>> We recently upgraded from Tomcat 6.0.29 to Tomcat 8.0.14.  Everything
>>>> appears to be working fine, except that Tomcat is keeping a high # of
>>>> threads (in TIMED_WAITING state) -- and the CPU has a high load & low
>>>> idle time.  We are currently running Tomcat8 on 2 internal test
>>>> machines, where we also monitor their statistics.  In order to monitor
>>>> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
>>>> monitoring software opens a port to verify that this works -- but then
>>>> does not follow that up with an actual request.  This happens every 2
>>>> minutes.  We have noticed that the high thread/load activity on Tomcat
>>>> coincides with this monitoring.  If we disable our monitoring, the
>>>> issue does not happen.  We have enabled/disabled the monitoring on
>>>> both machines over several days (and there is only very minimal,
>>>> sometimes non-existent, internal traffic otherwise) -- in order to
>>>> verify that the monitoring is really the issue.  Once these threads
>>>> ramp up, they stay there or keep increasing.  We had no issues running
>>>> on Tomcat 6 (the thread count stayed low, low load, high idle time).
>>>>
>>>> The thread backtraces for these threads look like this:
>>>> -
>>>> Thread[catalina-exec-24,5,main]
>>>>  at sun.misc.Unsafe.park(Native Method)
>>>>  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>  at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>>>>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:85)
>>>>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:31)
>>>>  at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
>>>>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>>>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>>>>  at java.lang.Thread.run(Thread.java:745)
>>>> -
>>>> The thread count grows over time (goes up to 130-150 threads after 2
>>>> hours).  Setting 'connectionTimeout' (as opposed to the default of
>>>> never timing out) does seem to help "some" -- the # of threads isn't
>>>> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
>>>> Idle % is still not good -- was only 10% idle with default tomcat
>>>> settings, is something like 40% idle with current settings.  Also
>>>> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
>>>> but this did not make any difference.
>>>>
>>>>
>>>> Is there some configuration we can set to make Tomcat tolerant of this
>>>> monitoring?  (We have tried setting connectionTimeout &
>>>> keepAliveTimeout on the Connector.  And we have tried putting the
>>>> Connector behind an Executor with maxIdleTime.)
>>>> OR, should we modify our monitoring somehow?  And if so, suggestions?
>>>>
>>>>
>>>> * Running on Linux CentOS release 5.9
>>>> * running Apache in front of Tomcat for authentication, using mod_jk
>>>> * Tomcat 8.0.14
>>>>
>>>> relevant sections of tomcat/conf/server.xml:
>>>>
>>>> <Executor ... maxThreads="250" minSpareThreads="20" maxIdleTime="60000" />
>>>>
>>>> <Connector ... protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
>>>>
>>>> <Connector ... protocol="AJP/1.3" redirectPort="8443" maxThreads="256" connectionTimeout="3000" keepAliveTimeout="60000" />
>>>
>>> Both of these connectors should be NIO connectors, so they should
>>> not block while waiting for more input. That means that you
>>> should not run out of threads (which is good), but those
>>> connections will sit in the poller queue for a long time (20
>>> seconds for HTTP, 3 seconds for AJP) and then sit in the acceptor
>>> queue for the same amount of time (to check for a "next"
>>> keepAlive request). Are you properly shutting-down the connection
>>> on the client end every 2 minutes?
>>>
>>
>>
>>
>> The monitoring software is trying to test that the AJP port
>> itself is actually accepting connections.  With Apache in front in
>> a production system, it could forward the actual request to one of
>> several Tomcat boxes -- but we don't know which one from the outside.

Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-20 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Lisa,

On 11/19/14 1:36 PM, Lisa Woodring wrote:
> On Tue, Nov 18, 2014 at 2:43 PM, Christopher Schultz 
>  wrote:
>> -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
>> 
>> Lisa,
>> 
>> On 11/18/14 11:52 AM, Lisa Woodring wrote:
>>> We recently upgraded from Tomcat 6.0.29 to Tomcat 8.0.14.  Everything
>>> appears to be working fine, except that Tomcat is keeping a high # of
>>> threads (in TIMED_WAITING state) -- and the CPU has a high load & low
>>> idle time.  We are currently running Tomcat8 on 2 internal test
>>> machines, where we also monitor their statistics.  In order to monitor
>>> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
>>> monitoring software opens a port to verify that this works -- but then
>>> does not follow that up with an actual request.  This happens every 2
>>> minutes.  We have noticed that the high thread/load activity on Tomcat
>>> coincides with this monitoring.  If we disable our monitoring, the
>>> issue does not happen.  We have enabled/disabled the monitoring on
>>> both machines over several days (and there is only very minimal,
>>> sometimes non-existent, internal traffic otherwise) -- in order to
>>> verify that the monitoring is really the issue.  Once these threads
>>> ramp up, they stay there or keep increasing.  We had no issues running
>>> on Tomcat 6 (the thread count stayed low, low load, high idle time).
>>>
>>> The thread backtraces for these threads look like this:
>>> -
>>> Thread[catalina-exec-24,5,main]
>>>  at sun.misc.Unsafe.park(Native Method)
>>>  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>  at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>>>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:85)
>>>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:31)
>>>  at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
>>>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>>>  at java.lang.Thread.run(Thread.java:745)
>>> -
>>> The thread count grows over time (goes up to 130-150 threads after 2
>>> hours).  Setting 'connectionTimeout' (as opposed to the default of
>>> never timing out) does seem to help "some" -- the # of threads isn't
>>> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
>>> Idle % is still not good -- was only 10% idle with default tomcat
>>> settings, is something like 40% idle with current settings.  Also
>>> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
>>> but this did not make any difference.
>>>
>>>
>>> Is there some configuration we can set to make Tomcat tolerant of this
>>> monitoring?  (We have tried setting connectionTimeout &
>>> keepAliveTimeout on the Connector.  And we have tried putting the
>>> Connector behind an Executor with maxIdleTime.)
>>> OR, should we modify our monitoring somehow?  And if so, suggestions?
>>>
>>>
>>> * Running on Linux CentOS release 5.9
>>> * running Apache in front of Tomcat for authentication, using mod_jk
>>> * Tomcat 8.0.14
>>>
>>> relevant sections of tomcat/conf/server.xml:
>>>
>>> <Executor ... maxThreads="250" minSpareThreads="20" maxIdleTime="60000" />
>>>
>>> <Connector ... protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
>>>
>>> <Connector ... protocol="AJP/1.3" redirectPort="8443" maxThreads="256" connectionTimeout="3000" keepAliveTimeout="60000" />
>> 
>> Both of these connectors should be NIO connectors, so they should
>> not block while waiting for more input. That means that you
>> should not run out of threads (which is good), but those
>> connections will sit in the poller queue for a long time (20
>> seconds for HTTP, 3 seconds for AJP) and then sit in the acceptor
>> queue for the same amount of time (to check for a "next"
>> keepAlive request). Are you properly shutting-down the connection
>> on the client end every 2 minutes?
>> 
> 
> 
> 
> The monitoring software is trying to test that the AJP port
> itself is actually accepting connections.  With Apache in front in
> a production system, it could forward the actual request to one of 
> several Tomcat boxes -- but we don't know which one from the
> outside.

Given that the whole point is to test whether the AJP connection is
avai

Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-20 Thread Frederik Nosi

On 11/19/2014 09:27 PM, Lisa Woodring wrote:

>>> Actually, I received a little clarification on the monitoring software
>>> (I didn't write it).  What it's trying to test is that the AJP port
>>> itself is actually accepting connections.  With Apache in front in a
>>> production system, it could forward the actual request to one of
>>> several Tomcat boxes -- but we don't know which one from the outside.
>>> The monitoring software is trying to test -- for each Tomcat instance
>>> -- if it is accepting connections.  It used to send an "nmap" request,
>>> but now sends essentially a "tcp ping" -- gets a response & moves on.
>>
>> In my case (homemade monitoring) I chose to check mod_jk's log; after all,
>> mod_jk does indeed check the state of the AJP connector in Tomcat.
>>
>> Hope this helps.
>> [... ]

> Thanks for the idea.  Can you tell me what you specifically look for
> in the "mod_jk_log" file?  Do you look for the presence of something?
> or the absence of something?

grep out cping,

> I only see 'negative' events in the logfile.  For example,
> "all endpoints are disconnected, detected by connect check(1),
> cping(0), send(0)"
> which, evidently, is when Tomcat releases a connection on its end.
> (I set JkLogLevel = DEBUG, but still don't see any messages that look
> like what I would want...)

Just ignore the cping part. I categorize the failure modes in two:
client error (the user closes the browser window or is slow), for example:



[Thu Nov 20 10:19:36 2014] [29858:1626331456] [info] 
service::jk_lb_worker.c (1388): service failed, worker p3 is in local 
error state
[Thu Nov 20 10:19:36 2014] [29858:1626331456] [info] 
service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. 
Client failed in the middle of request, we can't recover to another 
instance.
[Thu Nov 20 10:19:36 2014] [29858:1626331456] [info] 
jk_handler::mod_jk.c (2611): Aborting connection for worker=worker_p


Or a server error, which can be because of a timeout (backend too busy):

[Thu Nov 20 10:19:54 2014] [31475:1317062976] [error] 
ajp_get_reply::jk_ajp_common.c (2020): (p7) Timeout with waiting reply 
from tomcat. Tomcat is down, stopped or network problems (errno=110)
[Thu Nov 20 10:19:54 2014] [31475:1317062976] [info] 
ajp_service::jk_ajp_common.c (2540): (p7) sending request to tomcat 
failed (recoverable), because of reply timeout (attempt=1)
[Thu Nov 20 10:19:54 2014] [31475:1317062976] [error] 
ajp_service::jk_ajp_common.c (2559): (p7) connecting to tomcat failed.


Another server error is connection refused, when the backend is extra
busy (on Linux, when net.ipv4.tcp_max_syn_backlog sockets are already waiting
on the TCP stack) or Tomcat is down. I don't have an example of this right now, though.



Anyway, this way you use mod_jk's logic instead of having to create an
ad hoc one.  This is at JkLogLevel notice; no need to enable debug.
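
A minimal sketch of that kind of check in Java; the log path and the exact
patterns are assumptions taken from the excerpts above, not from Frederik's
actual setup:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Scans a mod_jk log for the failure patterns quoted above and reports any hits.
// The default log location is an assumption; pass your own path as the first argument.
public class ModJkLogCheck {

    private static final String[] PATTERNS = {
        "is in local error state",       // client aborted mid-request
        "Timeout with waiting reply",    // backend too slow or down
        "connecting to tomcat failed"    // reply timeout / connection refused
    };

    public static void main(String[] args) throws IOException {
        String logFile = args.length > 0 ? args[0] : "/var/log/httpd/mod_jk.log";
        List<String> lines = Files.readAllLines(Paths.get(logFile));

        long hits = 0;
        for (String line : lines) {
            for (String pattern : PATTERNS) {
                if (line.contains(pattern)) {
                    System.out.println("ALERT: " + line);
                    hits++;
                    break;
                }
            }
        }
        System.out.println(hits == 0 ? "mod_jk log looks clean" : hits + " suspicious lines");
        System.exit(hits == 0 ? 0 : 1);
    }
}

A real monitor would only look at lines newer than its last run, but the
patterns are the part that matters here.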







Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-19 Thread Lisa Woodring
>> Actually, I received a little clarification on the monitoring software
>> (I didn't write it).  What it's trying to test is that the AJP port
>> itself is actually accepting connections.  With Apache in front in a
>> production system, it could forward the actual request to one of
>> several Tomcat boxes -- but we don't know which one from the outside.
>> The monitoring software is trying to test -- for each Tomcat instance
>> -- if it is accepting connections.  It used to send an "nmap" request,
>> but now sends essentially a "tcp ping" -- gets a response & moves on.
>
>
> In my case (homemade monitoring) I chose to check mod_jk's log; after all,
> mod_jk does indeed check the state of the AJP connector in Tomcat.
>
> Hope this helps.
> [... ]


Thanks for the idea.  Can you tell me what you specifically look for
in the "mod_jk_log" file?  Do you look for the presence of something?
or the absence of something?
I only see 'negative' events in the logfile.  For example,
"all endpoints are disconnected, detected by connect check(1),
cping(0), send(0)"
which, evidently, is when Tomcat releases a connection on its end.
(I set JkLogLevel = DEBUG, but still don't see any messages that look
like what I would want...)




Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-19 Thread Frederik Nosi

Hi Lisa,
On 11/19/2014 07:28 PM, Lisa Woodring wrote:

On Wed, Nov 19, 2014 at 1:20 PM, Lisa Woodring  wrote:

On Tue, Nov 18, 2014 at 2:26 PM, André Warnier  wrote:

Lisa Woodring wrote:
...

In order to monitor
the availability of the HTTPS/AJP port (Apache-->Tomcat), our
monitoring software opens a port to verify that this works -- but then
does not follow that up with an actual request.  This happens every 2
minutes.

...

This sounds like the perfect recipe for simulating a DOS attack.  Your
monitoring system is forcing Tomcat to allocate a thread to process the
request which should subsequently arrive on that connection, yet that
request never comes; so basically this thread is wasted, until the
ConnectionTimeout triggers (after 20 seconds, according to your HTTP
connector settings).

...

The thread count grows over time (goes up to 130-150 threads after 2
hours).  Setting 'connectionTimeout' (as opposed to the default of
never timing out) does seem to help "some"


Have you tried setting it shorter ? 20000 = 20000 ms = 20 seconds. That is
still quite long if you think about a legitimate browser/application making
a connection, and then sending a request on that connection.  Why would it
wait so long ? A browser would never do that : it would open a connection to
the server when it needs to send a request, and then send the request
immediately, as soon as the connection is established.

In other words : anything which opens a HTTP connection to your server, and
then waits more than 1 or 2 seconds before sending a request on that
connection, is certainly not a browser.
And it probably is either a program designed to test or attack your server,
or else a badly-designed monitoring system.. ;-)



The monitoring software is going thru Apache to AJP connector in
Tomcat.  As I described, with the default of no timeout, the # of
threads were much higher.  I currently have the AJP connectionTimeout
set to 3 seconds.


Actually, I received a little clarification on the monitoring software
(I didn't write it).  What it's trying to test is that the AJP port
itself is actually accepting connections.  With Apache in front in a
production system, it could forward the actual request to one of
several Tomcat boxes -- but we don't know which one from the outside.
The monitoring software is trying to test -- for each Tomcat instance
-- if it is accepting connections.  It used to send an "nmap" request,
but now sends essentially a "tcp ping" -- gets a response & moves on.


In my case (homemade monitoring) I chose to check mod_jk's log; after all,
mod_jk does indeed check the state of the AJP connector in Tomcat.


Hope this helps.
[... ]


Bye,
Frederik




Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-19 Thread Lisa Woodring
On Tue, Nov 18, 2014 at 2:43 PM, Christopher Schultz
 wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Lisa,
>
> On 11/18/14 11:52 AM, Lisa Woodring wrote:
>> We recently upgraded from Tomcat 6.0.29 to Tomcat 8.0.14.  Everything
>> appears to be working fine, except that Tomcat is keeping a high # of
>> threads (in TIMED_WAITING state) -- and the CPU has a high load & low
>> idle time.  We are currently running Tomcat8 on 2 internal test
>> machines, where we also monitor their statistics.  In order to monitor
>> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
>> monitoring software opens a port to verify that this works -- but then
>> does not follow that up with an actual request.  This happens every 2
>> minutes.  We have noticed that the high thread/load activity on Tomcat
>> coincides with this monitoring.  If we disable our monitoring, the
>> issue does not happen.  We have enabled/disabled the monitoring on
>> both machines over several days (and there is only very minimal,
>> sometimes non-existent, internal traffic otherwise) -- in order to
>> verify that the monitoring is really the issue.  Once these threads
>> ramp up, they stay there or keep increasing.  We had no issues running
>> on Tomcat 6 (the thread count stayed low, low load, high idle time).
>>
>> The thread backtraces for these threads look like this:
>> -
>> Thread[catalina-exec-24,5,main]
>>  at sun.misc.Unsafe.park(Native Method)
>>  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>  at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:85)
>>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:31)
>>  at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
>>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>>  at java.lang.Thread.run(Thread.java:745)
>> -
>> The thread count grows over time (goes up to 130-150 threads after 2
>> hours).  Setting 'connectionTimeout' (as opposed to the default of
>> never timing out) does seem to help "some" -- the # of threads isn't
>> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
>> Idle % is still not good -- was only 10% idle with default tomcat
>> settings, is something like 40% idle with current settings.  Also
>> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
>> but this did not make any difference.
>>
>>
>> Is there some configuration we can set to make Tomcat tolerant of this
>> monitoring?  (We have tried setting connectionTimeout &
>> keepAliveTimeout on the Connector.  And we have tried putting the
>> Connector behind an Executor with maxIdleTime.)
>> OR, should we modify our monitoring somehow?  And if so, suggestions?
>>
>>
>> * Running on Linux CentOS release 5.9
>> * running Apache in front of Tomcat for authentication, using mod_jk
>> * Tomcat 8.0.14
>>
>> relevant sections of tomcat/conf/server.xml:
>>
>> <Executor ... maxThreads="250" minSpareThreads="20" maxIdleTime="60000" />
>>
>> <Connector ... protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
>>
>> <Connector ... protocol="AJP/1.3" redirectPort="8443" maxThreads="256" connectionTimeout="3000" keepAliveTimeout="60000" />
>
> Both of these connectors should be NIO connectors, so they should not
> block while waiting for more input. That means that you should not run
> out of threads (which is good), but those connections will sit in the
> poller queue for a long time (20 seconds for HTTP, 3 seconds for AJP)
> and then sit in the acceptor queue for the same amount of time (to
> check for a "next" keepAlive request). Are you properly shutting-down
> the connection on the client end every 2 minutes?
>



The monitoring software is trying to test that the AJP port itself
is actually accepting connections.  With Apache in front in a
production system, it could forward the actual request to one of
several Tomcat boxes -- but we don't know which one from the outside.
The monitoring software is trying to test -- for each Tomcat instance
-- if it is accepting connections.  It used to send an "nmap" request,
but now sends essentially a "tcp ping" -- to port 8009, gets a
response & moves on.  So, no, it does not shut down the connection --
it's pretty simple/dumb.

My main questions are:
1) Why was this ok on Tomcat 6?  but now an issue with Tomcat 8?
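
A probe that speaks a little AJP would test the same thing without leaving an
idle connection behind: send a CPing packet, wait for the CPong, and close the
socket.  A minimal sketch follows; the host, the default port 8009 and the
2-second timeouts are placeholders, not part of the existing monitoring.

import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sends an AJP13 CPing to the connector and expects a CPong back,
// then closes the connection instead of leaving it idle.
public class AjpCpingProbe {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8009;

        boolean ok;
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 2000);
            socket.setSoTimeout(2000);

            // AJP13 CPing packet: magic 0x12 0x34, payload length 0x0001, type 0x0A
            OutputStream out = socket.getOutputStream();
            out.write(new byte[] {0x12, 0x34, 0x00, 0x01, 0x0A});
            out.flush();

            // Expected CPong reply: 'A' 'B', payload length 0x0001, type 0x09
            byte[] reply = new byte[5];
            new DataInputStream(socket.getInputStream()).readFully(reply);
            ok = reply[0] == 'A' && reply[1] == 'B' && reply[4] == 0x09;
        }   // socket is closed here, so Tomcat does not keep an idle connection around

        System.out.println(ok ? "CPong received -- AJP connector is alive" : "unexpected reply");
        System.exit(ok ? 0 : 1);
    }
}

This is the same CPing/CPong exchange that mod_jk's own cping checks use.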

Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-19 Thread Lisa Woodring
On Wed, Nov 19, 2014 at 1:20 PM, Lisa Woodring  wrote:
> On Tue, Nov 18, 2014 at 2:26 PM, André Warnier  wrote:
>> Lisa Woodring wrote:
>> ...
>>> In order to monitor
>>> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
>>> monitoring software opens a port to verify that this works -- but then
>>> does not follow that up with an actual request.  This happens every 2
>>> minutes.
>> ...
>>
>> This sounds like the perfect recipe for simulating a DOS attack.  Your
>> monitoring system is forcing Tomcat to allocate a thread to process the
>> request which should subsequently arrive on that connection, yet that
>> request never comes; so basically this thread is wasted, until the
>> ConnectionTimeout triggers (after 20 seconds, according to your HTTP
>> connector settings).
>>
>> ...
>>>
>>> The thread count grows over time (goes up to 130-150 threads after 2
>>> hours).  Setting 'connectionTimeout' (as opposed to the default of
>>> never timing out) does seem to help "some"
>>
>>
>> Have you tried setting it shorter ? 20000 = 20000 ms = 20 seconds. That is
>> still quite long if you think about a legitimate browser/application making
>> a connection, and then sending a request on that connection.  Why would it
>> wait so long ? A browser would never do that : it would open a connection to
>> the server when it needs to send a request, and then send the request
>> immediately, as soon as the connection is established.
>>
>> In other words : anything which opens a HTTP connection to your server, and
>> then waits more than 1 or 2 seconds before sending a request on that
>> connection, is certainly not a browser.
>> And it probably is either a program designed to test or attack your server,
>> or else a badly-designed monitoring system.. ;-)
>>
>
>
> The monitoring software is going thru Apache to AJP connector in
> Tomcat.  As I described, with the default of no timeout, the # of
> threads were much higher.  I currently have the AJP connectionTimeout
> set to 3 seconds.


Actually, I received a little clarification on the monitoring software
(I didn't write it).  What it's trying to test is that the AJP port
itself is actually accepting connections.  With Apache in front in a
production system, it could forward the actual request to one of
several Tomcat boxes -- but we don't know which one from the outside.
The monitoring software is trying to test -- for each Tomcat instance
-- if it is accepting connections.  It used to send an "nmap" request,
but now sends essentially a "tcp ping" -- gets a response & moves on.

My main questions are:
1) Why was this ok on Tomcat 6?  but now an issue with Tomcat 8?
2) Suggestions on how to monitor this better?


>
>
>
>>
>>> -- the # of threads isn't
>>>
>>> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
>>> Idle % is still not good -- was only 10% idle with default tomcat
>>> settings, is something like 40% idle with current settings.  Also
>>> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
>>> but this did not make any difference.
>>
>>
>> Note : this value is in milliseconds. setting it to 5 or 15 is almost
>> equivalent to disabling keep-alive altogether. 3000 may be a reasonable
>> value.
>
>
> No, Apache's configuration is in seconds.
>
>
>>
>> KeepAlive only happens after at least one request has been received and
>> processed, waiting for another (possible) request on the same connection.
>> If there is never any request sent on that connection, then it would not
>> apply here, and only the connectionTimeout would apply.
>>
>> Note that my comments above are relative to your HTTP Connector.
>> For the AJP Connector, other circumstances apply.
>>
>> If you are using AJP, it implies that there is a front-end server, using a
>> module such as mod_jk or mod_proxy_ajp to connect to Tomcat's AJP Connector.
>> In that case, you should probably leave Tomcat's connectionTimeout to its
>> default value, and let the front-end server handle such things as the
>> connection timeout and the keep-alive timeout.  The connector module on the
>> front-end server will manage these connections to Tomcat, and it may
>> pre-allocate some connections, to constitute a pool of available connections
>> for when it actually does need to send a request to Tomcat over one such
>> connection.  Timing out these connections at the Tomcat level may thus be
>> contra-productive, forcing the front-end to re-create them constantly.
>>
>>>
>
>
> Yes, as I stated, Apache is running in front of Tomcat using mod_jk.
> My big question is why is this now an issue?  This monitoring software
> has been running for years now.  It has only been an issue since we
> upgraded to Tomcat 8.
>
> I also forgot to mention that we are using APR.
>
>
>
>>>
>>> Is there some configuration we can set to make Tomcat tolerant of this
>>> monitoring?  (We have tried setting connectionTimeout &
>>> keepAliveTimeout on the Connector.  And we have tried putting the
>>> Connector behind an Executor with maxIdleTime.)

Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-19 Thread Lisa Woodring
On Tue, Nov 18, 2014 at 2:26 PM, André Warnier  wrote:
> Lisa Woodring wrote:
> ...
>> In order to monitor
>> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
>> monitoring software opens a port to verify that this works -- but then
>> does not follow that up with an actual request.  This happens every 2
>> minutes.
> ...
>
> This sounds like the perfect recipe for simulating a DOS attack.  Your
> monitoring system is forcing Tomcat to allocate a thread to process the
> request which should subsequently arrive on that connection, yet that
> request never comes; so basically this thread is wasted, until the
> ConnectionTimeout triggers (after 20 seconds, according to your HTTP
> connector settings).
>
> ...
>>
>> The thread count grows over time (goes up to 130-150 threads after 2
>> hours).  Setting 'connectionTimeout' (as opposed to the default of
>> never timing out) does seem to help "some"
>
>
> Have you tried setting it shorter ? 20000 = 20000 ms = 20 seconds. That is
> still quite long if you think about a legitimate browser/application making
> a connection, and then sending a request on that connection.  Why would it
> wait so long ? A browser would never do that : it would open a connection to
> the server when it needs to send a request, and then send the request
> immediately, as soon as the connection is established.
>
> In other words : anything which opens a HTTP connection to your server, and
> then waits more than 1 or 2 seconds before sending a request on that
> connection, is certainly not a browser.
> And it probably is either a program designed to test or attack your server,
> or else a badly-designed monitoring system.. ;-)
>


The monitoring software is going thru Apache to AJP connector in
Tomcat.  As I described, with the default of no timeout, the # of
threads were much higher.  I currently have the AJP connectionTimeout
set to 3 seconds.



>
>> -- the # of threads isn't
>>
>> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
>> Idle % is still not good -- was only 10% idle with default tomcat
>> settings, is something like 40% idle with current settings.  Also
>> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
>> but this did not make any difference.
>
>
> Note : this value is in milliseconds. setting it to 5 or 15 is almost
> equivalent to disabling keep-alive altogether. 3000 may be a reasonable
> value.


No, Apache's configuration is in seconds.


>
> KeepAlive only happens after at least one request has been received and
> processed, waiting for another (possible) request on the same connection.
> If there is never any request sent on that connection, then it would not
> apply here, and only the connectionTimeout would apply.
>
> Note that my comments above are relative to your HTTP Connector.
> For the AJP Connector, other circumstances apply.
>
> If you are using AJP, it implies that there is a front-end server, using a
> module such as mod_jk or mod_proxy_ajp to connect to Tomcat's AJP Connector.
> In that case, you should probably leave Tomcat's connectionTimeout to its
> default value, and let the front-end server handle such things as the
> connection timeout and the keep-alive timeout.  The connector module on the
> front-end server will manage these connections to Tomcat, and it may
> pre-allocate some connections, to constitute a pool of available connections
> for when it actually does need to send a request to Tomcat over one such
> connection.  Timing out these connections at the Tomcat level may thus be
> contra-productive, forcing the front-end to re-create them constantly.
>
>>


Yes, as I stated, Apache is running in front of Tomcat using mod_jk.
My big question is why is this now an issue?  This monitoring software
has been running for years now.  It has only been an issue since we
upgraded to Tomcat 8.

I also forgot to mention that we are using APR.



>>
>> Is there some configuration we can set to make Tomcat tolerant of this
>> monitoring?  (We have tried setting connectionTimeout &
>> keepAliveTimeout on the Connector.  And we have tried putting the
>> Connector behind an Executor with maxIdleTime.)
>> OR, should we modify our monitoring somehow?  And if so, suggestions?
>>
>
> I would think so.  Have your monitoring send an actual request to Tomcat
> (and read the response); even a request that results in an error would
> probably be better than no request at all.  But better would be to request
> something real but small, which at the Tomcat level would be efficient to
> respond to (e.g. not a 5 MB image file).
> Create a little webapp which just responds "I'm fine" (*), and check that
> response in your monitor.  It will tell you not only that Tomcat has opened
> the port, but also that Tomcat webapps are actually working (and how quickly
> it answers).
> And do not try to monitor the AJP port directly. Monitor a request to the
> front-end, which should arrive to Tomcat via the AJP port.  The AJP
> connector module on the front-end will respond with its own error, if it cannot talk to Tomcat.
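
A sketch of that kind of front-end check; the URL is a placeholder for whatever
small page the Apache vhost actually exposes.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Requests a small page through the Apache front-end, so the check exercises
// Apache, mod_jk and the AJP connection to whichever Tomcat answers.
public class FrontEndCheck {
    public static void main(String[] args) throws IOException {
        String target = args.length > 0 ? args[0] : "https://frontend.example.com/health";
        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        conn.setConnectTimeout(3000);
        conn.setReadTimeout(3000);
        int status = conn.getResponseCode();   // sends the request and reads the status line
        conn.disconnect();                     // releases the connection
        System.out.println("HTTP " + status + " from " + target);
        System.exit(status == 200 ? 0 : 1);
    }
}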

Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-18 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Lisa,

On 11/18/14 11:52 AM, Lisa Woodring wrote:
> We recently upgraded from Tomcat 6.0.29 to Tomcat 8.0.14.  Everything
> appears to be working fine, except that Tomcat is keeping a high # of
> threads (in TIMED_WAITING state) -- and the CPU has a high load & low
> idle time.  We are currently running Tomcat8 on 2 internal test
> machines, where we also monitor their statistics.  In order to monitor
> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
> monitoring software opens a port to verify that this works -- but then
> does not follow that up with an actual request.  This happens every 2
> minutes.  We have noticed that the high thread/load activity on Tomcat
> coincides with this monitoring.  If we disable our monitoring, the
> issue does not happen.  We have enabled/disabled the monitoring on
> both machines over several days (and there is only very minimal,
> sometimes non-existent, internal traffic otherwise) -- in order to
> verify that the monitoring is really the issue.  Once these threads
> ramp up, they stay there or keep increasing.  We had no issues running
> on Tomcat 6 (the thread count stayed low, low load, high idle time).
>
> The thread backtraces for these threads look like this:
> -
> Thread[catalina-exec-24,5,main]
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:85)
>  at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:31)
>  at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>  at java.lang.Thread.run(Thread.java:745)
> -
> The thread count grows over time (goes up to 130-150 threads after 2
> hours).  Setting 'connectionTimeout' (as opposed to the default of
> never timing out) does seem to help "some" -- the # of threads isn't
> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
> Idle % is still not good -- was only 10% idle with default tomcat
> settings, is something like 40% idle with current settings.  Also
> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
> but this did not make any difference.
>
>
> Is there some configuration we can set to make Tomcat tolerant of this
> monitoring?  (We have tried setting connectionTimeout &
> keepAliveTimeout on the Connector.  And we have tried putting the
> Connector behind an Executor with maxIdleTime.)
> OR, should we modify our monitoring somehow?  And if so, suggestions?
>
>
> * Running on Linux CentOS release 5.9
> * running Apache in front of Tomcat for authentication, using mod_jk
> * Tomcat 8.0.14
>
> relevant sections of tomcat/conf/server.xml:
>
> <Executor ... maxThreads="250" minSpareThreads="20" maxIdleTime="60000" />
>
> <Connector ... protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
>
> <Connector ... protocol="AJP/1.3" redirectPort="8443" maxThreads="256" connectionTimeout="3000" keepAliveTimeout="60000" />

Both of these connectors should be NIO connectors, so they should not
block while waiting for more input. That means that you should not run
out of threads (which is good), but those connections will sit in the
poller queue for a long time (20 seconds for HTTP, 3 seconds for AJP)
and then sit in the acceptor queue for the same amount of time (to
check for a "next" keepAlive request). Are you properly shutting-down
the connection on the client end every 2 minutes?
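
For reference, the sort of client-side behaviour this question is about --
connect, then shut the connection down cleanly rather than abandoning it --
might look like the sketch below; the host, port and timeout are placeholders.

import java.net.InetSocketAddress;
import java.net.Socket;

// Opens the monitored port and then closes it in an orderly way, so the
// server sees a normal end-of-stream instead of a connection that just
// sits there until connectionTimeout fires.
public class PortCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("tomcat-host", 8009), 2000);
            socket.shutdownOutput();   // send FIN: tells the server we have nothing to send
        }                              // try-with-resources closes the socket here
        System.out.println("port accepted the connection");
    }
}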

> 
>
>  If interested, I can provide graphing of the machine's thread
> count, cpu idle%, and cpu load. Any suggestions would be most
> welcome.

More data is always good. Remember that the list strips attachments,
so maybe paste it somewhere (e.g. pastebin or whatever) and provide a
link.

- -chris

Re: High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-18 Thread André Warnier

Lisa Woodring wrote:
...
> In order to monitor
> the availability of the HTTPS/AJP port (Apache-->Tomcat), our
> monitoring software opens a port to verify that this works -- but then
> does not follow that up with an actual request.  This happens every 2
> minutes.
...

This sounds like the perfect recipe for simulating a DOS attack.  Your monitoring system 
is forcing Tomcat to allocate a thread to process the request which should subsequently 
arrive on that connection, yet that request never comes; so basically this thread is 
wasted, until the ConnectionTimeout triggers (after 20 seconds, according to your HTTP 
connector settings).


...

> The thread count grows over time (goes up to 130-150 threads after 2
> hours).  Setting 'connectionTimeout' (as opposed to the default of
> never timing out) does seem to help "some"


Have you tried setting it shorter ? 20000 = 20000 ms = 20 seconds. That is still quite
long if you think about a legitimate browser/application making a connection, and then 
sending a request on that connection.  Why would it wait so long ? A browser would never 
do that : it would open a connection to the server when it needs to send a request, and 
then send the request immediately, as soon as the connection is established.


In other words : anything which opens a HTTP connection to your server, and then waits 
more than 1 or 2 seconds before sending a request on that connection, is certainly not a 
browser.
And it probably is either a program designed to test or attack your server, or else a 
badly-designed monitoring system.. ;-)



> -- the # of threads isn't
> quite as bad (only 60-80 threads after 2 hours).  However, the CPU
> Idle % is still not good -- was only 10% idle with default tomcat
> settings, is something like 40% idle with current settings.  Also
> tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
> but this did not make any difference.


Note : this value is in milliseconds. setting it to 5 or 15 is almost equivalent to 
disabling keep-alive altogether. 3000 may be a reasonable value.


KeepAlive only happens after at least one request has been received and processed, waiting 
for another (possible) request on the same connection.  If there is never any request sent 
on that connection, then it would not apply here, and only the connectionTimeout would apply.


Note that my comments above are relative to your HTTP Connector.
For the AJP Connector, other circumstances apply.

If you are using AJP, it implies that there is a front-end server, using a module such as 
mod_jk or mod_proxy_ajp to connect to Tomcat's AJP Connector.
In that case, you should probably leave Tomcat's connectionTimeout to its default value, 
and let the front-end server handle such things as the connection timeout and the 
keep-alive timeout.  The connector module on the front-end server will manage these 
connections to Tomcat, and it may pre-allocate some connections, to constitute a pool of 
available connections for when it actually does need to send a request to Tomcat over one 
such connection.  Timing out these connections at the Tomcat level may thus be 
contra-productive, forcing the front-end to re-create them constantly.





> Is there some configuration we can set to make Tomcat tolerant of this
> monitoring?  (We have tried setting connectionTimeout &
> keepAliveTimeout on the Connector.  And we have tried putting the
> Connector behind an Executor with maxIdleTime.)
> OR, should we modify our monitoring somehow?  And if so, suggestions?



I would think so.  Have your monitoring send an actual request to Tomcat (and read the 
response); even a request that results in an error would probably be better than no 
request at all.  But better would be to request something real but small, which at the 
Tomcat level would be efficient to respond to (e.g. not a 5 MB image file).
Create a little webapp which just responds "I'm fine" (*), and check that response in your 
monitor.  It will tell you not only that Tomcat has opened the port, but also that Tomcat 
webapps are actually working (and how quickly it answers).
And do not try to monitor the AJP port directly. Monitor a request to the front-end, which 
should arrive to Tomcat via the AJP port.  The AJP connector module on the front-end will 
respond with its own error, if it cannot talk to Tomcat.


(*) actually, there may even exist some built-in mechanism in Tomcat, designed precisely
for this kind of usage (or at least usable for it).
Any of the experts on the list ? Does the standard vanilla Tomcat offer some URL which can
be called, and triggers some small efficient response readable by a monitoring program ?
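
A webapp of the kind suggested above can be a single servlet; the /health path
and the response text below are made up for this sketch.

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal "I'm fine" responder: deploy it and point the monitor at /health
// through the front-end, so every check exercises Apache, mod_jk and Tomcat.
@WebServlet("/health")
public class HealthServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("I'm fine");
    }
}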






...


> * Running on Linux CentOS release 5.9
> * running Apache in front of Tomcat for authentication, using mod_jk
> * Tomcat 8.0.14
>
> relevant sections of tomcat/conf/server.xml:







High thread count & load on Tomcat8 when accessing AJP port with no request

2014-11-18 Thread Lisa Woodring
We recently upgraded from Tomcat 6.0.29 to Tomcat 8.0.14.  Everything
appears to be working fine, except that Tomcat is keeping a high # of
threads (in TIMED_WAITING state) -- and the CPU has a high load & low
idle time.  We are currently running Tomcat8 on 2 internal test
machines, where we also monitor their statistics.  In order to monitor
the availability of the HTTPS/AJP port (Apache-->Tomcat), our
monitoring software opens a port to verify that this works -- but then
does not follow that up with an actual request.  This happens every 2
minutes.  We have noticed that the high thread/load activity on Tomcat
coincides with this monitoring.  If we disable our monitoring, the
issue does not happen.  We have enabled/disabled the monitoring on
both machines over several days (and there is only very minimal,
sometimes non-existent, internal traffic otherwise) -- in order to
verify that the monitoring is really the issue.  Once these threads
ramp up, they stay there or keep increasing.  We had no issues running
on Tomcat 6 (the thread count stayed low, low load, high idle time).

The thread backtraces for these threads look like this:
-
Thread[catalina-exec-24,5,main]
 at sun.misc.Unsafe.park(Native Method)
 at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
 at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:85)
 at org.apache.tomcat.util.threads.TaskQueue.poll(TaskQueue.java:31)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
 at java.lang.Thread.run(Thread.java:745)
-
The thread count grows over time (goes up to 130-150 threads after 2
hours).  Setting 'connectionTimeout' (as opposed to the default of
never timing out) does seem to help "some" -- the # of threads isn't
quite as bad (only 60-80 threads after 2 hours).  However, the CPU
Idle % is still not good -- was only 10% idle with default tomcat
settings, is something like 40% idle with current settings.  Also
tried setting Apache's 'KeepAliveTimeout = 5' (currently set to 15)
but this did not make any difference.


Is there some configuration we can set to make Tomcat tolerant of this
monitoring?  (We have tried setting connectionTimeout &
keepAliveTimeout on the Connector.  And we have tried putting the
Connector behind an Executor with maxIdleTime.)
OR, should we modify our monitoring somehow?  And if so, suggestions?


* Running on Linux CentOS release 5.9
* running Apache in front of Tomcat for authentication, using mod_jk
* Tomcat 8.0.14

relevant sections of tomcat/conf/server.xml:

<Executor ... maxThreads="250" minSpareThreads="20" maxIdleTime="60000" />

<Connector ... protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />

<Connector ... protocol="AJP/1.3" redirectPort="8443" maxThreads="256" connectionTimeout="3000" keepAliveTimeout="60000" />


If interested, I can provide graphing of the machine's thread count,
cpu idle%, and cpu load.
Any suggestions would be most welcome.
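
For the graphing side, a rough way to put a number on the symptom from inside
the Tomcat JVM is sketched below; it has to run in the same JVM (for example
from a small status servlet), and the thread-name prefix is simply taken from
the backtrace above.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Counts executor threads currently parked in TIMED_WAITING, matching the
// catalina-exec-* threads shown in the backtrace.
public class ExecThreadCount {
    public static int countParkedExecThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int waiting = 0;
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info.getThreadName().startsWith("catalina-exec-")
                    && info.getThreadState() == Thread.State.TIMED_WAITING) {
                waiting++;
            }
        }
        return waiting;
    }
}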
