On Feb 7, 2014, at 1:11 PM, Mark Thomas <ma...@apache.org> wrote:

>> 
>> This is a single core box (sorry, should have mentioned that in the 
>> configuration details). Would you still expect increasing the worker thread 
>> count to help?
> 
> Yes. I'd return it to the default of 200 and let Tomcat manage the pool.
> It will increase/decrease the thread pool size as necessary. Depending
> on how long some clients take to send the data, you might need to
> increase the thread pool beyond 200.
> 
> Mark

Unfortunately, this has made the problem worse.

We are now getting site failure messages from our monitoring software more 
frequently, even outside of peak hours, and CPU usage is running much higher 
than normal.

The manager page shows 76 of the 200 threads busy, and YourKit shows that 
nearly all of them (I'm assuming 76 minus the one serving the manager request 
itself) are stuck at this point:

> ajp-nio-8009-exec-148 [WAITING] CPU time: 0:50
> sun.misc.Unsafe.park(boolean, long)
> java.util.concurrent.locks.LockSupport.parkNanos(Object, long)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int, long)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int, long)
> java.util.concurrent.CountDownLatch.await(long, TimeUnit)
> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitLatch(CountDownLatch, long, TimeUnit)
> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitReadLatch(long, TimeUnit)
> org.apache.tomcat.util.net.NioBlockingSelector.read(ByteBuffer, NioChannel, long)
> org.apache.tomcat.util.net.NioSelectorPool.read(ByteBuffer, NioChannel, Selector, long, boolean)
> org.apache.tomcat.util.net.NioSelectorPool.read(ByteBuffer, NioChannel, Selector, long)
> org.apache.coyote.ajp.AjpNioProcessor.readSocket(byte[], int, int, boolean)
> org.apache.coyote.ajp.AjpNioProcessor.read(byte[], int, int, boolean)
> org.apache.coyote.ajp.AjpNioProcessor.readMessage(AjpMessage, boolean)
> org.apache.coyote.ajp.AjpNioProcessor.receive()
> org.apache.coyote.ajp.AbstractAjpProcessor.refillReadBuffer()
> org.apache.coyote.ajp.AbstractAjpProcessor$SocketInputBuffer.doRead(ByteChunk, Request)
> org.apache.coyote.Request.doRead(ByteChunk)
> org.apache.catalina.connector.InputBuffer.realReadBytes(byte[], int, int)
> org.apache.tomcat.util.buf.ByteChunk.substract(byte[], int, int)
> org.apache.catalina.connector.InputBuffer.read(byte[], int, int)
> org.apache.catalina.connector.CoyoteInputStream.read(byte[])
> com.prosc.io.IOUtils.writeInputToOutput(InputStream, OutputStream, int)

Almost all requests to the site are POST operations with small payloads. My 
theory, based on this stack trace, is that all of the worker threads are 
contending for the single shared selector thread in order to read the POST 
bodies, and that as the number of worker threads increases, so does that 
contention, reducing overall throughput. Please let me know whether this 
sounds accurate to you.
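
For what it's worth, the "single selector thread" behavior seems to come down 
to one system property that defaults to true, which would explain why every 
blocking read funnels through the same place. This is just a minimal check 
based on my reading of the NioSelectorPool source; the property name and 
default are my understanding of that code, so please correct me if I've 
misread it:

    // Mirrors (as far as I can tell) how NioSelectorPool decides whether all
    // blocking reads share one selector or draw from a pool of selectors.
    public class SelectorSharedCheck {
        public static void main(String[] args) {
            boolean shared = Boolean.parseBoolean(System.getProperty(
                    "org.apache.tomcat.util.net.NioSelectorShared", "true"));
            System.out.println("NioSelectorShared effective value: " + shared);
        }
    }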

If so, how do I solve this? Here are my ideas, but I'm really not familiar 
enough with the connector configurations to know whether I'm on the right track 
or not:
* Set the 'org.apache.tomcat.util.net.NioSelectorShared' system property to 
false. It sounds like this would let each blocking read use its own selector 
from a pool instead of funneling through the single shared one, although I 
can't quite tell from the documentation whether that's true (rough config 
sketch after this list).
* Re-write my client application to use multiple GET requests instead of single 
POST requests. This would be a lot of work, and seems like it should not be 
necessary.
* Ditch the NIO connector and Apache/SSL front-end and move to APR/SSL with a 
whole lot of threads. This also seems like it should not be necessary; I 
thought my use case was exactly what NIO is made for.
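
For the first idea, this is roughly what I think the configuration would look 
like, based on my reading of the NIO connector docs. The selectorPool.* 
attribute names and defaults are my understanding of the documentation, not 
something I've tested, so please correct me if they're wrong:

    # conf/catalina.properties (or as a -D flag in CATALINA_OPTS)
    org.apache.tomcat.util.net.NioSelectorShared=false

    <!-- conf/server.xml: the existing AJP NIO connector, with selector pool sizing -->
    <Connector port="8009" protocol="org.apache.coyote.ajp.AjpNioProtocol"
               maxThreads="200"
               selectorPool.maxSelectors="200"
               selectorPool.maxSpareSelectors="-1" />

Does that look like the right set of knobs?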

I'm open to any other ideas. Thank you for all of your help!

--Jesse Barnum, President, 360Works
http://www.360works.com