Chris,

Thank you for your comments; it's reassuring to have my logic validated. Is there anyone on this list with a detailed knowledge of the NIO Connector? Maybe I should post to the developers list? My reason for asking these questions is that my company is about to deploy a comet solution to a large number of customers, but we've never done comet before. We've done a lot of load testing, but it's difficult to predict how high the usage will be, so understanding how Tomcat will handle a load much higher than we predicted will help us know what to expect.

Sounds like maybe there's a task to be done in providing limits on the NIO pools/queues. I wonder how hard it is to get a Tomcat dev environment set up...

Cheers,

James

On 23/07/64 05:59, Christopher Schultz wrote:

James,

On 2/24/2010 12:09 AM, James Roper wrote:
> The acceptCount works just like it does on the other processors, it gets
> passed to the ServerSocket as the backlog parameter.  This limit will be
> reached if the acceptor thread(s) aren't able to accept new requests
> quickly enough.  This is where behaviour differs from the default HTTP
> connector, all the acceptor threads do is add the channel to a poller
> which uses the NIO selectors to wait on activity, and then return
> quickly.  As far as I can see, the number of active selectors can be
> unlimited, is that right?  Would there be any advantages in limiting
> this?  What would happen if such a limit was reached?  On the default
> HTTP connector though, when new requests arrive, the handler will block
> until it can get a free worker thread, so the acceptCount limit could
> easily be reached.  In order to reach the acceptCount limit in the NIO
> connector, the server would have to be under a massive amount of load,
> such that the accept thread never gets scheduled to accept new
> connections.  This would be a difficult limit to reach, so the
> acceptCount is possibly not a very useful parameter for the NIO connector.
I'm an expert in neither the NIO Connector nor Comet, but your logic
seems reasonable if the NIO selector thread simply queues incoming
requests. If that queue is unbounded, the server may never catch up and
could suffer an OOME. I don't see any options to limit the queue size.
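For what it's worth, the acceptCount-as-backlog behaviour described above can be sketched in a few lines (a minimal standalone example, not Tomcat's actual connector code; the class name BacklogDemo is made up). The point is that acceptCount ends up as the second argument to bind(), so the "pending connections" limit is enforced by the kernel, not by any Tomcat data structure:

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        // acceptCount maps to the bind() backlog: connections that arrive
        // while the OS accept queue is full are refused by the kernel
        // before Tomcat ever sees them.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0), 100); // backlog ~= acceptCount
        System.out.println("bound=" + server.isOpen());
        server.close();
    }
}
```

Since the acceptor threads return to accept() almost immediately, that kernel queue rarely fills, which matches your conclusion that acceptCount is hard to hit on the NIO connector.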

> There is another pool for comet requests, this is the connections map in
> the NIO connector.  This is unlimited, and it's used to store all comet
> requests that are not currently being processed by a worker thread.
> Would there be any advantages to limiting this?  What would happen if
> such a limit was reached?
Again, I agree with your reasoning: unbounded queues like these could
cause major problems.
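If someone did add a bound, one plausible shape is a map that refuses new entries past a cap so the connector can close the channel instead of parking it. This is purely a hypothetical sketch (the class and offer() method are invented, not Tomcat code):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical bounded variant of the comet connections map.
public class BoundedConnectionMap {
    private final ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
    private final int maxSize;

    BoundedConnectionMap(int maxSize) { this.maxSize = maxSize; }

    // Refuse entries past maxSize so the caller can close the channel
    // rather than let the map grow without bound. The size check is racy
    // under concurrency, so the bound is approximate, which is enough
    // to prevent an OOME.
    boolean offer(Integer key, String channel) {
        if (map.size() >= maxSize) return false;
        map.put(key, channel);
        return true;
    }

    public static void main(String[] args) {
        BoundedConnectionMap conns = new BoundedConnectionMap(2);
        System.out.println(conns.offer(1, "chan-1")); // accepted
        System.out.println(conns.offer(2, "chan-2")); // accepted
        System.out.println(conns.offer(3, "chan-3")); // refused: at capacity
    }
}
```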

> The poller threads just select on all the active channels, and hand work
> off to the worker threads.  The number of threads allowed in this pool
> enforces no limit on concurrent requests.
Are they handed directly to worker threads, or are they queued somewhere
and the worker thread waits on the queue? If the workers are used
directly, and there is no queue, then there is no problem: the poller
thread will block waiting for a worker thread.
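The back-pressure you describe can be simulated with a standard ThreadPoolExecutor (a sketch of the general technique, not what Tomcat's poller actually does): give the worker pool a bounded hand-off queue and a rejection policy that runs the task on the submitting thread, so a saturated pool slows the poller down instead of letting the queue grow without bound.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedDispatch {
    public static void main(String[] args) throws Exception {
        // 2 workers, a hand-off queue capped at 4. When both are full,
        // CallerRunsPolicy executes the task on the submitting ("poller")
        // thread, applying back-pressure instead of queueing unboundedly.
        ThreadPoolExecutor workers = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),
                new ThreadPoolExecutor.CallerRunsPolicy());

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            workers.execute(done::incrementAndGet); // every task runs somewhere
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("processed=" + done.get());
    }
}
```

Whether the actual poller blocks, queues, or drops when no worker is free is exactly the detail worth confirming with the NIO connector source.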

> I know that part of the advantage of NIO is that it handles DoS
> situations caused by a small number of requests that read/write very
> slowly much better than the thread-per-request model, so these pools
> may be unbounded by default, but some applications could benefit from
> being able to set a limit in Tomcat.
If your conclusions are correct, I agree that limits should be settable
for these queues.

-chris

