Mark Thomas wrote:
Nick Williams <nicho...@nicholaswilliams.net> wrote:

On Mar 19, 2013, at 8:49 PM, Saurabh Agrawal wrote:

Hi Nick,

We currently have 8 Tomcat nodes, each node configured with
1000 maxThreads. We did a round of performance testing with 8000 concurrent
users, and the observation was that the number of active executor
threads was far lower.
My understanding was that if 8000 concurrent users hit the site at the
same time, 8000 executor threads would be required to satisfy
all the requests. However, I see far fewer executor
threads in action when we put a load of 8000 concurrent users on the site.
Yes, this is to be expected. Tomcat / the JVM will try to be as
efficient as possible. A thread will only be in use during an active
request.

The correctness of that statement is highly dependent on the connector used and 
how it is configured.
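For example (values purely illustrative, Tomcat 7 style server.xml): with the blocking BIO connector a thread is tied up for the whole life of a connection, including keep-alive waits, whereas with the NIO connector a thread is, roughly speaking, only busy while a request is actually being processed, and idle keep-alive connections are parked on a poller.

    <!-- Blocking (BIO) connector: one thread per open connection -->
    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="1000"
               connectionTimeout="20000" />

    <!-- Non-blocking (NIO) connector: a thread is only busy per request;
         idle keep-alive connections are handled by the poller -->
    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="1000"
               maxConnections="10000"
               connectionTimeout="20000" />

(You would use one or the other on a given port, not both.) The same maxThreads="1000" can therefore mean very different things depending on which connector sits in front of the pool.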

Just because you have 8,000 simultaneous users does not mean
you have 8,000 simultaneous requests.

+1
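As a rough back-of-the-envelope illustration (numbers invented): if each of the 8000 users sends a request on average once every 10 seconds of think time, and each request takes about 200 ms of server time, then on average only about 8000 * 0.2 / 10 = 160 requests are in flight at any given instant - the same ballpark as the couple of hundred busy threads being observed, even though all 8000 users are "concurrent".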

If you need to support 20,000 simultaneous users, you are going to
need a farm of servers. Just one server will not be enough.

How many servers you need will depend on the app and how those users use it. It 
is perfectly possible for one server to serve 20,000 simultaneous users. 20,000 
simultaneous requests are more likely to require multiple servers, but again, it 
depends.


If I may add something: at some point, on each Tomcat server, there is only one "listening socket" per port (the point being: you could have several, but you won't have 8000). So even if your clients really do send 8000 TCP requests to the server at the same moment, they will be serialized at some point, and from the server's point of view they come in one by one. /Then/ the server (provided its TCP stack and all the rest is fast enough) can start distributing them over a pool of Threads. I know that this is an absurd example, but just to provide one extreme point of view: unless you have 8000 independent clients, each firing one request at the same time over 8000 cables connected to 8000 ports, each one served by one CPU core running its own TCP stack and its own Tomcat, you will never really have 8000 requests being processed truly simultaneously.
So the "simultaneousness" is a question of degree, not an absolute value.
And the "degree of simultaneousness" depends on many factors, among which are (but none of them to be considered alone) : the network, the front-end, the client keepalive setting, the server keepalive setting, the number of connections that the client opens simultaneously, the choice of Connector(s), the usage or not of an Executor, the degree to which the requests (and responses) are really similar, the number of configured threads, the time needed to process each request, the CPU speed, the amount of memory etc. etc. And finally, the number of Threads that are started or are "running" simultaneously depends on all these factors, and the only one who has access to all these factors is you.

If you fire 8000 "simultaneous" requests from clients and you see only 200 Threads running to process them, then obviously there is a bottleneck somewhere. But it is not necessarily Tomcat that is failing to allocate all the Threads it could. It could just be that Tomcat does not get more requests to allocate a Thread for, because they get slowed down (or discarded) somewhere else along the line. Or it could be that Tomcat is simply not getting enough CPU cycles to allocate more Threads and run them.

In some configurations, and with some kinds of client request patterns, a longer keepalive setting can mean that one Tomcat thread stays allocated to one client connection, waiting for further requests on that connection (which may never come). In some use cases that is efficient; in others it is very inefficient. Like everything else, it depends.
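The knobs for that live on the Connector, e.g. (values purely illustrative):

    <Connector port="8080"
               protocol="HTTP/1.1"
               maxThreads="1000"
               keepAliveTimeout="5000"
               maxKeepAliveRequests="100"
               connectionTimeout="20000" />

keepAliveTimeout is in milliseconds (it defaults to the connectionTimeout value), and setting maxKeepAliveRequests="1" disables keep-alive altogether.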
